CN105427257A - Image enhancement method and apparatus - Google Patents
- Publication number
- CN105427257A CN105427257A CN201510799661.XA CN201510799661A CN105427257A CN 105427257 A CN105427257 A CN 105427257A CN 201510799661 A CN201510799661 A CN 201510799661A CN 105427257 A CN105427257 A CN 105427257A
- Authority
- CN
- China
- Legal status: Pending (an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
- G06T5/94
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T5/00—Image enhancement or restoration
        - G06T5/10—Image enhancement or restoration by non-spatial domain filtering
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10024—Color image
Abstract
The invention provides an image enhancement method. The method comprises: converting a target image to the YUV color space to obtain image information of the target image; extracting first Y-component information from the image information; extracting high-frequency part information and low-frequency part information of the first Y-component information, and enhancing them respectively; synthesizing the enhanced high-frequency part information and low-frequency part information to obtain enhanced information of the first Y-component information; reconstructing the enhanced information of the first Y-component information to obtain second Y-component information; superposing the second Y-component information with the UV component information in the image information of the target image to obtain enhanced information of the image information; converting the enhanced information of the image information to the RGB color space to obtain an enhanced image; and outputting the enhanced image of the target image. The invention further provides an image enhancement apparatus. The method and apparatus solve the prior-art technical problem that detail information becomes blurred when the image brightness is enhanced.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image enhancement method and apparatus.
Background technology
Nowadays, intelligent video surveillance is widely used in fields such as road traffic, urban security, and power-grid monitoring. However, when a camera captures images in a complex environment (for example, an outdoor power-grid monitoring environment), acquisition is easily affected by factors such as rain and snow or the changing light of early morning and night, so that after compression and encoding the quality of the camera's output image cannot meet the user's needs. Therefore, before an image is analyzed, it is often necessary to apply some enhancement and sharpening operations to it.
In general, image enhancement mainly comprises edge enhancement, texture enhancement, target-region enhancement, and contrast enhancement. According to the processing space used, methods can be divided into spatial-domain enhancement and frequency-domain enhancement.
Spatial-domain methods usually operate directly on the image at the pixel level. They can be divided into point processing and neighborhood processing. Point processing includes gray-level correction, gray-level transformation, and histogram modification, with the aims of making the image more uniform, expanding its dynamic range, and stretching its contrast. Neighborhood processing is divided into image smoothing and image sharpening. Smoothing is generally used to remove image noise, but it also tends to blur edges. Sharpening aims to highlight the edge contours of objects, which facilitates target recognition.
Frequency-domain methods modify the transform coefficients of the image in some transform domain and are thus an indirect form of enhancement. Such a method treats the image as a two-dimensional signal, applies a two-dimensional Fourier transform to it, operates directly on the transform coefficients (that is, the frequency components), and then applies the inverse Fourier transform to obtain the enhanced image. Low-pass filtering, which passes only the low-frequency signal, can remove noise from the image; high-pass filtering can strengthen high-frequency signals such as edges, making a blurred image clearer.
The Retinex algorithm is a commonly used frequency-domain image enhancement method. It was proposed by Land et al. as a model of how the human visual system regulates and perceives the color and brightness of objects. The method assumes color constancy: the color of the same object remains constant under different light sources or illumination. According to Retinex theory, the image of an object observed by the human eye (or an acquisition device such as a camera) is determined primarily by two elements, the incident light and the reflected light. Correspondingly, an image can be regarded as composed of two parts, an illumination image and a reflectance image. Under the color-constancy assumption, adjusting the illumination image and the reflectance image separately achieves the goal of image enhancement. The method performs excellently in terms of contrast enhancement, noise suppression, and computational efficiency, and is well suited to images with low local gray values: it can enhance details in dark regions and, to some extent, preserve the original brightness while compressing the image's contrast. However, Retinex-based image enhancement tends to produce halo artifacts in highlighted regions and pulls the overall brightness toward the mean, leaving the contrast of local details insufficient.
Intelligent video-analysis applications in complex environments such as outdoor power-grid monitoring place high demands on both the overall brightness of the image and the local detail contrast of the regions of interest. How to provide an image processing method that improves the overall brightness of an image while preserving, or even improving, the contrast of its details is therefore a technical problem that those skilled in the art urgently need to solve.
Summary of the invention
In view of this, the present invention provides an image enhancement method and apparatus which can improve the overall brightness of an image while preserving, or even improving, the contrast of its details.
To achieve these goals, the embodiments of the present invention adopt the following technical solutions.
In a first aspect, an embodiment of the present invention provides an image enhancement method, the method comprising: converting a target image to the YUV color space to obtain image information of the target image; extracting first Y-component information from the image information of the target image; extracting high-frequency part information and low-frequency part information of the first Y-component information; enhancing the high-frequency part information and the low-frequency part information of the first Y-component information respectively; synthesizing the enhanced high-frequency part information and low-frequency part information of the first Y-component information to obtain enhanced information of the first Y-component information; reconstructing the synthesized enhanced information of the first Y-component information to obtain second Y-component information; superposing the second Y-component information with the UV component information in the image information of the target image to obtain enhanced information of the image information of the target image; converting the enhanced information of the image information of the target image to the RGB color space to obtain an enhanced image of the target image; and outputting the enhanced image of the target image.
In a second aspect, an embodiment of the present invention further provides an image enhancement apparatus, the apparatus comprising: a first color-space conversion module, configured to convert a target image to the YUV color space to obtain image information of the target image; a first extraction module, configured to extract first Y-component information from the image information of the target image; a second extraction module, configured to extract high-frequency part information and low-frequency part information of the first Y-component information; an enhancement module, configured to enhance the high-frequency part information and the low-frequency part information of the first Y-component information respectively; a synthesis module, configured to synthesize the enhanced high-frequency part information and low-frequency part information of the first Y-component information to obtain enhanced information of the first Y-component information; a reconstruction module, configured to reconstruct the synthesized enhanced information of the first Y-component information to obtain second Y-component information; a superposition module, configured to superpose the second Y-component information with the UV component information in the image information of the target image to obtain enhanced information of the image information of the target image; a second color-space conversion module, configured to convert the enhanced information of the image information of the target image to the RGB color space to obtain an enhanced image of the target image; and an output module, configured to output the enhanced image of the target image.
With the above image enhancement method, the luminance component of the image is decomposed into a high-frequency part and a low-frequency part, the two parts are enhanced separately and then synthesized, and the color information is finally superposed to obtain the final enhanced image. This improves the overall brightness of the image while preserving, or even improving, the contrast of its details.
To make the above objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Accompanying drawing explanation
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those of ordinary skill in the art can obtain other relevant drawings from them without creative effort.
Fig. 1 is a flowchart of the image enhancement method provided by the first specific embodiment of the present invention;
Fig. 2 is a schematic diagram of the sub-band images generated by wavelet decomposition of a two-dimensional image in the first specific embodiment of the present invention;
Fig. 3 is a schematic diagram of the Retinex algorithm in the first specific embodiment of the present invention;
Fig. 4 is a structural block diagram of the image enhancement apparatus provided by the second specific embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Accordingly, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
First specific embodiment
Fig. 1 is a flowchart of the image enhancement method provided by the first specific embodiment of the present invention. As shown in Fig. 1, the image enhancement method of this embodiment may comprise the following steps.
Step S110: convert a target image to the YUV color space to obtain image information of the target image.
The target image may be an image captured in a complex environment (for example, an outdoor power-grid monitoring environment) that needs image enhancement. Images obtained in intelligent surveillance are in the RGB color space, in which color and brightness information are mixed; in the YUV space, the Y component represents the brightness information and the UV components represent the color information. This embodiment preferably performs image enhancement on the Y component. A standard formula for converting the RGB color space to the YUV color space is: Y = 0.299R + 0.587G + 0.114B, U = −0.147R − 0.289G + 0.436B, V = 0.615R − 0.515G − 0.100B.
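As an illustration, the conversion of step S110 (and its inverse, used later in step S180) can be sketched in Python. The BT.601 full-range coefficients are an assumption, since the patent's own formula is not reproduced in the text, and the function names are likewise illustrative.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV (step S110). The BT.601 full-range
    coefficients used here are an assumption, not the patent's own values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance
    u = -0.147 * r - 0.289 * g + 0.436 * b   # blue-difference chroma
    v = 0.615 * r - 0.515 * g - 0.100 * b    # red-difference chroma
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Approximate inverse conversion back to RGB (step S180)."""
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b
```

Note that a pure gray pixel maps to U = V = 0, which is what lets the later steps enhance brightness on Y without disturbing color.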
Step S120: extract the first Y-component information from the image information of the target image.
The first Y-component information is extracted from the image information obtained by the conversion. In this embodiment, image enhancement acts mainly on the luminance (Y) component; the subsequent operations process the Y-component information, while the color (UV) components of the image are not changed and their information is retained.
Step S130: extract the high-frequency part information and the low-frequency part information of the first Y-component information.
In this step, a discrete wavelet transform is preferably used to convert the first Y-component information from the spatial domain to the frequency domain, from which its high-frequency part information and low-frequency part information are extracted. The high-frequency part information represents the image content where the image intensity changes sharply, and the low-frequency part information represents the image content where the image intensity changes gently.
The discrete wavelet transform of an arbitrary function f(t) can be expressed as
W_f(m, n) = ∫ f(t) ψ*_{m,n}(t) dt,
where ψ*_{m,n}(t) denotes the complex conjugate of ψ_{m,n}(t).
The first Y-component information is thus converted from the spatial domain to the frequency domain by the discrete wavelet transform, so that the enhancement can be carried out in the frequency domain. One level of wavelet decomposition of a two-dimensional image generally produces one low-frequency sub-band image and three high-frequency sub-band images, as shown in Fig. 2. In Fig. 2, a1 denotes the low-frequency sub-band image, and h1, v1, and d1 denote the three high-frequency sub-band images. The low-frequency sub-band image (a1) in the upper-left corner is an approximation that is very similar to the original image, only smaller in size; it contains most of the energy of the original image and has a large influence on the quality of the reconstructed image. The wavelet coefficients of the high-frequency sub-band images (h1 for horizontal detail, v1 for vertical detail, and d1 for diagonal detail) are mostly small, but they contain much of the original image's detail information.
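A minimal sketch of this decomposition, using a one-level averaging Haar transform as a stand-in for the patent's (unspecified) wavelet: it splits an even-sized image into the approximation sub-band a1 and the three detail sub-bands h1, v1, d1 of Fig. 2. The Haar choice and the averaging normalization are assumptions.

```python
def haar_dwt2(img):
    """One level of a 2D averaging Haar decomposition (a simplified stand-in
    for the DWT in step S130). Returns the low-frequency approximation a1 and
    the horizontal/vertical/diagonal detail sub-bands h1, v1, d1, each
    half-size. img is a list of lists with even dimensions."""
    rows, cols = len(img), len(img[0])
    a1, h1, v1, d1 = [], [], [], []
    for i in range(0, rows, 2):
        ra, rh, rv, rd = [], [], [], []
        for j in range(0, cols, 2):
            p = img[i][j];     q = img[i][j + 1]
            r = img[i + 1][j]; s = img[i + 1][j + 1]
            ra.append((p + q + r + s) / 4.0)   # approximation (low-low)
            rh.append((p + q - r - s) / 4.0)   # horizontal detail
            rv.append((p - q + r - s) / 4.0)   # vertical detail
            rd.append((p - q - r + s) / 4.0)   # diagonal detail
        a1.append(ra); h1.append(rh); v1.append(rv); d1.append(rd)
    return a1, h1, v1, d1
```

In a real implementation a library routine (for example a multi-level 2D DWT) would replace this hand-rolled version, but the sub-band layout is the same.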
Step S140: enhance the high-frequency part information and the low-frequency part information of the first Y-component information respectively.
In this specific embodiment, the Retinex algorithm is preferably used to enhance the high-frequency part information and the low-frequency part information respectively. It should be understood that in other embodiments, other known image enhancement algorithms may also be used to enhance the high-frequency part information and the low-frequency part information of the first Y-component information.
The Retinex family includes SSR (Single-Scale Retinex) and MSR (Multi-Scale Retinex).
A) SSR (Single-Scale Retinex)
A given image S(x, y) can be decomposed into two different parts: a reflectance image R(x, y) and an illumination image L(x, y) (also called the incident image), as shown in Fig. 3.
That is, the image finally observed by a human or a camera satisfies:
S(x, y) = R(x, y) · L(x, y).
Since the ultimate goal of Retinex estimation is to obtain R(x, y), the general computation is:
r(x, y) = log S(x, y) − log[F(x, y) * S(x, y)]
Here, r(x, y) is the output image, * is the convolution operator, and F(x, y) is the center-surround function, usually taken as a Gaussian low-pass kernel:
F(x, y) = λ · exp(−(x² + y²) / c²),
where c is the Gaussian surround scale and λ is a scale factor whose value must satisfy the normalization condition
∫∫ F(x, y) dx dy = 1
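The SSR computation above can be sketched as follows, with the Gaussian surround normalized discretely so that its weights sum to 1 (the discrete analogue of the integral constraint). The kernel size, the scale c, and the clamped border handling are illustrative assumptions, not the patent's exact implementation.

```python
import math

def gaussian_surround(size, c):
    """Discrete Gaussian surround F(x, y) = lambda * exp(-(x^2 + y^2) / c^2),
    normalized so its entries sum to 1. `size` is assumed odd."""
    half = size // 2
    f = [[math.exp(-(x * x + y * y) / (c * c))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in f)
    return [[v / total for v in row] for row in f]

def ssr(img, size=3, c=1.5):
    """Single-scale Retinex: r(x, y) = log S(x, y) - log[F(x, y) * S(x, y)],
    where * is convolution. Borders are clamped; pixel values are assumed
    strictly positive. A minimal sketch, not an optimized implementation."""
    rows, cols = len(img), len(img[0])
    half = size // 2
    f = gaussian_surround(size, c)
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            blur = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    y = min(max(i + dy, 0), rows - 1)
                    x = min(max(j + dx, 0), cols - 1)
                    blur += f[dy + half][dx + half] * img[y][x]
            row.append(math.log(img[i][j]) - math.log(blur))
        out.append(row)
    return out
```

On a perfectly uniform image the surround average equals each pixel, so the output is zero everywhere, which is a convenient sanity check.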
B) MSR (Multi-Scale Retinex)
MSR was developed on the basis of SSR; its computational formula is:
r(x, y) = Σ_{k=1}^{K} w_k { log S(x, y) − log[F_k(x, y) * S(x, y)] },
where the w_k are weights and K is usually taken to be 3, i.e., the high-, middle-, and low-frequency components are considered and enhanced separately and then weighted to form the final enhancement result.
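A self-contained sketch of the multi-scale weighting, using a uniform (box) surround in place of the Gaussian F_k to keep it short; the radii and the equal weights w_k = 1/3 are illustrative assumptions.

```python
import math

def box_blur(img, radius):
    """Uniform surround (a crude stand-in for the Gaussian F_k), with
    clamped borders."""
    rows, cols = len(img), len(img[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            acc = n = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y = min(max(i + dy, 0), rows - 1)
                    x = min(max(j + dx, 0), cols - 1)
                    acc += img[y][x]
                    n += 1
            row.append(acc / n)
        out.append(row)
    return out

def msr(img, radii=(1, 2, 4), weights=(1 / 3, 1 / 3, 1 / 3)):
    """Multi-scale Retinex: r = sum_k w_k * (log S - log[F_k * S]) over K = 3
    surround scales. Radii and equal weights are illustrative assumptions."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for radius, w in zip(radii, weights):
        blur = box_blur(img, radius)
        for i in range(rows):
            for j in range(cols):
                out[i][j] += w * (math.log(img[i][j]) - math.log(blur[i][j]))
    return out
```

A pixel brighter than its surround yields a positive response at every scale, so a bright spot on a dark background produces a positive MSR value at its center.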
In this specific embodiment, single-scale Retinex is preferably used to enhance the high-frequency part information and the low-frequency part information of the first Y-component information respectively.
Step S150: synthesize the enhanced high-frequency part information and low-frequency part information of the first Y-component information to obtain the enhanced information of the first Y-component information.
The enhanced information of the first Y-component information is enhanced information in the frequency domain.
Step S160: reconstruct the synthesized enhanced information of the first Y-component information to obtain the second Y-component information.
In this specific embodiment, an inverse wavelet transform is preferably used to reconstruct the frequency-domain enhanced information into the second Y-component information, i.e., the Y component in the spatial domain.
The inverse wavelet transform is also called wavelet reconstruction. The inverse discrete wavelet transform can be expressed as
f(t) = Σ_{m,n} W_f(m, n) ψ̃_{m,n}(t),
where ψ̃_{m,n} is the dual of ψ_{m,n} and can be obtained from a mother wavelet by translation and dilation.
In general, the inverse discrete wavelet transform requires the discrete wavelet sequence {ψ_{j,k}(t)}_{j,k∈Z} to form a wavelet frame. Let the lower and upper bounds of the frame be A and B respectively. When A = B (a tight frame), it follows from the theory of wavelet frames that the inverse of the discrete wavelet transform is
f(t) = (1/A) Σ_{j,k} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t).
When A and B are unequal but close to each other, the inverse of the discrete wavelet transform can be approximated as
f(t) ≈ (2 / (A + B)) Σ_{j,k} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t).
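Concretely, the reconstruction of step S160 can be sketched as the inverse of a one-level averaging Haar decomposition. The analysis convention a = (p+q+r+s)/4, h = (p+q−r−s)/4, v = (p−q+r−s)/4, d = (p−q−r+s)/4 per 2×2 block [[p, q], [r, s]] is assumed here; the patent does not fix a particular wavelet.

```python
def haar_idwt2(a1, h1, v1, d1):
    """Inverse of a one-level averaging Haar decomposition (wavelet
    reconstruction, step S160). Assumes the analysis convention
    a=(p+q+r+s)/4, h=(p+q-r-s)/4, v=(p-q+r-s)/4, d=(p-q-r+s)/4
    per 2x2 block; a sketch, not the patent's exact filter bank."""
    rows, cols = len(a1), len(a1[0])
    img = [[0.0] * (2 * cols) for _ in range(2 * rows)]
    for i in range(rows):
        for j in range(cols):
            a, h, v, d = a1[i][j], h1[i][j], v1[i][j], d1[i][j]
            img[2 * i][2 * j]         = a + h + v + d   # p
            img[2 * i][2 * j + 1]     = a + h - v - d   # q
            img[2 * i + 1][2 * j]     = a - h + v - d   # r
            img[2 * i + 1][2 * j + 1] = a - h - v + d   # s
    return img
```

Feeding in the (possibly enhanced) sub-bands yields the full-size second Y-component plane; with unmodified sub-bands the reconstruction is exact.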
Step S170: superpose the second Y-component information with the UV component information in the image information of the target image to obtain the enhanced information of the image information of the target image.
In this step, the second Y-component information is superposed with the unprocessed UV component information in the image information of the target image, yielding the enhanced information of the image information of the target image in the YUV color space.
Step S180: convert the enhanced information of the image information of the target image to the RGB color space to obtain the enhanced image of the target image.
Step S190: output the enhanced image of the target image.
The enhanced target image makes it easy for the client to obtain sufficient image detail from the brightness-enhanced image for analysis, so as to obtain accurate surveillance information and provide a reference for decision-making.
With the above image enhancement method, the luminance component of the image is decomposed into a high-frequency part and a low-frequency part, the two parts are enhanced separately and then synthesized, and the color information is finally superposed to obtain the final enhanced image. This improves the overall brightness of the image while preserving, or even improving, the contrast of its details.
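The whole pipeline S110–S190 can be sketched end to end on a 2×2 image. The BT.601 conversion, the averaging Haar split, and a plain detail gain (standing in for the Retinex enhancement of step S140) are all illustrative assumptions; with detail_gain = 1 the pipeline reproduces its input, which makes the structure easy to check.

```python
def enhance_image(rgb, detail_gain=1.0):
    """End-to-end sketch of steps S110-S190 for a 2x2 RGB image.
    BT.601 full-range conversion, an averaging Haar split, and a plain
    detail gain (in place of the Retinex step) are all assumptions."""
    # S110/S120: RGB -> YUV; keep the UV planes aside untouched
    Y = [[0.299 * r + 0.587 * g + 0.114 * b for r, g, b in row] for row in rgb]
    U = [[-0.147 * r - 0.289 * g + 0.436 * b for r, g, b in row] for row in rgb]
    V = [[0.615 * r - 0.515 * g - 0.100 * b for r, g, b in row] for row in rgb]
    # S130: one averaging Haar step -> low part a, high parts h, v, d
    p, q, rr, s = Y[0][0], Y[0][1], Y[1][0], Y[1][1]
    a = (p + q + rr + s) / 4.0
    h = (p + q - rr - s) / 4.0
    v = (p - q + rr - s) / 4.0
    d = (p - q - rr + s) / 4.0
    # S140/S150: enhance the parts (here: scale the details) and synthesize
    h, v, d = h * detail_gain, v * detail_gain, d * detail_gain
    # S160: wavelet reconstruction -> the second Y-component information
    Y2 = [[a + h + v + d, a + h - v - d],
          [a - h + v - d, a - h - v + d]]
    # S170/S180: superpose the untouched UV and convert back to RGB
    out = []
    for i in range(2):
        row = []
        for j in range(2):
            y, u, vv = Y2[i][j], U[i][j], V[i][j]
            row.append((y + 1.140 * vv,
                        y - 0.395 * u - 0.581 * vv,
                        y + 2.032 * u))
        out.append(row)
    return out
```

Because the UV planes pass through unchanged, only the luminance detail is affected by the gain, mirroring the patent's design choice of enhancing the Y component alone.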
Second specific embodiment
Fig. 4 is a structural block diagram of the image enhancement apparatus provided by the second specific embodiment of the present invention. Referring to Fig. 4, the image enhancement apparatus 200 comprises:
a first color-space conversion module 210, configured to convert a target image to the YUV color space to obtain image information of the target image;
a first extraction module 220, configured to extract the first Y-component information and the UV component information respectively from the image information of the target image;
a second extraction module 230, configured to extract the high-frequency part information and the low-frequency part information of the first Y-component information.
In this specific embodiment, the second extraction module 230 uses a wavelet transform to convert the first Y-component information from the spatial domain to the frequency domain, obtaining its high-frequency part information and low-frequency part information;
an enhancement module 240, configured to enhance the high-frequency part information and the low-frequency part information of the first Y-component information respectively.
In this specific embodiment, the Retinex algorithm is preferably used to enhance the high-frequency part information and the low-frequency part information respectively;
a synthesis module 250, configured to synthesize the enhanced high-frequency part information and low-frequency part information of the first Y-component information to obtain the enhanced information of the first Y-component information;
a reconstruction module 260, configured to reconstruct the synthesized enhanced information of the first Y-component information to obtain the second Y-component information.
In this specific embodiment, an inverse wavelet transform is preferably used to reconstruct the frequency-domain enhanced information of the first Y-component information into the second Y-component information, i.e., the luminance (Y) component in the spatial domain;
a superposition module 270, configured to superpose the second Y-component information with the UV component information in the image information of the target image to obtain the enhanced information of the image information of the target image;
a second color-space conversion module 280, configured to convert the enhanced information of the image information of the target image to the RGB color space to obtain the enhanced image of the target image; and
an output module 290, configured to output the enhanced image of the target image.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments may be understood with reference to one another.
In addition, the flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present invention. Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by combinations of dedicated hardware and computer instructions.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises that element.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; various modifications and variations will occur to those skilled in the art. Any amendment, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection. It should be noted that similar reference numerals and letters denote similar items in the drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
Claims (10)
1. An image enhancement method, characterized in that the method comprises:
converting a target image to the YUV color space to obtain image information of the target image;
extracting first Y-component information from the image information of the target image;
extracting high-frequency part information and low-frequency part information of the first Y-component information;
enhancing the high-frequency part information and the low-frequency part information of the first Y-component information respectively;
synthesizing the enhanced high-frequency part information and low-frequency part information of the first Y-component information to obtain enhanced information of the first Y-component information;
reconstructing the synthesized enhanced information of the first Y-component information to obtain second Y-component information;
superposing the second Y-component information with the UV component information in the image information of the target image to obtain enhanced information of the image information of the target image;
converting the enhanced information of the image information of the target image to the RGB color space to obtain an enhanced image of the target image;
outputting the enhanced image of the target image.
2. image enchancing method as claimed in claim 1, is characterized in that: described HFS information refers to the image information at the violent place of image intensity change, and described low frequency part information refers to the image information at the mild place of image intensity change.
3. image enchancing method as claimed in claim 1, is characterized in that, extracts the described HFS information of the first Y-component information and the step of low frequency part information comprises:
HFS information and the low frequency part information of described first Y-component information is extracted by wavelet transformation.
4. image enchancing method as claimed in claim 3, is characterized in that, comprise the step that HFS information and the low frequency part information of described first Y-component information strengthen respectively:
By Retinex algorithm, the HFS information of described first Y-component information and low frequency part information are strengthened respectively.
5. The image enhancement method of claim 4, characterized in that the step of reconstructing the synthesized enhancement signal of the first Y-component information to obtain the second Y-component information comprises:
reconstructing the synthesized enhancement signal of the first Y-component information into the second Y-component information by inverse wavelet transform.
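Claims 3 and 5 bracket the enhancement step with a wavelet analysis/synthesis pair. As an illustration (the patent does not name a wavelet family), a single-level 2-D Haar transform splits the Y plane into one low-frequency approximation subband and three high-frequency detail subbands, and its inverse reconstructs the input exactly:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT (even-sized input).
    Returns (LL, (LH, HL, HH)): low-frequency approximation + detail subbands."""
    lo = (img[0::2, :] + img[1::2, :]) / 2
    hi = (img[0::2, :] - img[1::2, :]) / 2
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2: perfect reconstruction of the original image."""
    lh, hl, hh = bands
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * lo.shape[0], lo.shape[1]))
    img[0::2, :], img[1::2, :] = lo + hi, lo - hi
    return img

img = np.arange(16, dtype=float).reshape(4, 4)
ll, bands = haar_dwt2(img)
assert np.allclose(haar_idwt2(ll, bands), img)  # analysis/synthesis round-trip
```

In the claimed method the subbands would be enhanced between the forward and inverse transforms; the perfect-reconstruction property guarantees that unmodified subbands pass through unchanged.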
6. An image enhancement apparatus, characterized in that the apparatus comprises:
a first color-space conversion module for transforming a target image into a YUV color space to obtain image information of the target image;
a first extraction module for extracting first Y-component information from the image information of the target image;
a second extraction module for extracting the high-frequency part information and the low-frequency part information of the first Y-component information;
an enhancement module for separately enhancing the high-frequency part information and the low-frequency part information of the first Y-component information;
a synthesis module for synthesizing the enhanced high-frequency part information and low-frequency part information of the first Y-component information to obtain enhancement information of the first Y-component information;
a reconstruction module for reconstructing the synthesized enhancement signal of the first Y-component information to obtain second Y-component information;
a superposition module for superposing the second Y-component information with the UV-component information in the image information of the target image to obtain enhancement information of the image information of the target image;
a second color-space conversion module for transforming the enhancement information of the image information of the target image into the RGB color space to obtain an enhanced image of the target image; and
an output module for outputting the enhanced image of the target image.
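The apparatus of claim 6 maps one module to each step of the method. A hypothetical structural sketch of that composition (the attribute names mirror the claim; the stand-in lambdas are placeholders for illustration, not the patented internals):

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np

@dataclass
class ImageEnhancerDevice:
    """One callable per module of claim 6, chained in claim order."""
    to_yuv: Callable       # first color-space conversion module
    extract_y: Callable    # first extraction module
    split_bands: Callable  # second extraction module
    enhance: Callable      # enhancement module
    synthesize: Callable   # synthesis module
    reconstruct: Callable  # reconstruction module
    superpose: Callable    # superposition module
    to_rgb: Callable       # second color-space conversion module

    def run(self, rgb):
        yuv = self.to_yuv(rgb)
        y1 = self.extract_y(yuv)
        low, high = self.split_bands(y1)
        low_e, high_e = self.enhance(low, high)
        y_info = self.synthesize(low_e, high_e)
        y2 = self.reconstruct(y_info)
        yuv_e = self.superpose(y2, yuv)
        return self.to_rgb(yuv_e)  # output module: the enhanced image

# Identity stand-ins: with no real enhancement, the chain must round-trip.
dev = ImageEnhancerDevice(
    to_yuv=lambda rgb: rgb,
    extract_y=lambda yuv: yuv[..., 0],
    split_bands=lambda y: (y * 0.5, y * 0.5),
    enhance=lambda lo, hi: (lo, hi),
    synthesize=lambda lo, hi: lo + hi,
    reconstruct=lambda y: y,
    superpose=lambda y2, yuv: np.concatenate(
        [y2[..., None], yuv[..., 1:]], axis=-1),
    to_rgb=lambda yuv: yuv,
)
rgb = np.random.default_rng(2).random((4, 4, 3))
assert np.allclose(dev.run(rgb), rgb)
```

Each module is swappable in isolation, which matches the claim structure: e.g. replacing `split_bands`/`reconstruct` with a wavelet pair and `enhance` with a Retinex step yields the claimed configuration.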
7. The image enhancement apparatus of claim 6, characterized in that: the high-frequency part information refers to image information where the image intensity changes sharply, and the low-frequency part information refers to image information where the image intensity changes gently.
8. The image enhancement apparatus of claim 7, characterized in that: the second extraction module extracts the high-frequency part information and the low-frequency part information of the first Y-component information by wavelet transform.
9. The image enhancement apparatus of claim 7, characterized in that: the enhancement module separately enhances the high-frequency part image and the low-frequency part image of the first Y-component information by a Retinex algorithm.
10. The image enhancement apparatus of claim 7, characterized in that: the reconstruction module reconstructs the synthesized enhancement signal of the first Y-component information into the second Y-component information by inverse wavelet transform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510799661.XA CN105427257A (en) | 2015-11-18 | 2015-11-18 | Image enhancement method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105427257A true CN105427257A (en) | 2016-03-23 |
Family
ID=55505438
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510799661.XA Pending CN105427257A (en) | 2015-11-18 | 2015-11-18 | Image enhancement method and apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105427257A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090074317A1 (en) * | 2007-09-19 | 2009-03-19 | Samsung Electronics Co., Ltd. | System and method for reducing halo effect in image enhancement |
CN103440623A (en) * | 2013-08-02 | 2013-12-11 | 中北大学 | Method for improving image definition in foggy days based on imaging model |
Non-Patent Citations (1)
Title |
---|
Zhang Hongying et al.: "Retinex night-time image enhancement algorithm based on YUV color space", Science Technology and Engineering * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107025631A (en) * | 2017-03-13 | 2017-08-08 | 深圳市嘉和顺信息科技有限公司 | A kind of image processing method, device and the equipment of golf course figure |
CN107680064A (en) * | 2017-10-31 | 2018-02-09 | 长沙准光里电子科技有限公司 | Computer-readable recording medium |
CN108680137A (en) * | 2018-04-24 | 2018-10-19 | 天津职业技术师范大学 | Earth subsidence detection method and detection device based on unmanned plane and Ground Penetrating Radar |
CN109389560A (en) * | 2018-09-27 | 2019-02-26 | 深圳开阳电子股份有限公司 | A kind of adaptive weighted filter image denoising method, device and image processing equipment |
CN109389560B (en) * | 2018-09-27 | 2022-07-01 | 深圳开阳电子股份有限公司 | Adaptive weighted filtering image noise reduction method and device and image processing equipment |
CN110599406A (en) * | 2019-03-18 | 2019-12-20 | 上海立可芯半导体科技有限公司 | Image enhancement method and device |
US11915392B2 (en) | 2019-03-18 | 2024-02-27 | Shanghai Linkchip Semiconductor Technology Co., Ltd. | Image enhancement method and apparatus |
CN110599406B (en) * | 2019-03-18 | 2022-05-03 | 上海立可芯半导体科技有限公司 | Image enhancement method and device |
CN110278425A (en) * | 2019-07-04 | 2019-09-24 | 潍坊学院 | Image enchancing method, device, equipment and storage medium |
CN110365914A (en) * | 2019-07-24 | 2019-10-22 | 中国人民解放军国防科技大学 | Image dynamic range widening method and system |
CN111225278A (en) * | 2020-03-02 | 2020-06-02 | 新疆大学 | Method and device for enhancing video under low illumination |
CN113747046A (en) * | 2020-05-29 | 2021-12-03 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN113359734B (en) * | 2021-06-15 | 2022-02-22 | 苏州工业园区报关有限公司 | Logistics auxiliary robot based on AI |
CN113359734A (en) * | 2021-06-15 | 2021-09-07 | 苏州工业园区报关有限公司 | Logistics auxiliary robot based on AI |
CN113255571B (en) * | 2021-06-16 | 2021-11-30 | 中国科学院自动化研究所 | anti-JPEG compression fake image detection method |
CN113255571A (en) * | 2021-06-16 | 2021-08-13 | 中国科学院自动化研究所 | anti-JPEG compression fake image detection method |
WO2023098251A1 (en) * | 2021-12-03 | 2023-06-08 | 腾讯音乐娱乐科技(深圳)有限公司 | Image processing method, device, and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105427257A (en) | Image enhancement method and apparatus | |
CN101609549B (en) | Multi-scale geometric analysis super-resolution processing method of video blurred image | |
CN103268598B (en) | Based on the low-light (level) low altitude remote sensing image Enhancement Method of Retinex theory | |
Bhatnagar et al. | An image fusion framework based on human visual system in framelet domain | |
CN103295204B (en) | A kind of image self-adapting enhancement method based on non-down sampling contourlet transform | |
CN101103378A (en) | Device and method for creating a saliency map of an image | |
CN104574293A (en) | Multiscale Retinex image sharpening algorithm based on bounded operation | |
CN102289792A (en) | Method and system for enhancing low-illumination video image | |
CN102903081A (en) | Low-light image enhancement method based on red green blue (RGB) color model | |
Shen et al. | Convolutional neural pyramid for image processing | |
CN106875358A (en) | Image enchancing method and image intensifier device based on Bayer format | |
CN103607589B (en) | JND threshold value computational methods based on hierarchy selection visual attention mechanism | |
CN101493939A (en) | Method for detecting cooked image based on small wave domain homomorphic filtering | |
Li et al. | Low illumination video image enhancement | |
CN102930508A (en) | Image residual signal based non-local mean value image de-noising method | |
Beghdadi et al. | A critical analysis on perceptual contrast and its use in visual information analysis and processing | |
CN105427255A (en) | GRHP based unmanned plane infrared image detail enhancement method | |
Ahmed et al. | PIQI: perceptual image quality index based on ensemble of Gaussian process regression | |
Arulkumar et al. | Super resolution and demosaicing based self learning adaptive dictionary image denoising framework | |
CN106056565A (en) | MRI and PET image fusion method based on multi-scale morphology bilateral filtering decomposition and contrast compression | |
Bhutto et al. | An enhanced image fusion algorithm by combined histogram equalization and fast gray level grouping using multi-scale decomposition and gray-PCA | |
Hui et al. | Multi-channel adaptive partitioning network for block-based image compressive sensing | |
CN111311503A (en) | Night low-brightness image enhancement system | |
Ein-shoka et al. | Quality enhancement of infrared images using dynamic fuzzy histogram equalization and high pass adaptation in DWT | |
Feng et al. | Low-light color image enhancement based on Retinex |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160323 |
|