CN109345485A - Image enhancement method and device, electronic equipment and storage medium - Google Patents
Image enhancement method and device, electronic equipment and storage medium Download PDF Info
- Publication number
- CN109345485A (application CN201811233579.0A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- enhancing
- sampled images
- target image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The disclosure relates to an image enhancement method and device, an electronic equipment and a storage medium. The method comprises: obtaining a target image to be enhanced, and performing down-sampling processing on the target image to obtain a down-sampled image; inputting the down-sampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image; determining, for each pixel in the target image, its match point in the down-sampled image; for each pixel in the target image, determining the target enhancement parameter of the pixel based on the enhancement data corresponding to its match point in the image enhancement data; and, for each pixel in the target image, adjusting the pixel value of the pixel based on its target enhancement parameter, to obtain the enhanced image corresponding to the target image. The disclosure can reduce the complexity of image enhancement processing while preserving the enhancement effect obtained through deep learning.
Description
Technical field
This disclosure relates to the technical field of image processing, and in particular to an image enhancement method and device, an electronic equipment and a storage medium.
Background technique
When a personal consumer device captures images, it usually cannot directly capture an image with a high dynamic range, and it is also difficult to obtain multiple consecutive frames with different exposures for multi-exposure fusion into a high-dynamic-range image. Single-frame image enhancement techniques are therefore particularly important: by adjusting parameters such as brightness and contrast using only the information in the current image, an enhancement effect close to that of a multi-exposure-fused high-dynamic-range image can be obtained.
A commonly used single-frame enhancement method is image enhancement based on deep learning. In related deep-learning-based methods, the mapping from original images to enhanced images is learned from a pre-collected data set, and the stored parameters of the deep learning network are used to adaptively enhance the image to be enhanced, yielding a good enhancement effect.
However, because convolution and other operations are performed directly on the high-resolution original image, and the high-resolution enhanced image is output directly, the complexity of such deep-learning-based enhancement processing in the related art is high.
Summary of the invention
To overcome the problems in the related art, the disclosure provides an image enhancement method and device, an electronic equipment and a storage medium, so as to reduce the complexity of image enhancement processing while preserving the enhancement effect obtained through deep learning.
According to a first aspect of the embodiments of the present disclosure, an image enhancement method is provided, comprising:
obtaining a target image to be enhanced, and performing down-sampling processing on the target image to obtain a down-sampled image;
inputting the down-sampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image; wherein the deep learning network is trained from sample images and the sample enhanced images corresponding to the sample images, and the image enhancement data are data characterizing the degree of enhancement of the enhanced down-sampled image relative to the down-sampled image;
determining, for each pixel in the target image, its match point in the down-sampled image;
for each pixel in the target image, determining the target enhancement parameter of the pixel based on the enhancement data corresponding to the pixel's match point in the image enhancement data;
for each pixel in the target image, adjusting the pixel value of the pixel based on its target enhancement parameter, to obtain the enhanced image corresponding to the target image.
Optionally, the determining, for each pixel in the target image, its match point in the down-sampled image comprises:
for each pixel in the target image, determining the corresponding pixel of that pixel in the down-sampled image, searching, in a search region of size M × N centered on the corresponding pixel, for the pixel whose value has the smallest absolute difference from that of the corresponding pixel, and taking the pixel found as the match point of that pixel in the down-sampled image;
wherein, for a pixel with coordinates (u, v) in the target image, the coordinates of its corresponding pixel in the down-sampled image are (u/x, v/x), and x denotes the down-sampling factor.
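The match-point search above can be sketched as follows for grayscale arrays. The translation is ambiguous about which value the candidates are compared against; this sketch reads it as the target pixel's own value, and the function name and 3 × 3 window default are illustrative, not from the patent:

```python
import numpy as np

def find_match_point(target, down, u, v, x, M=3, N=3):
    """Locate the match point in the down-sampled image for target pixel (u, v).

    The coarse correspondence is (u // x, v // x); within an M x N search
    window around it we pick the pixel whose value is closest, in absolute
    difference, to the target pixel's value.
    """
    h, w = down.shape
    cu, cv = min(u // x, h - 1), min(v // x, w - 1)
    best, best_diff = (cu, cv), float("inf")
    for du in range(-(M // 2), M // 2 + 1):
        for dv in range(-(N // 2), N // 2 + 1):
            iu, iv = cu + du, cv + dv
            if 0 <= iu < h and 0 <= iv < w:
                diff = abs(float(down[iu, iv]) - float(target[u, v]))
                if diff < best_diff:
                    best, best_diff = (iu, iv), diff
    return best
```

When M = N = 1 this degenerates to the direct correspondence (u/x, v/x).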
Optionally, the image enhancement data comprise: mapping parameters by which each pixel in the down-sampled image is mapped to its corresponding pixel, where the corresponding pixel of any pixel is the pixel at the same position in the enhanced down-sampled image;
the determining, for each pixel in the target image, the target enhancement parameter of the pixel based on the enhancement data corresponding to the pixel's match point in the image enhancement data comprises:
for each pixel in the target image, determining, from the mapping parameters, the target parameter corresponding to the match point of that pixel, and taking the determined target parameter as the target enhancement parameter of that pixel.
Optionally, the image enhancement data comprise: the enhanced image of the down-sampled image;
the determining, for each pixel in the target image, the target enhancement parameter of the pixel based on the enhancement data corresponding to the pixel's match point in the image enhancement data comprises:
for each pixel in the target image, determining, in the enhanced image of the down-sampled image, the target point at the same position as the match point of that pixel, calculating the ratio of the pixel value of the target point to the pixel value of the match point of that pixel, and determining the ratio as the target enhancement parameter of that pixel.
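A minimal sketch of this ratio form for grayscale arrays; the epsilon guard against division by zero is my addition, not part of the patent text:

```python
import numpy as np

def target_enhance_param(down, enhanced_down, match):
    """Target enhancement parameter of a target-image pixel: the ratio of
    the enhanced down-sampled image's value at the pixel's match point to
    the down-sampled image's value at the same point."""
    iu, iv = match
    eps = 1e-8  # guard against division by zero (my addition)
    return float(enhanced_down[iu, iv]) / (float(down[iu, iv]) + eps)
```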
Optionally, the adjusting, for each pixel in the target image, the pixel value of the pixel based on its target enhancement parameter comprises:
for each pixel in the target image, adjusting the pixel value of the pixel, based on its target enhancement parameter, by the following formula:
A · I_p = O_p
wherein A denotes the target enhancement parameter of the pixel, I_p denotes the pixel value of the pixel before adjustment, and O_p denotes the pixel value of the pixel after adjustment.
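Putting the per-pixel adjustment to work, a sketch that enhances a whole grayscale target image; for brevity it uses the direct correspondence (u/x, v/x) as the match point rather than the optional M × N search, and the epsilon guard is my addition:

```python
import numpy as np

def enhance_image(target, down, enhanced_down, x):
    """Adjust each target pixel by O_p = A * I_p, where A is the ratio of
    enhanced to original value at the pixel's match point."""
    h, w = target.shape
    out = np.empty((h, w), dtype=float)
    for u in range(h):
        for v in range(w):
            iu = min(u // x, down.shape[0] - 1)
            iv = min(v // x, down.shape[1] - 1)
            A = enhanced_down[iu, iv] / max(float(down[iu, iv]), 1e-8)
            out[u, v] = A * target[u, v]
    return out
```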
Optionally, the method further comprises:
determining a first RGB image of the target image in the RGB color mode and a second RGB image of the enhanced image in the RGB color mode;
generating a first luminance image corresponding to the first RGB image, where the brightness value of any pixel in the first luminance image is the maximum of the RGB channel values of the first pixel corresponding to that pixel, and the first pixel is the pixel at the same position in the first RGB image;
generating a second luminance image corresponding to the second RGB image, where the brightness value of any pixel in the second luminance image is the maximum of the RGB channel values of the second pixel corresponding to that pixel, and the second pixel is the pixel at the same position in the second RGB image;
calculating the gain parameter of the second luminance image relative to the first luminance image;
performing brightness enhancement processing on the second RGB image based on the calculated gain parameter, to obtain a brightness-enhanced image.
Optionally, the calculating the gain parameter of the second luminance image relative to the first luminance image comprises:
calculating the gain parameter of the second luminance image relative to the first luminance image as R_p = V_op / V_ip;
wherein R_p denotes the gain parameter of a pixel in the second luminance image relative to its corresponding pixel in the first luminance image, V_op denotes the brightness value of that pixel in the second luminance image, and V_ip denotes the brightness value of the corresponding pixel in the first luminance image.
Optionally, the performing brightness enhancement processing on the second RGB image based on the calculated gain parameter, to obtain a brightness-enhanced image, comprises:
performing brightness enhancement processing on the second RGB image, based on the calculated gain parameter, by the following formula to obtain the brightness-enhanced image:
Y'_{p,c} = Y_{p,c} * R_p
wherein Y'_{p,c} denotes the channel value of channel c of a pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, c takes the values 1, 2 and 3, where 1 denotes the R channel, 2 the G channel and 3 the B channel, and R_p denotes the gain parameter of a pixel in the second luminance image relative to the corresponding pixel in the first luminance image; the pixel in the brightness-enhanced image, the corresponding pixel in the second RGB image, the pixel in the second luminance image and the corresponding pixel in the first luminance image all have the same coordinates.
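The luminance and brightness-enhancement steps above can be sketched as follows for H × W × 3 float arrays. Writing the gain as the ratio V_op / V_ip follows the variable definitions in the text (the patent's own formula images are not reproduced here), and the epsilon guard is my addition:

```python
import numpy as np

def luminance(rgb):
    """Per-pixel brightness value: the maximum of the R, G and B channels."""
    return rgb.max(axis=-1)

def brightness_enhance(rgb1, rgb2):
    """Scale every channel of the second RGB image by the per-pixel gain
    R_p = V_op / V_ip of the second luminance image relative to the first,
    i.e. Y'_{p,c} = Y_{p,c} * R_p."""
    v1 = luminance(rgb1)
    v2 = luminance(rgb2)
    gain = v2 / np.maximum(v1, 1e-8)  # epsilon guard is my addition
    return rgb2 * gain[..., None]
```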
Optionally, the calculating the gain parameter of the second luminance image relative to the first luminance image comprises:
performing exposure correction processing on the second luminance image to obtain a third luminance image;
calculating the gain parameter of the third luminance image relative to the first luminance image.
Optionally, the performing exposure correction processing on the second luminance image to obtain a third luminance image comprises:
performing exposure correction processing on the second luminance image by the following formula to obtain the third luminance image:
wherein V'_op denotes the pixel value of a pixel in the third luminance image, V_op denotes the pixel value of the corresponding pixel in the second luminance image, V_min denotes the minimum pixel value in the second luminance image, V_max denotes the maximum pixel value in the second luminance image, th_l denotes a first preset threshold, th_h denotes a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
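The exposure-correction formula itself does not survive in this text. One plausible form consistent with the quantities it names (V_min, V_max, and thresholds with 0 < th_l < th_h < 1) is a min-max normalization followed by clamping; treat this purely as an assumption, not the patent's actual formula:

```python
import numpy as np

def exposure_correct(v2, th_l=0.05, th_h=0.95):
    """Assumed exposure correction: normalize the second luminance image to
    [0, 1] using its own min and max, then clamp into [th_l, th_h]."""
    vmin, vmax = v2.min(), v2.max()
    norm = (v2 - vmin) / max(vmax - vmin, 1e-8)
    return np.clip(norm, th_l, th_h)
```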
Optionally, after the step of performing brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain the brightness-enhanced image, the method further comprises:
performing color enhancement processing on the brightness-enhanced image to obtain a color-enhanced image.
Optionally, the performing color enhancement processing on the brightness-enhanced image to obtain a color-enhanced image comprises:
performing color enhancement processing on the brightness-enhanced image by the following formula to obtain the color-enhanced image:
wherein Y''_{p,c} denotes the channel value of channel c of a pixel in the color-enhanced image, Y'_{p,c} denotes the channel value of channel c of the corresponding pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, c takes the values 1, 2 and 3, and th_1, th_2 and th_3 denote the third preset thresholds corresponding to the R, G and B channels respectively.
According to a second aspect of the embodiments of the present disclosure, an image enhancement device is provided, comprising:
an obtaining module, configured to obtain a target image to be enhanced and perform down-sampling processing on the target image to obtain a down-sampled image;
a processing module, configured to input the down-sampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image; wherein the deep learning network is trained from sample images and the sample enhanced images corresponding to the sample images, and the image enhancement data are data characterizing the degree of enhancement of the enhanced down-sampled image relative to the down-sampled image;
a first determining module, configured to determine, for each pixel in the target image, its match point in the down-sampled image;
a second determining module, configured to determine, for each pixel in the target image, the target enhancement parameter of the pixel based on the enhancement data corresponding to the pixel's match point in the image enhancement data;
an adjusting module, configured to adjust, for each pixel in the target image, the pixel value of the pixel based on its target enhancement parameter, to obtain the enhanced image corresponding to the target image.
Optionally, the first determining module is specifically configured to: for each pixel in the target image, determine the corresponding pixel of that pixel in the down-sampled image, search, in a search region of size M × N centered on the corresponding pixel, for the pixel whose value has the smallest absolute difference from that of the corresponding pixel, and take the pixel found as the match point of that pixel in the down-sampled image;
wherein, for a pixel with coordinates (u, v) in the target image, the coordinates of its corresponding pixel in the down-sampled image are (u/x, v/x), and x denotes the down-sampling factor.
Optionally, the image enhancement data comprise: mapping parameters by which each pixel in the down-sampled image is mapped to its corresponding pixel, where the corresponding pixel of any pixel is the pixel at the same position in the enhanced down-sampled image;
the second determining module is specifically configured to: for each pixel in the target image, determine, from the mapping parameters, the target parameter corresponding to the match point of that pixel, and take the determined target parameter as the target enhancement parameter of that pixel.
Optionally, the image enhancement data comprise: the enhanced image of the down-sampled image;
the second determining module is specifically configured to: for each pixel in the target image, determine, in the enhanced image of the down-sampled image, the target point at the same position as the match point of that pixel, calculate the ratio of the pixel value of the target point to the pixel value of the match point of that pixel, and determine the ratio as the target enhancement parameter of that pixel.
Optionally, the adjusting module is specifically configured to: for each pixel in the target image, adjust the pixel value of the pixel, based on its target enhancement parameter, by the following formula:
A · I_p = O_p
wherein A denotes the target enhancement parameter of the pixel, I_p denotes the pixel value of the pixel before adjustment, and O_p denotes the pixel value of the pixel after adjustment.
Optionally, the device further comprises:
a third determining module, configured to determine a first RGB image of the target image in the RGB color mode and a second RGB image of the enhanced image in the RGB color mode;
a first generating module, configured to generate a first luminance image corresponding to the first RGB image, where the brightness value of any pixel in the first luminance image is the maximum of the RGB channel values of the first pixel corresponding to that pixel, and the first pixel is the pixel at the same position in the first RGB image;
a second generating module, configured to generate a second luminance image corresponding to the second RGB image, where the brightness value of any pixel in the second luminance image is the maximum of the RGB channel values of the second pixel corresponding to that pixel, and the second pixel is the pixel at the same position in the second RGB image;
a computing module, configured to calculate the gain parameter of the second luminance image relative to the first luminance image;
a first enhancement module, configured to perform brightness enhancement processing on the second RGB image based on the calculated gain parameter, to obtain a brightness-enhanced image.
Optionally, the computing module is specifically configured to calculate the gain parameter of the second luminance image relative to the first luminance image as R_p = V_op / V_ip;
wherein R_p denotes the gain parameter of a pixel in the second luminance image relative to its corresponding pixel in the first luminance image, V_op denotes the brightness value of that pixel in the second luminance image, and V_ip denotes the brightness value of the corresponding pixel in the first luminance image.
Optionally, the first enhancement module is specifically configured to perform brightness enhancement processing on the second RGB image, based on the calculated gain parameter, by the following formula to obtain the brightness-enhanced image:
Y'_{p,c} = Y_{p,c} * R_p
wherein Y'_{p,c} denotes the channel value of channel c of a pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, c takes the values 1, 2 and 3, where 1 denotes the R channel, 2 the G channel and 3 the B channel, and R_p denotes the gain parameter of a pixel in the second luminance image relative to the corresponding pixel in the first luminance image; the pixel in the brightness-enhanced image, the corresponding pixel in the second RGB image, the pixel in the second luminance image and the corresponding pixel in the first luminance image all have the same coordinates.
Optionally, the computing module comprises a correction unit and a computing unit;
the correction unit is configured to perform exposure correction processing on the second luminance image to obtain a third luminance image;
the computing unit is configured to calculate the gain parameter of the third luminance image relative to the first luminance image.
Optionally, the correction unit is specifically configured to perform exposure correction processing on the second luminance image by the following formula to obtain the third luminance image:
wherein V'_op denotes the pixel value of a pixel in the third luminance image, V_op denotes the pixel value of the corresponding pixel in the second luminance image, V_min denotes the minimum pixel value in the second luminance image, V_max denotes the maximum pixel value in the second luminance image, th_l denotes a first preset threshold, th_h denotes a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
Optionally, the device further comprises:
a second enhancement module, configured to perform color enhancement processing on the brightness-enhanced image to obtain a color-enhanced image.
Optionally, the second enhancement module is specifically configured to perform color enhancement processing on the brightness-enhanced image by the following formula to obtain the color-enhanced image:
wherein Y''_{p,c} denotes the channel value of channel c of a pixel in the color-enhanced image, Y'_{p,c} denotes the channel value of channel c of the corresponding pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, c takes the values 1, 2 and 3, and th_1, th_2 and th_3 denote the third preset thresholds corresponding to the R, G and B channels respectively.
According to a third aspect of the embodiments of the present disclosure, an electronic equipment is provided, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the image enhancement method described in the first aspect when executing the program stored in the memory.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of an electronic equipment, the electronic equipment is enabled to perform the image enhancement method described in the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided; when the instructions in the computer program product are executed by a processor of an electronic equipment, the electronic equipment is enabled to perform the image enhancement method described in the first aspect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: by performing down-sampling processing on the high-resolution target image and inputting the resulting low-resolution down-sampled image into the deep learning network, the complexity of image enhancement processing is reduced. After the down-sampled image is input into the deep learning network, each pixel of the target image is mapped based on the obtained image enhancement data corresponding to the down-sampled image and the pixel matching result between the target image and the down-sampled image, so as to obtain the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image while maintaining the good enhancement effect obtained through deep learning.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and constitute part of this specification, illustrate embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.
Fig. 1 is a flow chart of an image enhancement method according to an exemplary embodiment;
Fig. 2 is a flow chart of an image enhancement method according to another exemplary embodiment;
Fig. 3 is a flow chart of an image enhancement method according to another exemplary embodiment;
Fig. 4 is a block diagram of an image enhancement device according to an exemplary embodiment;
Fig. 5 is a block diagram of an electronic equipment according to an exemplary embodiment;
Fig. 6 is a block diagram of a device for image enhancement according to an exemplary embodiment;
Fig. 7 is a block diagram of another device for image enhancement according to an exemplary embodiment.
Specific embodiments
Exemplary embodiments will be described in detail here, with examples illustrated in the accompanying drawings. In the following description, when referring to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of devices and methods, consistent with some aspects of the invention, as detailed in the appended claims.
Compared with a low-dynamic-range (Low-Dynamic Range, LDR) image, a high-dynamic-range (High-Dynamic Range, HDR) image can provide more dynamic range and more image detail. A final HDR image can be synthesized from LDR images taken with different exposure times, using the LDR image with the best detail at each exposure time; the synthesized HDR image can better reflect the visual appearance of people and objects in a real environment.
A consumer device usually cannot directly capture a high-dynamic-range image, and it is difficult to obtain multiple consecutive frames with different exposures for multi-exposure fusion into a high-dynamic-range image. Single-frame image enhancement techniques are therefore particularly important. Single-frame enhancement can obtain an enhancement effect similar to that of multi-exposure fusion by adjusting parameters such as the brightness and contrast of the current image.
Currently, a commonly used single-frame enhancement method is image enhancement based on deep learning. However, because related deep-learning-based methods perform convolution operations directly on the high-resolution original image and output the high-resolution enhanced image directly, their processing complexity is high.
To solve the problems in the prior art, the embodiments of the present disclosure provide an image enhancement method and device, an electronic equipment and a storage medium.
First, an image enhancement method provided by an embodiment of the present disclosure is introduced below.
Fig. 1 is a flow chart of an image enhancement method according to an exemplary embodiment. As shown in Fig. 1, the image enhancement method may comprise the following steps.
In step S11, a target image to be enhanced is obtained, and down-sampling processing is performed on the target image to obtain a down-sampled image.
The executing subject of the image enhancement method shown in this embodiment may be an electronic equipment. In a specific application, the electronic equipment may be a terminal device or a server, for example a smartphone, a tablet computer or a desktop computer. When the electronic equipment needs to perform image enhancement processing on an image, that image may be taken as the target image to be enhanced.
The target image may be a high-resolution image. Moreover, the target image may be a single-channel image or a multi-channel image, and the color space of a multi-channel image may be RGB (Red, Green, Blue), YUV (luminance, chrominance, chroma) or another color space; the disclosure does not limit this.
The electronic equipment may capture the target image through an internal or external camera, or may communicate with another device and receive the target image sent by it. The disclosure does not limit the way in which the electronic equipment obtains the target image.
After the electronic device obtains the target image, down-sampling processing may be performed on the target image to obtain a down-sampled image. The obtained down-sampled image can then be input into a deep learning network, and image enhancement processing is performed on the down-sampled image by the deep learning network.
In this embodiment, x-fold down-sampling processing may be performed on the target image, so that the resolution of the obtained down-sampled image, in both width and height, is 1/x of that of the target image.
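The down-sampling of step S11 can be sketched as follows; the patent does not fix a particular down-sampling method, so x-fold block averaging is assumed here purely for illustration, and the function name and list-of-rows data layout are likewise illustrative:

```python
def downsample(image, x):
    """Down-sample a single-channel image (list of rows) by factor x
    using simple block averaging; the patent does not fix the method,
    so averaging is an illustrative choice."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - h % x, x):
        row = []
        for j in range(0, w - w % x, x):
            block = [image[i + di][j + dj] for di in range(x) for dj in range(x)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 image down-sampled with x = 2 yields a 2x2 image:
img = [[0, 0, 4, 4],
       [0, 0, 4, 4],
       [8, 8, 2, 2],
       [8, 8, 2, 2]]
small = downsample(img, 2)
```

With this averaging choice, the width and height of the result are 1/x those of the original, as stated above.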
In step S12, the down-sampled image is input into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image.
In order to perform image enhancement processing on the down-sampled image, in this embodiment the deep learning network may be trained in advance according to sample images and the sample enhanced images corresponding to the sample images, obtaining a trained deep learning network. The sample images and the corresponding sample enhanced images serve as training samples, and they may be obtained from an existing training library. A sample enhanced image may be obtained by multi-exposure fusion, or by another single-frame image enhancement method; the present disclosure does not limit this. Moreover, in this embodiment, to facilitate training, the resolution of a sample image and its corresponding sample enhanced image may be the same as the resolution of the down-sampled image.
In this embodiment, the structure of the deep learning network may be any existing deep learning network model. The number of training samples may be determined according to actual needs during training. A reasonable loss function or objective function and a corresponding target value may also be set to determine whether the deep learning network has finished training.
After the deep learning network is trained, the network parameters of the deep learning network are determined. Since the deep learning network is trained according to sample images and corresponding sample enhanced images, after the down-sampled image is input into the trained deep learning network, the image enhancement data corresponding to the down-sampled image can be obtained. The image enhancement data are data characterizing the degree of enhancement of the enhanced down-sampled image relative to the down-sampled image. The image enhancement data may take several forms. For example, the image enhancement data may include the mapping parameters that map each pixel in the down-sampled image to its corresponding pixel, where the corresponding pixel of any pixel is the pixel at the same position in the enhanced image of the down-sampled image. As another example, the image enhancement data may include the enhanced image of the down-sampled image. It should be understood that the type of the output result of the deep learning network is the same as that of the image enhancement data corresponding to the down-sampled image; that is, when the output result of the deep learning network is mapping parameters, the image enhancement data corresponding to the down-sampled image are mapping parameters, and when the output result of the deep learning network is an enhanced image, the image enhancement data corresponding to the down-sampled image are the enhanced image of the down-sampled image.
In step S13, the match point, in the down-sampled image, of each pixel in the target image is determined.
Since the resolution of the down-sampled image is lower than that of the target image, the image enhancement data corresponding to the down-sampled image do not correspond to the target image directly. In order to obtain the enhanced image corresponding to the target image, in this embodiment the match point, in the down-sampled image, of each pixel in the target image may first be determined; then the target enhancement parameter corresponding to that pixel is determined; finally, the pixel value of the pixel is adjusted using the target enhancement parameter.
In one implementation, determining the match point in the down-sampled image of each pixel in the target image may include:
for each pixel in the target image, determining the corresponding pixel of the pixel in the down-sampled image, and searching, in a search region of size M × N centered on the corresponding pixel, for the pixel whose pixel value has the smallest absolute difference from the pixel value of the corresponding pixel, the found pixel being taken as the match point of the pixel in the down-sampled image.
Here, for a pixel with coordinates (u, v) in the target image, the coordinates of the corresponding pixel of that pixel in the down-sampled image are (u/x, v/x), where x denotes the down-sampling multiple.
In the above implementation, if the value of u/x or v/x is not an integer, it is rounded down. For example, if the coordinates of a pixel in the target image are (9, 9) and x is 2, the coordinates of the corresponding pixel are (4, 4).
The extent of the search region may be set according to actual needs; for example, M × N may be 3 × 3. M and N may usually be set to odd numbers. When part of a search region exceeds the boundary of the down-sampled image, that is, when the search region contains virtual pixels that do not exist in the down-sampled image, the search is performed with the pixels of the down-sampled image contained in the search region as the search objects. For example, if the coordinates of the corresponding pixel are (1, 4) and the size of the search region is 3 × 3, the search region contains 5 pixels of the down-sampled image, whose coordinates are (1, 3), (1, 5), (2, 3), (2, 4) and (2, 5), and the search range is these 5 pixels.
For the pixel with coordinates (u, v) in the target image, taking the pixel in the search region whose pixel value has the smallest absolute difference from the pixel value of the corresponding pixel as the match point of pixel (u, v) in the down-sampled image means that the found match point is the pixel in the down-sampled image closest to pixel (u, v).
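The match-point search of step S13 can be sketched as follows. The text can be read as comparing candidate pixels either with the corresponding pixel's value or with the target pixel's own value; this sketch compares with the target pixel's value (a reading under which the corresponding pixel need not be excluded from the window), and all names and the nested-list layout are illustrative:

```python
def find_match(target, small, u, v, x, M=3, N=3):
    """Search an M x N window of the down-sampled image `small`, centered
    on the corresponding point (u // x, v // x) (rounded down), for the
    pixel whose value is closest to that of target pixel (u, v); window
    cells outside the image ("virtual pixels") are skipped."""
    h, w = len(small), len(small[0])
    cu, cv = u // x, v // x                    # corresponding pixel
    best, best_diff = None, float('inf')
    for i in range(cu - M // 2, cu + M // 2 + 1):
        for j in range(cv - N // 2, cv + N // 2 + 1):
            if 0 <= i < h and 0 <= j < w:      # keep only real pixels
                d = abs(small[i][j] - target[u][v])
                if d < best_diff:
                    best, best_diff = (i, j), d
    return best

small = [[10, 50], [90, 60]]
target = [[0, 0, 0, 0],
          [0, 0, 0, 58],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
match = find_match(target, small, 1, 3, 2)   # pixel value 58 is closest to 60
```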
In step S14, for each pixel in the target image, the target enhancement parameter corresponding to the pixel is determined based on the enhancement data, in the image enhancement data, corresponding to the match point of the pixel.
For each pixel in the target image, after the match point of the pixel in the down-sampled image is determined, the target enhancement parameter corresponding to the pixel can be determined based on the enhancement data corresponding to that match point, and the pixel value of the pixel is further adjusted based on the target enhancement parameter.
Specifically, with different forms of image enhancement data, the manner of determining the target enhancement parameter corresponding to each pixel in the target image also differs. Two ways are given below as illustrations.
Optionally, in the first way, the image enhancement data may include the mapping parameters that map each pixel in the down-sampled image to its corresponding pixel, where the corresponding pixel of any pixel is the pixel at the same position in the enhanced image of the down-sampled image. Mapping a pixel to its corresponding pixel specifically means adjusting the pixel value of that pixel to the pixel value of the corresponding pixel.
Correspondingly, for each pixel in the target image, determining the target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data may include: for each pixel in the target image, determining, from the mapping parameters, the target parameter corresponding to the match point of the pixel, and taking the determined target parameter as the target enhancement parameter corresponding to the pixel.
In the first way, when the target image is a single-channel image, the mapping parameter that maps each pixel in the down-sampled image to its corresponding pixel may be a real number. When the target image has three channels, the mapping parameters that map each pixel in the down-sampled image to its corresponding pixel may form a 3 × 4 matrix, which contains the mapping parameters by which the three channel values of each pixel in the down-sampled image are respectively mapped to the respective channel values of the corresponding pixel. The matrix may be expressed with rows m_1, m_2 and m_3, where m_1 denotes the mapping parameters by which the first channel value of each pixel in the down-sampled image is mapped to the first channel value of the corresponding pixel, m_2 denotes the mapping parameters for the second channel value, and m_3 denotes the mapping parameters for the third channel value.
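If the 3 × 4 matrix is read as a per-pixel affine color transform applied to the homogeneous pixel (R, G, B, 1), a reading consistent with the stated dimensions but an assumption, since the matrix itself is reproduced in the original as an image, its application can be sketched as:

```python
def apply_affine(m, rgb):
    """Assumed reading of the 3x4 per-pixel matrix: an affine color
    transform applied to the homogeneous pixel (r, g, b, 1); row m_i
    produces output channel i."""
    r, g, b = rgb
    vec = (r, g, b, 1.0)
    return tuple(sum(mi * v for mi, v in zip(row, vec)) for row in m)

# An identity transform leaves the pixel unchanged:
identity = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]
out = apply_affine(identity, (10, 20, 30))
```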
Optionally, in the second way, the image enhancement data include the enhanced image of the down-sampled image.
Correspondingly, for each pixel in the target image, determining the target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data may include: for each pixel in the target image, determining the target point, in the enhanced image of the down-sampled image, whose position is the same as that of the match point of the pixel; calculating the ratio of the pixel value of the target point to the pixel value of the match point of the pixel; and determining the ratio as the target enhancement parameter corresponding to the pixel.
In the second way, when the target image is a single-channel image, the ratio of the pixel value of the target point to the pixel value of the match point of the pixel may be a real number; when the target image has three channels, the ratio of the pixel value of the target point to the pixel value of the match point of the pixel comprises the ratios of the three channel values of the target point to the respective channel values of the match point of the pixel.
Compared with the down-sampled image, the enhanced image of the down-sampled image embodies the mapping relationship of the image enhancement. Therefore, based on this mapping relationship, the target enhancement parameter corresponding to each pixel in the target image can be determined. Specifically, the mapping relationship may be the ratio of the pixel value of the target point of each pixel in the down-sampled image to the pixel value of that pixel in the down-sampled image.
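The ratio computation of the second way can be sketched as follows for the single-channel case (names and data layout are illustrative):

```python
def target_param(enhanced_small, small, match):
    """Second way (step S14): the target enhancement parameter of a target
    image pixel is the ratio of the pixel value of the target point (same
    position in the enhanced down-sampled image as the match point) to the
    pixel value of the match point itself."""
    i, j = match
    return enhanced_small[i][j] / small[i][j]

small = [[50, 100]]
enhanced_small = [[75, 120]]
param = target_param(enhanced_small, small, (0, 0))   # 75 / 50
```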
In step S15, for each pixel in the target image, the pixel value of the pixel is adjusted based on the target enhancement parameter corresponding to the pixel, obtaining the enhanced image corresponding to the target image.
For each pixel in the target image, after the target enhancement parameter corresponding to the pixel is determined, the pixel value of the pixel can be adjusted based on the target enhancement parameter. This adjustment process is exactly the process of performing image enhancement processing on the target image.
Optionally, in one implementation, for each pixel in the target image, adjusting the pixel value of the pixel based on the target enhancement parameter corresponding to the pixel may include: adjusting the pixel value of the pixel by the following formula, based on the target enhancement parameter corresponding to the pixel:
A · I_p = O_p
wherein A denotes the target enhancement parameter corresponding to the pixel, I_p denotes the pixel value of the pixel before adjustment, and O_p denotes the pixel value of the pixel after adjustment.
Specifically, when the form of the target image differs, the form of the above target enhancement parameter also differs. For example, when the target image is a single-channel image, the target enhancement parameter may be a real number; when the target image is a three-channel image, the target enhancement parameter may be a 3 × 4 matrix containing the target enhancement parameters corresponding to the three channels of the pixel. The matrix may be expressed with rows t_1, t_2 and t_3, where t_1 denotes the target enhancement parameter of the first channel corresponding to the pixel, t_2 denotes that of the second channel, and t_3 denotes that of the third channel.
By adjusting the pixel value of each pixel in the target image with the above formula, image enhancement processing of the target image is achieved. Thus, the obtained enhanced image is the image after image enhancement processing of the target image.
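The adjustment of step S15 can be sketched as follows for the single-channel case, once a target enhancement parameter has been determined for every pixel (names are illustrative):

```python
def apply_enhancement(target, params):
    """Step S15: adjust each pixel by its target enhancement parameter,
    O_p = A * I_p (single-channel case; for three channels the
    multiplication is applied per channel)."""
    return [[a * v for a, v in zip(prow, trow)]
            for prow, trow in zip(params, target)]

enhanced = apply_enhancement([[100, 200]], [[1.5, 0.5]])
```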
The technical solution provided by the embodiments of the present disclosure can include the following beneficial effects: the high-resolution target image is down-sampled, and the obtained low-resolution down-sampled image is input into the deep learning network, which reduces the complexity of the image enhancement processing. After the down-sampled image is input into the deep learning network, mapping processing is performed on each pixel of the target image based on the obtained image enhancement data corresponding to the down-sampled image and the pixel matching result between the target image and the down-sampled image, obtaining the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image while maintaining the good image enhancement effect obtained by deep learning.
Image enhancement of the target image is realized based on steps S11-S15, and the enhancement effect rests specifically on the trained deep learning network. In practical applications, however, the training set is limited while real scenes vary endlessly; that is, because the training set is limited, a good image enhancement effect cannot be produced for the images of every scene that may occur. In general, a limited training set may cause the enhanced image to exhibit problems such as color cast, over-exposure and under-exposure. In this case, further image enhancement processing may be performed on the basis of the enhanced image obtained in step S15.
In the embodiment shown in Fig. 2, further image enhancement processing is performed on the enhanced image specifically for the above color cast problem.
Fig. 2 is a flowchart of an image enhancement method shown according to another exemplary embodiment. As shown in Fig. 2, the image enhancement method may comprise the following steps.
In step S21, a target image to be enhanced is obtained, and down-sampling processing is performed on the target image to obtain a down-sampled image.
In step S22, the down-sampled image is input into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image.
In step S23, the match point, in the down-sampled image, of each pixel in the target image is determined.
In step S24, for each pixel in the target image, the target enhancement parameter corresponding to the pixel is determined based on the enhancement data, in the image enhancement data, corresponding to the match point of the pixel.
In step S25, for each pixel in the target image, the pixel value of the pixel is adjusted based on the target enhancement parameter corresponding to the pixel, obtaining the enhanced image corresponding to the target image.
The above steps S21-S25 may be identical to steps S11-S15 and are not described again here.
In step S26, a first RGB image of the target image in the RGB color mode, and a second RGB image of the enhanced image in the RGB color mode, are determined.
In this embodiment, the target image is a three-channel image. If the color mode of the target image is the RGB color mode, the target image is taken directly as the first RGB image; if the color mode of the target image is YUV or another color mode, the color mode of the target image is converted to the RGB color mode to obtain the first RGB image. The conversion method can refer to the prior art and is not discussed in detail here.
Likewise, for the enhanced image, the second RGB image of the enhanced image in the RGB color mode can be determined with reference to the manner of determining the first RGB image of the target image in the RGB color mode.
In step S27, a first luminance image corresponding to the first RGB image is generated, where the brightness value of any pixel in the first luminance image is the maximum of the RGB channel values of the first pixel corresponding to that pixel, the first pixel being the pixel at the same position in the first RGB image.
The generated first luminance image can be used, together with the second luminance image generated in the following step, to determine the gain parameters of the second luminance image relative to the first luminance image.
In step S28, a second luminance image corresponding to the second RGB image is generated, where the brightness value of any pixel in the second luminance image is the maximum of the RGB channel values of the second pixel corresponding to that pixel, the second pixel being the pixel at the same position in the second RGB image.
The generated second luminance image can be used, together with the first luminance image generated in the above step, to determine the gain parameters of the second luminance image relative to the first luminance image.
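The luminance images of steps S27 and S28 can be sketched as follows, with pixels represented as (R, G, B) tuples (an illustrative layout):

```python
def luminance(rgb_image):
    """Steps S27/S28: the brightness value of each pixel of the luminance
    image is the maximum of the R, G and B channel values of the pixel at
    the same position in the RGB image."""
    return [[max(px) for px in row] for row in rgb_image]

lum = luminance([[(10, 200, 30), (5, 5, 5)]])
```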
In step S29, the gain parameters of the second luminance image relative to the first luminance image are calculated.
After the first luminance image and the second luminance image are generated, the gain parameters of the second luminance image relative to the first luminance image can be determined based on the two luminance images.
Optionally, in one implementation, calculating the gain parameters of the second luminance image relative to the first luminance image may include calculating, for each pixel:
R_p = V_op / V_ip
wherein R_p denotes the gain parameter of a pixel in the second luminance image relative to the corresponding pixel in the first luminance image, V_op denotes the brightness value of the pixel in the second luminance image, and V_ip denotes the brightness value of the corresponding pixel in the first luminance image.
In the above implementation, V_op and V_ip are normalized brightness values. It can be seen from the above formula that the gain parameter embodies the brightness change of a pixel in the second luminance image relative to the corresponding pixel in the first luminance image. For a black pixel whose brightness value in the first luminance image is 0, the gain parameter can be set to 0; that is, the pixel in the second luminance image corresponding to a black pixel in the first luminance image remains black. Based on this brightness change, brightness enhancement processing can further be performed on the second RGB image.
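A sketch of the per-pixel gain of step S29, taking R_p = V_op / V_ip with the prescribed special case R_p = 0 for black pixels (V_ip = 0); the data layout and names are illustrative:

```python
def gain_map(second_lum, first_lum):
    """Per-pixel gain of the second luminance image over the first,
    R_p = V_op / V_ip on normalized brightness values, with R_p = 0
    where the first luminance image is black (V_ip = 0)."""
    return [[(vo / vi if vi > 0 else 0.0)
             for vo, vi in zip(orow, irow)]
            for orow, irow in zip(second_lum, first_lum)]

gains = gain_map([[0.75, 0.2]], [[0.5, 0.0]])
```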
In step S210, brightness enhancement processing is performed on the second RGB image based on the calculated gain parameters, obtaining a brightness-enhanced image.
After the gain parameters of the second luminance image relative to the first luminance image are calculated, brightness enhancement processing can be performed on the second RGB image based on the gain parameters.
Optionally, in one implementation, performing brightness enhancement processing on the second RGB image based on the calculated gain parameters to obtain the brightness-enhanced image may include: performing brightness enhancement processing on the second RGB image by the following formula to obtain the brightness-enhanced image:
Y'_{p,c} = Y_{p,c} · R_p
wherein Y'_{p,c} denotes the channel value of channel c of a pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, the value of c is 1, 2 or 3, where 1 denotes the R channel, 2 denotes the G channel and 3 denotes the B channel, and R_p denotes the gain parameter of the pixel in the second luminance image relative to the corresponding pixel in the first luminance image; the pixel in the brightness-enhanced image, the corresponding pixel in the second RGB image, the pixel in the second luminance image and the corresponding pixel in the first luminance image have identical coordinates.
In the above implementation, the second RGB image and the brightness-enhanced image are both three-channel images whose color mode is the RGB color mode.
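The brightness enhancement of step S210 can be sketched as follows (layout and names illustrative):

```python
def brighten(second_rgb, gains):
    """Step S210: brightness enhancement, Y'_{p,c} = Y_{p,c} * R_p; all
    three channel values of each pixel of the second RGB image are scaled
    by that pixel's gain parameter."""
    return [[tuple(c * g for c in px) for px, g in zip(prow, grow)]
            for prow, grow in zip(second_rgb, gains)]

bright = brighten([[(10, 20, 30)]], [[2.0]])
```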
The technical solution provided by the embodiments of the present disclosure can include the following beneficial effects: the high-resolution target image is down-sampled, and the obtained low-resolution down-sampled image is input into the deep learning network, which reduces the complexity of the image enhancement processing. After the down-sampled image is input into the deep learning network, mapping processing is performed on each pixel of the target image based on the obtained image enhancement data corresponding to the down-sampled image and the pixel matching result between the target image and the down-sampled image, obtaining the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image while maintaining the good image enhancement effect obtained by deep learning. Further, based on the RGB images of the target image and of the enhanced image in the RGB mode, the corresponding luminance images are generated and the gain parameters are determined; finally, brightness enhancement processing is performed on the RGB image of the enhanced image in the RGB mode based on the gain parameters. Since the channel values of all three RGB channels undergo the same brightness adjustment, the color cast problem arising in the enhanced image when image enhancement is performed by deep learning with a limited training set is solved.
In practical applications, when image enhancement is performed based on deep learning, over-exposure and under-exposure problems may also appear because the training set is limited. For this problem, in the embodiment shown in Fig. 2, optionally, calculating the gain parameters of the second luminance image relative to the first luminance image in step S29 may include:
performing exposure correction processing on the second luminance image to obtain a third luminance image; and
calculating the gain parameters of the third luminance image relative to the first luminance image.
Specifically, after exposure correction processing is performed on the second luminance image, no over-exposed or under-exposed pixels remain in the obtained third luminance image; in this way, the third luminance image can be used to calculate the gain parameters of the third luminance image relative to the first luminance image.
Optionally, in one implementation, performing exposure correction processing on the second luminance image to obtain the third luminance image may include: performing exposure correction processing on the second luminance image by a correction formula in which V'_op denotes the pixel value of a pixel in the third luminance image, V_op denotes the pixel value of the corresponding pixel in the second luminance image, V_min denotes the minimum pixel value in the second luminance image, V_max denotes the maximum pixel value in the second luminance image, th_l denotes a first preset threshold, th_h denotes a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
In the above formula, when V_op < th_l, the corresponding pixel in the second luminance image is under-exposed, and V_op can be adjusted so that its value increases; when V_op > th_h, the corresponding pixel in the second luminance image is over-exposed, and V_op can be adjusted so that its value decreases; and when th_l ≤ V_op ≤ th_h, the corresponding pixel in the second luminance image is neither under-exposed nor over-exposed, and V_op may be left unadjusted.
The first preset threshold and the second preset threshold can be determined according to actual conditions. In general, the first preset threshold may be a value greater than 0 and close to 0, and the second preset threshold may be a value less than 1 and close to 1.
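The exposure correction itself is described above only by its behavior. The sketch below is a hypothetical piecewise-linear correction consistent with that behavior (raise values below th_l, lower values above th_h, leave the middle unchanged); the particular slopes are an assumption, not the patent's formula:

```python
def expose_correct(v_op, v_min, v_max, th_l, th_h):
    """Hypothetical exposure correction: values below th_l are raised
    toward th_l, values above th_h are lowered toward th_h, and values
    in [th_l, th_h] stay unchanged. The slopes are assumptions chosen
    only to match the described behavior."""
    if v_op < th_l:          # under-exposed: increase
        return th_l - (th_l - v_op) * (th_l - v_min) / th_l
    if v_op > th_h:          # over-exposed: decrease
        return th_h + (v_op - th_h) * (v_max - th_h) / (1.0 - th_h)
    return v_op              # well-exposed: unchanged

corrected = expose_correct(0.06, 0.05, 0.95, 0.1, 0.9)   # raised above 0.06
```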
For the manner of calculating the gain parameters of the third luminance image relative to the first luminance image, reference can be made to the manner of calculating the gain parameters of the second luminance image relative to the first luminance image in step S29.
In the above embodiment, when the target image is a multi-channel image, the obtained brightness-enhanced image is an image in which the channel value of every channel has undergone the same degree of brightness adjustment. In order to further improve the effect of image enhancement, the present disclosure provides an image enhancement method in which each channel can be adjusted separately. Since each channel is adjusted separately and each channel corresponds to a different color, this is essentially color enhancement processing performed on the brightness-enhanced image.
Fig. 3 is a flowchart of an image enhancement method shown according to another exemplary embodiment. As shown in Fig. 3, the image enhancement method may comprise the following steps.
In step S31, a target image to be enhanced is obtained, and down-sampling processing is performed on the target image to obtain a down-sampled image.
In step S32, the down-sampled image is input into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image.
In step S33, the match point, in the down-sampled image, of each pixel in the target image is determined.
In step S34, for each pixel in the target image, the target enhancement parameter corresponding to the pixel is determined based on the enhancement data, in the image enhancement data, corresponding to the match point of the pixel.
In step S35, for each pixel in the target image, the pixel value of the pixel is adjusted based on the target enhancement parameter corresponding to the pixel, obtaining the enhanced image corresponding to the target image.
In step S36, a first RGB image of the target image in the RGB color mode, and a second RGB image of the enhanced image in the RGB color mode, are determined.
In step S37, a first luminance image corresponding to the first RGB image is generated, where the brightness value of any pixel in the first luminance image is the maximum of the RGB channel values of the first pixel corresponding to that pixel, the first pixel being the pixel at the same position in the first RGB image.
In step S38, a second luminance image corresponding to the second RGB image is generated, where the brightness value of any pixel in the second luminance image is the maximum of the RGB channel values of the second pixel corresponding to that pixel, the second pixel being the pixel at the same position in the second RGB image.
In step S39, the gain parameters of the second luminance image relative to the first luminance image are calculated.
In step S310, brightness enhancement processing is performed on the second RGB image based on the calculated gain parameters, obtaining a brightness-enhanced image.
The above steps S31-S310 may be identical to steps S21-S210 and are not described again here.
In step S311, color enhancement processing is performed on the brightness-enhanced image to obtain a color-enhanced image.
Optionally, in one implementation, performing color enhancement processing on the brightness-enhanced image to obtain the color-enhanced image may include: performing color enhancement processing on the brightness-enhanced image by a per-channel formula in which Y''_{p,c} denotes the channel value of channel c of a pixel in the color-enhanced image, Y'_{p,c} denotes the channel value of channel c of the corresponding pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, the value of c is 1, 2 or 3, and th_1, th_2 and th_3 respectively denote the third preset thresholds corresponding to the R, G and B channels.
In the above implementation, different third preset thresholds can be set for the R, G and B channels respectively. Specifically, if c is 1, Y''_{p,c} denotes the R-channel value of a pixel in the color-enhanced image, Y'_{p,c} denotes the R-channel value of the corresponding pixel in the brightness-enhanced image, and Y_{p,c} denotes the R-channel value of the corresponding pixel in the second RGB image; if c is 2, the same quantities denote the respective G-channel values; and if c is 3, the same quantities denote the respective B-channel values.
In the above formula, the third preset threshold embodies the acceptance level for the degree of brightness adjustment of each channel of the brightness-enhanced image relative to the second RGB image. After the channel values of the three RGB channels of all pixels in the brightness-enhanced image are adjusted respectively, the color-enhanced image after color enhancement processing is obtained.
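As with the exposure correction, the color enhancement formula is known here only through its description. The sketch below uses a hypothetical per-channel blend, Y''_{p,c} = Y_{p,c} + th_c · (Y'_{p,c} − Y_{p,c}), in which th_c plays the stated role of the channel's acceptance level of the brightness adjustment; the blend itself is an assumption, not the patent's formula:

```python
def color_enhance(bright_rgb, second_rgb, th=(0.9, 1.0, 0.8)):
    """Hypothetical per-channel color enhancement: each channel keeps a
    fraction th_c of the brightness adjustment it received,
    Y'' = Y + th_c * (Y' - Y); th_c acts as the channel's acceptance
    level. The blend is an assumption consistent with the description."""
    return [[tuple(y + t * (yb - y) for y, yb, t in zip(px0, px1, th))
             for px0, px1 in zip(row0, row1)]
            for row0, row1 in zip(second_rgb, bright_rgb)]

out = color_enhance([[(200, 200, 200)]], [[(100, 100, 100)]], th=(0.5, 1.0, 0.0))
```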
The technical solution provided by the embodiments of the present disclosure can include the following beneficial effects: the high-resolution target image is down-sampled, and the obtained low-resolution down-sampled image is input into the deep learning network, which reduces the complexity of the image enhancement processing. After the down-sampled image is input into the deep learning network, mapping processing is performed on each pixel of the target image based on the obtained image enhancement data corresponding to the down-sampled image and the pixel matching result between the target image and the down-sampled image, obtaining the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image while maintaining the good image enhancement effect obtained by deep learning. Further, based on the RGB images of the target image and of the enhanced image in the RGB mode, the corresponding luminance images are generated and the gain parameters are determined; finally, brightness enhancement processing is performed on the RGB image of the enhanced image in the RGB mode based on the gain parameters. Since the channel values of all three RGB channels undergo the same brightness adjustment, the color cast problem arising in the enhanced image when image enhancement is performed by deep learning with a limited training set is solved. Further, color enhancement processing is performed on the brightness-enhanced image, which improves the effect of the image enhancement processing.
Fig. 4 is a block diagram of an image enhancement device according to an exemplary embodiment. Referring to Fig. 4, the device includes: an obtaining module 401, a processing module 402, a first determining module 403, a second determining module 404 and an adjusting module 405. Wherein,
the obtaining module 401 is configured to obtain a target image to be enhanced, and perform down-sampling processing on the target image to obtain a down-sampled image;
the processing module 402 is configured to input the down-sampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image; wherein the deep learning network is trained according to sample images and sample enhanced images corresponding to the sample images, and the image enhancement data are data characterizing the enhancement degree of the enhanced down-sampled image relative to the down-sampled image;
the first determining module 403 is configured to determine a match point of each pixel in the target image in the down-sampled image;
the second determining module 404 is configured to determine, for each pixel in the target image, a target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data;
the adjusting module 405 is configured to adjust, for each pixel in the target image, the pixel value of the pixel based on the target enhancement parameter corresponding to the pixel, to obtain an enhanced image corresponding to the target image.
The technical solution provided by the embodiments of the present disclosure can include the following benefits: the high-resolution target image is down-sampled, and the resulting low-resolution down-sampled image is input into the deep learning network, which reduces the complexity of the image enhancement processing. After the down-sampled image is input into the deep learning network, mapping processing is performed on each pixel of the target image based on the obtained image enhancement data corresponding to the down-sampled image and the pixel matching result between the target image and the down-sampled image, yielding the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image while maintaining the good enhancement effect obtained through deep learning.
Optionally, the first determining module 403 is specifically configured to: for each pixel in the target image, determine a corresponding pixel of the pixel in the down-sampled image, search, within a search region of size M × N centered on the corresponding pixel, for the pixel whose pixel value has the smallest absolute difference from the pixel value of the corresponding pixel, and take the found pixel as the match point of the pixel in the down-sampled image;
wherein, for a pixel with coordinates (u, v) in the target image, the coordinates of the corresponding pixel of the pixel in the down-sampled image are (u/x, v/x), where x denotes the down-sampling factor.
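A minimal sketch of this match-point search follows. The function name and window-edge handling are mine; the comparison value is read here as the target pixel's own value, since comparing each candidate against the corresponding down-sampled pixel itself would trivially always return that pixel:

```python
import numpy as np

def find_match_point(target, down, u, v, x, M=3, N=3):
    """Find the match point of target pixel (u, v): the pixel inside an
    M x N search region centered on the corresponding down-sampled
    pixel (u // x, v // x) whose value is closest to the target pixel."""
    cu, cv = u // x, v // x                  # corresponding pixel (u/x, v/x)
    ref = float(target[u, v])
    best, best_diff = (cu, cv), float("inf")
    # Scan the M x N window, clipped to the down-sampled image bounds.
    for i in range(max(0, cu - M // 2), min(down.shape[0], cu + M // 2 + 1)):
        for j in range(max(0, cv - N // 2), min(down.shape[1], cv + N // 2 + 1)):
            diff = abs(float(down[i, j]) - ref)
            if diff < best_diff:
                best, best_diff = (i, j), diff
    return best
```

The search compensates for the fact that down-sampling shifts and blends pixel values, so the nominal corresponding pixel is not always the best representative of the target pixel.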
Optionally, the image enhancement data include: mapping parameters by which each pixel in the down-sampled image is mapped to a corresponding pixel, the corresponding pixel of any pixel being the pixel at the same position as that pixel in the enhanced image of the down-sampled image;
the second determining module 404 is specifically configured to: for each pixel in the target image, determine, from the mapping parameters, the target parameter corresponding to the match point of the pixel, and take the determined target parameter as the target enhancement parameter corresponding to the pixel.
Optionally, the image enhancement data include: the enhanced image of the down-sampled image;
the second determining module 404 is specifically configured to: for each pixel in the target image, determine the target point in the enhanced image of the down-sampled image at the same position as the match point of the pixel, calculate the ratio of the pixel value of the target point to the pixel value of the match point of the pixel, and determine the ratio as the target enhancement parameter corresponding to the pixel.
Optionally, the adjusting module 405 is specifically configured to: for each pixel in the target image, adjust the pixel value of the pixel through the following formula based on the target enhancement parameter corresponding to the pixel:

A · I_p = O_p

wherein A denotes the target enhancement parameter corresponding to the pixel, I_p denotes the pixel value of the pixel before adjustment, and O_p denotes the pixel value of the pixel after adjustment.
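Over a whole image the adjustment O_p = A · I_p is a per-pixel multiplication and can be vectorised; clipping the result back to the 8-bit range is my addition, not something stated in the patent:

```python
import numpy as np

def apply_enhancement(image, params):
    """Adjust every pixel value I_p by its target enhancement
    parameter A, i.e. O_p = A * I_p, clipped to the 8-bit range."""
    out = params.astype(np.float64) * image.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```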
Optionally, the device further includes:
a third determining module, configured to determine a first RGB image of the target image in the RGB color mode and a second RGB image of the enhanced image in the RGB color mode;
a first generation module, configured to generate a first luminance image corresponding to the first RGB image, the brightness value of any pixel in the first luminance image being: the maximum of the RGB channel values of the first pixel corresponding to the pixel, the first pixel being the pixel at the same position as the pixel in the first RGB image;
a second generation module, configured to generate a second luminance image corresponding to the second RGB image, the brightness value of any pixel in the second luminance image being: the maximum of the RGB channel values of the second pixel corresponding to the pixel, the second pixel being the pixel at the same position as the pixel in the second RGB image;
a computing module, configured to calculate the gain parameter of the second luminance image relative to the first luminance image;
a first enhancing module, configured to perform brightness enhancement processing on the second RGB image based on the calculated gain parameter, to obtain a brightness-enhanced image.
Optionally, the computing module is specifically configured to calculate the gain parameter of the second luminance image relative to the first luminance image through the following formula:

R_p = V_op / V_ip

wherein R_p denotes the gain parameter of a pixel in the second luminance image relative to the corresponding pixel in the first luminance image, V_op denotes the brightness value of the pixel in the second luminance image, and V_ip denotes the brightness value of the corresponding pixel in the first luminance image.
Optionally, the first enhancing module is specifically configured to perform brightness enhancement processing on the second RGB image through the following formula based on the calculated gain parameter, to obtain the brightness-enhanced image:

Y'_{p,c} = Y_{p,c} · R_p

wherein Y'_{p,c} denotes the channel value of channel c of a pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, the value of c is 1, 2 or 3, where 1 denotes the R channel, 2 denotes the G channel and 3 denotes the B channel, and R_p denotes the gain parameter of a pixel in the second luminance image relative to the corresponding pixel in the first luminance image; said pixel in the brightness-enhanced image, said corresponding pixel in the second RGB image, said pixel in the second luminance image and said corresponding pixel in the first luminance image all have the same coordinates.
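Combining the definitions above — luminance as the per-pixel maximum over the R, G, B channels, and the gain applied uniformly to all three channels — gives roughly the following sketch. The gain R_p = V_op / V_ip is my reading of the surrounding definitions, and the epsilon guard against division by zero is my addition:

```python
import numpy as np

def brightness_enhance(first_rgb, second_rgb, eps=1e-6):
    """Compute the two luminance images (per-pixel max over RGB),
    derive the per-pixel gain R_p, and scale every channel of the
    second RGB image by it: Y'_{p,c} = Y_{p,c} * R_p."""
    v_i = first_rgb.max(axis=2).astype(np.float64)   # first luminance image
    v_o = second_rgb.max(axis=2).astype(np.float64)  # second luminance image
    r = v_o / (v_i + eps)                            # gain parameter R_p
    return second_rgb.astype(np.float64) * r[..., None]
```

Because all three channels share the same gain, the relative channel ratios of each pixel are preserved, which is what the text credits for avoiding a color cast.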
Optionally, the computing module includes a correction unit and a computing unit;
the correction unit is configured to perform exposure correction processing on the second luminance image to obtain a third luminance image;
the computing unit is configured to calculate the gain parameter of the third luminance image relative to the first luminance image.
Optionally, the correction unit is specifically configured to perform exposure correction processing on the second luminance image through the following formula to obtain the third luminance image:

wherein V'_op denotes the pixel value of a pixel in the third luminance image, V_op denotes the pixel value of the corresponding pixel in the second luminance image, V_min denotes the minimum pixel value in the second luminance image, V_max denotes the maximum pixel value in the second luminance image, th_l denotes a first preset threshold, th_h denotes a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
Optionally, the device further includes:
a second enhancing module, configured to perform color enhancement processing on the brightness-enhanced image to obtain a color-enhanced image.
Optionally, the second enhancing module is specifically configured to perform color enhancement processing on the brightness-enhanced image through the following formula to obtain the color-enhanced image:

wherein Y''_{p,c} denotes the channel value of channel c of a pixel in the color-enhanced image, Y'_{p,c} denotes the channel value of channel c of the corresponding pixel in the brightness-enhanced image, Y_{p,c} denotes the channel value of channel c of the corresponding pixel in the second RGB image, the value of c is 1, 2 or 3, and th_1, th_2 and th_3 denote the third preset thresholds corresponding to the R, G and B channels respectively.
With respect to the device in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
In addition, corresponding to the image enhancement method provided by the above embodiments, an embodiment of the present application further provides an electronic device. As shown in Fig. 5, the electronic device may include:
a processor 510; and
a memory 520 for storing instructions executable by the processor;
wherein the processor 510 is configured to: when executing the executable instructions stored on the memory 520, implement the steps of the image enhancement method provided by the embodiments of the present application.
It can be understood that the electronic device may be a server or a terminal device; in particular applications, the electronic device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and so on.
Fig. 6 is a block diagram of a device 600 for image enhancement according to an exemplary embodiment. For example, the device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and so on.
Referring to Fig. 6, the device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 typically controls the overall operations of the device 600, such as the operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the above methods. In addition, the processing component 602 may include one or more modules which facilitate the interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate the interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support the operation of the device 600. Examples of such data include instructions for any applications or methods operated on the device 600, contact data, phonebook data, messages, pictures, videos, etc. The memory 604 may be implemented using any type of volatile or non-volatile memory devices, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 606 provides power to the various components of the device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the device 600.
The multimedia component 608 includes a screen providing an output interface between the device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the device 600 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors to provide status assessments of various aspects of the device 600. For example, the sensor component 614 may detect the open/closed status of the device 600 and the relative positioning of components (e.g., the display and the keypad of the device 600); the sensor component 614 may also detect a change in position of the device 600 or a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in temperature of the device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G or 5G), or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the device 600 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, executable by the processor 620 of the device 600, for performing the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device, and the like.
Fig. 7 is a block diagram of a device 700 for image enhancement according to an exemplary embodiment. For example, the device 700 may be provided as a server. Referring to Fig. 7, the device 700 includes a processing component 722 that further includes one or more processors, and memory resources represented by a memory 732 for storing instructions executable by the processing component 722, such as application programs. The application programs stored in the memory 732 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 722 is configured to execute the instructions to perform the above-described image enhancement method.
The device 700 may also include a power component 726 configured to perform power management of the device 700, a wired or wireless network interface 750 configured to connect the device 700 to a network, and an input/output (I/O) interface 758. The device 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In addition, an embodiment of the present application further provides a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the steps of the image enhancement method provided by the embodiments of the present application.
In addition, an embodiment of the present application further provides a computer program product; when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to perform the steps of the above image enhancement method.
Those skilled in the art will readily conceive of other embodiments of the invention after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art. The specification and examples are to be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It should be understood that the invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (10)
1. An image enhancement method, characterized by comprising:
obtaining a target image to be enhanced, and performing down-sampling processing on the target image to obtain a down-sampled image;
inputting the down-sampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image; wherein the deep learning network is trained according to sample images and sample enhanced images corresponding to the sample images, and the image enhancement data are data characterizing the enhancement degree of the enhanced down-sampled image relative to the down-sampled image;
determining a match point of each pixel in the target image in the down-sampled image;
for each pixel in the target image, determining a target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data;
for each pixel in the target image, adjusting the pixel value of the pixel based on the target enhancement parameter corresponding to the pixel, to obtain an enhanced image corresponding to the target image.
2. The method according to claim 1, characterized in that the determining a match point of each pixel in the target image in the down-sampled image comprises:
for each pixel in the target image, determining a corresponding pixel of the pixel in the down-sampled image, and searching, within a search region of size M × N centered on the corresponding pixel, for the pixel whose pixel value has the smallest absolute difference from the pixel value of the corresponding pixel, and taking the found pixel as the match point of the pixel in the down-sampled image;
wherein, for a pixel with coordinates (u, v) in the target image, the coordinates of the corresponding pixel of the pixel in the down-sampled image are (u/x, v/x), where x denotes the down-sampling factor.
3. The method according to claim 1, characterized in that the image enhancement data comprise: mapping parameters by which each pixel in the down-sampled image is mapped to a corresponding pixel, the corresponding pixel of any pixel being the pixel at the same position as that pixel in the enhanced image of the down-sampled image;
the determining, for each pixel in the target image, a target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data comprises:
for each pixel in the target image, determining, from the mapping parameters, the target parameter corresponding to the match point of the pixel, and taking the determined target parameter as the target enhancement parameter corresponding to the pixel.
4. The method according to claim 1, characterized in that the image enhancement data comprise: the enhanced image of the down-sampled image;
the determining, for each pixel in the target image, a target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data comprises:
for each pixel in the target image, determining the target point in the enhanced image of the down-sampled image at the same position as the match point of the pixel, calculating the ratio of the pixel value of the target point to the pixel value of the match point of the pixel, and determining the ratio as the target enhancement parameter corresponding to the pixel.
5. The method according to any one of claims 1-4, characterized in that the method further comprises:
determining a first RGB image of the target image in the RGB color mode and a second RGB image of the enhanced image in the RGB color mode;
generating a first luminance image corresponding to the first RGB image, the brightness value of any pixel in the first luminance image being: the maximum of the RGB channel values of the first pixel corresponding to the pixel, the first pixel being the pixel at the same position as the pixel in the first RGB image;
generating a second luminance image corresponding to the second RGB image, the brightness value of any pixel in the second luminance image being: the maximum of the RGB channel values of the second pixel corresponding to the pixel, the second pixel being the pixel at the same position as the pixel in the second RGB image;
calculating the gain parameter of the second luminance image relative to the first luminance image;
performing brightness enhancement processing on the second RGB image based on the calculated gain parameter, to obtain a brightness-enhanced image.
6. The method according to claim 5, characterized in that the calculating the gain parameter of the second luminance image relative to the first luminance image comprises:
performing exposure correction processing on the second luminance image to obtain a third luminance image;
calculating the gain parameter of the third luminance image relative to the first luminance image.
7. The method according to claim 5, characterized in that after the step of performing brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain the brightness-enhanced image, the method further comprises:
performing color enhancement processing on the brightness-enhanced image to obtain a color-enhanced image.
8. An image enhancement device, characterized by comprising:
an obtaining module, configured to obtain a target image to be enhanced, and perform down-sampling processing on the target image to obtain a down-sampled image;
a processing module, configured to input the down-sampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the down-sampled image; wherein the deep learning network is trained according to sample images and sample enhanced images corresponding to the sample images, and the image enhancement data are data characterizing the enhancement degree of the enhanced down-sampled image relative to the down-sampled image;
a first determining module, configured to determine a match point of each pixel in the target image in the down-sampled image;
a second determining module, configured to determine, for each pixel in the target image, a target enhancement parameter corresponding to the pixel based on the enhancement data corresponding to the match point of the pixel in the image enhancement data;
an adjusting module, configured to adjust, for each pixel in the target image, the pixel value of the pixel based on the target enhancement parameter corresponding to the pixel, to obtain an enhanced image corresponding to the target image.
9. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to: when executing the program stored on the memory, implement the image enhancement method according to any one of claims 1-7.
10. A non-transitory computer-readable storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the image enhancement method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811233579.0A CN109345485B (en) | 2018-10-22 | 2018-10-22 | Image enhancement method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109345485A true CN109345485A (en) | 2019-02-15 |
CN109345485B CN109345485B (en) | 2021-04-16 |
Family
ID=65311530
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811233579.0A Active CN109345485B (en) | 2018-10-22 | 2018-10-22 | Image enhancement method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109345485B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919869A (en) * | 2019-02-28 | 2019-06-21 | 腾讯科技(深圳)有限公司 | A kind of image enchancing method, device and storage medium |
CN110049242A (en) * | 2019-04-18 | 2019-07-23 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device |
EP3742390A1 (en) * | 2019-05-22 | 2020-11-25 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method thereof |
CN112084936A (en) * | 2020-09-08 | 2020-12-15 | 济南博观智能科技有限公司 | Face image preprocessing method, device, equipment and storage medium |
CN112261438A (en) * | 2020-10-16 | 2021-01-22 | 腾讯科技(深圳)有限公司 | Video enhancement method, device, equipment and storage medium |
CN112308785A (en) * | 2019-08-01 | 2021-02-02 | 武汉Tcl集团工业研究院有限公司 | Image denoising method, storage medium and terminal device |
CN112884849A (en) * | 2021-02-03 | 2021-06-01 | 无锡安科迪智能技术有限公司 | Panoramic image splicing and color matching method and device |
CN113781320A (en) * | 2021-08-02 | 2021-12-10 | 中国科学院深圳先进技术研究院 | Image processing method and device, terminal equipment and storage medium |
CN113822809A (en) * | 2021-03-10 | 2021-12-21 | 无锡安科迪智能技术有限公司 | Dim light enhancement method and system |
CN114257741A (en) * | 2021-12-15 | 2022-03-29 | 浙江大学 | Vehicle-mounted HDR method with rapid response |
CN115601274A (en) * | 2021-07-07 | 2023-01-13 | 荣耀终端有限公司(Cn) | Image processing method and device and electronic equipment |
EP4105877A4 (en) * | 2020-02-19 | 2023-08-09 | Huawei Technologies Co., Ltd. | Image enhancement method and image enhancement apparatus |
CN117314795A (en) * | 2023-11-30 | 2023-12-29 | 成都玖锦科技有限公司 | SAR image enhancement method by using background data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8670630B1 (en) * | 2010-12-09 | 2014-03-11 | Google Inc. | Fast randomized multi-scale energy minimization for image processing |
CN104091310A (en) * | 2014-06-24 | 2014-10-08 | 三星电子(中国)研发中心 | Image defogging method and device |
CN107133933A (en) * | 2017-05-10 | 2017-09-05 | 广州海兆印丰信息科技有限公司 | Mammography X Enhancement Method based on convolutional neural networks |
CN108305236A (en) * | 2018-01-16 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Image enhancement processing method and device |
CN108648163A (en) * | 2018-05-17 | 2018-10-12 | 厦门美图之家科技有限公司 | A kind of Enhancement Method and computing device of facial image |
Non-Patent Citations (2)
Title |
---|
ZUNLIN FAN等: "Dim infrared image enhancement based on convolutional neural network", 《NEUROCOMPUTING》 * |
FAN MINGE et al.: "Infrared dim and small target detection algorithm under cloud-sky background", 《Electronic Measurement Technology》 *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020173320A1 (en) * | 2019-02-28 | 2020-09-03 | Tencent Technology (Shenzhen) Co., Ltd. | Image enhancement method and apparatus, and storage medium |
CN109919869A (en) * | 2019-02-28 | 2019-06-21 | Tencent Technology (Shenzhen) Co., Ltd. | Image enhancement method and apparatus, and storage medium |
US11790497B2 (en) | 2019-02-28 | 2023-10-17 | Tencent Technology (Shenzhen) Company Limited | Image enhancement method and apparatus, and storage medium |
CN110049242A (en) * | 2019-04-18 | 2019-07-23 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and apparatus |
EP3742390A1 (en) * | 2019-05-22 | 2020-11-25 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method thereof |
US11836890B2 (en) | 2019-05-22 | 2023-12-05 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method thereof |
US11295412B2 (en) | 2019-05-22 | 2022-04-05 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method thereof |
CN112308785A (en) * | 2019-08-01 | 2021-02-02 | Wuhan TCL Group Industrial Research Institute Co., Ltd. | Image denoising method, storage medium and terminal device |
CN112308785B (en) * | 2019-08-01 | 2024-05-28 | Wuhan TCL Group Industrial Research Institute Co., Ltd. | Image denoising method, storage medium and terminal device |
EP4105877A4 (en) * | 2020-02-19 | 2023-08-09 | Huawei Technologies Co., Ltd. | Image enhancement method and image enhancement apparatus |
CN112084936A (en) * | 2020-09-08 | 2020-12-15 | Jinan Boguan Intelligent Technology Co., Ltd. | Face image preprocessing method, apparatus, device and storage medium |
CN112084936B (en) * | 2020-09-08 | 2024-05-10 | Jinan Boguan Intelligent Technology Co., Ltd. | Face image preprocessing method, apparatus, device and storage medium |
CN112261438A (en) * | 2020-10-16 | 2021-01-22 | Tencent Technology (Shenzhen) Co., Ltd. | Video enhancement method, apparatus, device and storage medium |
CN112261438B (en) * | 2020-10-16 | 2022-04-15 | Tencent Technology (Shenzhen) Co., Ltd. | Video enhancement method, apparatus, device and storage medium |
CN112884849A (en) * | 2021-02-03 | 2021-06-01 | Wuxi Ankedi Intelligent Technology Co., Ltd. | Panoramic image stitching and color matching method and apparatus |
CN113822809A (en) * | 2021-03-10 | 2021-12-21 | Wuxi Ankedi Intelligent Technology Co., Ltd. | Dim light enhancement method and system |
CN113822809B (en) * | 2021-03-10 | 2023-06-06 | Wuxi Ankedi Intelligent Technology Co., Ltd. | Dim light enhancement method and system |
CN115601274A (en) * | 2021-07-07 | 2023-01-13 | Honor Device Co., Ltd. (CN) | Image processing method and apparatus, and electronic device |
CN113781320A (en) * | 2021-08-02 | 2021-12-10 | Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences | Image processing method and apparatus, terminal device and storage medium |
CN114257741A (en) * | 2021-12-15 | 2022-03-29 | Zhejiang University | Fast-response vehicle-mounted HDR method |
CN117314795A (en) * | 2023-11-30 | 2023-12-29 | Chengdu Jiujin Technology Co., Ltd. | SAR image enhancement method using background data |
CN117314795B (en) * | 2023-11-30 | 2024-02-27 | Chengdu Jiujin Technology Co., Ltd. | SAR image enhancement method using background data |
Also Published As
Publication number | Publication date |
---|---|
CN109345485B (en) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109345485A (en) | Image enhancement method and apparatus, electronic device and storage medium | |
CN110675310B (en) | Video processing method and device, electronic equipment and storage medium | |
CN104038704B (en) | Shooting processing method and apparatus for backlit portrait scenes | |
WO2016011747A1 (en) | Skin color adjustment method and device | |
CN111709890B (en) | Training method and device for image enhancement model and storage medium | |
CN108154465B (en) | Image processing method and device | |
CN110958401B (en) | Super night scene image color correction method and device and electronic equipment | |
CN105760884B (en) | Picture type recognition method and apparatus | |
CN109360261A (en) | Image processing method, device, electronic equipment and storage medium | |
CN110677734B (en) | Video synthesis method and device, electronic equipment and storage medium | |
CN105528765B (en) | Method and device for processing image | |
CN107231505B (en) | Image processing method and device | |
CN105957037B (en) | Image enhancement method and apparatus | |
CN105791790B (en) | Image processing method and device | |
CN108040204A (en) | Multi-camera-based image capturing method, apparatus and storage medium | |
CN113450431B (en) | Virtual hair dyeing method, device, electronic equipment and storage medium | |
KR20120064401A (en) | Apparatus and method for generating high dynamic range image | |
CN108156381B (en) | Photographing method and device | |
CN107564073B (en) | Skin color identification method and device and storage medium | |
EP3273437A1 (en) | Method and device for enhancing readability of a display | |
CN112785537A (en) | Image processing method, device and storage medium | |
CN114331852A (en) | Method and device for processing high dynamic range image and storage medium | |
KR102600849B1 (en) | Method and apparatus for processing image, and storage medium | |
CN113254118B (en) | Skin color display device | |
WO2023231009A1 (en) | Focusing method and apparatus, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |