CN108205804A - Image processing method and apparatus, and electronic device - Google Patents
Image processing method and apparatus, and electronic device
- Publication number
- CN108205804A CN108205804A CN201611168058.2A CN201611168058A CN108205804A CN 108205804 A CN108205804 A CN 108205804A CN 201611168058 A CN201611168058 A CN 201611168058A CN 108205804 A CN108205804 A CN 108205804A
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- filtering
- pixel
- original image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
This application discloses an image processing method, an image processing apparatus, and an electronic device. The image processing method includes: applying edge-preserving filtering to a first image that is based on an original image, to obtain a filtered image; and fusing a second image that is based on the original image with a third image that is based on the filtered image, to obtain a fused image. Because the fusion with the second image is performed after the edge-preserving filtering, the method can, while removing noise and preserving edges, also retain the texture details of the original image to a certain extent. This avoids the image-distortion artifacts caused by over-smoothed flat regions, makes the fused image look more natural, and effectively improves image quality.
Description
Technical field
This application relates to the field of image processing, and in particular to an image processing method. The application also relates to an image processing apparatus and an electronic device.
Background
An image captured by an imaging device usually contains noise (also called noise points) that detracts from its appearance. Such noise may be introduced by random-signal interference during capture or transmission, or it may come from the photographed subject itself, for example dark patches or blemishes on a face in a portrait. To improve image quality, filtering techniques are commonly applied to the image to remove this noise.
The most widely used filters at present are edge-preserving filters (also called edge-preserving filtering algorithms), the most classical of which is the bilateral filter. This filter consists of two parts: a spatial filter determined by geometric distance in the image, and a range filter determined by pixel-value differences; the value of each output pixel is a weighted combination of its neighborhood pixel values. Because the bilateral filter's weighting accounts for both the spatial distance and the pixel-value difference between the pixel being processed and the other pixels in its neighborhood, it removes noise well, yielding a smooth denoised image while keeping the edge details of the image. Besides the bilateral filter, filters based on the surface-blur algorithm and similar techniques can likewise keep edge details while denoising.
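As an informal illustration of the weighting just described (a minimal sketch, not code from the patent — the radius, sigma parameters, and toy image are arbitrary), the following pure-Python bilateral filter smooths within each flat region while leaving a sharp edge intact:

```python
import math

def bilateral_filter(img, radius=1, sigma_space=1.0, sigma_range=25.0):
    """Edge-preserving filter: each output pixel is a weighted average of its
    neighborhood, weighted by spatial distance AND pixel-value difference."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial weight: geometric distance in the image plane
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_space ** 2))
                        # range weight: difference between pixel values
                        diff = img[ny][nx] - img[y][x]
                        wr = math.exp(-(diff * diff) / (2 * sigma_range ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out

# A sharp vertical edge: values on each side are averaged among themselves,
# but the large value difference across the edge gets a near-zero range weight.
img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10, 200, 200]]
filtered = bilateral_filter(img)
```

Because the range weight collapses for a 190-level difference, the left side stays near 10 and the right side near 200 — which is exactly why flat-region texture (small differences, high range weight) gets averaged away while edges survive.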
When applied to practical image processing, however, these edge-preserving filters share the same problem: while denoising and preserving edges, they cannot retain the texture details of the original image. That is, flat regions in the processed image become overly smooth, so the image is distorted and shows a clearly noticeable drop in quality compared with the original.
Summary of the invention
Embodiments of this application provide an image processing method and apparatus to solve the problem that existing edge-preserving filtering techniques cannot retain texture details and therefore cause image distortion. Embodiments of this application also provide an image processing apparatus and an electronic device.
This application provides an image processing method, including:
applying edge-preserving filtering to a first image based on an original image, to obtain a filtered image;
fusing a second image based on the original image with a third image based on the filtered image, to obtain a fused image.
Optionally, the original image includes a face image;
before the edge-preserving filtering of the first image based on the original image, the method includes: receiving a face-beautification request for the face image;
after the fused image is obtained, the method includes: responding to the face-beautification request with a processing result based on the fused image.
Optionally, fusing the second image based on the original image with the third image based on the filtered image includes:
setting, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0;
performing the following weighted-fusion operation on the second image and the third image:
for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and taking the resulting value as the fused pixel value.
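A minimal sketch of this per-pixel weighted fusion (the images, weights, and function name are illustrative, not from the patent):

```python
def fuse(second, third, weights):
    """Per-pixel weighted fusion: fused = w * second + (1 - w) * third,
    where `second` is based on the original image and `third` on the
    filtered image, and 0 <= w <= 1 for every pixel."""
    return [[w * s + (1.0 - w) * t
             for s, t, w in zip(srow, trow, wrow)]
            for srow, trow, wrow in zip(second, third, weights)]

second  = [[100, 100], [100, 100]]    # original-image pixel values
third   = [[80, 80], [80, 80]]        # filtered-image pixel values
weights = [[1.0, 0.5], [0.0, 0.25]]   # per-pixel fusion coefficients w
fused = fuse(second, third, weights)
# w = 1.0 keeps the original pixel; w = 0.0 keeps the filtered pixel
```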
Optionally, setting the fusion coefficient w for each pixel of the fused image in a predetermined manner includes performing the following for each pixel of the fused image:
obtaining the gradient value at the corresponding pixel of the third image;
using the output of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is that gradient value;
such that, when the weighted-fusion operation is performed with fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
Optionally, when the weighted-fusion operation uses, for each pixel of the fused image, the pixel's w as the fusion weight of the corresponding pixel in the second image and 1-w as the fusion weight of the corresponding pixel in the third image, the preset fusion function f(x) satisfies the following conditions:
when x is greater than an edge threshold, the value of f(x) is less than a first threshold; when x is less than the edge threshold, the value of f(x) is greater than the first threshold.
Optionally, the conditions satisfied by the fusion function f(x) further include: when x is greater than the edge threshold, the value of f(x) decreases toward a second threshold as x increases.
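One hypothetical fusion function satisfying these conditions — all threshold values and the decay constant below are made-up, since the patent specifies only the inequalities. Below the edge threshold, f(x) stays above the first threshold, so flat regions keep most of the original texture; above it, f(x) drops below the first threshold and decays toward the second threshold as the gradient grows, so strong edges come mostly from the filtered image:

```python
import math

EDGE_THRESHOLD = 20.0    # gradient magnitude separating edge from non-edge
FIRST_THRESHOLD = 0.5    # f(x) crosses below this once x exceeds the edge threshold
SECOND_THRESHOLD = 0.1   # floor that f(x) decays to for strong edges

def fusion_coefficient(x):
    """Weight w of the original (second) image, as a function of the
    gradient value x taken from the filtered (third) image."""
    if x <= EDGE_THRESHOLD:
        return 0.9  # flat region: favour the original image's texture
    # above the threshold, decay exponentially toward SECOND_THRESHOLD
    decay = math.exp(-(x - EDGE_THRESHOLD) / 10.0)
    return SECOND_THRESHOLD + (FIRST_THRESHOLD - SECOND_THRESHOLD) * decay
```

With this shape, w for a barely-edge pixel sits just under the first threshold and falls smoothly as the edge strengthens, giving a gradual transition rather than a hard cut between "original" and "filtered" regions.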
Optionally, when the weighted-fusion operation uses, for each pixel of the fused image, the pixel's 1-w as the fusion weight of the corresponding pixel in the second image and w as the fusion weight of the corresponding pixel in the third image, the preset fusion function f(x) satisfies the following conditions:
when x is greater than the edge threshold, the value of f(x) is greater than the first threshold; when x is less than the edge threshold, the value of f(x) is less than the first threshold.
Optionally, the conditions satisfied by the fusion function f(x) further include: when x is greater than the edge threshold, the value of f(x) increases toward a third threshold as x increases.
Optionally, after the fusion coefficient w for each pixel of the fused image has been set in the predetermined manner, the weighted-fusion operation on the second image and the third image is performed on a GPU.
Optionally, the first image based on the original image and the second image each comprise the original image itself, and the third image based on the filtered image comprises the filtered image itself.
Optionally, before the edge-preserving filtering of the first image based on the original image, the method includes: downsampling the original image; the first image based on the original image then comprises the downsampled original image.
After the filtered image is obtained, and before the second image based on the original image is fused with the third image based on the filtered image, the method includes: upsampling the filtered image by the downsampling factor; the second image based on the original image comprises the original image, and the third image based on the filtered image comprises the upsampled filtered image.
Optionally, before the edge-preserving filtering of the first image based on the original image, the method includes: converting the original image from the RGB color space into a color space that contains a luminance component;
the edge-preserving filtering of the first image based on the original image includes: applying edge-preserving filtering to the luminance component of the converted original image;
the fusing of the second image based on the original image with the third image based on the filtered image includes: fusing, on the luminance component, the converted original image with the filtered image;
after the fused image is obtained, the method includes: converting the fused image back into the RGB color space, and taking the fused image converted back to RGB as the image-processing result.
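A sketch of this color-space round trip. The patent does not name a specific color space, so BT.601 full-range YCbCr is used here purely as one common example of a space with a luminance component; the coefficients are the standard JFIF ones:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr; Y is the luminance component that the
    edge-preserving filtering and the fusion would operate on."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse conversion back to RGB once the fused luminance is ready."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

y, cb, cr = rgb_to_ycbcr(200, 120, 90)
r, g, b = ycbcr_to_rgb(y, cb, cr)  # round-trips to (200, 120, 90) up to rounding
```

Filtering only the Y channel is a common design choice: it touches a third of the data and leaves chrominance untouched, which both speeds the filter up and avoids color shifts in the fused result.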
Optionally, the original image includes a face image.
Optionally, the first image based on the original image and the second image each comprise the original image, and the third image based on the filtered image comprises the filtered image;
before the edge-preserving filtering of the first image based on the original image, the method includes: determining a first region of the original image that contains face pixels;
the edge-preserving filtering of the first image based on the original image includes: applying edge-preserving filtering to the first region of the original image;
the fusing of the second image based on the original image with the third image based on the filtered image includes: fusing the original image with the filtered image within the first region, and keeping the original image outside the first region.
Optionally, before the original image is fused with the filtered image within the first region, the method further includes: determining, within the first region, a second region that contains pixels of preset facial features;
fusing the original image with the filtered image within the first region then includes: fusing the original image with the filtered image in the part of the first region that excludes the second region, and keeping the original image within the second region.
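A sketch of this region logic: fuse only where the face mask is set and the protected-feature mask is not, and keep the original pixel everywhere else. The masks, the fixed weight, and all names are illustrative assumptions, not values from the patent:

```python
def region_fuse(original, filtered, face_mask, feature_mask, w=0.3):
    """Fuse original and filtered pixels only inside the first (face) region
    and outside the second (feature) region; keep the original elsewhere."""
    out = [row[:] for row in original]  # default: original pixels everywhere
    for y in range(len(original)):
        for x in range(len(original[0])):
            if face_mask[y][x] and not feature_mask[y][x]:
                out[y][x] = w * original[y][x] + (1 - w) * filtered[y][x]
    return out

original = [[100, 100, 100], [100, 100, 100]]
filtered = [[60, 60, 60], [60, 60, 60]]
face_mask    = [[1, 1, 0], [1, 1, 0]]  # first region: contains face pixels
feature_mask = [[0, 1, 0], [0, 0, 0]]  # second region: e.g. an eye
result = region_fuse(original, filtered, face_mask, feature_mask)
```

Restricting the fusion this way keeps smoothing off the background and off detail-critical features, which is why the feature region simply retains the original pixels.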
Optionally, before the edge-preserving filtering of the first image based on the original image, the method includes: determining a first region of the original image that contains face pixels, and downsampling the original image;
the edge-preserving filtering of the first image based on the original image includes: applying edge-preserving filtering to the corresponding first region of the downsampled original image;
after the filtered image is obtained and before the fusion operation is performed, the method includes: upsampling the filtered image by the downsampling factor;
the fusing of the second image based on the original image with the third image based on the filtered image includes: fusing the original image with the upsampled filtered image within the first region, and keeping the original image outside the first region.
Optionally, before the original image is fused with the upsampled filtered image within the first region, the method further includes: determining, within the first region, a second region that contains pixels of preset facial features;
fusing the original image with the upsampled filtered image within the first region then includes: fusing the original image with the upsampled filtered image in the part of the first region that excludes the second region, and keeping the original image within the second region.
Optionally, the preset facial features include the eyes or the mouth.
Optionally, the edge-preserving filtering of the first image based on the original image uses one of the following algorithms: the bilateral filtering algorithm, the surface-blur algorithm, or the guided filtering algorithm.
Optionally, the method is implemented on a mobile terminal device.
Correspondingly, this application also provides an image processing apparatus, including:
an image filtering unit, configured to apply edge-preserving filtering to a first image based on an original image, to obtain a filtered image;
an image fusion unit, configured to fuse a second image based on the original image with a third image based on the filtered image, to obtain a fused image.
Optionally, the original image includes a face image, and the apparatus further includes:
a request receiving unit, configured to receive a face-beautification request for the face image before the edge-preserving filtering of the first image based on the original image;
a request response unit, configured to respond to the face-beautification request with a processing result based on the fused image after the fused image is obtained.
Optionally, the image fusion unit includes:
a fusion-coefficient setting subunit, configured to set, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0;
a fusion execution subunit, configured to perform the following weighted-fusion operation on the second image and the third image: for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and taking the resulting value as the fused pixel value.
Optionally, the fusion-coefficient setting subunit is specifically configured to, for each pixel of the fused image, obtain the gradient value at the corresponding pixel of the third image and use the output of a preset fusion function f(x), whose input parameter is that gradient value, as the fusion coefficient w of the pixel.
When the image fusion unit performs the weighted-fusion operation with fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fused image, use the pixel's w as the fusion weight of the corresponding pixel in the second image and 1-w as the fusion weight of the corresponding pixel in the third image, and take the value obtained by the weighted sum as the fused pixel value.
The preset fusion function f(x) used by the fusion-coefficient setting subunit satisfies the following conditions: when x is greater than an edge threshold, the value of f(x) is less than a first threshold; when x is less than the edge threshold, the value of f(x) is greater than the first threshold.
Optionally, the conditions satisfied by the preset fusion function f(x) used by the fusion-coefficient setting subunit further include: when x is greater than the edge threshold, the value of f(x) decreases toward a second threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fused image, use the pixel's 1-w as the fusion weight of the corresponding pixel in the second image and w as the fusion weight of the corresponding pixel in the third image, and take the value obtained by the weighted sum as the fused pixel value.
The preset fusion function f(x) used by the fusion-coefficient setting subunit satisfies the following conditions: when x is greater than the edge threshold, the value of f(x) is greater than the first threshold; when x is less than the edge threshold, the value of f(x) is less than the first threshold.
Optionally, the conditions satisfied by the preset fusion function f(x) used by the fusion-coefficient setting subunit further include: when x is greater than the edge threshold, the value of f(x) increases toward a third threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to perform the weighted-fusion operation on the second image and the third image using a GPU.
Optionally, the first image based on the original image and the second image each comprise the original image itself, and the third image based on the filtered image comprises the filtered image itself.
Optionally, the apparatus further includes a downsampling unit, configured to downsample the original image before the edge-preserving filtering of the first image based on the original image; the first image based on the original image then comprises the downsampled original image.
The apparatus further includes an upsampling unit, configured to upsample the filtered image by the downsampling factor after the filtered image is obtained and before the second image based on the original image is fused with the third image based on the filtered image; the second image based on the original image comprises the original image, and the third image based on the filtered image comprises the upsampled filtered image.
Optionally, the apparatus further includes a color-space conversion unit, configured to convert the original image from the RGB color space into a color space containing a luminance component before the edge-preserving filtering of the first image based on the original image.
The image filtering unit is specifically configured to apply edge-preserving filtering to the luminance component of the converted original image.
The image fusion unit is specifically configured to fuse, on the luminance component, the converted original image with the filtered image.
The apparatus further includes a color-space restoration unit, configured to convert the fused image back into the RGB color space after the fused image is obtained, and to take the fused image converted back to RGB as the image-processing result.
Optionally, the original image includes a face image.
Optionally, the first image based on the original image and the second image each comprise the original image, and the third image based on the filtered image comprises the filtered image.
The apparatus further includes a first-region determination unit, configured to determine a first region of the original image that contains face pixels before the edge-preserving filtering of the first image based on the original image.
The image filtering unit is specifically configured to apply edge-preserving filtering to the first region of the original image.
The image fusion unit is specifically configured to fuse the original image with the filtered image within the first region, and to keep the original image outside the first region.
Optionally, the apparatus further includes a second-region determination unit, configured to determine, within the first region, a second region containing pixels of preset facial features before the original image is fused with the filtered image within the first region.
The image fusion unit is specifically configured to fuse the original image with the filtered image in the part of the first region that excludes the second region, and to keep the original image within the second region and outside the first region.
Optionally, the apparatus further includes a first-region determination unit and a downsampling unit. The first-region determination unit is configured to determine a first region of the original image that contains face pixels before the edge-preserving filtering of the first image based on the original image; the downsampling unit is configured to downsample the original image before the edge-preserving filtering.
The image filtering unit is specifically configured to filter the corresponding first region of the downsampled original image.
The apparatus further includes an upsampling unit, configured to upsample the filtered image by the downsampling factor after the filtered image is obtained and before the fusion operation is performed.
The image fusion unit is specifically configured to fuse the original image with the upsampled filtered image within the first region, and to keep the original image outside the first region.
Optionally, the apparatus further includes a second-region determination unit, configured to determine, within the first region, a second region containing pixels of preset facial features before the original image is fused with the upsampled filtered image within the first region.
The image fusion unit is specifically configured to fuse the original image with the upsampled filtered image in the part of the first region that excludes the second region, and to keep the original image within the second region and outside the first region.
Optionally, the preset facial features include the eyes or the mouth.
Optionally, the apparatus is deployed on a mobile terminal device.
In addition, this application also provides an electronic device, including:
a processor;
a memory for storing code;
wherein the processor is coupled to the memory and is configured to read the code stored in the memory and perform the following operations: applying edge-preserving filtering to a first image based on an original image, to obtain a filtered image; and fusing a second image based on the original image with a third image based on the filtered image, to obtain a fused image.
Optionally, the original image includes a face image, and the operations performed by the processor further include: receiving a face-beautification request for the face image before the edge-preserving filtering of the first image based on the original image; and, after the fused image is obtained, responding to the face-beautification request with a processing result based on the fused image.
Optionally, fusing the second image based on the original image with the third image based on the filtered image includes:
setting, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0;
performing the following weighted-fusion operation on the second image and the third image:
for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and taking the resulting value as the fused pixel value.
Optionally, setting the fusion coefficient w for each pixel of the fused image in a predetermined manner includes performing the following for each pixel of the fused image:
obtaining the gradient value at the corresponding pixel of the third image;
using the output of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is that gradient value;
such that, when the weighted-fusion operation is performed with fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
Compared with the prior art, this application has the following advantages:
In the image processing method provided by this application, after edge-preserving filtering is applied to the first image based on the original image to obtain the filtered image, the second image based on the original image is fused with the third image based on the filtered image to obtain the fused image. Because the fusion with the second image is performed after the edge-preserving filtering, the method can, while removing noise and preserving edges, also retain the texture details of the original image to a certain extent. This avoids the image-distortion artifacts caused by over-smoothed flat regions, makes the fused image look more natural, and effectively improves image quality.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of an image processing method of this application;
Fig. 2 is a schematic diagram, provided by an embodiment of this application, of downsampling first and then applying edge-preserving filtering;
Fig. 3 is a graph of the fusion function f(x) provided by an embodiment of this application;
Fig. 4 is an effect-comparison diagram provided by an embodiment of this application;
Fig. 5 is a schematic diagram of an embodiment of an image processing apparatus of this application;
Fig. 6 is a flowchart of another embodiment of an image processing method of this application;
Fig. 7 is a schematic diagram of a first region containing face pixels, provided by an embodiment of this application;
Fig. 8 is a flowchart of a weighted-fusion process provided by an embodiment of this application;
Fig. 9 is a schematic diagram of a face-region mask provided by an embodiment of this application;
Fig. 10 is a schematic diagram of another embodiment of an image processing apparatus of this application;
Fig. 11 is a schematic diagram of an embodiment of an electronic device of this application.
Detailed description
Many specific details are set forth in the following description to provide a full understanding of this application. However, this application can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from the spirit of this application; therefore, this application is not limited to the specific implementations disclosed below.
This application provides an image processing method, an image processing apparatus, and an electronic device, each of which is described in detail in the embodiments below.
Please refer to Fig. 1, a flowchart of an embodiment of an image processing method of this application. The method includes the following steps:
Step 101: apply edge-preserving filtering to a first image based on an original image, to obtain a filtered image.
The filtered image is the image obtained after edge-preserving filtering is applied to the first image based on the original image. Filtering algorithms usable for the edge-preserving filtering include the bilateral filtering algorithm, the surface-blur algorithm, the guided filtering algorithm, and the like. In a specific implementation, any of these algorithms may be used in this step to filter the first image based on the original image and obtain the filtered image. The filtered image can largely retain the edge details of the original image while removing noise, achieving an edge-enhancing effect.
The first image based on the original image can be the original image itself; in this case, this step can apply any of the edge-preserving filtering algorithms enumerated above directly to the original image.
The execution efficiency of the various edge-preserving filtering algorithms depends heavily on the image size and the radius of the filter window. When the radius is large, the image is smoother and the boundaries are clearer, but more neighborhood pixels have to be processed for each pixel, which makes the processing time-consuming and inefficient, especially for high-resolution images.
To address this problem, the present embodiment provides a preferred implementation of down-sampling first and filtering afterwards, namely: before performing this step, the original image may be down-sampled, and the first image based on the original image may be the down-sampled original image.
The down-sampling process is what is commonly called reducing the image resolution. For example, for an original image of size N × M, if the down-sampling coefficient is k, one pixel may be taken every k pixels in each row and each column of the original image to form a new image; this image is the down-sampled original image.
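The sampling scheme above (keep one pixel every k pixels in each row and column) can be sketched in a few lines of Python; the function name is illustrative, not from the patent:

```python
def downsample(image, k):
    """Down-sample a 2D image (list of rows) by keeping one pixel
    every k pixels in each row and column, as described above."""
    return [row[::k] for row in image[::k]]

# A 4x4 image down-sampled with coefficient k=2 becomes 2x2.
img = [[ 0,  1,  2,  3],
       [10, 11, 12, 13],
       [20, 21, 22, 23],
       [30, 31, 32, 33]]
small = downsample(img, 2)
print(small)  # [[0, 2], [20, 22]]
```

In practice a library resampler with low-pass prefiltering would be preferred; this sketch only shows the decimation pattern the text describes.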
After the original image is down-sampled, this step is performed to apply edge-preserving filtering to the down-sampled original image. Referring to Fig. 2, which is a schematic diagram of down-sampling first and then edge-preserving filtering provided in this embodiment: (a) is the original image, (b) is a schematic diagram of applying bilateral filtering to the down-sampled original image, and (c) is the filtered image.
Through down-sampling, the size of the original image and the radius of the filter window can be reduced at the same time, so the execution efficiency of the edge-preserving filtering can be improved. In application scenarios that require real-time edge-preserving filtering (such as live preview), or when the edge-preserving filtering is performed on a device with limited computing capability such as a mobile terminal, this preferred implementation can reduce the time consumed by the edge-preserving filtering and effectively improve filtering efficiency.
Of course, the above is a preferred implementation for improving filtering efficiency. In specific implementation, for an original image whose resolution is not high, or in application scenarios with no requirement on processing efficiency, the above down-sampling process may be omitted and the edge-preserving filtering may be applied to the original image directly.
Step 102: fuse a second image based on the original image with a third image based on the filtered image to obtain a fused image.
The fused image in this step refers to the image obtained after fusing the second image based on the original image with the third image based on the filtered image.
While performing edge-preserving denoising, the filtered image obtained in step 101 usually loses the texture details of the original image, causing image distortion. As shown in the circled area in (c) of Fig. 2, the loss of the original texture details makes the processed image lack "texture" and look unreal.
To solve this problem, this step fuses the second image based on the original image with the third image based on the filtered image, i.e., extracts information from the second image and the third image of the same size to generate another image of the same size, namely: the fused image. Since information is extracted from the second image based on the original image during the fusion process, the texture details of the original image can be retained to a certain extent, so that the fused image is more realistic, distortion is avoided, and image quality is improved.
To fuse the second image based on the original image with the third image based on the filtered image, different image fusion methods may be used, such as: the weighted fusion method, the IHS fusion method, the KL-transform fusion method, or the wavelet-transform fusion method. Preferably, considering that the weighted fusion method is easy to implement, fast in computation, and convenient to execute on a GPU, the preferred implementation of weighted fusion is adopted in this embodiment.
In the second image based on the original image, the third image based on the filtered image, and the fused image, the pixels and regions correspond to one another, namely: there is a correspondence between the pixels at the same coordinate position in these three images. Based on this correspondence, a region determined in one of the images, for example an edge region or a region containing face pixels, also has a corresponding region in the other two images. This correspondence among the three images will not be repeated below.
Based on the above correspondence, during weighted fusion, the pixel values of each pair of corresponding pixels in the second image based on the original image and the third image based on the filtered image may be weighted and summed, and the obtained value may be used as the pixel value of the corresponding pixel in the fused image to be generated, thereby obtaining the fused image.
In specific implementation, if no extra processing such as down-sampling is performed on the original image before this step, the second image based on the original image can be the original image, and the third image based on the filtered image can be the filtered image. In this step, weighted fusion is performed on the original image and the filtered image to obtain the fused image.
To perform weighted fusion on the original image and the filtered image, a fusion coefficient w corresponding to each pixel of the fused image may first be set in a predetermined manner, where w satisfies: 0 ≤ w ≤ 1.0 (in specific implementation, the fusion coefficients w are neither all 0 nor all 1). Then the following weighted fusion operation is performed on the original image and the filtered image: for each pixel, using the fusion coefficient w corresponding to the pixel and the difference 1-w as the fusion weights, the corresponding pixel values of the original image and the filtered image are weighted and summed, and the obtained value is used as the fused pixel value.
That is, the pixel value of any pixel in the fused image can be calculated by one of the following formulas:
result = w × org_value + (1-w) × bilateral_value;  ----- Formula 1
result = (1-w) × org_value + w × bilateral_value;  ----- Formula 2
where w is the fusion coefficient of the corresponding pixel, org_value is the pixel value of the corresponding pixel in the original image, bilateral_value is the pixel value of the corresponding pixel in the filtered image, and result is the fused pixel value.
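Formula 1 can be expressed as a minimal per-pixel sketch (pure Python over nested lists; the function and variable names are illustrative, not from the patent):

```python
def weighted_fuse(org, filt, w):
    """Per-pixel weighted fusion (Formula 1): result = w*org + (1-w)*filt.
    `org` and `filt` are 2D images of equal size; `w` is a same-sized
    map of fusion coefficients in [0, 1]."""
    return [
        [w[i][j] * org[i][j] + (1 - w[i][j]) * filt[i][j]
         for j in range(len(org[0]))]
        for i in range(len(org))
    ]

org  = [[100, 200], [50, 80]]
filt = [[110, 190], [60, 80]]
w    = [[0.5, 0.5], [0.5, 0.5]]   # equal weights, as in the simple case below
print(weighted_fuse(org, filt, w))  # [[105.0, 195.0], [55.0, 80.0]]
```

Formula 2 is the same computation with the roles of w and 1-w swapped.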
As an easy-to-use implementation, all the fusion coefficients may be set to the same value, and the specific value can be determined according to actual demand. For example, they may all be set to 0.5, i.e., the weights of the original image and the filtered image in the fusion process are both 0.5. To better keep texture details, w can be set to a value greater than 0.5 according to Formula 1 or, if Formula 2 is used, to a value less than 0.5; to better keep edge details, the opposite setting may be used.
Preferably, in order to obtain a better fusion result that takes both edge details and texture details into account, the present embodiment provides a preferred implementation of determining the fusion coefficients with a fusion function based on the pixel gradient.
Specifically, the fusion coefficient corresponding to each pixel of the fused image may be determined in the following manner. Since the filtered image has largely eliminated the noise in the image, this step can perform the following operation for each pixel of the fused image: first obtain the gradient value of the corresponding pixel in the filtered image, e.g., for the pixel at coordinate position (i, j), obtain the gradient value of the pixel at the same position (i, j) in the filtered image; then use the output value of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is the gradient value.
The gradient value may be calculated by taking derivatives, or by using various gradient operators. For an edge in the image, the pixels change slowly along the edge direction and change sharply perpendicular to the edge direction. By solving the gradient value of a pixel, it can be judged whether the pixel is in an edge region: if the gradient value is greater than the edge threshold, the pixel can generally be considered to be in an edge region; otherwise it is in a flat region. The edge threshold is the threshold used to judge whether a pixel in the image is in an edge region.
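The gradient test above can be sketched with simple forward differences, one of the many possible operators mentioned (the patent does not fix a particular operator; this choice and the function names are illustrative):

```python
def gradient_magnitude(image, i, j):
    """Gradient magnitude at (i, j) via forward differences:
    gx along the row, gy down the column."""
    gx = image[i][j + 1] - image[i][j]
    gy = image[i + 1][j] - image[i][j]
    return (gx * gx + gy * gy) ** 0.5

def is_edge_pixel(image, i, j, edge_threshold):
    """Edge-region test described in the text: gradient > threshold."""
    return gradient_magnitude(image, i, j) > edge_threshold

# A sharp vertical step reads as an edge; a flat patch does not.
step = [[10, 10, 90],
        [10, 10, 90],
        [10, 10, 90]]
print(is_edge_pixel(step, 0, 1, 6))  # True  (|gx| = 80)
print(is_edge_pixel(step, 0, 0, 6))  # False (gx = 0, gy = 0)
```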
The preset fusion function f(x) takes the pixel gradient as its independent variable and outputs the corresponding fusion coefficient according to the input gradient value. The weighted fusion process performed with the preset fusion function has the following property: the fusion weight of the filtered image in edge regions is greater than its fusion weight in non-edge regions (i.e., flat regions), and the fusion weight of the original image in non-edge regions is greater than its fusion weight in edge regions. Namely: for the filtered image, the fusion weight of each pixel in an edge region is greater than the fusion weight of each pixel in a flat region; for the original image, the fusion weight of each pixel in a flat region is greater than the fusion weight of each pixel in an edge region.
When performing weighted fusion on the original image and the filtered image, the form of the preset fusion function differs for the following two ways of setting the fusion weights: 1) for each pixel in the fused image, use the w corresponding to the pixel as the fusion weight of the original image and the corresponding 1-w as the fusion weight of the filtered image; 2) for each pixel in the fused image, use the corresponding 1-w as the fusion weight of the original image and the corresponding w as the fusion weight of the filtered image. The two cases are described separately below.
1) Using w as the fusion weight of the original image and 1-w as the fusion weight of the filtered image
In this way, the pixel value of any pixel in the fused image can be calculated by the following formula:
result = f(x) × org_value + [1-f(x)] × bilateral_value;  ----- Formula 3
where x is the gradient value of the corresponding pixel in the filtered image, and the fusion function f(x) satisfies the following conditions: when x is greater than the edge threshold, the value of f(x) is less than a first threshold; when x is less than the edge threshold, the value of f(x) is greater than the first threshold.
It can be seen that the fusion function f(x) can output different fusion coefficients according to different pixel gradients x. In edge regions, the fusion weight of the filtered image is raised, which helps retain edge information; in flat regions, the fusion weight of the original image is raised, and texture details can be retained to a certain extent.
In specific implementation, the edge threshold may be preset empirically, or may be set and adjusted according to the distribution of the calculated gradient values; the coefficients in the expression of f(x) can also be adjusted through a preset interface so that the weighted fusion process based on f(x) satisfies the above properties.
Further preferably, in order to retain edge details better, f(x) can, on the basis of satisfying the above conditions, also satisfy the following condition: when x is greater than the edge threshold, the value of f(x) decreases to a second threshold as x increases. That is: as the edge feature gradually strengthens, the weight of the filtered image gradually increases, thereby further improving the effect of retaining edge details.
As a relatively simple implementation, the fusion function can be a linear function. A concrete example is given below; referring to Fig. 3, the fusion function f(x) has a piecewise-linear form in which the preset edge threshold is 6, the first threshold is 0.5, and the second threshold is 0. When x is greater than 6, the value of f(x) is less than 0.5; when x is less than 6, the value of f(x) is greater than 0.5.
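The exact expression of f(x) appears in Fig. 3 and is not reproduced in the text; the sketch below is one hypothetical piecewise-linear function consistent with the stated constraints for case 1) (edge threshold 6, f(6) = 0.5, decreasing toward the second threshold 0 as x grows):

```python
def fusion_coefficient(x, edge_threshold=6.0):
    """Hypothetical piecewise-linear fusion function for case 1):
    f(x) > 0.5 below the edge threshold, f(x) < 0.5 above it,
    decreasing linearly and clamped at the second threshold 0."""
    f = 0.5 * (2.0 - x / edge_threshold)
    return max(0.0, min(1.0, f))  # clamp to [0, 1]

print(fusion_coefficient(0))   # 1.0  -> flat region: original image dominates
print(fusion_coefficient(6))   # 0.5  -> exactly at the edge threshold
print(fusion_coefficient(12))  # 0.0  -> strong edge: filtered image dominates
```

With Formula 3, this gives the original image full weight in perfectly flat areas and hands the weight over to the filtered image as the gradient strengthens, matching the behavior described above.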
With the above fusion function f(x), in edge regions the fusion weight of a filtered-image pixel is greater than that of the original-image pixel, and as x increases, the weight of the filtered image grows relative to that of the original image, so that the edge details of the filtered image are better retained; in flat regions, the weight of the original image is greater than that of the filtered image, so that the texture details of the original image are retained to a certain extent.
2) Using 1-w as the fusion weight of the original image and w as the fusion weight of the filtered image
In this way, the pixel value of any pixel in the fused image can be calculated by the following formula:
result = [1-f(x)] × org_value + f(x) × bilateral_value;  ----- Formula 4
where x is the gradient value of the corresponding pixel in the filtered image, and the fusion function f(x) satisfies the following conditions: when x is greater than the edge threshold, the value of f(x) is greater than the first threshold; when x is less than the edge threshold, the value of f(x) is less than the first threshold. Thus, in edge regions, the fusion weight of the filtered image is raised, which helps retain edge information; in flat regions, the fusion weight of the original image is raised, and texture details can be retained to a certain extent.
Further preferably, in order to retain edge details better, f(x) can, on the basis of satisfying the above conditions, also satisfy the following condition: when x is greater than the edge threshold, the value of f(x) increases to a third threshold as x increases. That is: as the edge feature gradually strengthens, the weight of the filtered image gradually increases, thereby further improving the effect of retaining edge details.
A specific example is given below; the fusion function f(x) can have a piecewise-linear form in which the preset edge threshold is 6, the first threshold is 0.5, and the third threshold is 1. When x is greater than 6, the value of f(x) is greater than 0.5; when x is less than 6, the value of f(x) is less than 0.5.
It should be noted that in the examples given above for the two fusion-weight setting modes 1) and 2), the form of the fusion function is fairly simple and easy to implement. In practical applications, more complex fusion functions of other forms can be designed, for example linear functions or curve functions with different parameters; any form that satisfies the characteristics described above is feasible.
Using the preferred implementation based on the fusion function, this step can first obtain the gradient value of each pixel in the filtered image, then calculate the value of the preset fusion function f(x) with each gradient value as input, use it as the fusion coefficient w of the corresponding pixel of the fused image, and calculate the fused pixel value of each pixel according to the corresponding Formula 3 or Formula 4 above, thereby obtaining the fused image.
Preferably, in order to improve the execution efficiency of the fusion processing, the present embodiment provides a preferred implementation of performing the weighted fusion on a GPU. Compared with a CPU, a GPU, by virtue of its hardware architecture, has efficient parallel computing performance and a clear acceleration advantage for highly repetitive and locally correlated image processing computations. The weighted fusion process in this embodiment performs the same weighting logic for each pixel, and the processing order is irrelevant, so it can be executed on a GPU, thereby greatly improving image fusion efficiency.
In specific implementation, after the fusion coefficient w corresponding to each pixel of the fused image is set, the original image, the filtered image, and the fusion coefficients can be written as 2D textures into a shader script running on the GPU, and the shader script is then run to trigger the fusion operation on the original image and the filtered image on the GPU.
It should be noted that, in specific implementation, if the preferred implementation of down-sampling first and filtering afterwards is employed before this step 102, then in this step the second image based on the original image can be the original image, and the third image based on the filtered image can be the up-sampled filtered image. In this step, weighted fusion is performed on the original image and the up-sampled filtered image to obtain the fused image.
Specifically, before setting the fusion coefficients, this step can first up-sample the filtered image obtained in step 101 according to the down-sampling coefficient (obtaining a filtered image with the same resolution as the original image), for example using the bilinear interpolation algorithm; then calculate the gradient value x of each pixel of the up-sampled filtered image, use f(x) as the fusion coefficient of the corresponding pixel, and finally perform weighted fusion on the original image and the up-sampled filtered image according to the fusion coefficients. In specific implementation, the image fusion process can also be completed with a GPU. For example, the original image, the up-sampled filtered image, and the fusion coefficients can be written as 2D textures into a shader script, and the weighted fusion process is completed on the GPU by running the shader script; alternatively, the original image, the filtered image, and the fusion coefficients can be written as 2D textures into a shader script, and both the up-sampling and the weighted fusion are completed on the GPU by running the shader script.
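The bilinear up-sampling mentioned above can be illustrated with a simplified CPU sketch (in practice this would run on the GPU or use an image library; the function name and the edge handling are illustrative assumptions):

```python
def bilinear_upsample(image, k):
    """Up-sample a 2D image by integer factor k with bilinear
    interpolation, restoring the filtered image to the original
    resolution as described above."""
    h, w = len(image), len(image[0])
    out = []
    for I in range(h * k):
        y = min(I / k, h - 1)              # source row coordinate
        y0 = int(y); y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for J in range(w * k):
            x = min(J / k, w - 1)          # source column coordinate
            x0 = int(x); x1 = min(x0 + 1, w - 1)
            fx = x - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

big = bilinear_upsample([[0, 10], [20, 30]], 2)
print(big[0])  # [0.0, 5.0, 10.0, 10.0]
```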
The above implementation of down-sampling the original image first, filtering it, and then fusing it with the original image combines the efficiency of low-resolution filtering with the advantage that the high-resolution original retains texture details. In particular, the fusion function f(x) based on the pixel gradient can determine the fusion weights as fusion coefficients according to the degree of pixel-value variation, taking into account both the need to keep edges and the need to retain texture details, thus improving the quality of the fused image.
So far, the image processing method provided in this embodiment has been described in detail through the above steps 101-102. In specific implementation, before step 101 is performed, an image processing request for the original image can be received, and after step 102 is completed, the obtained fused image can be used as the processing result to respond to the image processing request. For example, the original image can be a face image, the image processing request can be a face beautification request, and the finally obtained fused image can be used as the processing result to respond to the beautification request.
In specific implementation, changes can also be made on the basis of the above embodiment, for example: only the luminance component may be processed, thereby further improving the speed of filtering and fusion. With this implementation, the first image based on the original image can be the original image after the original image based on the RGB color space is converted into a color space containing a luminance component, and performing edge-preserving filtering on the first image based on the original image includes: filtering the luminance component of the original image after the conversion operation. The second image based on the original image can be the original image after the above conversion operation, and the third image based on the filtered image can be the filtered image; fusing the second image based on the original image with the third image based on the filtered image includes: fusing, for the luminance component, the converted original image and the filtered image. After the fused image is obtained, the fused image is converted back to the RGB color space, and the fused image converted back to the RGB color space is used as the image processing result.
Specifically, for an original image based on the RGB color space, filtering or fusion processing usually needs to be performed on the three components R, G, and B. In order to improve processing efficiency, the original image based on the RGB color space can be converted, before step 101 is performed, into a color space containing a luminance component, such as the YCrCb color space or the Lab color space. After the original image is converted into the YCrCb color space, each pixel contains a luminance component Y and two chrominance components; after conversion into the Lab color space, each pixel contains a luminance component L and two color-difference components.
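The luminance component can be obtained with a standard RGB-to-YCrCb conversion; below is a sketch of the Y (luma) part using the common BT.601 coefficients (the patent does not fix a particular conversion formula, so these coefficients are an assumption):

```python
def luminance(r, g, b):
    """BT.601 luma: Y = 0.299*R + 0.587*G + 0.114*B.
    Only this component would be filtered and fused; the two
    chrominance components are carried through unchanged."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luminance(255, 255, 255), 6))  # 255.0 (white)
print(luminance(0, 0, 0))                  # 0.0   (black)
```

Processing one component instead of three roughly cuts the per-pixel arithmetic of the filtering and fusion steps to a third, which is the efficiency gain the text describes.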
After the color space conversion of the original image, subsequent operations such as filtering and fusion can be performed only on the luminance component, thereby reducing the amount of data to process. For filtering, the luminance component of the converted original image can be filtered; for fusion, the luminance component of the converted original image and the luminance component of the filtered image obtained by the above filtering can be fused, while the other components of the converted original image are retained, thereby obtaining the fused image. Finally, the fused image is converted back from the corresponding color space to the RGB color space to obtain the final image processing result. The conversions between the above color spaces can be realized with existing conversion formulas, which will not be repeated here.
In conclusion image processing method provided in this embodiment, due to after side filtering is protected, having carried out image co-registration
Processing, therefore the grain details of original image while eliminating noise, retaining edge, can be retained to a certain extent, from
It and can be to avoid due to the excessively smooth caused image fault phenomenon of flat site so that image is more true, effectively improves figure
Image quality amount.
For the specific implementation effect, refer to Fig. 4, which is a comparison diagram provided in this embodiment, where (a) is the result of the prior art that only applies edge-preserving filtering to the original image, and (b) is the result of this embodiment, i.e., the result of fusing the original image with the filtered image on the basis of the edge-preserving filtering. It is easy to see that the flat regions of the image in (a) are overly smooth and unreal, while the image in (b) is more realistic because the skin texture is retained.
It should be understood by those skilled in the art that although the implementation effect of this embodiment is explained above with a face image as an example, the method provided in this embodiment can be used to process other images as well, and can likewise retain the texture details in the image while performing edge-preserving filtering, thereby improving image quality.
The above embodiments provide an image processing method; correspondingly, the present application also provides an image processing apparatus. Referring to Fig. 5, which is a schematic diagram of an embodiment of an image processing apparatus of the present application. Since the apparatus embodiment is substantially similar to the method embodiment, the description is relatively simple; for relevant parts, refer to the corresponding description of the method embodiment. The apparatus embodiment described below is merely illustrative.
An image processing apparatus of this embodiment includes: an image filtering unit 501, for performing edge-preserving filtering on a first image based on the original image to obtain a filtered image; and an image fusion unit 502, for fusing a second image based on the original image with a third image based on the filtered image to obtain a fused image.
Optionally, the original image includes: a face image; the apparatus further includes:
a processing request receiving unit, for receiving a beautification request for the face image before the edge-preserving filtering is performed on the first image based on the original image;
a request response unit, for responding to the beautification request with a processing result based on the fused image after the fused image is obtained.
Optionally, the image fusion unit includes:
a fusion coefficient setting subunit, for setting, in a predetermined manner, the fusion coefficient w corresponding to each pixel of the fused image respectively, where w satisfies: 0 ≤ w ≤ 1.0;
a fusion execution subunit, for performing the following weighted fusion operation on the second image and the third image: for each pixel, using the w and 1-w corresponding to the pixel as fusion weights, weighting and summing the corresponding pixel values of the second image and the third image, and using the obtained value as the fused pixel value.
Optionally, the fusion coefficient setting subunit is specifically configured to, for each pixel of the fused image, obtain the gradient value of the corresponding pixel in the third image, and use the output value of the preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is the gradient value;
when the image fusion unit performs the weighted fusion operation using the fusion coefficients calculated with the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fused image, use the w corresponding to the pixel as the fusion weight of the pixel in the second image and the corresponding 1-w as the fusion weight of the pixel in the third image, and use the value obtained by weighted summation as the fused pixel value;
the preset fusion function f(x) used by the fusion coefficient setting subunit satisfies the following conditions: when x is greater than the edge threshold, the value of f(x) is less than the first threshold; when x is less than the edge threshold, the value of f(x) is greater than the first threshold.
Optionally, the conditions satisfied by the preset fusion function f(x) used by the fusion coefficient setting subunit further include: when x is greater than the edge threshold, the value of f(x) decreases to the second threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fused image, use the 1-w corresponding to the pixel as the fusion weight of the pixel in the second image and the corresponding w as the fusion weight of the pixel in the third image, and use the value obtained by weighted summation as the fused pixel value;
the preset fusion function f(x) used by the fusion coefficient setting subunit satisfies the following conditions: when x is greater than the edge threshold, the value of f(x) is greater than the first threshold; when x is less than the edge threshold, the value of f(x) is less than the first threshold.
Optionally, the conditions satisfied by the preset fusion function f(x) used by the fusion coefficient setting subunit further include: when x is greater than the edge threshold, the value of f(x) increases to the third threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to perform the weighted fusion operation on the second image and the third image using a GPU.
Optionally, the first image based on the original image and the second image each include: the original image; the third image based on the filtered image includes: the filtered image.
Optionally, the apparatus further includes: a down-sampling unit, for down-sampling the original image before the edge-preserving filtering is performed on the first image based on the original image; the first image based on the original image includes: the down-sampled original image;
the apparatus further includes: an up-sampling unit, for up-sampling the filtered image according to the down-sampling coefficient after the filtered image is obtained and before the second image based on the original image is fused with the third image based on the filtered image; the second image based on the original image includes: the original image; the third image based on the filtered image includes: the up-sampled filtered image.
Optionally, the apparatus further includes: a color space conversion unit, for converting the original image based on the RGB color space into a color space containing a luminance component before the edge-preserving filtering is performed on the first image based on the original image;
the image filtering unit is specifically configured to perform edge-preserving filtering on the luminance component of the original image after the conversion operation;
the image fusion unit is specifically configured to fuse, for the luminance component, the original image after the conversion operation and the filtered image;
the apparatus further includes: a color space restoration unit, for converting the fused image back to the RGB color space after the fused image is obtained, and using the fused image converted back to the RGB color space as the image processing result.
In addition, if the image processing method provided by the present application is applied to the processing of face images, since it combines the advantages of edge-preserving filtering and keeping texture details, it can retain the texture of the skin while removing facial skin blemishes and achieving the skin-smoothing effect of beautification, so that the face image is more realistic and natural.
When the image processing method provided by the present application is applied to face images, considering that face images have their own characteristics, for example: the face region can be clearly delineated, and some organs in the face have more distinctive skin textures, corresponding optimizations can also be made when the method is implemented. Specific optimized implementations are described in another embodiment below.
Referring to Fig. 6, which is a flowchart of another embodiment of an image processing method of the present application, the method includes the following steps:
Step 601: determine a first region containing face pixels in the original image.
In this embodiment, the original image is a face image. A face image usually contains not only a face but also body parts such as the neck and shoulders, the background, and so on. Since the main purpose of the filtering is to improve the blemishes of the face and achieve the skin-smoothing effect of beautification, the filtering can be performed only on the region containing face pixels, so that the filtering efficiency can be improved while the image processing effect is ensured. To achieve this purpose, this step determines the first region containing face pixels in the original image.
In a specific implementation, any one of face detection, facial feature localization, or skin-color model techniques may be used to identify the first region containing face pixels in the original image; the three techniques may also be combined to make the recognition result more accurate. The first region may be a rectangular region containing the face pixels, or a region following the contour of the face. As shown in Fig. 7, the region enclosed by the black border is the first region described in this embodiment.
Step 602: apply edge-preserving filtering to the first region in the original image to obtain a filtered image.
In this step, when the original image is filtered with an edge-preserving filtering algorithm, it can be specified that only the first region is filtered. The part of the resulting filtered image corresponding to the first region therefore contains the filter output and exhibits the edge-preserving, noise-reducing effect, while the other regions outside the first region are not filtered and retain the pixel values of the original image. Since this step does not filter the entire original image, filtering efficiency can be improved.
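The region-restricted filtering of step 602 can be sketched as follows. A one-dimensional 3-tap average stands in for a real edge-preserving filter (bilateral, surface blur, or guided filtering per the claims), and the region bounds are assumed values.

```python
# Sketch of step 602: filter only inside the first region, copying the
# original pixels everywhere else.  The 3-tap average is a stand-in for
# a real edge-preserving filter; lo/hi are assumed region bounds.

def filter_region(pixels, lo, hi):
    out = list(pixels)                      # outside the region: keep originals
    for i in range(lo, hi):
        window = pixels[max(0, i - 1):i + 2]
        out[i] = sum(window) / len(window)  # stand-in smoothing inside region
    return out

row = [10, 10, 40, 10, 10, 90]
print(filter_region(row, 1, 4))  # only indices 1..3 are smoothed
```

Pixels outside `[lo, hi)` are untouched, which mirrors how regions outside the first region keep the original image's pixel values.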
Step 603: determine, within the first region, a second region containing pixels of preset facial organs.
To better preserve the skin texture of the facial organs, this step may determine, within the first region, a second region containing pixels of preset facial organs, where the preset facial organs include the eyes or the mouth. There may be one or more second regions, determined according to specific requirements in a specific implementation. For example, this step may determine a single second region corresponding to the mouth, or three second regions corresponding to the left eye, the right eye, and the mouth, respectively.
In a specific implementation, since step 601 has determined the first region from the original image and step 602 has produced the filtered image, this step may use either the original image or the filtered image and apply either facial feature localization or a skin-color model to further identify, within the first region, the second region containing facial-organ pixels; the two techniques may also be combined to make the recognition result more accurate.
Step 604: fuse the original image with the filtered image in the part of the first region not containing the second region, and retain the original image in the second region and outside the first region, to obtain the fused image.
This step performs a fusion operation based on the region division. Taking weighted fusion as an example, the implementation of this step is as follows: for pixels of the fused image in the second region and outside the first region, the pixel values of the original image are used; for pixels in the part of the first region not containing the second region (i.e., the first region with the second region removed), the values obtained by weighted fusion of the original image and the filtered image are used as the pixel values, thereby obtaining the fused image.
In a specific implementation, a unified weighting process can achieve different treatment of the different regions simply through the setting and adjustment of the fusion coefficients, which simplifies the implementation of the technical solution. This can specifically include steps 604-1 to 604-4, described below with reference to Fig. 8.
Step 604-1: set, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0. The fusion coefficient w of each pixel may be set to a fixed value, or to the output of a fusion function f(x) whose argument is the gradient of the corresponding pixel in the filtered image. The ways of setting the fusion coefficients and the conditions satisfied by the fusion function f(x) provided in the preceding method embodiment also apply to this step; for details, refer to the corresponding description in that embodiment, which is not repeated here.
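As one hedged illustration of such a fusion function, for the orientation in which w weights the original image: values of x above the edge threshold should map below the first threshold (decreasing toward a floor as x grows), and values below the edge threshold should map above it. The specific threshold values and the linear-ramp shape below are assumptions, not taken from the patent.

```python
# A sketch of one fusion function f(x) meeting the stated conditions:
# high original-image weight in flat (low-gradient) areas, low weight
# at edges, decreasing toward a floor as the gradient grows.

EDGE_T = 10.0    # assumed edge threshold on the gradient magnitude
FIRST_T = 0.5    # assumed first threshold on f(x)
SECOND_T = 0.1   # assumed floor (second threshold)

def fusion_coefficient(gradient):
    if gradient < EDGE_T:
        return 0.9                                  # flat region: above FIRST_T
    # edge region: below FIRST_T, ramping down toward SECOND_T
    return max(SECOND_T, 0.45 - 0.02 * (gradient - EDGE_T))
```

With this shaping, edges (strong gradients in the filtered image) draw more heavily on the filtered image's neighbor, while smooth skin areas keep a larger share of the original image, matching the edge/non-edge weight conditions recalled later in the claims.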
Step 604-2: for pixels not belonging to the first region, set the fusion coefficient to the value that makes the fusion weight of the original image equal to 1.
Specifically, if weighted fusion is performed according to formula 1 or formula 3 of the preceding embodiment, the fusion coefficient of pixels not belonging to the first region can be set to 1; if weighted fusion is performed according to formula 2 or formula 4 of the preceding embodiment, the fusion coefficient of pixels not belonging to the first region can be set to 0.
Step 604-3: for pixels belonging to the second region, set the fusion coefficient w to the value that makes the fusion weight of the original image equal to 1.
The specific setting is similar to step 604-2 and is not repeated here.
Step 604-4: for each pixel of the fused image, use the pixel's w and 1-w as the fusion weights, compute the weighted sum of the corresponding pixel values of the original image and the filtered image, and use the resulting value as the fused pixel value, thereby obtaining the fused image.
Through the region determination and fusion-coefficient settings of the above steps, in the second region containing the preset facial-organ pixels and in the regions not belonging to the first region, the fusion coefficient of every pixel is set to the value that makes the fusion weight of the original image equal to 1 (i.e., the fusion weight of the filtered image is 0); in the part of the first region not containing the second region, the fusion coefficient of each pixel can be set in the predetermined manner of step 604-1.
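Steps 604-1 through 604-4 can be sketched as a single weighted pass in which the region masks only influence the fusion coefficients. Images are flat grayscale lists here, and the fixed coefficient 0.4 inside the fusible region is an assumed value (step 604-1 also allows a gradient-driven coefficient).

```python
# One unified weighting pass over the whole image: region membership
# sets w (604-2/604-3), and the weighted sum of 604-4 is applied to
# every pixel with that w.

def fuse(original, filtered, in_first, in_second):
    fused = []
    for o, f, fst, snd in zip(original, filtered, in_first, in_second):
        if not fst or snd:
            w = 1.0          # original image keeps full weight (604-2 / 604-3)
        else:
            w = 0.4          # assumed predetermined coefficient (604-1)
        fused.append(w * o + (1.0 - w) * f)   # weighted sum of 604-4
    return fused

orig = [100, 100, 100, 100]
filt = [80, 80, 80, 80]
first = [False, True, True, True]     # pixel 0 lies outside the first region
second = [False, False, True, False]  # pixel 2 lies in the second region (e.g. an eye)
print(fuse(orig, filt, first, second))  # [100.0, 88.0, 100.0, 88.0]
```

Note that the same formula runs for every pixel; only the coefficient differs by region, which is what makes a single uniform (e.g. GPU-friendly) processing path possible.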
If the second region and the regions not belonging to the first region are shown in black and the remaining regions in white, an effect similar to a face mask is obtained (referred to as the face region mask for short). Fig. 9 is a schematic diagram of the face region mask provided by this embodiment, in which the first region follows the contour of the face. Since the fusion weight of the original image is 1 everywhere in the black regions, the fusion process performs weighted fusion of the original image and the filtered image, according to the fusion weights, only in the white region, while the black regions retain the pixel values of the original image. This not only reduces the amount of fusion computation and improves fusion efficiency, but also has the following advantages: because the weighted fusion is performed in the white region, the texture detail of the facial skin can be preserved to a certain extent; and because the second region, in black, retains the pixel values of the original image, the skin texture of the preset facial organs (e.g., the eyes and the mouth) is preserved well, so that the image processing result is more realistic and natural.
So far, the implementation of the image processing method provided by this embodiment has been described in detail through the above steps 601-604.
It should be noted that the downsampling-based preferred implementation provided in the preceding method embodiment can also be combined with this embodiment. Specifically, after step 601 determines the first region containing face pixels in the original image, the original image can be downsampled; step 602 can then filter the corresponding first region in the downsampled image to obtain the filtered image; and before step 604 is performed, the filtered image can first be upsampled according to the downsampling factor, after which step 604 performs weighted fusion of the original image and the upsampled filtered image in the first region and retains the original image in the second region and outside the first region, to obtain the fused image.
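The downsample-filter-upsample pipeline above can be sketched on a one-dimensional signal. The factor-2 nearest-neighbor resampling and the 3-tap box filter (standing in for a real edge-preserving filter) are illustrative assumptions.

```python
# Sketch of the downsampling variant: downsample, filter at reduced
# resolution, then upsample by the same factor so the filtered image
# matches the original's size before fusion.

def downsample(sig, factor=2):
    return sig[::factor]

def box_filter(sig):
    out = []
    for i in range(len(sig)):
        window = sig[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def upsample(sig, factor=2):
    return [v for v in sig for _ in range(factor)]

signal = [10, 10, 10, 10, 50, 50, 50, 50]
small = downsample(signal)            # half the pixels to filter
filtered = box_filter(small)          # filtering cost paid at low resolution
restored = upsample(filtered)         # back to the original length
assert len(restored) == len(signal)   # sizes match, so fusion can proceed
```

The efficiency gain comes from running the (comparatively expensive) filter on the smaller image; only the cheap resampling runs at full resolution.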
For the same reason, the GPU-based image fusion preferred implementation provided in the preceding embodiment can likewise be combined with this embodiment. Specifically, after the fusion coefficients have been set, step 604 is performed using the GPU: the original image, the filtered image, and the fusion coefficients can be written as 2D textures into a shader script running on the GPU, and the shader script is then run to perform the weighted fusion operation on the GPU.
In conclusion the preferred embodiment for facial image is present embodiments provided, due to not comprising the secondth area
Weighted Fusion operation is performed in the first area in domain, and retains original image, therefore fusion calculation can be reduced in second area
Amount promotes fusion efficiencies;The grain details of facial skin are kept to a certain extent;And default face can be retained well
Organ (such as:Eyes, face etc.) dermatoglyph so that processing result image is more true, natural.
It should be noted that the above embodiment is a preferred implementation for facial images; in a specific implementation, it can be modified as needed on the basis of this preferred implementation.
For example, in application scenarios with low requirements on the texture features of organs such as the eyes and the mouth, step 603 (determining the second region) may be omitted; correspondingly, step 604 can perform weighted fusion of the original image and the filtered image in the entire first region and retain the original image outside the first region (i.e., step 604-3 provided by this embodiment need not be performed in a specific implementation). In that case, while achieving edge-preserving filtering and retaining some facial texture detail, execution efficiency can be improved because filtering and weighted fusion are performed only in the first region.
As another example, in application scenarios with low requirements on execution efficiency, step 601 may be omitted, i.e., the first region containing face pixels is not determined, and step 602 filters the whole original image with the edge-preserving filtering algorithm. Step 603 can then determine the second region containing the preset facial-organ pixels in the original image, and step 604 performs weighted fusion of the original image and the filtered image outside the second region and retains the original image in the second region (i.e., step 604-2 provided by this embodiment need not be performed in a specific implementation). Then, while achieving edge-preserving filtering and retaining some facial texture detail, the texture detail of the facial organs in the second region is preserved well because the original image is retained there, so that the image processing result is more realistic and natural.
Corresponding to the above further embodiment of the image processing method of the present application, the present application also provides a further embodiment of a corresponding image processing apparatus. Referring to Fig. 10, it is a schematic diagram of a further embodiment of an image processing apparatus of the present application. Since the apparatus embodiment is substantially similar to the method embodiment, it is described relatively simply; for relevant parts, refer to the description of the method embodiment. The apparatus embodiment described below is merely illustrative.
The image processing apparatus of this embodiment includes: a first region determination unit 1001 for determining the first region containing face pixels in the original image; a facial image filtering unit 1002 for applying edge-preserving filtering to the first region in the original image to obtain the filtered image; a second region determination unit 1003 for determining, within the first region, the second region containing pixels of preset facial organs; and a facial image fusion unit 1004 for fusing the original image with the filtered image in the part of the first region not containing the second region, and retaining the original image in the second region and outside the first region, to obtain the fused image.
In addition, the present application also provides an electronic device; an embodiment of the electronic device is as follows. Referring to Fig. 11, it is a schematic diagram of an embodiment of an electronic device of the present application.
The electronic device includes: a processor 1101; and a memory 1102 for storing code. The processor is coupled to the memory and reads the code stored in the memory to perform the following operations: applying edge-preserving filtering to a first image based on the original image to obtain a filtered image; and fusing a second image based on the original image with a third image based on the filtered image to obtain a fused image.
Optionally, the original image includes a facial image, and the operations performed by the processor further include: before applying edge-preserving filtering to the first image based on the original image, receiving a beautification request for the facial image; and after obtaining the fused image, responding to the beautification request with a processing result based on the fused image.
Optionally, fusing the second image based on the original image with the third image based on the filtered image includes: setting, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0; and performing the following weighted fusion operation on the second image and the third image: for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and using the resulting value as the fused pixel value.
Optionally, setting, in a predetermined manner, the fusion coefficient w for each pixel of the fused image includes performing the following for each pixel of the fused image: obtaining the gradient value of the corresponding pixel in the third image; and using the output of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is the gradient value. When the weighted fusion operation is performed with the fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
Optionally, when the weighted fusion operation is performed, if for each pixel of the fused image the pixel's w serves as the fusion weight of the corresponding pixel in the second image and the corresponding 1-w as the fusion weight of the corresponding pixel in the third image, the preset fusion function f(x) satisfies the following conditions: when x is greater than an edge threshold, the value of f(x) is less than a first threshold; when x is less than the edge threshold, the value of f(x) is greater than the first threshold.
Optionally, when the weighted fusion operation is performed, if for each pixel of the fused image the pixel's 1-w serves as the fusion weight of the corresponding pixel in the second image and the corresponding w as the fusion weight of the corresponding pixel in the third image, the preset fusion function f(x) satisfies the following conditions: when x is greater than the edge threshold, the value of f(x) is greater than the first threshold; when x is less than the edge threshold, the value of f(x) is less than the first threshold.
Although the present application has been disclosed above with preferred embodiments, they are not intended to limit the present application. Any person skilled in the art can make possible changes and modifications without departing from the spirit and scope of the present application; therefore, the protection scope of the present application shall be defined by the claims of the present application.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in the form of computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, and can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Those skilled in the art will understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
Claims (28)
1. An image processing method, characterized by comprising:
applying edge-preserving filtering to a first image based on an original image to obtain a filtered image; and
fusing a second image based on the original image with a third image based on the filtered image to obtain a fused image.
2. The method according to claim 1, characterized in that the original image comprises a facial image;
before applying edge-preserving filtering to the first image based on the original image, the method comprises: receiving a beautification request for the facial image; and
after obtaining the fused image, the method comprises: responding to the beautification request with a processing result based on the fused image.
3. The method according to claim 1, characterized in that fusing the second image based on the original image with the third image based on the filtered image comprises:
setting, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0; and
performing the following weighted fusion operation on the second image and the third image:
for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and using the resulting value as the fused pixel value.
4. The method according to claim 3, characterized in that setting, in a predetermined manner, the fusion coefficient w for each pixel of the fused image comprises performing the following for each pixel of the fused image:
obtaining the gradient value of the corresponding pixel in the third image; and
using the output of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is the gradient value;
wherein, when the weighted fusion operation is performed with the fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
5. The method according to claim 4, characterized in that, when the weighted fusion operation is performed, if for each pixel of the fused image the pixel's w serves as the fusion weight of the corresponding pixel in the second image and the corresponding 1-w as the fusion weight of the corresponding pixel in the third image, the preset fusion function f(x) satisfies the following conditions:
when x is greater than an edge threshold, the value of f(x) is less than a first threshold; and when x is less than the edge threshold, the value of f(x) is greater than the first threshold.
6. The method according to claim 5, characterized in that the conditions satisfied by the fusion function f(x) further include: when x is greater than the edge threshold, the value of f(x) decreases to a second threshold as x increases.
7. The method according to claim 4, characterized in that, when the weighted fusion operation is performed, if for each pixel of the fused image the pixel's 1-w serves as the fusion weight of the corresponding pixel in the second image and the corresponding w as the fusion weight of the corresponding pixel in the third image, the preset fusion function f(x) satisfies the following conditions:
when x is greater than the edge threshold, the value of f(x) is greater than the first threshold; and when x is less than the edge threshold, the value of f(x) is less than the first threshold.
8. The method according to claim 7, characterized in that the conditions satisfied by the fusion function f(x) further include: when x is greater than the edge threshold, the value of f(x) increases to a third threshold as x increases.
9. The method according to claim 3, characterized in that, after the fusion coefficient w for each pixel of the fused image is set in the predetermined manner, the weighted fusion operation on the second image and the third image is performed using a GPU.
10. The method according to claim 1, characterized in that the first image and the second image based on the original image each comprise the original image, and the third image based on the filtered image comprises the filtered image.
11. The method according to claim 1, characterized in that, before applying edge-preserving filtering to the first image based on the original image, the method comprises: downsampling the original image, wherein the first image based on the original image comprises the downsampled original image; and
after obtaining the filtered image and before fusing the second image based on the original image with the third image based on the filtered image, the method comprises: upsampling the filtered image according to the downsampling factor, wherein the second image based on the original image comprises the original image, and the third image based on the filtered image comprises the upsampled filtered image.
12. The method according to claim 1, characterized in that, before applying edge-preserving filtering to the first image based on the original image, the method comprises: converting the original image from the RGB color space into a color space containing a luminance component;
applying edge-preserving filtering to the first image based on the original image comprises: applying edge-preserving filtering to the luminance component of the converted original image;
fusing the second image based on the original image with the third image based on the filtered image comprises: fusing the converted original image with the filtered image with respect to the luminance component; and
after obtaining the fused image, the method comprises: converting the fused image back into the RGB color space, and using the fused image converted back into the RGB color space as the image processing result.
13. The method according to claim 1, characterized in that the original image comprises a facial image.
14. The method according to claim 13, characterized in that the first image and the second image based on the original image each comprise the original image, and the third image based on the filtered image comprises the filtered image;
before applying edge-preserving filtering to the first image based on the original image, the method comprises: determining a first region containing face pixels in the original image;
applying edge-preserving filtering to the first image based on the original image comprises: applying edge-preserving filtering to the first region in the original image; and
fusing the second image based on the original image with the third image based on the filtered image comprises: fusing the original image with the filtered image in the first region, and retaining the original image outside the first region.
15. The method according to claim 14, characterized in that, before fusing the original image with the filtered image in the first region, the method further comprises: determining, within the first region, a second region containing pixels of preset facial organs; and
fusing the original image with the filtered image in the first region comprises: fusing the original image with the filtered image in the part of the first region not containing the second region, and retaining the original image in the second region.
16. The method according to claim 13, characterized in that, before applying edge-preserving filtering to the first image based on the original image, the method comprises: determining a first region containing face pixels in the original image, and downsampling the original image;
applying edge-preserving filtering to the first image based on the original image comprises: applying edge-preserving filtering to the corresponding first region in the downsampled original image;
after obtaining the filtered image and before performing the fusion operation, the method comprises: upsampling the filtered image according to the downsampling factor; and
fusing the second image based on the original image with the third image based on the filtered image comprises: fusing the original image with the upsampled filtered image in the first region, and retaining the original image outside the first region.
17. The method according to claim 16, characterized in that, before fusing the original image with the upsampled filtered image in the first region, the method further comprises: determining, within the first region, a second region containing pixels of preset facial organs; and
fusing the original image with the upsampled filtered image in the first region comprises: fusing the original image with the upsampled filtered image in the part of the first region not containing the second region, and retaining the original image in the second region.
18. The method according to claim 15 or 17, characterized in that the preset facial organs comprise: the eyes or the mouth.
19. The method according to any one of claims 1-17, characterized in that the edge-preserving filtering of the first image based on the original image uses one of the following algorithms: a bilateral filtering algorithm, a surface blur algorithm, or a guided filtering algorithm.
20. The method according to any one of claims 1-17, characterized in that the method is implemented on a mobile terminal device.
21. An image processing apparatus, characterized by comprising:
an image filtering unit for applying edge-preserving filtering to a first image based on an original image to obtain a filtered image; and
an image fusion unit for fusing a second image based on the original image with a third image based on the filtered image to obtain a fused image.
22. The apparatus according to claim 21, characterized in that the original image comprises a facial image, and the apparatus further comprises:
a processing request receiving unit for receiving a beautification request for the facial image before edge-preserving filtering is applied to the first image based on the original image; and
a request response unit for responding to the beautification request with a processing result based on the fused image after the fused image is obtained.
23. The apparatus according to claim 21, characterized in that the image fusion unit comprises:
a fusion coefficient setting subunit for setting, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0; and
a fusion execution subunit for performing the following weighted fusion operation on the second image and the third image: for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and using the resulting value as the fused pixel value.
24. The apparatus according to claim 23, characterized in that the fusion coefficient setting subunit is specifically configured to, for each pixel of the fused image, obtain the gradient value of the corresponding pixel in the third image and use the output of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is the gradient value; and
when the image fusion unit performs the weighted fusion operation with the fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
25. An electronic device, characterized by comprising:
a processor; and
a memory for storing code;
wherein the processor is coupled to the memory and reads the code stored in the memory to perform the following operations: applying edge-preserving filtering to a first image based on an original image to obtain a filtered image; and fusing a second image based on the original image with a third image based on the filtered image to obtain a fused image.
26. The electronic device according to claim 25, characterized in that the original image comprises a facial image, and the operations performed by the processor further include: receiving a beautification request for the facial image before applying edge-preserving filtering to the first image based on the original image; and after obtaining the fused image, responding to the beautification request with a processing result based on the fused image.
27. The electronic device according to claim 25, characterized in that fusing the second image based on the original image with the third image based on the filtered image comprises:
setting, in a predetermined manner, a fusion coefficient w for each pixel of the fused image, where w satisfies 0 ≤ w ≤ 1.0; and
performing the following weighted fusion operation on the second image and the third image:
for each pixel, using the pixel's w and 1-w as fusion weights, computing the weighted sum of the corresponding pixel values of the second image and the third image, and using the resulting value as the fused pixel value.
28. The electronic device according to claim 25, characterized in that setting, in a predetermined manner, the fusion coefficient w for each pixel of the fused image comprises performing the following for each pixel of the fused image:
obtaining the gradient value of the corresponding pixel in the third image; and
using the output of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is the gradient value;
wherein, when the weighted fusion operation is performed with the fusion coefficients computed by the fusion function, the fusion weight of the third image in edge regions is greater than its fusion weight in non-edge regions, and the fusion weight of the second image in non-edge regions is greater than its fusion weight in edge regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611168058.2A CN108205804B (en) | 2016-12-16 | 2016-12-16 | Image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108205804A true CN108205804A (en) | 2018-06-26 |
CN108205804B CN108205804B (en) | 2022-05-31 |
Family
ID=62602369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611168058.2A Active CN108205804B (en) | 2016-12-16 | 2016-12-16 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108205804B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109672885A (en) * | 2019-01-08 | 2019-04-23 | 中国矿业大学(北京) | A kind of video image encoding and decoding method for mine intelligent monitoring |
CN109767385A (en) * | 2018-12-20 | 2019-05-17 | 深圳市资福医疗技术有限公司 | A kind of method and apparatus removing image chroma noise |
CN109829864A (en) * | 2019-01-30 | 2019-05-31 | 北京达佳互联信息技术有限公司 | Image processing method, device, equipment and storage medium |
CN109978808A (en) * | 2019-04-25 | 2019-07-05 | 北京迈格威科技有限公司 | A kind of method, apparatus and electronic equipment for image co-registration |
CN110503704A (en) * | 2019-08-27 | 2019-11-26 | 北京迈格威科技有限公司 | Building method, device and the electronic equipment of three components |
CN110738612A (en) * | 2019-09-27 | 2020-01-31 | 深圳市安健科技股份有限公司 | Method for reducing noise of X-ray perspective image and computer readable storage medium |
CN110895789A (en) * | 2018-09-13 | 2020-03-20 | 杭州海康威视数字技术股份有限公司 | Face beautifying method and device |
CN110910326A (en) * | 2019-11-22 | 2020-03-24 | 上海商汤智能科技有限公司 | Image processing method and device, processor, electronic device and storage medium |
CN110956592A (en) * | 2019-11-14 | 2020-04-03 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
WO2020113824A1 (en) * | 2018-12-04 | 2020-06-11 | 深圳市华星光电半导体显示技术有限公司 | Image processing method |
WO2020124355A1 (en) * | 2018-12-18 | 2020-06-25 | 深圳市大疆创新科技有限公司 | Image processing method, image processing device, and unmanned aerial vehicle |
CN111861929A (en) * | 2020-07-24 | 2020-10-30 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image optimization processing method, system and device |
CN112070848A (en) * | 2020-09-18 | 2020-12-11 | 厦门美图之家科技有限公司 | Image pigment separation method, device, electronic equipment and readable storage medium |
CN112419161A (en) * | 2019-08-20 | 2021-02-26 | RealMe重庆移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN112508859A (en) * | 2020-11-19 | 2021-03-16 | 聚融医疗科技(杭州)有限公司 | Method and system for automatically measuring thickness of endometrium based on wavelet transformation |
CN112967182A (en) * | 2019-12-12 | 2021-06-15 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and equipment and storage medium |
CN112991477A (en) * | 2021-01-28 | 2021-06-18 | 明峰医疗系统股份有限公司 | PET image processing method based on deep learning |
CN113808038A (en) * | 2021-09-08 | 2021-12-17 | 瑞芯微电子股份有限公司 | Image processing method, medium, and electronic device |
CN115115554A (en) * | 2022-08-30 | 2022-09-27 | 腾讯科技(深圳)有限公司 | Image processing method and device based on enhanced image and computer equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100103194A1 (en) * | 2008-10-27 | 2010-04-29 | Huawei Technologies Co., Ltd. | Method and system for fusing images |
CN101930604A (en) * | 2010-09-08 | 2010-12-29 | 中国科学院自动化研究所 | Fusion method of full-color image and multi-spectral image based on low-frequency correlation analysis |
US20110194788A1 (en) * | 2010-02-09 | 2011-08-11 | Indian Institute Of Technology Bombay | System and Method for Fusing Images |
CN102789638A (en) * | 2012-07-16 | 2012-11-21 | 北京市遥感信息研究所 | Image fusion method based on gradient field and scale space theory |
CN104318524A (en) * | 2014-10-15 | 2015-01-28 | 烟台艾睿光电科技有限公司 | Method, device and system for image enhancement based on YCbCr color space |
CN105574834A (en) * | 2015-12-23 | 2016-05-11 | 小米科技有限责任公司 | Image processing method and apparatus |
CN105931210A (en) * | 2016-04-15 | 2016-09-07 | 中国航空工业集团公司洛阳电光设备研究所 | High-resolution image reconstruction method |
Also Published As
Publication number | Publication date |
---|---|
CN108205804B (en) | 2022-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108205804A (en) | Image processing method, device and electronic equipment | |
US10554857B2 (en) | Method for noise-robust color changes in digital images | |
US10152781B2 (en) | Method for image processing using local statistics convolution | |
Chaudhury | Acceleration of the Shiftable $\mbi {O}{(1)} $ Algorithm for Bilateral Filtering and Nonlocal Means | |
Li et al. | Weighted guided image filtering | |
US9495582B2 (en) | Digital makeup | |
US8014034B2 (en) | Image contrast enhancement | |
US8514303B2 (en) | Advanced imaging systems and methods utilizing nonlinear and/or spatially varying image processing | |
CN109658330B (en) | Color development adjusting method and device | |
CN106780417A (en) | A kind of Enhancement Method and system of uneven illumination image | |
CN109743473A (en) | Video image 3 D noise-reduction method, computer installation and computer readable storage medium | |
CN105243371A (en) | Human face beauty degree detection method and system and shooting terminal | |
CN107871303A (en) | A kind of image processing method and device | |
CN111353955A (en) | Image processing method, device, equipment and storage medium | |
CN111161177B (en) | Image self-adaptive noise reduction method and device | |
Zhu et al. | Detail-preserving arbitrary style transfer | |
CN107945139A (en) | A kind of image processing method, storage medium and intelligent terminal | |
CN114862729A (en) | Image processing method, image processing device, computer equipment and storage medium | |
Ponomaryov et al. | Fuzzy color video filtering technique for sequences corrupted by additive Gaussian noise | |
Igarashi et al. | Accuracy improvement of histogram-based image filtering | |
CN110751603A (en) | Method and system for enhancing image contrast and terminal equipment | |
WO2020241337A1 (en) | Image processing device | |
CN103559692B (en) | Method and device for processing image | |
CN113012079A (en) | Low-brightness vehicle bottom image enhancement method and device and storage medium | |
Jiji et al. | Enhancement of underwater deblurred images using gradient guided filter |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2020-12-01 | TA01 | Transfer of patent application right | Effective date of registration: 2020-12-01. Address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China. Applicant after: Zebra smart travel network (Hong Kong) Limited. Address before: P.O. Box 847, 4th Floor, Capital Building, Grand Cayman, Cayman Islands. Applicant before: Alibaba Group Holding Ltd. |
| GR01 | Patent grant | |