CN108205804B - Image processing method and device and electronic equipment

Publication number
CN108205804B
Authority
CN
China
Prior art keywords
image
fusion
pixel
original image
edge
Prior art date
Legal status
Active
Application number
CN201611168058.2A
Other languages
Chinese (zh)
Other versions
CN108205804A (en)
Inventor
秦文煜
黄英
Current Assignee
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Priority date
Filing date
Publication date
Application filed by Banma Zhixing Network Hongkong Co Ltd
Priority to CN201611168058.2A
Publication of CN108205804A
Application granted
Publication of CN108205804B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing device, and an electronic device. The image processing method comprises the following steps: performing edge-preserving filtering on a first image based on an original image to obtain a filtered image; and fusing a second image based on the original image with a third image based on the filtered image to obtain a fused image. Because the fusion with the second image based on the original image is performed after the edge-preserving filtering, the texture details of the original image are preserved to a certain extent while noise is eliminated and edges are kept, which avoids the image distortion caused by an over-smoothed flat area, makes the fused image more realistic, and effectively improves the image quality.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method. The application also relates to an image processing device and an electronic device.
Background
An image captured by an image capturing apparatus usually contains noise (also referred to as noise points) that detracts from the aesthetics of the image. This noise may be caused by interference from random signals when the image is captured or transmitted, or may belong to the subject itself, for example dark spots, blemishes and the like on the face in a face image. To enhance the aesthetic appearance of the image, the image may be processed, typically using filtering techniques, to eliminate the noise.
The filters most commonly used at present are edge-preserving filters (also called edge-preserving filtering algorithms), the most classical of which is the bilateral filter. A bilateral filter is composed of two parts: a spatial filter determined by the geometric distance between pixels, and a range filter determined by the difference between pixel values; the value of an output pixel is a weighted combination of the values of its neighborhood pixels. When computing the weights, the bilateral filter considers both the spatial distance between the pixel to be processed and the other pixels in its neighborhood and the difference between their pixel values, so it can effectively eliminate noise, obtain a denoised smooth image, and still preserve the edge details of the image. Besides the bilateral filter, there are also filters based on the surface blur algorithm and the like, which can likewise remove noise while preserving the edge details of the image.
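For illustration only (this is not code from the patent), the following sketch shows a standard brute-force bilateral filter of the kind described above, assuming Gaussian spatial and range weights; the parameter names are illustrative.

```python
import numpy as np

def bilateral_filter(img, radius, sigma_s, sigma_r):
    """Brute-force bilateral filter for a single-channel image."""
    img = np.asarray(img, dtype=np.float32)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))  # spatial-filter weights
    padded = np.pad(img, radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((window - img[i, j]) ** 2) / (2.0 * sigma_r ** 2))  # range-filter weights
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

The double weighting is what lets such a filter smooth noise in flat regions while leaving strong edges largely intact.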
When edge-preserving filters are applied in the field of image processing, they generally share the same problem: while denoising with edge preservation, they cannot retain the texture details of the original image. That is, flat areas in the processed image become too smooth, resulting in image distortion and a noticeable quality degradation compared with the original image.
Disclosure of Invention
The embodiments of the application provide an image processing method, aiming to solve the problems of the existing edge-preserving filtering technology that texture details cannot be retained and image distortion results. The embodiments of the application also provide an image processing device and an electronic device.
The application provides an image processing method, comprising the following steps:
performing edge-preserving filtering on a first image based on an original image to obtain a filtered image;
and fusing the second image based on the original image and the third image based on the filtered image to obtain a fused image.
Optionally, the original image includes: a face image;
before edge-preserving filtering a first image based on an original image, the method comprises the following steps:
receiving a beauty processing request aiming at a face image;
after obtaining the fusion image, the method comprises the following steps:
responding to the beauty processing request with a processing result based on the fused image.
Optionally, the fusing the second image based on the original image and the third image based on the filtered image includes:
respectively setting a fusion coefficient w corresponding to each pixel of the fusion image in a preset mode, wherein w satisfies: 0 ≤ w ≤ 1.0;
performing the following weighted fusion operation on the second image and the third image:
and for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value.
Optionally, the setting the fusion coefficient w corresponding to each pixel of the fusion image in a preset manner includes:
for each pixel of the fused image, performing the following operations:
acquiring a gradient value corresponding to the pixel in the third image;
taking the output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, wherein the input parameter of f (x) is the gradient value;
when the weighted fusion operation is executed by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is greater than that in the non-edge area, and the fusion weight of the second image in the non-edge area is greater than that in the edge area.
Optionally, when the weighted fusion operation is performed, if w corresponding to the pixel is used as the fusion weight of the pixel in the second image and 1-w corresponding to the pixel is used as the fusion weight of the pixel in the third image, the preset fusion function f (x) satisfies the following condition:
when x is greater than the edge threshold, the value of f (x) is less than the first threshold, and when x is less than the edge threshold, the value of f (x) is greater than the first threshold.
Optionally, the condition satisfied by the fusion function f (x) further includes: when x is greater than the edge threshold, the value of f (x) decreases to a second threshold as x increases.
Optionally, when the weighted fusion operation is performed, if 1-w corresponding to each pixel in the fused image is used as the fusion weight of the pixel in the second image, and the corresponding w is used as the fusion weight of the pixel in the third image, the preset fusion function f (x) satisfies the following condition:
when x is greater than the edge threshold, the value of f (x) is greater than the first threshold, and when x is less than the edge threshold, the value of f (x) is less than the first threshold.
Optionally, the condition satisfied by the fusion function f (x) further includes: when x is greater than the edge threshold, the value of f (x) is incremented to a third threshold as x increases.
Optionally, after the fusion coefficients w corresponding to each pixel of the fused image are respectively set in a preset manner, a weighted fusion operation is performed on the second image and the third image by using the GPU.
Optionally, the first image and the second image based on the original image each include: the original image; the third image based on the filtered image comprises: the filtered image.
Optionally, before performing edge-preserving filtering on the first image based on the original image, the method includes: down-sampling an original image; the first image based on the original image includes: down-sampling the original image;
after obtaining the filtered image, before fusing the second image based on the original image and the third image based on the filtered image, the method includes: up-sampling the filtered image according to the down-sampling coefficient; the second image based on the original image includes: an original image; the third image based on the filtered image comprises: up-sampled filtered image.
Optionally, before performing edge-preserving filtering on the first image based on the original image, the method includes: converting an original image based on an RGB color space into a color space containing a luminance component;
the edge-preserving filtering of the first image based on the original image comprises: performing edge-preserving filtering on the brightness component of the original image after the conversion operation is performed;
the fusing the second image based on the original image and the third image based on the filtered image comprises: fusing the original image and the filtered image after the conversion operation is performed aiming at the brightness component;
after obtaining the fusion image, the method comprises the following steps:
and converting the fused image back to the RGB color space, and taking the fused image converted back to the RGB color space as an image processing result.
Optionally, the original image includes: a face image.
Optionally, the first image and the second image based on the original image each include: the original image; the third image based on the filtered image comprises: the filtered image;
before edge-preserving filtering a first image based on an original image, the method comprises the following steps: determining a first region containing face pixels in the original image;
the edge-preserving filtering of the first image based on the original image comprises: performing edge-preserving filtering on a first region in an original image;
fusing a second image based on the original image with a third image based on the filtered image, comprising:
and fusing the original image and the filtered image in the first area, and reserving the original image in the non-first area.
Optionally, before the fusing the original image and the filtered image in the first region, the method further includes: determining a second region containing preset human face organ pixels in the first region;
the fusing the original image and the filtered image in the first area comprises:
and fusing the original image and the filtered image in the first area without the second area, and reserving the original image in the second area.
Optionally, before performing edge-preserving filtering on the first image based on the original image, the method includes: determining a first region containing face pixels in the original image, and performing down-sampling on the original image;
the edge-preserving filtering of the first image based on the original image comprises: performing edge-preserving filtering on a corresponding first region in the original image after the down-sampling;
after obtaining the filtered image, before performing the fusion operation, the method includes: up-sampling the filtered image according to the down-sampling coefficient;
fusing a second image based on the original image with a third image based on the filtered image, comprising:
and fusing the original image and the up-sampled filtered image in the first area, and reserving the original image in the non-first area.
Optionally, before the fusing the original image and the upsampled filtered image in the first region, the method further includes: determining a second region containing preset human face organ pixels in the first region;
the fusing the original image and the up-sampled filtered image in the first region includes:
the original image and the up-sampled filtered image are processed in a first area which does not contain a second area, and the original image is reserved in the second area.
Optionally, the preset face organ includes: the eyes or the mouth.
Optionally, the following algorithm is adopted to perform edge-preserving filtering on the first image based on the original image: a bilateral filtering algorithm, a surface blurring algorithm, or a guided filtering algorithm.
Optionally, the method is implemented on a mobile terminal device.
Correspondingly, the present application also provides an image processing apparatus, comprising:
the image filtering unit is used for carrying out edge-preserving filtering on a first image based on an original image to obtain a filtered image;
and the image fusion unit is used for fusing the second image based on the original image and the third image based on the filtering image to obtain a fused image.
Optionally, the original image includes: a face image; the device further comprises:
a processing request receiving unit, configured to receive a beauty processing request for a face image before performing edge-preserving filtering on a first image based on an original image;
a request responding unit for responding the beauty processing request with a processing result based on the fused image after obtaining the fused image.
Optionally, the image fusion unit includes:
a fusion coefficient setting subunit, configured to set a fusion coefficient w corresponding to each pixel of the fused image in a preset manner, where w satisfies: 0 ≤ w ≤ 1.0;
a fusion execution subunit, configured to perform the following weighted fusion operation on the second image and the third image: and for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value.
Optionally, the fusion coefficient setting subunit is specifically configured to, for each pixel of the fusion image, obtain a gradient value corresponding to the pixel in the third image, and use an output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, where an input parameter of f (x) is the gradient value;
when the image fusion unit executes the weighted fusion operation by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is larger than that in the non-edge area, and the fusion weight of the second image in the non-edge area is larger than that in the edge area.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fusion image, use w corresponding to the pixel as a fusion weight of a pixel in the second image, use corresponding 1-w as a fusion weight of a pixel in the third image, and use a numerical value obtained by weighted summation as a fused pixel value;
the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is less than the first threshold, and when x is less than the edge threshold, the value of f (x) is greater than the first threshold.
Optionally, the condition that the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies further includes: when x is greater than the edge threshold, the value of f (x) decreases to a second threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fusion image, use 1-w corresponding to the pixel as a fusion weight of a pixel in the second image, use corresponding w as a fusion weight of a pixel in the third image, and use a value obtained by weighted summation as a fused pixel value;
the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is greater than the first threshold, and when x is less than the edge threshold, the value of f (x) is less than the first threshold.
Optionally, the condition that the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies further includes: when x is greater than the edge threshold, the value of f (x) is incremented to a third threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to perform, by using the GPU, a weighted fusion operation on the second image and the third image.
Optionally, the first image and the second image based on the original image each include: the original image; the third image based on the filtered image comprises: the filtered image.
Optionally, the apparatus further comprises: the down-sampling unit is used for down-sampling the original image before edge-preserving filtering is carried out on the first image based on the original image; the first image based on the original image includes: down-sampling the original image;
the device further comprises: the up-sampling unit is used for up-sampling the filtered image according to the down-sampling coefficient after the filtered image is obtained and before the second image based on the original image and the third image based on the filtered image are fused; the second image based on the original image includes: an original image; the third image based on the filtered image comprises: up-sampled filtered image.
Optionally, the apparatus further comprises: a color space conversion unit for converting an original image based on an RGB color space into a color space containing a luminance component before edge-preserving filtering a first image based on the original image;
the image filtering unit is specifically configured to perform edge-preserving filtering on the luminance component of the original image after the conversion operation is performed;
the image fusion unit is specifically configured to fuse, for a luminance component, the original image and the filtered image after the conversion operation is performed;
the device further comprises: and the color space recovery unit is used for converting the fused image back to the RGB color space after the fused image is obtained, and taking the fused image converted back to the RGB color space as an image processing result.
Optionally, the original image includes: a face image.
Optionally, the first image and the second image based on the original image each include: the original image; the third image based on the filtered image comprises: the filtered image;
the device further comprises: the image processing device comprises a first area determining unit, a second area determining unit and a processing unit, wherein the first area determining unit is used for determining a first area containing face pixels in an original image before edge-preserving filtering is carried out on the first image based on the original image;
the image filtering unit is specifically used for performing edge-preserving filtering on a first region in an original image;
the image fusion unit is specifically configured to fuse the original image and the filtered image in the first region, and retain the original image in the non-first region.
Optionally, the apparatus further comprises: the second region determining unit is used for determining a second region containing preset human face organ pixels in the first region before the original image and the filtered image are fused in the first region;
the image fusion unit is specifically configured to fuse the original image and the filtered image in a first region that does not include a second region, and retain the original image in the second region and a non-first region.
Optionally, the apparatus further comprises: a first region determining unit and a down-sampling unit; the first region determining unit is used for determining a first region containing face pixels in an original image before edge-preserving filtering is carried out on the first image based on the original image; the down-sampling unit is used for down-sampling the original image before edge-preserving filtering is carried out on the first image based on the original image;
the image filtering unit is specifically configured to filter a corresponding first region in the down-sampled original image;
the device further comprises: an up-sampling unit, used for up-sampling the filtered image according to the down-sampling coefficient after the filtered image is obtained and before the fusion operation is performed;
the image fusion unit is specifically configured to fuse the original image and the up-sampled filtered image in the first region, and retain the original image in the non-first region.
Optionally, the apparatus further comprises: a second region determining unit, used for determining a second region containing preset human face organ pixels in the first region before the original image and the up-sampled filtered image are fused in the first region;
the image fusion unit is specifically configured to fuse the original image and the upsampled filtered image in a first region that does not include a second region, and retain the original image in the second region and a non-first region.
Optionally, the preset face organ includes: the eyes or the mouth.
Optionally, the apparatus is deployed in a mobile terminal device.
In addition, the present application also provides an electronic device, including:
a processor;
a memory for storing code;
wherein the processor is coupled to the memory, and is configured to read the code stored in the memory and perform the following operations: performing edge-preserving filtering on a first image based on an original image to obtain a filtered image; and fusing the second image based on the original image and the third image based on the filtering image to obtain a fused image.
Optionally, the original image includes: a face image; the processor performs operations further comprising: before edge-preserving filtering is carried out on a first image based on an original image, a beautifying processing request aiming at a face image is received; after the fused image is obtained, the beauty processing request is responded with a processing result based on the fused image.
Optionally, the fusing the second image based on the original image and the third image based on the filtered image includes:
respectively setting a fusion coefficient w corresponding to each pixel of the fusion image in a preset mode, wherein w satisfies: 0 ≤ w ≤ 1.0;
performing the following weighted fusion operation on the second image and the third image:
and for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value.
Optionally, the setting the fusion coefficient w corresponding to each pixel of the fusion image in a preset manner includes:
for each pixel of the fused image, performing the following operations:
acquiring a gradient value corresponding to the pixel in the third image;
taking the output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, wherein the input parameter of f (x) is the gradient value;
when the weighted fusion operation is executed by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is greater than that in the non-edge area, and the fusion weight of the second image in the non-edge area is greater than that in the edge area.
Compared with the prior art, the method has the following advantages:
according to the image processing method, after the edge-preserving filtering is carried out on the first image based on the original image to obtain the filtering image, the second image based on the original image and the third image based on the filtering image are fused to obtain the fusion image. By adopting the method, because the fusion operation with the second image based on the original image is executed after the edge-preserving filtering, the texture details of the original image can be preserved to a certain extent while the noise is eliminated and the edge is preserved, thereby avoiding the image distortion phenomenon caused by the over-smooth flat area, ensuring the fused image to be more real and effectively improving the image quality.
Drawings
FIG. 1 is a flow chart of an embodiment of an image processing method of the present application;
fig. 2 is a schematic diagram of downsampling and then edge-preserving filtering according to an embodiment of the present application;
FIG. 3 is a graphical illustration of a fusion function f (x) provided by an embodiment of the present application;
FIG. 4 is a graph comparing the effects provided by the examples of the present application;
FIG. 5 is a schematic diagram of an embodiment of an image processing apparatus of the present application;
FIG. 6 is a flow chart of another embodiment of an image processing method of the present application;
fig. 7 is a schematic diagram of a first region including face pixels according to an embodiment of the present application;
FIG. 8 is a flowchart of a process for performing weighted fusion according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a face region mask according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another embodiment of an image processing apparatus of the present application;
FIG. 11 is a schematic diagram of an embodiment of an electronic device of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; therefore, the application is not limited to the specific implementations disclosed below.
In the present application, an image processing method, an image processing apparatus, and an electronic device are provided, respectively, and detailed description is made one by one in the following embodiments.
Please refer to fig. 1, which is a flowchart illustrating an embodiment of an image processing method according to the present application. The method comprises the following steps:
step 101, performing edge-preserving filtering on a first image based on an original image to obtain a filtered image.
The filtered image is an image obtained by performing edge-preserving filtering on the first image based on the original image. Edge-preserving filtering algorithms that can be used include: the bilateral filtering algorithm, the surface blur algorithm, the guided filtering algorithm, and so on. In a specific implementation, this step may use any one of the above algorithms to filter the first image based on the original image to obtain the filtered image. The filtered image can better preserve the edge details of the original image while eliminating noise, thereby achieving the effect of enhancing the edges.
The first image based on the original image may be the original image, in which case this step may apply edge-preserving filtering to the original image by using any one of the edge-preserving filtering algorithms listed above.
The execution efficiency of the various edge-preserving filtering algorithms depends to a great extent on the image size and the radius of the filtering window. With a larger radius the image is smoother and the boundaries are more obvious, but more neighborhood pixels have to be processed for each pixel, so the processing takes longer and the efficiency is lower, especially for high-resolution images.
To address this problem, the present embodiment provides a preferred implementation of down-sampling before filtering. That is, the original image may be down-sampled before this step is performed, and the first image based on the original image may be the down-sampled original image.
Down-sampling is the process of reducing the resolution of an image. For example, for an N × M original image with a down-sampling coefficient k, every k-th pixel in each row and each column of the original image may be taken to form a new image; this new image is the down-sampled original image.
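As an illustration of the down-sampling just described (not code from the patent), keeping every k-th pixel of each row and column can be written as:

```python
import numpy as np

def downsample(img: np.ndarray, k: int) -> np.ndarray:
    """Reduce an N x M image to roughly (N/k) x (M/k) by taking every k-th pixel."""
    return img[::k, ::k]
```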
After the original image is down-sampled, the down-sampled original image is edge-preserving filtered in this step. Please refer to fig. 2, which is a schematic diagram of down-sampling followed by edge-preserving filtering in this embodiment, where (a) is the original image, (b) is a schematic diagram of bilateral filtering applied to the down-sampled original image, and (c) is the filtered image.
By means of down-sampling, the size of the image and the radius of the filtering window can be reduced at the same time, so the execution efficiency of edge-preserving filtering is improved. With this preferred implementation, the time consumed by edge-preserving filtering can be reduced and the filtering efficiency effectively improved in application scenarios that require real-time edge-preserving filtering (such as real-time preview), or when edge-preserving filtering is performed on a device with limited computing capacity, such as a mobile terminal.
Of course, the above embodiments are given for improving the filtering efficiency, and in the specific implementation, for an original image with low resolution or in an application scenario where there is no requirement for processing efficiency, the above process of downsampling may not be performed, but it is also possible to perform edge-preserving filtering directly on the original image.
Step 102: fusing the second image based on the original image with the third image based on the filtered image to obtain a fused image.
The fused image in this step is an image obtained by fusing a second image based on the original image and a third image based on the filtered image.
Although the filtered image obtained in step 101 has been denoised with edge preservation, the texture details of the original image are usually lost at the same time, which causes image distortion. As shown in the circled area in (c) of fig. 2, the processed image lacks "texture" and looks unreal because the original texture details are lost.
To solve this problem, this step fuses the second image based on the original image with the third image based on the filtered image. That is, a new image of the same size, the fused image, is generated by extracting information from the second image and the third image, which have the same size. Because information is extracted from the second image based on the original image during fusion, the texture details of the original image can be retained to a certain extent, so the fused image is more realistic, distortion is avoided, and the image quality is improved.
The second image based on the original image and the third image based on the filtered image are fused, and different image fusion methods can be adopted, such as: a weighted fusion method, an HIS fusion method, a KL transform fusion method, or a wavelet transform fusion method.
In consideration of its advantages of being easy to implement, fast to compute, and convenient to execute on a GPU, the weighted fusion method is preferably adopted in this embodiment.
In the second image based on the original image, the third image based on the filtered image, and the fused image, the pixels and regions have a correspondence relationship. That is, pixels at the same coordinate position in the three images correspond to one another, and based on this correspondence, a region determined in one of the images, for example an edge region or a region containing face pixels, also has a corresponding region in the other two images. This correspondence among the three images will not be described again later.
Based on the corresponding relationship, in the weighted fusion process, the pixel values of every two corresponding pixels in the second image based on the original image and the third image based on the filtered image can be weighted and summed, and the obtained numerical value is used as the pixel value of the corresponding pixel in the fused image to be generated, so that the fused image is obtained.
In a specific implementation, if the original image is not subjected to additional processing such as down-sampling before this step, the second image based on the original image may be the original image, and the third image based on the filtered image may be the filtered image. In the step, the original image and the filtered image are subjected to weighted fusion to obtain a fused image.
To perform weighted fusion of the original image and the filtered image, a fusion coefficient w corresponding to each pixel of the fused image may first be set in a preset manner, where w satisfies 0 ≤ w ≤ 1.0; in a specific implementation, the fusion coefficients w are neither all 0 nor all 1. The following weighted fusion operation is then performed on the original image and the filtered image: for each pixel, the fusion coefficient w corresponding to the pixel and the difference 1-w between 1 and the fusion coefficient are used as fusion weights, the corresponding pixel values of the original image and the filtered image are weighted and summed, and the resulting value is taken as the fused pixel value.
That is, the pixel value of any pixel in the fused image can be calculated by one of the following formulas:
result = w × org_value + (1 - w) × bilateral_value        (Equation 1)
result = (1 - w) × org_value + w × bilateral_value        (Equation 2)
where w is the fusion coefficient corresponding to the pixel, org_value is the pixel value of the corresponding pixel in the original image, bilateral_value is the pixel value of the corresponding pixel in the filtered image, and result is the fused pixel value.
As a simple and easy implementation, all the fusion coefficients may be set to the same value, which can be chosen according to actual requirements. For example, w can be set to 0.5, i.e. the original image and the filtered image each have a weight of 0.5 during fusion. If more texture detail is to be kept, w may be set to a value greater than 0.5 when equation 1 is used, or to a value less than 0.5 when equation 2 is used; if edge detail is to be better preserved, the opposite settings may be used.
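A minimal sketch of the weighted fusion operation of Equation 1 (illustrative only, not the patent's code); here `w` may be the constant 0.5 mentioned above or a per-pixel array of fusion coefficients:

```python
import numpy as np

def weighted_fusion(original: np.ndarray, filtered: np.ndarray, w) -> np.ndarray:
    """result = w * org_value + (1 - w) * bilateral_value, computed for every pixel."""
    w = np.clip(np.asarray(w, dtype=np.float32), 0.0, 1.0)
    return w * original.astype(np.float32) + (1.0 - w) * filtered.astype(np.float32)
```

Because the same operation is applied independently to every pixel, this step also maps naturally onto a GPU shader, as discussed later.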
Preferably, in order to achieve a better fusion effect, and simultaneously take into account the fusion effects of both edge details and texture details, the present embodiment provides a preferred implementation manner of determining the fusion coefficient by using a fusion function based on pixel gradient.
Specifically, the fusion coefficient corresponding to each pixel of the fused image may be determined as follows. Since the filtered image has already had most of the noise removed, this step may perform the following for each pixel of the fused image: first, obtain the gradient value corresponding to the pixel in the filtered image, for example, for a pixel at coordinate position (i, j), obtain the gradient value of the pixel at position (i, j) in the filtered image; then take the output value of a preset fusion function f(x) as the fusion coefficient w of the pixel, where the input parameter of f(x) is that gradient value.
The gradient value can be calculated by differentiation, or by using various gradient operators. For an edge in an image, the pixel values change slowly along the edge direction and change sharply in the direction perpendicular to the edge. By computing the gradient value of a pixel, it can therefore be judged whether the pixel is in an edge region: if the gradient value is greater than the edge threshold, the pixel can generally be considered to be in an edge region; otherwise it is in a flat region. The edge threshold is the threshold used to determine whether a pixel in the image is in an edge region.
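The two ways of obtaining the gradient mentioned above can be sketched as follows (illustrative, not from the patent; note that central differences and a Sobel operator give different numeric scales, so the edge threshold must match whichever is used):

```python
import numpy as np
from scipy import ndimage

def gradient_by_differences(img: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(img.astype(np.float32))  # central differences along rows and columns
    return np.hypot(gx, gy)

def gradient_by_sobel(img: np.ndarray) -> np.ndarray:
    gx = ndimage.sobel(img.astype(np.float32), axis=1)  # horizontal derivative
    gy = ndimage.sobel(img.astype(np.float32), axis=0)  # vertical derivative
    return np.hypot(gx, gy)
```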
The preset fusion function f (x) takes the pixel gradient as an independent variable, and outputs a corresponding fusion coefficient according to the input gradient value. The weighted fusion process performed with the preset fusion function has the following characteristics: the fusion weight of the filtered image in the edge region can be made larger than that in the non-edge region (i.e. flat region), and the fusion weight of the original image in the non-edge region is larger than that in the edge region. Namely: for the filtered image, the fusion weight of each pixel point in the edge area is greater than that of each pixel point in the flat area; for the original image, the fusion weight of each pixel point in the flat area is greater than that of each pixel point in the edge area.
When the original image and the filtered image are subjected to weighted fusion, the form of the adopted preset fusion function is different for the following two fusion weight setting modes: 1) regarding each pixel in the fused image, taking w corresponding to the pixel as the fusion weight of the original image, and taking the corresponding 1-w as the fusion weight of the filtered image; 2) and regarding each pixel in the fused image, taking 1-w corresponding to the pixel as the fusion weight of the original image, and taking the corresponding w as the fusion weight of the filtered image. These two cases will be described separately below.
1) Taking w as the fusion weight of the original image and 1-w as the fusion weight of the filtering image
In this way, the pixel value of any pixel in the fused image can be calculated by the following formula:
result = f(x) × org_value + [1 - f(x)] × bilateral_value        (Equation 3)
Wherein x is a gradient value corresponding to the pixel in the filtered image, and the fusion function f (x) satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is less than the first threshold, and when x is less than the edge threshold, the value of f (x) is greater than the first threshold.
In this way, the fusion function f(x) outputs different fusion coefficients for different pixel gradients x: in edge areas the fusion weight of the filtered image is increased, which helps retain edge information, while in flat areas the fusion weight of the original image is increased, so texture details are retained to a certain extent.
In a specific implementation, the edge threshold may be preset empirically, or may be set and adjusted according to the distribution of the calculated gradient values, or the coefficients in the expression of f(x) may be adjusted through a preset interface, so that the weighted fusion process based on f(x) has the above characteristics.
Further preferably, in order to better retain the edge details, f (x) may further satisfy the following condition on the basis that the above condition is satisfied: when x is greater than the edge threshold, the value of f (x) decreases to a second threshold as x increases. Namely: with the gradual enhancement of the edge features, the weight of the filtered image is gradually increased, so that the effect of preserving the edge details can be further improved.
As a simple and easy implementation, the fusion function can be a linear function, and a specific example is given below, please refer to fig. 3, where the fusion function f (x) is in the form:
(The exact piecewise-linear expression of f(x) is given as a formula image in the original publication and is not reproduced here.) In this example, the preset edge threshold is 6, the first threshold is 0.5, and the second threshold is 0: when x is greater than 6, the value of f(x) is less than 0.5, and when x is less than 6, the value of f(x) is greater than 0.5.
With this fusion function f(x), in edge areas the fusion weight of the filtered-image pixels is greater than that of the original-image pixels, and as x increases the weight of the filtered image exceeds that of the original image by an increasing margin, so the edge details of the filtered image are better preserved; in flat areas the weight of the original image is greater than that of the filtered image, so the texture details of the original image are preserved to a certain extent.
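Since the exact f(x) is not reproduced in this text, the following is only an assumed piecewise-linear function consistent with the description above (edge threshold 6, first threshold 0.5, decreasing to the second threshold 0 as x increases):

```python
import numpy as np

def fusion_coefficient(grad):
    """Assumed f(x) = 1 - x/12, clipped to [0, 1]: f(6) = 0.5, f < 0.5 for x > 6, f -> 0 as x grows."""
    return np.clip(1.0 - np.asarray(grad, dtype=np.float32) / 12.0, 0.0, 1.0)
```

Here w = f(x) is used as the original-image weight of Equation 3.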
2) Taking 1-w as the fusion weight of the original image and taking w as the fusion weight of the filtered image
In this way, the pixel value of any pixel in the fused image can be calculated by the following formula:
result = [1 - f(x)] × org_value + f(x) × bilateral_value        (Equation 4)
Wherein x is the gradient value corresponding to the pixel in the filtered image, and the fusion function f(x) satisfies the following condition: when x is greater than the edge threshold, the value of f(x) is greater than the first threshold, and when x is less than the edge threshold, the value of f(x) is less than the first threshold. In this way, the fusion weight of the filtered image is increased in edge areas, which helps retain edge information, and the fusion weight of the original image is increased in flat areas, so texture details are retained to a certain extent.
Further preferably, in order to better retain the edge details, f (x) may further satisfy the following condition on the basis that the above condition is satisfied: when x is greater than the edge threshold, the value of f (x) is incremented to a third threshold as x increases. Namely: with the gradual enhancement of the edge features, the weight of the filtered image is gradually increased, so that the effect of preserving the edge details can be further improved.
A specific example is given below (again, the exact piecewise-linear expression of f(x) appears as a formula image in the original publication): the preset edge threshold is 6, the first threshold is 0.5, and the third threshold is 1. When x is greater than 6, the value of f(x) is greater than 0.5, and when x is less than 6, the value of f(x) is less than 0.5.
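An assumed piecewise-linear function consistent with this case 2 description (not the patent's exact formula) could be:

```python
import numpy as np

def fusion_coefficient_case2(grad):
    """Assumed f(x) = x/12, clipped to [0, 1]: f(6) = 0.5, f > 0.5 for x > 6, f -> 1 as x grows."""
    return np.clip(np.asarray(grad, dtype=np.float32) / 12.0, 0.0, 1.0)
```

Here w = f(x) is used as the filtered-image weight of Equation 4.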
It should be noted that, in the example given above for the two fusion weight setting manners 1) and 2), the form of the fusion function is relatively simple and easy to implement, and in practical applications, other forms of more complex fusion functions may be designed, such as: it may be a linear function or a curved function with different parameters, as long as the characteristics described above are satisfied.
By adopting the preferred embodiment based on the fusion function, the step may first obtain the gradient value of each pixel in the filtered image, then calculate the value of the preset fusion function f (x) by taking each gradient value as input, as the fusion coefficient w of the corresponding pixel of the fused image, and calculate the pixel value of each pixel after fusion according to the corresponding formula 3 or formula 4, thereby obtaining the fused image.
Preferably, in order to improve the execution efficiency of the fusion processing, this embodiment provides a preferred implementation in which the weighted fusion is performed by the GPU. Compared with the CPU, the GPU, with its inherently parallel hardware architecture, has an obvious acceleration advantage for image processing computations that are highly repetitive and only locally dependent. In the weighted fusion process of this embodiment, the same weighted operation is performed for each pixel and the processing order is irrelevant, so the weighted fusion can be performed on a GPU, which greatly improves the image fusion efficiency.
In specific implementation, after the fusion coefficient w corresponding to each pixel of the fusion image is set, the original image, the filtered image, and each fusion coefficient may be written as a two-dimensional texture into a shader script running on the GPU, and then the shader script is run to trigger the fusion operation of the original image and the filtered image to be executed on the GPU.
It should be noted that, in this embodiment, if a preferred implementation of down-sampling and then filtering is adopted before this step 102, in this step, the second image based on the original image may be an original image, and the third image based on the filtered image may be an up-sampled filtered image. In the step, the original image and the up-sampled filtering image are subjected to weighted fusion to obtain a fused image.
Specifically, in this step, before setting the fusion coefficients, the filtered image obtained in step 101 may be up-sampled according to the down-sampling coefficient (that is, a filtered image with the same resolution as the original image is obtained), for example by using a bilinear interpolation algorithm. Then the gradient value x of each pixel is calculated from the up-sampled filtered image, f(x) is used as the fusion coefficient of the corresponding pixel, and finally the original image and the up-sampled filtered image are weight-fused according to the fusion coefficients. In a specific implementation, the image fusion process may also be completed by the GPU: for example, the original image, the up-sampled filtered image, and the fusion coefficients may be written into a shader script as two-dimensional textures, and the weighted fusion is completed on the GPU by running the shader script; alternatively, the original image, the filtered image, and the fusion coefficients may be written into a shader script as two-dimensional textures, and both the up-sampling and the weighted fusion are completed on the GPU by running the shader script.
The implementation of first down-sampling the original image, then filtering, and then fusing with the original image combines the advantages of efficient filtering at low resolution with the preservation of texture details at full resolution. In particular, using the pixel-gradient-based fusion function f(x) to determine the fusion coefficients means the fusion weights are determined by how strongly the pixel values change, which satisfies both the edge-preserving and the texture-preserving requirements and improves the quality of the fused image.
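For orientation, a compact sketch of the whole down-sample, filter, up-sample, and fuse pipeline is given below. It is an illustration under assumptions, not the patent's reference implementation: OpenCV's bilateral filter and bilinear resize stand in for the edge-preserving filtering and up-sampling, the gradient is taken with simple central differences on a 0-255 intensity scale, and the fusion function is the hypothetical piecewise-linear example from above.

```python
import cv2
import numpy as np

def process(original: np.ndarray, k: int = 2) -> np.ndarray:
    """original: single-channel uint8 image; returns the fused uint8 image."""
    org = original.astype(np.float32)
    # 1. Down-sample, then apply edge-preserving (bilateral) filtering at low resolution.
    small = org[::k, ::k]
    filtered_small = cv2.bilateralFilter(small, d=9, sigmaColor=30, sigmaSpace=7)
    # 2. Up-sample the filtered image back to the original resolution (bilinear interpolation).
    filtered = cv2.resize(filtered_small, (org.shape[1], org.shape[0]),
                          interpolation=cv2.INTER_LINEAR)
    # 3. Per-pixel gradient magnitude of the up-sampled filtered image.
    gy, gx = np.gradient(filtered)
    grad = np.hypot(gx, gy)
    # 4. Fusion coefficient w = f(grad), used here as the original-image weight (Equation 3 form).
    w = np.clip(1.0 - grad / 12.0, 0.0, 1.0)
    # 5. Weighted fusion of the original and the up-sampled filtered image.
    fused = w * org + (1.0 - w) * filtered
    return np.clip(fused, 0, 255).astype(np.uint8)
```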
So far, the implementation of the image processing method provided by this embodiment is described in detail through the above steps 101-102. In a specific implementation, before step 101 is executed, an image processing request for the original image may be received, and after step 102 is completed, the obtained fused image is used as a processing result to respond to the image processing request. For example, the original image may be a face image, the image processing request may be a beauty processing request, and finally, the resultant fused image may be used as a processing result in response to the beauty processing request.
In the specific implementation, modifications may be made to the above-described embodiments, for example: only the luminance component can be processed, so that the processing speed of filtering and fusion can be further improved. In this embodiment, the first image based on the original image may be an original image obtained by converting an original image based on an RGB color space into a color space including a luminance component, and the edge-preserving filtering may include: and filtering the brightness component of the original image after the conversion operation is performed. The second image based on the original image may be the original image after the conversion operation is performed, and the third image based on the filtered image may be the filtered image, and the fusing the second image based on the original image and the third image based on the filtered image includes: and fusing the original image and the filtered image after the conversion operation is performed on the brightness component. And after the fused image is obtained, converting the fused image back to the RGB color space, and taking the fused image converted back to the RGB color space as an image processing result.
Specifically, for an original image based on the RGB color space, the filtering or fusion processing usually has to be performed for each of the three components R, G and B. To improve processing efficiency, the original image based on the RGB color space may be converted into a color space containing a luminance component before step 101 is performed, for example the YCrCb color space or the Lab color space. After the original image is converted to the YCrCb color space, each pixel contains a luminance component Y and two chrominance components; after conversion to the Lab color space, each pixel contains a luminance component L and two chrominance components.
After the color space conversion is performed on the original image, subsequent operations such as filtering and fusion can be performed only on the brightness component, so that the data processing amount is reduced. For the filtering process, the luminance component of the original image after the conversion operation is performed may be filtered; for the fusion process, the luminance component of the converted original image and the luminance component of the filtered image obtained by filtering can be fused, and other components of the converted original image are retained, so that a fused image is obtained; and finally, converting the fused image from the corresponding color space back to the RGB color space, thereby obtaining a final image processing result. The conversion between the color spaces mentioned above can be realized by using the existing conversion formula, and is not described herein again.
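A sketch of this luminance-only variant (illustrative, not the patent's code): the image is converted to YCrCb, only the Y channel goes through the filtering-plus-fusion procedure described earlier (represented here by the assumed callable `edge_preserving_filter_and_fuse`), and the chrominance channels are kept unchanged before converting back.

```python
import cv2
import numpy as np

def process_luminance_only(bgr: np.ndarray, edge_preserving_filter_and_fuse) -> np.ndarray:
    """bgr: 3-channel uint8 image; the callable must map a uint8 Y channel to a uint8 result."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_fused = edge_preserving_filter_and_fuse(y)   # filter and fuse the luminance channel only
    out = cv2.merge([y_fused, cr, cb])             # chrominance components are retained as-is
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)
```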
In summary, according to the image processing method provided by this embodiment, since the image fusion processing is performed after the edge preserving filtering, the texture details of the original image can be preserved to a certain extent while the noise is removed and the edge is preserved, so that the image distortion phenomenon caused by the excessively smooth flat region can be avoided, the image is more real, and the image quality is effectively improved.
Fig. 4 is a comparison of results provided by this embodiment, in which (a) is the result of performing only edge-preserving filtering on the original image, as in the prior art, and (b) is the result of this embodiment, that is, the image obtained by fusing the original image and the filtered image on the basis of edge-preserving filtering. It is easy to see that the flat areas of the image in (a) are too smooth and look unreal, while the image in (b) is more realistic because part of the texture is retained.
It should be understood by those skilled in the art that although the implementation effect of the present embodiment is described above by taking a face image as an example, the method provided by the present embodiment may also be used for processing other images, and may also preserve texture details in the image while preserving edge filtering, thereby improving image quality.
In the foregoing embodiment, an image processing method is provided, and correspondingly, the present application further provides an image processing apparatus. Please refer to fig. 5, which is a schematic diagram of an embodiment of an image processing apparatus according to the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
An image processing apparatus of the present embodiment includes: an image filtering unit 501, configured to perform edge preserving filtering on a first image based on an original image to obtain a filtered image; an image fusion unit 502, configured to fuse a second image based on the original image and a third image based on the filtered image to obtain a fused image.
Optionally, the original image includes: a face image; the device further comprises:
a processing request receiving unit, configured to receive a beauty processing request for a face image before performing edge-preserving filtering on a first image based on an original image;
a request responding unit for responding the beauty processing request with a processing result based on the fused image after obtaining the fused image.
Optionally, the image fusion unit includes:
a fusion coefficient setting subunit, configured to set a fusion coefficient w corresponding to each pixel of the fused image in a preset manner, where w satisfies: 0 ≤ w ≤ 1.0;
a fusion execution subunit, configured to perform the following weighted fusion operation on the second image and the third image: and for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value.
Optionally, the fusion coefficient setting subunit is specifically configured to, for each pixel of the fusion image, obtain a gradient value corresponding to the pixel in the third image, and use an output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, where an input parameter of f (x) is the gradient value;
when the image fusion unit executes the weighted fusion operation by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is larger than that in the non-edge area, and the fusion weight of the second image in the non-edge area is larger than that in the edge area.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fusion image, use w corresponding to the pixel as a fusion weight of a pixel in the second image, use corresponding 1-w as a fusion weight of a pixel in the third image, and use a numerical value obtained by weighted summation as a fused pixel value;
the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is less than the first threshold, and when x is less than the edge threshold, the value of f (x) is greater than the first threshold.
Optionally, the condition that the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies further includes: when x is greater than the edge threshold, the value of f (x) decreases to a second threshold as x increases.
Optionally, the fusion execution subunit is specifically configured to, for each pixel of the fusion image, use 1-w corresponding to the pixel as a fusion weight of a pixel in the second image, use corresponding w as a fusion weight of a pixel in the third image, and use a value obtained by weighted summation as a fused pixel value;
the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is greater than the first threshold, and when x is less than the edge threshold, the value of f (x) is less than the first threshold.
Optionally, the condition that the preset fusion function f (x) adopted by the fusion coefficient setting subunit satisfies further includes: when x is greater than the edge threshold, the value of f (x) is incremented to a third threshold as x increases.
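To make the behavior of this gradient-driven coefficient concrete, the sketch below shows one possible fusion function f(x) and the corresponding weighted fusion, following the variant in which w weights the original (second) image and 1-w weights the filtered (third) image. The edge threshold, the flat-area weight, and the floor value are arbitrary illustrative assumptions, not values taken from the application.

```python
import numpy as np

def fusion_coefficient(gradient, edge_thresh=12.0, w_flat=0.5, w_floor=0.1):
    # One possible f(x): below the edge threshold the original image keeps a
    # comparatively large, constant weight; above it the weight decays towards
    # a small floor, so the filtered image dominates in edge areas.
    decay = w_flat * edge_thresh / np.maximum(gradient, 1e-6)
    return np.where(gradient <= edge_thresh, w_flat, np.maximum(w_floor, decay))

def weighted_fuse(original, filtered):
    # The gradient magnitude of the filtered (third) image drives the coefficient.
    gy, gx = np.gradient(filtered.astype(np.float32))
    w = fusion_coefficient(np.hypot(gx, gy))  # weight of the original (second) image
    fused = w * original.astype(np.float32) + (1.0 - w) * filtered.astype(np.float32)
    return np.clip(fused, 0, 255)
```

A function shaped like this could, for example, be passed as the fuse_fn of the luminance-only sketch given earlier.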
Optionally, the fusion execution subunit is specifically configured to perform, by using the GPU, a weighted fusion operation on the second image and the third image.
Optionally, the first image and the second image based on the original image respectively include: the original image; the third image based on the filtered image includes: the filtered image.
Optionally, the apparatus further comprises: the down-sampling unit is used for down-sampling the original image before edge-preserving filtering is carried out on the first image based on the original image; the first image based on the original image includes: down-sampling the original image;
the device further comprises: the up-sampling unit is used for up-sampling the filtered image according to the down-sampling coefficient after the filtered image is obtained and before the second image based on the original image and the third image based on the filtered image are fused; the second image based on the original image includes: an original image; the third image based on the filtered image comprises: up-sampled filtered image.
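A minimal sketch of this down-sampling variant is given below; the scale factor, the bilateral-filter parameters, and the interpolation modes are assumptions chosen for illustration rather than values taken from the application.

```python
import cv2

def filter_with_downsampling(original_gray, scale=0.5):
    h, w = original_gray.shape[:2]
    # Down-sample the original image before edge-preserving filtering.
    small = cv2.resize(original_gray, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    small_filtered = cv2.bilateralFilter(small, 9, 40, 40)
    # Up-sample the filtered image back according to the down-sampling coefficient;
    # the result is then fused with the full-resolution original image.
    return cv2.resize(small_filtered, (w, h), interpolation=cv2.INTER_LINEAR)
```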
Optionally, the apparatus further comprises: a color space conversion unit for converting an original image based on an RGB color space into a color space containing a luminance component before edge-preserving filtering is performed on a first image based on the original image;
the image filtering unit is specifically configured to perform edge-preserving filtering on the luminance component of the original image after the conversion operation is performed;
the image fusion unit is specifically configured to fuse, for a luminance component, the original image and the filtered image after the conversion operation is performed;
the device further comprises: and the color space recovery unit is used for converting the fused image back to the RGB color space after the fused image is obtained, and taking the fused image converted back to the RGB color space as an image processing result.
In addition, when the image processing method provided by the application is applied to face images, the advantages of edge-preserving filtering and texture-detail preservation are combined: facial skin flaws are removed to achieve a beautifying, skin-smoothing effect, while the texture of the skin is retained, so that the face image looks more realistic and natural.
When the image processing method provided by the application is applied to face images, the particular characteristics of such images can be exploited, for example: the face region can be clearly delimited, and some facial organs have distinctive skin textures. Corresponding optimizations can therefore be applied in a specific implementation. Specific optimization embodiments are illustrated in the example provided below.
Please refer to fig. 6, which is a flowchart illustrating an image processing method according to another embodiment of the present application. The method comprises the following steps:
step 601, determining a first region containing face pixels in the original image.
In this embodiment, the original image is a face image, and a face image usually contains not only the face but also body parts such as the neck and shoulders, the background, and so on. The main purpose of filtering is to remove facial flaws and achieve a beautifying, skin-smoothing effect, so filtering can be performed only on the region containing face pixels, which improves filtering efficiency while preserving the image processing effect. To this end, this step determines a first region of the original image containing face pixels.
In a specific implementation, any one of face detection, feature positioning, or a skin color model can be used to identify the first region containing face pixels in the original image, and these techniques can also be combined to make the identification result more accurate. The first region may be a rectangular region containing face pixels, or a region following the contour of the face. As shown in fig. 7, the area enclosed by the black frame is the first region described in this embodiment.
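As one illustration of how such a rectangular first region could be obtained, the sketch below uses an OpenCV Haar-cascade face detector; this is only one of the identification techniques mentioned above, and the detector choice and its parameters are assumptions.

```python
import cv2

# The Haar cascade file ships with opencv-python; other detectors, feature
# positioning, or a skin-color model could be used instead or in combination.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_first_region(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                        # no face found
    x, y, w, h = faces[0]                  # first detected face as the first region
    return int(x), int(y), int(w), int(h)
```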
Step 602, performing edge preserving filtering on the first region in the original image to obtain a filtered image.
In this step, when the edge-preserving filtering algorithm is applied to the original image, only the first region is designated for filtering. The corresponding first region of the generated filtered image therefore contains the filtering result, with edges preserved and noise reduced, while regions outside the first region are not filtered and keep the pixel values of the original image. Because the whole original image is not filtered in this step, filtering efficiency is improved.
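A minimal sketch of such region-restricted filtering, assuming a rectangular first region and a bilateral filter as the edge-preserving algorithm:

```python
import cv2
import numpy as np

def filter_first_region(original_gray, region):
    # Only the first region is filtered; all other pixels keep their original
    # values, so only the face region pays the filtering cost.
    x, y, w, h = region
    filtered = original_gray.copy()
    roi = np.ascontiguousarray(original_gray[y:y + h, x:x + w])
    filtered[y:y + h, x:x + w] = cv2.bilateralFilter(roi, 9, 40, 40)
    return filtered
```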
Step 603, determining a second region containing preset human face organ pixels in the first region.
In order to better protect the skin texture of the face organ, this step may determine a second region containing the pixels of the predetermined face organ in the first region, wherein the predetermined face organ includes: eyes, or mouth. The number of the second regions may be one or more, and in a specific implementation, may be determined according to specific requirements, for example: this step may determine one second region corresponding to the mouth, or may determine three second regions corresponding to the left eye, the right eye, and the mouth, respectively.
In a specific implementation, because the first region has been determined from the original image in step 601 and the filtered image has been obtained in step 602, this step can further identify the second region containing face organ pixels within the first region, using either the original image or the filtered image, and adopting feature positioning, a skin color model, or a combination of the two to make the identification result more accurate.
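For illustration only, the sketch below finds candidate second regions (the eyes) inside the first region with an OpenCV Haar eye cascade; a mouth detector, feature positioning, or a skin-color model could be substituted, and the detector and its parameters are assumptions.

```python
import cv2
import numpy as np

_eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_second_regions(gray, first_region):
    fx, fy, fw, fh = first_region
    roi = np.ascontiguousarray(gray[fy:fy + fh, fx:fx + fw])
    eyes = _eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    # Translate the eye rectangles back into full-image coordinates.
    return [(fx + ex, fy + ey, ew, eh) for (ex, ey, ew, eh) in eyes]
```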
Step 604, fusing the original image and the filtered image in the first area that does not contain the second area, and retaining the original image in the second area and outside the first area, to obtain a fused image.
This step performs a fusion operation based on region division; the following describes an implementation of the step by taking weighted fusion as an example: for pixels in the second area and outside the first area of the fused image, the pixel value of the original image is adopted; for pixels in the first area excluding the second area (that is, the first area with the second area removed), the value obtained by weighted fusion of the original image and the filtered image is adopted as the pixel value, thereby obtaining the fused image.
In specific implementation, different processing on different areas can be implemented by setting and adjusting the fusion coefficient and adopting a uniform weighting processing flow, so as to facilitate the implementation process of the technical solution, which specifically includes steps 604-1 to 604-4, which are further described below with reference to fig. 8.
Step 604-1, a preset mode is adopted to set a fusion coefficient w corresponding to each pixel of the fusion image.
Here w satisfies 0 ≤ w ≤ 1.0. The fusion coefficient w corresponding to each pixel of the fused image is set in a preset manner: it can be set to a fixed value, or to the output value of a fusion function f(x) that takes the gradient of the corresponding pixel of the filtered image as its independent variable. The setting methods for the fusion coefficient and the conditions satisfied by the fusion function f(x) provided in the previous method embodiments all apply to this step; for the relevant description, refer to the previous method embodiments, which is not repeated here.
Step 604-2, setting the fusion coefficient of pixels that do not belong to the first area to a value that makes the fusion weight of the original image equal to 1.
Specifically, if formula 1 or formula 3 in the previous embodiment is used for weighted fusion, the fusion coefficient of the pixel not belonging to the first region may be set to 1; if weighted fusion is performed using formula 2 or formula 4 in the previous embodiment, the fusion coefficient of the pixels not belonging to the first region may be set to 0.
Step 604-3, setting the fusion coefficient w of pixels belonging to the second area to a value that makes the fusion weight of the original image equal to 1.
The specific setting method is similar to step 604-2 and will not be described again.
Step 604-4, for each pixel of the fused image, taking the corresponding w and 1-w as fusion weights, performing a weighted summation of the corresponding pixel values of the original image and the filtered image, and taking the resulting value as the fused pixel value, thereby obtaining the fused image.
Through the determination of the first region and the second region and the setting of the fusion coefficient in the above steps, in the second region containing the pixels of the predetermined human face organ and other regions not belonging to the first region, the fusion coefficient of each pixel is set to a value such that the fusion weight of the original image is 1 (i.e., the fusion weight of the filtered image is 0), and in the first region not containing the second region, the fusion coefficient of each pixel may be set in the predetermined manner in step 604-1.
If the second region and the region that does not belong to the first region are represented in black and the other regions in white, an effect similar to a face mask (referred to as a face region mask for short) is obtained; fig. 9 is a schematic diagram of the face region mask provided in this embodiment, in which the first region follows the face contour. Because the fusion weight of the original image is 1 in the black areas, during fusion the original image and the filtered image are weighted and fused only in the white area, while the pixel values of the original image are retained in the black areas. This reduces the amount of fusion computation and improves fusion efficiency, and also has the following benefits: because the weighted fusion operation is performed in the white area, the texture details of the facial skin are preserved to a certain extent; and because the pixel values of the original image are retained in the black second area, the skin textures of the preset face organs (such as the eyes and mouth) are well preserved, making the image processing result more realistic and natural.
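The face region mask can be folded directly into the per-pixel coefficient map, as in the following sketch. It is a hedged illustration that assumes the variant in which w weights the original image; w_map stands for the per-pixel coefficient computed in step 604-1, and first_mask / second_mask are assumed boolean masks for the first region and the preset face-organ region.

```python
import numpy as np

def fuse_with_face_mask(original, filtered, w_map, first_mask, second_mask):
    w = w_map.astype(np.float32).copy()
    # Black mask areas: outside the first region and inside the second region,
    # force the fusion weight of the original image to 1 so its pixels are kept.
    w[~first_mask] = 1.0
    w[second_mask] = 1.0
    # White mask areas keep the coefficient from step 604-1, so the original
    # and filtered images are weighted and fused only there.
    fused = w * original.astype(np.float32) + (1.0 - w) * filtered.astype(np.float32)
    return np.clip(fused, 0, 255).astype(original.dtype)
```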
So far, the implementation of the image processing method provided in this embodiment is described in detail through the above steps 601-604.
It should be noted that the preferred down-sampling-based embodiment provided in the previous method embodiment can also be combined with the present embodiment. Specifically, after the first region containing face pixels is determined in step 601, the original image may be down-sampled; in step 602, the corresponding first region in the down-sampled original image is filtered to obtain a filtered image; before step 604 is performed, the filtered image is up-sampled according to the down-sampling coefficient; then in step 604, the original image and the up-sampled filtered image are weighted and fused in the first region, and the original image is retained in the second region and outside the first region, to obtain a fused image.
Similarly, the preferred embodiment of image fusion using the GPU provided in the previous embodiment can also be combined with the present embodiment. Specifically, after the fusion coefficients have been set, step 604 is executed by the GPU, that is, the original image, the filtered image, and the fusion coefficients may each be written as a two-dimensional texture for a shader script running on the GPU, and the shader script is then run to perform the weighted fusion operation on the GPU.
In summary, this embodiment provides a preferred embodiment for face images. Since the weighted fusion operation is performed in the first region that does not include the second region and the original image is retained in the second region, the amount of fusion calculation is reduced and the fusion efficiency is improved; texture details of the facial skin are preserved to some extent; and the skin texture of the preset face organs (such as the eyes and mouth) is well preserved, so that the image processing result is more realistic and natural.
It should be noted that the above embodiment provides a preferred embodiment of a face image, and in a specific application, the embodiment may be modified based on the above preferred embodiment as needed.
For example, in an application scenario where the requirement on the texture features of organs such as the eyes and mouth is not high, step 603 for determining the second region may be omitted. Correspondingly, step 604 then performs weighted fusion of the original image and the filtered image in the first region and retains the original image outside the first region (that is, step 604-3 provided in this embodiment need not be performed in a specific implementation). In this way, while edge-preserving filtering is performed and a certain amount of facial texture detail is retained, execution efficiency is improved because only the first region is filtered and weighted-fused.
For another example, in an application scenario with a low requirement on execution efficiency, step 601 may be omitted, that is, the first region containing face pixels is not determined, and the edge-preserving filtering algorithm in step 602 filters the whole original image. Step 603 may then determine a second region containing preset face organ pixels in the original image, and step 604 performs weighted fusion of the original image and the filtered image outside the second region and retains the original image in the second region (that is, step 604-2 provided in this embodiment need not be performed in a specific implementation). In this way, while edge-preserving filtering is performed and a certain amount of facial texture detail is retained, the texture details of the face organs in the second region are well preserved because the original image is retained there, making the image processing result more realistic and natural.
The foregoing provides another embodiment of an image processing method according to the present application, and correspondingly, another embodiment of a corresponding image processing apparatus. Please refer to fig. 10, which is a schematic diagram of another embodiment of an image processing device of the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
An image processing apparatus of the present embodiment includes: a first region determining unit 1001 configured to determine a first region containing face pixels in the original image; a face image filtering unit 1002, configured to perform edge preserving filtering on a first region in an original image to obtain a filtered image; a second region determining unit 1003, configured to determine a second region including preset face organ pixels in the first region; the face image fusion unit 1004 is configured to fuse the original image and the filtered image in a first region that does not include a second region, and retain the original image in the second region and the non-first region to obtain a fused image.
In addition, the application also provides an electronic device, which is described in the following embodiment:
referring to fig. 11, a schematic diagram of an embodiment of an electronic device of the present application is shown.
The electronic device includes: a processor 1101; a memory 1102 for storing code;
wherein the processor is coupled to the memory, and is configured to read the code stored in the memory and perform the following operations: performing edge-preserving filtering on a first image based on an original image to obtain a filtered image; and fusing the second image based on the original image and the third image based on the filtering image to obtain a fused image.
Optionally, the original image includes: a face image; the processor performs operations further comprising: before edge-preserving filtering is carried out on a first image based on an original image, a beautifying processing request aiming at a face image is received; after the fused image is obtained, the beauty processing request is responded with a processing result based on the fused image.
Optionally, the fusing the second image based on the original image and the third image based on the filtered image includes:
respectively setting a fusion coefficient w corresponding to each pixel of the fusion image in a preset mode, wherein w satisfies: 0 ≤ w ≤ 1.0; performing the following weighted fusion operation on the second image and the third image: for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value.
Optionally, the setting the fusion coefficient w corresponding to each pixel of the fusion image in a preset manner includes:
for each pixel of the fused image, performing the following operations: acquiring a gradient value corresponding to the pixel in the third image; taking the output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, wherein the input parameter of f (x) is the gradient value;
when the weighted fusion operation is executed by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is greater than that in the non-edge area, and the fusion weight of the second image in the non-edge area is greater than that in the edge area.
Optionally, when the weighted fusion operation is performed, if w corresponding to the pixel is used as the fusion weight of the pixel in the second image and 1-w corresponding to the pixel is used as the fusion weight of the pixel in the third image, the preset fusion function f (x) satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is less than the first threshold, and when x is less than the edge threshold, the value of f (x) is greater than the first threshold.
Optionally, when the weighted fusion operation is performed, if 1-w corresponding to each pixel in the fused image is used as the fusion weight of the pixel in the second image, and the corresponding w is used as the fusion weight of the pixel in the third image, the preset fusion function f (x) satisfies the following condition: when x is greater than the edge threshold, the value of f (x) is greater than the first threshold, and when x is less than the edge threshold, the value of f (x) is less than the first threshold.
Although the present application has been described with reference to the preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transient media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (22)

1. An image processing method, comprising:
performing edge-preserving filtering on a first image based on an original image to obtain a filtered image;
fusing the second image based on the original image and the third image based on the filtering image to obtain a fused image, wherein the fusing comprises the following steps:
respectively setting a fusion coefficient w corresponding to each pixel of the fusion image in a preset mode, wherein w satisfies: 0 ≤ w ≤ 1.0;
performing the following weighted fusion operation on the second image and the third image:
for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value;
wherein, adopting the preset mode to set up the fusion coefficient w corresponding to each pixel of the fusion image respectively comprises:
for each pixel of the fused image, performing the following operations:
acquiring a gradient value corresponding to the pixel in the third image;
taking the output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, wherein the input parameter of f (x) is the gradient value;
when the weighted fusion operation is executed by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is greater than that in the non-edge area, and the fusion weight of the second image in the non-edge area is greater than that in the edge area.
2. The method of claim 1, wherein the original image comprises: a face image;
before edge-preserving filtering a first image based on an original image, the method comprises the following steps:
receiving a beauty processing request aiming at a face image;
after obtaining the fusion image, the method comprises the following steps:
responding to the beauty processing request with a processing result based on the fused image.
3. The method according to claim 1, wherein when performing the weighted fusion operation, for each pixel in the fused image, if w corresponding to the pixel is taken as the fusion weight of the pixel in the second image and 1-w is taken as the fusion weight of the pixel in the third image, the preset fusion function f (x) satisfies the following condition:
when x is greater than the edge threshold, the value of f (x) is less than the first threshold, and when x is less than the edge threshold, the value of f (x) is greater than the first threshold.
4. The method of claim 3, wherein the condition satisfied by the fusion function f (x) further comprises: when x is greater than the edge threshold, the value of f (x) decreases to a second threshold as x increases.
5. The method according to claim 1, wherein when performing the weighted fusion operation, for each pixel in the fused image, if 1-w corresponding to the pixel is taken as the fusion weight of the pixel in the second image and the corresponding w is taken as the fusion weight of the pixel in the third image, the preset fusion function f (x) satisfies the following condition:
when x is greater than the edge threshold, the value of f (x) is greater than the first threshold, and when x is less than the edge threshold, the value of f (x) is less than the first threshold.
6. The method of claim 5, wherein the condition satisfied by the fusion function f (x) further comprises: when x is greater than the edge threshold, the value of f (x) is incremented to a third threshold as x increases.
7. The method according to claim 1, wherein the weighted fusion operation is performed on the second image and the third image using the GPU after the fusion coefficient w corresponding to each pixel of the fused image is set in a preset manner, respectively.
8. The method of claim 1, wherein the first image and the second image based on the original image respectively comprise: the original image; the third image based on the filtered image comprises: the filtered image.
9. The method of claim 1, prior to edge-preserving filtering the first image based on the original image, comprising: down-sampling an original image; the first image based on the original image includes: down-sampling the original image;
after obtaining the filtered image, before fusing the second image based on the original image and the third image based on the filtered image, the method includes: up-sampling the filtered image according to the down-sampling coefficient; the second image based on the original image includes: an original image; the third image based on the filtered image comprises: up-sampled filtered image.
10. The method of claim 1, prior to edge-preserving filtering the first image based on the original image, comprising: converting an original image based on an RGB color space into a color space containing a luminance component;
the edge-preserving filtering of the first image based on the original image comprises: performing edge-preserving filtering on the brightness component of the original image after the conversion operation is performed;
the fusing the second image based on the original image and the third image based on the filtered image comprises: fusing the original image and the filtered image after the conversion operation is performed aiming at the brightness component;
after obtaining the fusion image, the method comprises the following steps:
and converting the fused image back to the RGB color space, and taking the fused image converted back to the RGB color space as an image processing result.
11. The method of claim 1, wherein the original image comprises: a face image.
12. The method of claim 11, wherein the first and second images based on the original image respectively comprise: the original image; the third image based on the filtered image comprises: the filtered image;
before edge-preserving filtering a first image based on an original image, the method comprises the following steps: determining a first region containing face pixels in the original image;
the edge-preserving filtering of the first image based on the original image comprises: performing edge-preserving filtering on a first region in an original image;
fusing a second image based on the original image with a third image based on the filtered image, comprising:
and fusing the original image and the filtered image in the first area, and reserving the original image in the non-first area.
13. The method of claim 12, further comprising, prior to the fusing the original image and the filtered image in the first region: determining a second region containing preset human face organ pixels in the first region;
the fusing the original image and the filtered image in the first area comprises:
and fusing the original image and the filtered image in the first area without the second area, and reserving the original image in the second area.
14. The method of claim 11, prior to edge-preserving filtering the first image based on the original image, comprising: determining a first region containing face pixels in the original image, and performing down-sampling on the original image;
the edge-preserving filtering of the first image based on the original image comprises: performing edge-preserving filtering on a corresponding first region in the original image after the down-sampling;
after obtaining the filtered image, before performing the fusion operation, the method includes: up-sampling the filtered image according to the down-sampling coefficient;
fusing a second image based on the original image with a third image based on the filtered image, comprising:
and fusing the original image and the up-sampled filtered image in the first area, and reserving the original image in the non-first area.
15. The method of claim 14, further comprising, prior to the fusing the original image and the upsampled filtered image in the first region: determining a second region containing preset human face organ pixels in the first region;
the fusing the original image and the up-sampled filtered image in the first region includes:
the original image and the up-sampled filtered image are processed in a first area which does not contain a second area, and the original image is reserved in the second area.
16. The method of claim 13 or 15, wherein the predetermined face organ comprises: eyes, or mouth.
17. The method according to any of claims 1-15, characterized in that the edge-preserving filtering is performed on the first image based on the original image using the following algorithm: a bilateral filtering algorithm, a surface blurring algorithm, or a guided filtering algorithm.
18. The method according to any of claims 1-15, characterized in that the method is implemented on a mobile terminal device.
19. An image processing apparatus characterized by comprising:
the image filtering unit is used for carrying out edge-preserving filtering on a first image based on an original image to obtain a filtered image;
the image fusion unit is used for fusing a second image based on the original image and a third image based on the filtering image to obtain a fused image;
the image fusion unit includes:
a fusion coefficient setting subunit, configured to set a fusion coefficient w corresponding to each pixel of the fused image in a preset manner, where w satisfies: 0 ≤ w ≤ 1.0;
a fusion execution subunit, configured to perform the following weighted fusion operation on the second image and the third image: for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value;
the fusion coefficient setting subunit is specifically configured to, for each pixel of the fusion image, obtain a gradient value corresponding to the pixel in the third image, and use an output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, where an input parameter of f (x) is the gradient value;
when the image fusion unit executes the weighted fusion operation by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is larger than that in the non-edge area, and the fusion weight of the second image in the non-edge area is larger than that in the edge area.
20. The apparatus of claim 19, wherein the original image comprises: a face image; the device further comprises:
a processing request receiving unit, configured to receive a beauty processing request for a face image before performing edge-preserving filtering on a first image based on an original image;
a request responding unit for responding the beauty processing request with a processing result based on the fused image after obtaining the fused image.
21. An electronic device, comprising:
a processor;
a memory for storing code;
wherein the processor is coupled to the memory, and is configured to read the code stored in the memory and perform the following operations: performing edge-preserving filtering on a first image based on an original image to obtain a filtered image; fusing the second image based on the original image and the third image based on the filtering image to obtain a fused image, wherein the fusing comprises the following steps:
respectively setting a fusion coefficient w corresponding to each pixel of the fusion image in a preset mode, wherein w satisfies: 0 ≤ w ≤ 1.0;
performing the following weighted fusion operation on the second image and the third image:
for each pixel, taking w and 1-w corresponding to the pixel as fusion weights, carrying out weighted summation on corresponding pixel values of the second image and the third image, and taking the obtained numerical value as a fused pixel value;
wherein, adopting the preset mode to set up the fusion coefficient w corresponding to each pixel of the fusion image respectively comprises:
for each pixel of the fused image, performing the following operations:
acquiring a gradient value corresponding to the pixel in the third image;
taking the output value of a preset fusion function f (x) as a fusion coefficient w of the pixel, wherein the input parameter of f (x) is the gradient value;
when the weighted fusion operation is executed by using the fusion coefficient calculated by the fusion function, the fusion weight of the third image in the edge area is greater than that in the non-edge area, and the fusion weight of the second image in the non-edge area is greater than that in the edge area.
22. The electronic device of claim 21, wherein the original image comprises: a face image; the processor performs operations further comprising: before edge-preserving filtering is carried out on a first image based on an original image, a beautifying processing request aiming at a face image is received; after the fused image is obtained, the beauty processing request is responded with a processing result based on the fused image.
CN201611168058.2A 2016-12-16 2016-12-16 Image processing method and device and electronic equipment Active CN108205804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611168058.2A CN108205804B (en) 2016-12-16 2016-12-16 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611168058.2A CN108205804B (en) 2016-12-16 2016-12-16 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN108205804A CN108205804A (en) 2018-06-26
CN108205804B true CN108205804B (en) 2022-05-31

Family

ID=62602369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611168058.2A Active CN108205804B (en) 2016-12-16 2016-12-16 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108205804B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110895789B (en) * 2018-09-13 2023-05-02 杭州海康威视数字技术股份有限公司 Face beautifying method and device
CN109636749B (en) * 2018-12-04 2020-10-16 深圳市华星光电半导体显示技术有限公司 Image processing method
WO2020124355A1 (en) * 2018-12-18 2020-06-25 深圳市大疆创新科技有限公司 Image processing method, image processing device, and unmanned aerial vehicle
CN109767385B (en) * 2018-12-20 2023-04-28 深圳市资福医疗技术有限公司 Method and device for removing image chroma noise
CN109672885B (en) * 2019-01-08 2020-08-04 中国矿业大学(北京) Video image coding and decoding method for intelligent monitoring of mine
CN109829864B (en) * 2019-01-30 2021-05-18 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium
CN109978808B (en) * 2019-04-25 2022-02-01 北京迈格威科技有限公司 Method and device for image fusion and electronic equipment
CN112419161B (en) * 2019-08-20 2022-07-05 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN110503704B (en) * 2019-08-27 2023-07-21 北京迈格威科技有限公司 Method and device for constructing three-dimensional graph and electronic equipment
CN110738612B (en) * 2019-09-27 2022-04-29 深圳市安健科技股份有限公司 Method for reducing noise of X-ray perspective image and computer readable storage medium
CN110956592B (en) * 2019-11-14 2023-07-04 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN110910326B (en) * 2019-11-22 2023-07-28 上海商汤智能科技有限公司 Image processing method and device, processor, electronic equipment and storage medium
CN112967182B (en) * 2019-12-12 2022-07-29 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN111861929A (en) * 2020-07-24 2020-10-30 深圳开立生物医疗科技股份有限公司 Ultrasonic image optimization processing method, system and device
CN112508859A (en) * 2020-11-19 2021-03-16 聚融医疗科技(杭州)有限公司 Method and system for automatically measuring thickness of endometrium based on wavelet transformation
CN112991477B (en) * 2021-01-28 2023-04-18 明峰医疗系统股份有限公司 PET image processing method based on deep learning
CN113808038A (en) * 2021-09-08 2021-12-17 瑞芯微电子股份有限公司 Image processing method, medium, and electronic device
CN115115554B (en) * 2022-08-30 2022-11-04 腾讯科技(深圳)有限公司 Image processing method and device based on enhanced image and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930604A (en) * 2010-09-08 2010-12-29 中国科学院自动化研究所 Infusion method of full-color image and multi-spectral image based on low-frequency correlation analysis
CN102789638A (en) * 2012-07-16 2012-11-21 北京市遥感信息研究所 Image fusion method based on gradient field and scale space theory
CN104318524A (en) * 2014-10-15 2015-01-28 烟台艾睿光电科技有限公司 Method, device and system for image enhancement based on YCbCr color space
CN105574834A (en) * 2015-12-23 2016-05-11 小米科技有限责任公司 Image processing method and apparatus
CN105931210A (en) * 2016-04-15 2016-09-07 中国航空工业集团公司洛阳电光设备研究所 High-resolution image reconstruction method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101394487B (en) * 2008-10-27 2011-09-14 华为技术有限公司 Image synthesizing method and system
US8805111B2 (en) * 2010-02-09 2014-08-12 Indian Institute Of Technology Bombay System and method for fusing images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930604A (en) * 2010-09-08 2010-12-29 中国科学院自动化研究所 Infusion method of full-color image and multi-spectral image based on low-frequency correlation analysis
CN102789638A (en) * 2012-07-16 2012-11-21 北京市遥感信息研究所 Image fusion method based on gradient field and scale space theory
CN104318524A (en) * 2014-10-15 2015-01-28 烟台艾睿光电科技有限公司 Method, device and system for image enhancement based on YCbCr color space
CN105574834A (en) * 2015-12-23 2016-05-11 小米科技有限责任公司 Image processing method and apparatus
CN105931210A (en) * 2016-04-15 2016-09-07 中国航空工业集团公司洛阳电光设备研究所 High-resolution image reconstruction method

Also Published As

Publication number Publication date
CN108205804A (en) 2018-06-26

Similar Documents

Publication Publication Date Title
CN108205804B (en) Image processing method and device and electronic equipment
US9495582B2 (en) Digital makeup
CN110706174B (en) Image enhancement method, terminal equipment and storage medium
US9142009B2 (en) Patch-based, locally content-adaptive image and video sharpening
WO2018082185A1 (en) Image processing method and device
WO2016031189A1 (en) Image processing apparatus, image processing method, recording medium, and program
CN111340732B (en) Low-illumination video image enhancement method and device
CN111353955A (en) Image processing method, device, equipment and storage medium
Arulkumar et al. Super resolution and demosaicing based self learning adaptive dictionary image denoising framework
WO2022016326A1 (en) Image processing method, electronic device, and computer-readable medium
CN113436112A (en) Image enhancement method, device and equipment
CN109345479B (en) Real-time preprocessing method and storage medium for video monitoring data
CN116612263B (en) Method and device for sensing consistency dynamic fitting of latent vision synthesis
CN114862729A (en) Image processing method, image processing device, computer equipment and storage medium
CN112334942A (en) Image processing method and device
CN116468636A (en) Low-illumination enhancement method, device, electronic equipment and readable storage medium
CN110415188A (en) A kind of HDR image tone mapping method based on Multiscale Morphological
CN109741274B (en) Image processing method and device
CN112822343B (en) Night video oriented sharpening method and storage medium
GUAN et al. A dual-tree complex wavelet transform-based model for low-illumination image enhancement
CN112541873B (en) Image processing method based on bilateral filter
CN109712094B (en) Image processing method and device
CN110503603B (en) Method for obtaining light field refocusing image based on guide up-sampling
CN112465719A (en) Transform domain image denoising method and system
Ogawa et al. Adaptive subspace-based inverse projections via division into multiple sub-problems for missing image data restoration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201201

Address after: Room 603, 6 / F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Limited

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant