WO2017092592A1 - A method, apparatus and device for image fusion - Google Patents


Info

Publication number
WO2017092592A1
Authority
WO
WIPO (PCT)
Prior art keywords
template
image
pixel
fusion
area
Prior art date
Application number
PCT/CN2016/106877
Other languages
English (en)
French (fr)
Inventor
秦文煜
黄英
邹建法
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2017092592A1 publication Critical patent/WO2017092592A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/147 Transformations for image registration, e.g. adjusting or mapping for alignment of images using affine transformations
    • G06T3/153 Transformations for image registration, e.g. adjusting or mapping for alignment of images using elastic snapping
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • the present invention relates to the field of computer image processing technologies, and in particular, to a method, device and device for image fusion.
  • the present invention provides a method, an apparatus and a device for image fusion, so as to reduce the amount of computation of image fusion, and reduce time cost and resource consumption.
  • the invention provides a method for image fusion, the method comprising:
  • the pixel values of the pixels in the fourth template are respectively used as weights of corresponding pixel points in the image, and the pixels of the fusion region and the fused material in the image are weighted and fused.
  • the determining the fusion region in the image to obtain the first template comprises:
  • performing feature point positioning on the fusion target in the image, the feature points including contour points;
  • using the located feature points, the area other than the fusion target in the image is removed to obtain the first template.
  • the downsampling of the first template to obtain the second template includes: using an affine transformation, downsampling the first template, so that the number of pixels of the second template is 1/N times that of the first template, where N is a positive integer of 2 or more;
  • the upsampling of the third template to obtain the fourth template includes: using an inverse affine transformation, upsampling the third template, so that the number of pixels of the fourth template is N times that of the third template.
  • before the normalizing of the pixel values of the pixels in the second template, the method further includes:
  • the smoothing of the edge of the second template includes:
  • the predefined smoothing template is affine-mapped onto the area to be smoothed to obtain a smoothed second template.
  • the affine-mapping of the predefined smoothing template onto the area to be smoothed comprises:
  • the pixel value of a pixel in the smoothing template is taken as the pixel value of the corresponding pixel in the area to be smoothed.
  • the affine-mapping of the predefined smoothing template onto the area to be smoothed comprises:
  • the smoothing area on the smoothing template and the area to be smoothed on the second template are triangulated in the same manner, respectively, to obtain the same number of triangular areas;
  • each of the triangular areas in the smoothing template is affine-mapped onto the triangular area at the corresponding position in the second template.
  • before the normalizing of the pixel values of the pixels in the second template, the method further includes:
  • the brightness adjustment is performed on the smoothed second template.
  • the brightness adjustment of the smoothed second template according to the brightness statistics result includes:
  • the difference value is added to the luminance value of each pixel in the smoothed second template.
  • performing weighted fusion on each pixel point and the fused material in the fused region of the image includes:
  • Imagei_new = weight_maski*Imagei_old + (1-weight_maski)*Colori, determining the pixel value of each pixel obtained after the fusion;
  • Imagei_new is the pixel value of the i-th pixel obtained after the fusion region is merged in the image
  • weight_maski is the pixel value of the i-th pixel in the fourth template
  • Imagei_old is the pixel value of the i-th pixel of the fusion region in the image.
  • Colori is the pixel value of the ith pixel provided by the fused material.
  • the method is applied to a beauty-type APP
  • the fusion area is a face area; the fusion material is a foundation color.
  • the invention also provides an apparatus for image fusion, the apparatus comprising:
  • a template determining unit configured to determine a fusion area in the image, to obtain a first template
  • a downsampling unit configured to downsample the first template to obtain a second template
  • a normalization unit configured to normalize pixel values of each pixel in the second template to obtain a third template
  • An upsampling unit configured to upsample the third template, to obtain a fourth template, where the number of pixels of the fourth template is equal to the number of pixels of the first template;
  • the weighting and merging unit is configured to use the pixel values of the pixels in the fourth template as the weights of the corresponding pixel points in the image, and perform weighted fusion on each pixel of the fused region and the fused material in the image.
  • the template determining unit is specifically configured to:
  • performing feature point positioning on the fusion target in the image, the feature points including contour points;
  • using the located feature points, the area other than the fusion target in the image is removed to obtain the first template.
  • the downsampling unit is specifically configured to downsample the first template by using an affine transformation, so that the number of pixels of the second template is 1/N times that of the first template, where N is a positive integer of 2 or more;
  • the upsampling unit is specifically configured to upsample the third template by using an inverse affine transformation, such that the number of pixels of the fourth template is N times of the third template.
  • the device further comprises:
  • An edge smoothing unit is configured to perform smoothing on the edge of the second template, and output the smoothed second template to the normalization unit.
  • the edge smoothing unit is specifically configured to:
  • the predefined smoothing template is affine-mapped onto the area to be smoothed to obtain a smoothed second template.
  • when affine-mapping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit specifically performs:
  • the pixel value of a pixel in the smoothing template is taken as the pixel value of the corresponding pixel in the area to be smoothed.
  • when affine-mapping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit specifically performs:
  • the smoothing area on the smoothing template and the area to be smoothed on the second template are triangulated in the same manner, respectively, to obtain the same number of triangular areas;
  • each of the triangular areas in the smoothing template is affine-mapped onto the triangular area at the corresponding position in the second template.
  • the device further comprises:
  • a brightness adjustment unit configured to acquire the second template output by the edge smoothing unit, perform brightness statistics on the fused area in the image, perform brightness adjustment on the acquired second template according to the brightness statistics result, and output the brightness-adjusted second template to the normalization unit.
  • when performing brightness adjustment on the smoothed second template according to the brightness statistics result, the brightness adjustment unit specifically performs:
  • the difference value is added to the luminance value of each pixel in the smoothed second template.
  • the weighting and combining unit is specifically configured to:
  • Imagei_new = weight_maski*Imagei_old + (1-weight_maski)*Colori, determining the pixel value of each pixel obtained after the fusion;
  • Imagei_new is the pixel value of the i-th pixel obtained after the fusion region is merged in the image
  • weight_maski is the pixel value of the i-th pixel in the fourth template
  • Imagei_old is the pixel value of the i-th pixel of the fusion region in the image.
  • Colori is the pixel value of the ith pixel provided by the fused material.
  • the device is applied to a beauty APP
  • the fusion area is a face area; the fusion material is a foundation color.
  • the invention also provides an apparatus, including
  • one or more processors, and a memory;
  • one or more programs, the one or more programs being stored in the memory and executed by the one or more processors to:
  • the pixel values of the pixels in the fourth template are respectively used as weights of corresponding pixel points in the image, and the pixels of the fusion region and the fused material in the image are weighted and fused.
  • the present invention downsamples the image, performs the weight calculation on the downsampled fusion region, and then upsamples back to the original image size to obtain the weight corresponding to each pixel of the fusion region in the original image during fusion, which greatly reduces the amount of computation caused by the weight calculation, reducing time cost and resource consumption.
  • FIG. 1 is a flowchart of a main method according to an embodiment of the present invention
  • 3a is a schematic diagram of a face image according to an embodiment of the present invention.
  • Figure 3b is a schematic view of the feature point positioning of Figure 3a;
  • Figure 3c is a schematic view of the first template region obtained based on Figure 3b;
  • Figure 3d is the area to be smoothed based on the contour of the face in Figure 3c;
  • Figure 3e is a schematic diagram of smoothing using a triangulation method in combination with a smoothing template
  • FIG. 4 is a structural diagram of a device according to an embodiment of the present invention.
  • FIG. 5 is a structural diagram of a device according to an embodiment of the present invention.
  • the word "if" as used herein may be interpreted as "when", "while", "in response to determining", or "in response to detecting".
  • the phrase "if determined" or "if detected (the stated condition or event)" may be interpreted as "when determined", "in response to determining", "when detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
  • FIG. 1 is a flowchart of a main method according to an embodiment of the present invention. As shown in FIG. 1 , the method mainly includes the following steps:
  • a fusion region in the image is determined to obtain a first template.
  • the image involved in this step is an image to be subjected to fusion processing, and the fusion region in the image refers to an area in which fusion processing is required.
  • the fused area may be a specified target area, or may be a target area determined by means of feature point positioning, which will be detailed in subsequent embodiments.
  • obtaining the first template actually extracts the fusion region from the image, which can be done by removing the region other than the fusion target from the image.
  • the first template is downsampled to obtain a second template.
  • the first template may be subjected to down sampling processing, that is, the number of pixels in the first template is reduced.
  • an affine transformation may be adopted, for example scaling the first template with a suitably set affine parameter, so that the number of pixels of the second template is 1/N that of the first template.
  • the N is a positive integer of 2 or more.
  • the edge of the second template may be further smoothed.
  • there are many ways to smooth the edges of an image, such as simple blur, Gaussian blur, median filtering, and Gaussian filtering.
  • the edge of the second template may be smoothed by using a predefined smoothing template, which will be detailed in the following embodiments.
  • brightness statistics may also be performed on the fusion region in the original image, and brightness adjustment may be performed on the smoothed second template according to the brightness statistics result; the specific adjustment method will be detailed in the subsequent embodiments.
  • the pixel values of the pixels in the second template are normalized to obtain a third template.
  • Determining the third template is actually determining the fusion weights used by each pixel in subsequent image fusion.
  • the weight coefficient is represented by the pixel value of each pixel in the second template.
  • the pixel value of each pixel is normalized.
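As an illustration of this normalization step, the following is a minimal sketch (the function and variable names are our own, not from the patent); it assumes 8-bit pixel values, so each value is divided by 255 to give a weight in [0, 1]:

```python
def normalize_template(template, max_value=255):
    """Turn a template's pixel values into [0, 1] fusion weights.

    `template` is a 2-D list of pixel values; 8-bit values are assumed,
    so each value is divided by `max_value`. Illustrative only.
    """
    return [[v / max_value for v in row] for row in template]

# A toy 2x2 second template and the resulting third template of weights.
second_template = [[255, 128], [0, 64]]
third_template = normalize_template(second_template)
```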
  • the third template is upsampled to obtain a fourth template, and the number of pixels of the fourth template is equal to the number of pixels of the first template.
  • the calculation amount is mainly reflected in the process of determining the fusion weight.
  • the weight needs to be upsampled to obtain the weight of each pixel corresponding to the fusion region in the original image. Therefore, in this step, the third template including the weight information is upsampled to obtain a fourth template.
  • an inverse affine transformation may be adopted, that is, the affine parameters used in the affine transformation in step 102 are used to perform an inverse affine transformation on the third template, to obtain a fourth template with the same number of pixels as the first template.
  • the pixel values of the pixels in the fourth template are respectively used as the weights of the corresponding pixel points in the image, and the pixels of the fusion region in the image and the fused material are weight-fused.
  • a weighted fusion method is adopted, and the pixel value of each pixel obtained after the fusion may be determined by using the following formula:
  • Imagei_new = weight_maski*Imagei_old + (1-weight_maski)*Colori    (1)
  • Imagei_new is the pixel value of the i-th pixel obtained after the fusion region is merged in the image
  • weight_maski is the pixel value of the i-th pixel in the fourth template
  • Imagei_old is the pixel value of the i-th pixel of the fusion region in the image.
  • Colori is the pixel value of the ith pixel provided for the fused material.
  • the fused material may be one image, one or more types of color sets, and the like.
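Formula (1) above can be sketched per pixel as follows; this is a minimal illustration with invented names, applied per R/G/B channel as the text describes:

```python
def fuse_pixel(weight, old_rgb, material_rgb):
    """Formula (1): new = w * old + (1 - w) * color, per R/G/B channel.

    `weight` is this pixel's value in the fourth template, in [0, 1];
    `old_rgb` is the original pixel and `material_rgb` the fused material.
    """
    return tuple(weight * o + (1.0 - weight) * c
                 for o, c in zip(old_rgb, material_rgb))

# weight 1.0 keeps the original pixel unchanged; weight 0.0 takes the material.
blended = fuse_pixel(0.75, (200, 100, 40), (40, 20, 8))  # -> (160.0, 80.0, 32.0)
```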
  • the method provided by the present invention can be applied to the fusion processing of a still image, and the real-time performance can be ensured because the calculation amount is greatly reduced, and thus it can also be applied to the fusion processing of the video image.
  • the execution body of the foregoing method provided by the present invention may be an application in a user terminal, or may be a plug-in in a user terminal application or a functional unit such as a Software Development Kit (SDK), or may be located in a server.
  • the embodiment of the present invention does not specifically limit this.
  • the above applications may be, for example, image processing applications, beauty applications, and the like.
  • the above method will be described in detail below, in conjunction with FIG. 2, by taking a foundation try-on for a face in a beauty application as an example.
  • FIG. 2 is a flowchart of a detailed method according to an embodiment of the present invention.
  • in this embodiment, a foundation try-on is performed on a face in an image, that is, the face region of the image is fused with a foundation color.
  • the process may specifically include the following steps:
  • feature point positioning is performed on the face region in the image to obtain a contour point of the face and a contour point of the preset organ.
  • the specific manner of feature point positioning is not limited; any feature point positioning manner, such as positioning based on the SDM (Supervised Descent Method) model or id-exp model positioning, may be adopted, and finally the position information of the above feature points is obtained.
  • the feature points shown in FIG. 3b, that is, the contour points of the face and the contour points of the eyes, eyebrows, and mouth, can be obtained. It should be noted that FIG. 3b is only schematic: the feature points are exaggerated to make the effect easy to observe, and the number and size of the feature points actually located may not be consistent with those in FIG. 3b.
  • the area other than the face area in the image is removed to obtain the first template.
  • in the present embodiment, the removed area is the area outside the contour points of the face, together with the areas enclosed by the contour points of the eyes, the eyebrows, and the mouth.
  • the resulting first template area is schematically shown in the white area of Figure 3c, and the actual pixel values of the individual pixel points are not shown in Figure 3c.
  • the first template is downsampled by using an affine transformation method, so that the number of pixels of the second template is 1/N times of the first template.
  • the so-called affine transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates.
  • Affine transformations can be implemented through a series of atomic transformations, including: Translation, Scale, Flip, Rotation, and Shear.
  • the affine transformation involved in the embodiment of the present invention is a scaling transformation, in which an appropriate scaling parameter (i.e., an affine parameter) is set and the first template is reduced to 1/N of its size, for example, to 1/4. In the process of scaling down, the number of sampling points, that is, the number of pixels, is also reduced to 1/N; that is, the image has fewer pixels.
  • the larger the value of N, the smaller the amount of computation; the smaller the value of N, the higher the quality of the image processing. N can be weighed and selected according to actual needs.
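The downsampling step can be sketched as follows. This is a simplification: nearest-neighbour decimation stands in for the affine scaling transform the patent describes, and here n is the linear factor, so the pixel count drops n*n-fold. All names are invented:

```python
def downsample(template, n):
    """Shrink a 2-D template by keeping every n-th row and column
    (nearest-neighbour decimation, a stand-in for affine scaling)."""
    return [row[::n] for row in template[::n]]

# A toy 4x4 first template reduced to a 2x2 second template with n = 2.
first_template = [[r * 10 + c for c in range(4)] for r in range(4)]
second_template = downsample(first_template, 2)
```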
  • the contour points in the second template are respectively expanded outward and inward by M pixel points to obtain a region to be smoothed.
  • This step is actually to prepare the edge of the image to determine the area to be smoothed.
  • specifically, the contour points of the face, the eyes, the eyebrows, and the mouth in the second template may each be expanded inward and/or outward by M pixels, where M is a preset positive integer, for example 3, to obtain strip-shaped areas to be smoothed.
  • the so-called inward and outward expansion may be an extension of the two directions along the normal direction of the line connecting the contour points.
  • as schematically shown in Fig. 3d, only the area to be smoothed along the face contour is drawn; the areas to be smoothed along the eye, eyebrow, and mouth contours are similar.
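The expansion along the contour normal can be sketched as follows; this is a simplified illustration with invented names, in which each contour point of a closed contour is pushed ±M pixels along the normal of the segment joining its two neighbours:

```python
import math

def expand_contour(points, m):
    """Push each contour point m pixels outward and inward along the
    normal of the segment joining its neighbours (closed contour).
    Returns (outer_points, inner_points). Illustrative sketch only."""
    outer, inner = [], []
    n = len(points)
    for i, (x, y) in enumerate(points):
        (x0, y0), (x1, y1) = points[i - 1], points[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0            # local tangent direction
        length = math.hypot(tx, ty) or 1.0
        nx, ny = -ty / length, tx / length   # unit normal
        outer.append((x + m * nx, y + m * ny))
        inner.append((x - m * nx, y - m * ny))
    return outer, inner

# A toy square contour expanded by M = 3 pixels on each side.
outer, inner = expand_contour([(0, 0), (10, 0), (10, 10), (0, 10)], 3)
```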
  • the predefined face smoothing template is affine-mapped onto the area to be smoothed by using a triangulation method, to obtain a smoothed second template.
  • the purpose is to make the brightness of the face edge in the image change gently and gradually, reducing abrupt gradients and making the edge of the face softer and more natural, thus improving the image quality.
  • a smoothing-template method is employed in this embodiment. Since the general shape of a face is substantially the same from person to person, a template whose edges have already been smoothed can be formed in advance from the shape of the face. In this template, the smoothed edge region can be made appropriately larger.
  • the predefined smoothing template can be affine-mapped onto the area to be smoothed, thereby completing the smoothing of that area. In this way, edge smoothing can be achieved quickly, avoiding the computational and time overhead of real-time blur-based smoothing.
  • the triangulation method used in this step means that the smoothing area on the smoothing template and the area to be smoothed on the second template are triangulated in the same manner, to obtain the same number of triangular areas.
  • taking the area to be smoothed as an example, its inner and outer edges are each divided into m points, and the area is then divided into 2m triangles, where m is a preset positive integer, as shown in FIG. 3e.
  • the smoothing area on the smoothing template is split in the same way, and each triangular area in the smoothing template is affine-mapped onto the triangular area at the corresponding position in the second template.
  • the affine mapping referred to in this step refers to mapping pixel values between corresponding positions, that is, the pixel value of a pixel on the smoothing template is assigned to the pixel at the corresponding position in the area to be smoothed on the second template. For example,
  • pixel A of the smoothing area on the smoothing template is mapped to pixel a of the area to be smoothed on the second template, and pixel a takes the pixel value of pixel A. If pixel A is the apex of one of the triangles obtained by the split, then pixel a is the apex of the triangle at the corresponding position on the second template.
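The per-triangle affine mapping can be sketched with barycentric coordinates (an illustrative formulation, not necessarily the patent's implementation): a point is located inside a source triangle, and placed at the same barycentric position in the destination triangle, which is exactly the affine map sending each source vertex to the matching destination vertex:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def map_point(p, src_tri, dst_tri):
    """Affine-map p from src_tri into dst_tri: the unique affine map
    sending each source vertex to the matching destination vertex."""
    u, v, w = barycentric(p, *src_tri)
    return tuple(u * d0 + v * d1 + w * d2
                 for d0, d1, d2 in zip(*dst_tri))

# Example: the triangle (0,0),(2,0),(0,2) mapped onto (0,0),(4,0),(0,4).
src_tri = ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
dst_tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
mapped = map_point((1.0, 1.0), src_tri, dst_tri)  # -> (2.0, 2.0)
```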
  • brightness statistics are performed on the face region in the image, and brightness adjustment is performed on the smoothed second template according to the brightness statistics result.
  • the purpose of this step is to enable the brightness of the smoothed second template to match the actual brightness of the face in the original image as much as possible.
  • specifically, the mean brightness of the face region in the original image and the mean brightness of the smoothed second template may be computed to determine the difference between the two; the difference is then added to the brightness value of each pixel in the smoothed second template.
  • other specific methods of brightness adjustment can also be adopted, and will not be enumerated here.
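The mean-difference adjustment just described can be sketched as follows (toy values and invented names, luminance treated as a single channel for simplicity):

```python
def adjust_brightness(template, face_mean):
    """Add (face_mean - template_mean) to every pixel so the template's
    mean brightness matches the face region's mean brightness."""
    values = [v for row in template for v in row]
    diff = face_mean - sum(values) / len(values)
    return [[v + diff for v in row] for row in template]

smoothed = [[100, 120], [140, 160]]          # mean brightness 130
adjusted = adjust_brightness(smoothed, 150)  # shift every pixel by +20
```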
  • the pixel values in the brightness-adjusted second template are normalized to obtain a third template.
  • the weight coefficient is represented by the pixel value of each pixel in the second template.
  • the pixel value of each pixel is normalized and processed.
  • the pixel value of each pixel in the third template can be used as the weight used by the pixel in the subsequent fusion.
  • the calculation of the weight is one of the important processings in the image fusion.
  • here, the weight calculation is performed on the template with fewer pixels, and upsampling is then used to obtain the weights of all pixels; compared with performing the weight calculation directly at the original image size, this greatly reduces the computation cost and time cost.
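The saving can be made concrete with hypothetical numbers (nothing here is from the patent): with a linear downsampling factor of 4, the weight calculation touches 16 times fewer pixels than at the original size.

```python
# Hypothetical image size and downsampling factor, purely for illustration.
width, height, n = 1920, 1080, 4
full_res_pixels = width * height              # weight calculation at full size
down_pixels = (width // n) * (height // n)    # after downsampling by n per axis
savings = full_res_pixels / down_pixels       # n*n-fold reduction
```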
  • the third template is inverse affine transformed using the affine parameters employed in 203 to obtain a fourth template.
  • since a scaling transformation is used in step 203, a scaling transformation is also used here, with the corresponding affine parameters set as the inverse of those set in step 203, thereby implementing the inverse transformation.
  • the number of pixels of the fourth template obtained by the inverse affine transformation is restored to be the same as that of the first template; that is, this step increases the number of pixels back up to that of the first template.
  • in the inverse affine transformation process, since the number of pixels increases, the pixel values of the added pixels can be obtained by interpolation.
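This upsampling step can be sketched as follows. The patent only says the added pixels are obtained by interpolation, so nearest-neighbour replication is used here as one illustrative choice; names are invented:

```python
def upsample_nearest(template, n):
    """Enlarge a 2-D template n-fold in each direction, filling the new
    pixels by nearest-neighbour replication (one simple interpolation)."""
    out = []
    for row in template:
        wide = [v for v in row for _ in range(n)]  # repeat each column n times
        out.extend(list(wide) for _ in range(n))   # repeat each row n times
    return out

third_template = [[0.0, 1.0]]
fourth_template = upsample_nearest(third_template, 2)  # 1x2 -> 2x4
```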
  • the pixel values of the pixels in the fourth template are respectively used as the weights of the corresponding pixel points in the image, and the pixel points and the foundation color of the face region in the image are weighted and fused.
  • Imagei_new is the pixel value of the i-th pixel obtained after the face region in the image is fused;
  • Imagei_old is the pixel value of the i-th pixel of the face region in the image (excluding the eyes, eyebrows, and mouth);
  • Colori is the pixel value of the foundation color.
  • the foundation color may take the same pixel value for every pixel.
  • the pixel values involved in the embodiments of the present invention refer to the values of the three channels R, G, and B; the three channel values of each pixel need to be processed separately. This is relatively well-known and is only briefly mentioned herein.
  • the apparatus may include: a template determining unit 01, a downsampling unit 02, a normalization unit 03, an upsampling unit 04, and a weighting and merging unit 05, and may further include an edge smoothing unit 06 and a brightness adjustment unit 07.
  • the main functions of each component are as follows:
  • the template determination unit 01 is responsible for determining the fusion region in the image to obtain the first template. Specifically, the template determining unit 01 may first perform feature point positioning on the fusion target in the image, and the feature point includes the contour point; then, using the located feature point, the area other than the fusion target in the image is removed to obtain the first template.
  • the downsampling unit 02 is responsible for downsampling the first template to obtain a second template.
  • There are many ways to downsample the image such as the nearest neighbor downsampling method and the B-spline downsampling method.
  • specifically, the downsampling unit 02 uses an affine transformation to downsample the first template, so that the number of pixels of the second template is 1/N times that of the first template, where N is a positive integer of 2 or more.
  • the normalization unit 03 is responsible for normalizing the pixel values of the pixels in the second template to obtain a third template.
  • the upsampling unit 04 is responsible for upsampling the third template to obtain a fourth template, and the number of pixels of the fourth template is equal to the number of pixels of the first template.
  • there are many methods for upsampling, such as bilateral filtering, guided filtering, bidirectional interpolation, and the like.
  • the upsampling unit 04 may perform upsampling on the third template by using an inverse affine transformation, so that the number of pixels of the fourth template is N times of the third template.
  • the weighting and merging unit 05 is responsible for weighting the pixel values of the pixels in the fourth template as the weights of the corresponding pixel points in the image, and performing weighted fusion on each pixel of the fused region in the image and the fused material.
  • Imagei_new is the pixel value of the i-th pixel obtained after the fusion region is merged in the image
  • weight_maski is the pixel value of the i-th pixel in the fourth template
  • Imagei_old is the pixel value of the i-th pixel of the fusion region in the image.
  • Colori is the pixel value of the ith pixel provided for the fused material.
  • the edge smoothing unit 06 is responsible for smoothing the edge of the second template, and outputting the smoothed second template to the normalization unit 03.
  • specifically, the edge smoothing unit 06 may expand the contour points of the fused area in the second template outward and/or inward by M pixels, where M is a preset positive integer, take the area enclosed by the expanded points as the area to be smoothed, and affine-map the predefined smoothing template onto the area to be smoothed to obtain the smoothed second template.
  • the pixel value of a pixel in the smoothing template may be taken as the pixel value of the corresponding pixel in the area to be smoothed.
  • a template whose edges have already been smoothed may be formed in advance from the shape of the fused area; this template is the smoothing template.
  • this method affine-maps the predefined smoothing template onto the area to be smoothed, thereby smoothing it, and can achieve edge smoothing quickly, avoiding the computational and time overhead of real-time blur-based smoothing.
  • the edge smoothing unit 06 may adopt a triangulation method when affine-mapping the predefined smoothing template onto the area to be smoothed, that is, the smoothing area on the smoothing template and the area to be smoothed on the second template are triangulated in the same manner to obtain the same number of triangular areas; then, each triangular area in the smoothing template is affine-mapped onto the triangular area at the corresponding position in the second template.
  • the brightness adjustment unit 07 is responsible for acquiring the second template output by the edge smoothing unit 06, performing brightness statistics on the fused area in the image, performing brightness adjustment on the acquired second template according to the brightness statistics result, and outputting the brightness-adjusted second template to the normalization unit 03.
  • the brightness adjustment unit 07 can make the brightness of the smoothed second template match the actual brightness of the face in the original image as much as possible.
  • specifically, the difference between the mean brightness of the fused area in the image and the mean brightness of the smoothed second template may be determined; the difference is then added to the brightness value of each pixel in the smoothed second template.
  • The device can be applied to an image-processing APP or to a beautification APP, among others.
  • The device can take the form of an application, either one running natively on the device (native APP) or a web application (web APP) in the device's browser.
  • It can also take the form of a plug-in or SDK inside an application.
  • Beyond foundation try-on scenarios in beautification APPs, the present invention can also be applied to other image fusion scenes, such as fusing a red apple in one image with a yellow apple in another image.
  • The method and apparatus provided by the embodiments of the present invention may be embodied as a computer program installed and running on a device.
  • As shown in FIG. 5, the device may include one or more processors, a memory, and one or more programs.
  • The one or more programs are stored in the memory and executed by the one or more processors to implement the method flows and/or device operations illustrated in the above embodiments of the present invention.
  • The method flow executed by the one or more processors may include:
  • taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, and performing weighted fusion of each pixel of the fusion region in the image with the fusion material.
  • The method and apparatus provided by the present invention offer the following advantages:
  • The invention downsamples the image, computes the weights on the downsampled fusion region, and then upsamples back to the original image size to obtain the fusion weight of each pixel of the fusion region in the original image. This greatly reduces the computation incurred by weight calculation, cutting time cost and resource consumption.
  • Performing edge smoothing and/or brightness adjustment on the downsampled fusion region reduces the computation, time cost, and resource consumption even further.
  • Using a predefined smoothing template, optionally combined with triangulation, achieves edge smoothing quickly and avoids the computation and time overhead of real-time blur-based smoothing.
  • The disclosed apparatus and method may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; an actual implementation may divide them differently.
  • Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's solution.
  • The functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
  • An integrated unit may be implemented in hardware, or in hardware plus software functional units.
  • An integrated unit implemented as a software functional unit may be stored in a computer-readable storage medium.
  • Such a software functional unit is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods of the various embodiments of the present invention.
  • The storage medium may be any medium that can store program code: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and so on.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Provided are an image fusion method, apparatus, and device. The method comprises: determining the fusion region in an image to obtain a first template; downsampling the first template to obtain a second template; normalizing the pixel values of the pixels in the second template to obtain a third template; upsampling the third template to obtain a fourth template whose number of pixels equals that of the first template; and, taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, performing weighted fusion of each pixel of the fusion region in the image with the fusion material. In addition, before the normalization, the method may further comprise smoothing the edge of the second template using a predefined smoothing template, and adjusting the brightness of the second template based on the brightness of the fusion region in the image. This reduces the computation required for image fusion, cutting time cost and resource consumption.

Description

Image fusion method, apparatus, and device
This application claims priority to Chinese Patent Application No. 201510881167.8, filed on December 3, 2015 and entitled "Image fusion method, apparatus, and device", the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of computer image processing, and in particular to an image fusion method, apparatus, and device.
Background
With the continuing spread of smart terminals, demand for image processing on them keeps rising, and beautification APPs are widely favored. Such APPs often involve image fusion. Existing image fusion processing is computationally complex: when the fused pixel area is large, the time cost of the computation is high, real-time performance is hard to guarantee, and system resources are heavily consumed and occupied.
Summary
In view of this, the present invention provides an image fusion method, apparatus, and device that reduce the computation of image fusion, cutting time cost and resource consumption.
The specific technical solutions are as follows:
The present invention provides an image fusion method, comprising:
determining the fusion region in an image to obtain a first template;
downsampling the first template to obtain a second template;
normalizing the pixel values of the pixels in the second template to obtain a third template;
upsampling the third template to obtain a fourth template, the number of pixels of the fourth template being equal to that of the first template;
taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, and performing weighted fusion of each pixel of the fusion region in the image with the fusion material.
According to a preferred embodiment of the present invention, determining the fusion region in the image to obtain the first template comprises:
locating feature points, including contour points, of the fusion target in the image;
using the located feature points to remove the regions of the image other than the fusion target, obtaining the first template.
According to a preferred embodiment of the present invention, downsampling the first template to obtain the second template comprises: downsampling the first template by an affine transformation so that the resulting second template has 1/N times the number of pixels of the first template, N being a positive integer of 2 or more;
upsampling the third template to obtain the fourth template comprises: upsampling the third template by an inverse affine transformation so that the resulting fourth template has N times the number of pixels of the third template.
According to a preferred embodiment of the present invention, before normalizing the pixel values of the pixels in the second template, the method further comprises:
smoothing the edge of the second template.
According to a preferred embodiment of the present invention, smoothing the edge of the second template comprises:
expanding the contour points of the fusion region in the second template outward and/or inward by M pixels, M being a preset positive integer, and taking the region enclosed by the expanded pixels as the area to be smoothed;
affine-warping a predefined smoothing template onto the area to be smoothed, obtaining a smoothed second template.
According to a preferred embodiment of the present invention, affine-warping the predefined smoothing template onto the area to be smoothed comprises:
affine-mapping the pixel values of the pixels in the smoothing template to the pixel values of the pixels at corresponding positions in the area to be smoothed.
According to a preferred embodiment of the present invention, affine-warping the predefined smoothing template onto the area to be smoothed comprises:
triangulating the smoothed region of the smoothing template and the area to be smoothed of the second template in the same way, obtaining the same number of triangular regions;
affine-warping each triangular region of the smoothing template onto the triangular region at the corresponding position in the second template.
According to a preferred embodiment of the present invention, before normalizing the pixel values of the pixels in the second template, the method further comprises:
gathering brightness statistics over the fusion region in the image;
adjusting the brightness of the smoothed second template according to the brightness statistics.
According to a preferred embodiment of the present invention, adjusting the brightness of the smoothed second template according to the brightness statistics comprises:
determining the difference between the mean brightness of the fusion region in the image and the mean brightness of the smoothed second template;
adding the difference to the brightness value of each pixel in the smoothed second template.
According to a preferred embodiment of the present invention, the weighted fusion of each pixel of the fusion region in the image with the fusion material comprises:
determining the pixel value of each fused pixel as Image_i_new = weight_mask_i * Image_i_old + (1 - weight_mask_i) * Color_i;
where Image_i_new is the pixel value of the i-th pixel of the fusion region after fusion, weight_mask_i is the pixel value of the i-th pixel of the fourth template, Image_i_old is the pixel value of the i-th pixel of the fusion region in the image, and Color_i is the pixel value of the i-th pixel provided by the fusion material.
According to a preferred embodiment of the present invention, the method is applied to a beautification APP;
the fusion region is a face region, and the fusion material is a foundation color.
The present invention also provides an image fusion apparatus, comprising:
a template determining unit, configured to determine the fusion region in an image, obtaining a first template;
a downsampling unit, configured to downsample the first template, obtaining a second template;
a normalization unit, configured to normalize the pixel values of the pixels in the second template, obtaining a third template;
an upsampling unit, configured to upsample the third template, obtaining a fourth template whose number of pixels equals that of the first template;
a weighted fusion unit, configured to take the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image and perform weighted fusion of each pixel of the fusion region in the image with the fusion material.
According to a preferred embodiment of the present invention, the template determining unit is specifically configured to:
locate feature points, including contour points, of the fusion target in the image;
use the located feature points to remove the regions of the image other than the fusion target, obtaining the first template.
According to a preferred embodiment of the present invention, the downsampling unit is specifically configured to downsample the first template by an affine transformation so that the resulting second template has 1/N times the number of pixels of the first template, N being a positive integer of 2 or more;
the upsampling unit is specifically configured to upsample the third template by an inverse affine transformation so that the resulting fourth template has N times the number of pixels of the third template.
According to a preferred embodiment of the present invention, the apparatus further comprises:
an edge smoothing unit, configured to smooth the edge of the second template and output the smoothed second template to the normalization unit.
According to a preferred embodiment of the present invention, the edge smoothing unit is specifically configured to:
expand the contour points of the fusion region in the second template outward and/or inward by M pixels, M being a preset positive integer, and take the region enclosed by the expanded pixels as the area to be smoothed;
affine-warp a predefined smoothing template onto the area to be smoothed, obtaining a smoothed second template.
According to a preferred embodiment of the present invention, when affine-warping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit specifically:
affine-maps the pixel values of the pixels in the smoothing template to the pixel values of the pixels at corresponding positions in the area to be smoothed.
According to a preferred embodiment of the present invention, when affine-warping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit specifically:
triangulates the smoothed region of the smoothing template and the area to be smoothed of the second template in the same way, obtaining the same number of triangular regions;
affine-warps each triangular region of the smoothing template onto the triangular region at the corresponding position in the second template.
According to a preferred embodiment of the present invention, the apparatus further comprises:
a brightness adjusting unit, configured to acquire the second template output by the edge smoothing unit, gather brightness statistics over the fusion region in the image, adjust the brightness of the acquired second template according to the brightness statistics, and output the brightness-adjusted second template to the normalization unit.
According to a preferred embodiment of the present invention, when adjusting the brightness of the smoothed second template according to the brightness statistics, the brightness adjusting unit specifically:
determines the difference between the mean brightness of the fusion region in the image and the mean brightness of the smoothed second template;
adds the difference to the brightness value of each pixel in the smoothed second template.
According to a preferred embodiment of the present invention, the weighted fusion unit is specifically configured to:
determine the pixel value of each fused pixel as Image_i_new = weight_mask_i * Image_i_old + (1 - weight_mask_i) * Color_i;
where Image_i_new is the pixel value of the i-th pixel of the fusion region after fusion, weight_mask_i is the pixel value of the i-th pixel of the fourth template, Image_i_old is the pixel value of the i-th pixel of the fusion region in the image, and Color_i is the pixel value of the i-th pixel provided by the fusion material.
According to a preferred embodiment of the present invention, the apparatus is applied to a beautification APP;
the fusion region is a face region, and the fusion material is a foundation color.
The present invention also provides a device, comprising:
one or more processors;
a memory;
one or more programs, the one or more programs being stored in the memory and executed by the one or more processors to perform the following operations:
determining the fusion region in an image to obtain a first template;
downsampling the first template to obtain a second template;
normalizing the pixel values of the pixels in the second template to obtain a third template;
upsampling the third template to obtain a fourth template, the number of pixels of the fourth template being equal to that of the first template;
taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, and performing weighted fusion of each pixel of the fusion region in the image with the fusion material.
As the above technical solutions show, the invention downsamples the image, computes the weights on the downsampled fusion region, and then upsamples back to the original image size to obtain the fusion weight of each pixel of the fusion region in the original image, greatly reducing the computation incurred by weight calculation and cutting time cost and resource consumption.
Brief description of the drawings
FIG. 1 is a flowchart of the main method provided by an embodiment of the present invention;
FIG. 2 is a detailed method flowchart provided by an embodiment of the present invention;
FIG. 3a is a schematic diagram of a face image provided by an embodiment of the present invention;
FIG. 3b is a schematic diagram of feature point location performed on FIG. 3a;
FIG. 3c is a schematic diagram of the first template region obtained from FIG. 3b;
FIG. 3d shows the area to be smoothed generated from the face contour in FIG. 3c;
FIG. 3e is a schematic diagram of smoothing by triangulation combined with a smoothing template;
FIG. 4 is a structural diagram of the apparatus provided by an embodiment of the present invention;
FIG. 5 is a structural diagram of the device provided by an embodiment of the present invention.
Detailed description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the drawings and specific embodiments.
The terminology used in the embodiments of the present invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms "a", "said", and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association between related objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the related objects.
Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
FIG. 1 is a flowchart of the main method provided by an embodiment of the present invention. As shown in FIG. 1, the method mainly comprises the following steps:
In 101, the fusion region in the image is determined, obtaining the first template.
The image in this step is the image to be fused, and the fusion region is the region to be fused. The fusion region may be a designated target region, or a target region determined by feature point location, as detailed in later embodiments. This step effectively cuts the fusion region out of the image to obtain the first template, which can be done by removing the regions of the image other than the fusion target.
In 102, the first template is downsampled, obtaining the second template.
To reduce the computation of fusing the fusion region, this step may downsample the first template, i.e. reduce the number of pixels in it.
There are many ways to downsample an image, such as nearest-neighbor downsampling and B-spline downsampling. In the embodiments of the present invention an affine transformation may be used, for example a scaling transformation of the first template whose affine parameters are set so that the resulting second template has 1/N times the number of pixels of the first template, N being a positive integer of 2 or more.
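To make the scaling-type affine resampling concrete, here is a minimal sketch in NumPy. It assumes nearest-neighbour interpolation and a 2-D grayscale mask; the helper name `affine_scale` and those choices are illustrative, not prescribed by the text.

```python
import numpy as np

def affine_scale(img, scale):
    """Resample a 2-D image under a pure scaling affine transform using
    nearest-neighbour lookup. scale = 1/N downsamples the template;
    applying the inverse scale N later upsamples it back."""
    h, w = img.shape[:2]
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    # For each output pixel, pull the nearest source pixel.
    ys = np.minimum((np.arange(nh) / scale).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) / scale).astype(int), w - 1)
    return img[np.ix_(ys, xs)]

first_template = np.ones((8, 8))
second_template = affine_scale(first_template, 0.5)   # downsample, N = 2
restored = affine_scale(second_template, 2.0)         # inverse transform
```

With N = 2 the second template carries 1/4 as many pixels, which is where the computational saving in the weight calculation comes from.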
Since how natural a fusion looks usually shows at the edge of the fusion region, the edge of the second template may preferably be further smoothed to make the fusion more natural. There are many ways to smooth an image edge, such as simple blur, Gaussian blur, median filtering, and Gaussian filtering. In the embodiments of the present invention, a predefined smoothing template may be used to smooth the edge of the second template, as detailed in later embodiments.
In addition, to reduce the impact of the difference between the brightness of the original image and that of the smoothed second template, brightness statistics may be gathered over the fusion region of the original image, and the smoothed second template may be brightness-adjusted according to those statistics; the specific adjustment is detailed in later embodiments.
In 103, the pixel values of the pixels in the second template are normalized, obtaining the third template.
Determining the third template effectively determines the fusion weight each pixel will use in the subsequent image fusion. To reflect the characteristics of each pixel of the image during fusion, the weight coefficient is represented by the pixel value of each pixel of the second template; this step normalizes those pixel values.
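The normalization step can be sketched as follows. The text does not pin down the exact normalisation, so scaling by the template's maximum value is an assumption here; for an 8-bit mask whose brightest pixel is 255 it reduces to value / 255.

```python
import numpy as np

def normalize_to_weights(template):
    """Map the second template's pixel values into [0, 1] fusion
    weights by dividing by the peak value (an assumed normalisation;
    the guard avoids division by zero for an all-black template)."""
    t = template.astype(np.float64)
    peak = t.max()
    return t / peak if peak > 0 else t

third_template = normalize_to_weights(np.array([[0, 128], [255, 64]]))
```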
In 104, the third template is upsampled, obtaining the fourth template, whose number of pixels equals that of the first template.
In image fusion, the computation is concentrated in determining the fusion weights. After the weights are obtained on the downsampled template, they must be upsampled to obtain the weight of each pixel of the fusion region in the original image. This step therefore upsamples the third template, which carries the weight information, to obtain the fourth template.
There are likewise many upsampling methods, such as bilateral filtering, guided filtering, and bidirectional interpolation. In the embodiments of the present invention, an inverse affine transformation may be used: using the affine parameters of the affine transformation in step 102, the third template is inversely affine-transformed, obtaining a fourth template with the same number of pixels as the first template.
In 105, the pixel values of the pixels in the fourth template are taken as the weights of the corresponding pixels in the image, and each pixel of the fusion region in the image is weight-fused with the fusion material.
The embodiments of the present invention use weighted fusion; the pixel value of each fused pixel may be determined by the following formula:
Image_i_new = weight_mask_i * Image_i_old + (1 - weight_mask_i) * Color_i    (1)
where Image_i_new is the pixel value of the i-th pixel of the fusion region after fusion, weight_mask_i is the pixel value of the i-th pixel of the fourth template, Image_i_old is the pixel value of the i-th pixel of the fusion region in the image, and Color_i is the pixel value of the i-th pixel provided by the fusion material. In the embodiments of the present invention, the fusion material may be an image, one or more colors from a color set, and so on.
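Formula (1) can be written directly as a vectorized per-pixel blend. The sketch below applies it over all pixels and all RGB channels at once; the function name is illustrative.

```python
import numpy as np

def weighted_fuse(image_old, weight_mask, color):
    """Apply formula (1) per pixel:
    Image_new = w * Image_old + (1 - w) * Color.
    When image_old has a trailing channel axis, the same weight is
    broadcast over the R, G, B channels."""
    w = np.asarray(weight_mask, dtype=np.float64)
    if np.asarray(image_old).ndim == 3:
        w = w[..., None]                  # broadcast weight over channels
    return w * image_old + (1.0 - w) * color

face = np.full((2, 2, 3), 200.0)          # fusion-region pixels
weights = np.full((2, 2), 0.5)            # fourth-template weights
foundation = np.array([100.0, 100.0, 100.0])
fused = weighted_fuse(face, weights, foundation)
```

A weight of 1 keeps the original pixel untouched; a weight of 0 replaces it entirely with the fusion material, matching the role of weight_mask_i in the formula.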
The method provided by the present invention can be applied to fusing static images and, because the computation is greatly reduced and real-time performance can be guaranteed, also to fusing video images. In addition, the method may be executed by an application in a user terminal, by a functional unit such as a plug-in or a Software Development Kit (SDK) in a user terminal application, or on the server side; the embodiments of the present invention do not particularly limit this. The application may be, for example, an image-processing application or a beautification application. The method is described in detail below with reference to FIG. 2, taking foundation try-on for a face in a beautification application as an example.
FIG. 2 is a detailed method flowchart provided by an embodiment of the present invention. This embodiment performs foundation try-on for a face in an image, i.e. fuses the face region of the image with a foundation color. As shown in FIG. 2, the flow may specifically comprise the following steps:
In 201, feature points of the face region in the image are located, obtaining the contour points of the face and of the preset facial features.
The embodiments of the present invention do not restrict how the feature points are located; any feature point location method may be used, such as location based on an SDM (Supervised Descent Method) model or id-exp model location, ultimately yielding the position information of the feature points.
Suppose feature point location is performed on the face region of FIG. 3a; the feature points shown in FIG. 3b are obtained, i.e. the contour points of the face and of the eyes, eyebrows, and mouth. Note that FIG. 3b is only schematic: the feature points are exaggerated for readability, and in actual feature point location the number and granularity of located points may differ from FIG. 3b.
In 202, the located feature points are used to remove the regions of the image other than the face region, obtaining the first template.
In this embodiment, the regions outside the face contour points are removed, as are the regions enclosed by the contour points of the eyes, eyebrows, and mouth. The resulting first template region is shown schematically as the white area in FIG. 3c; FIG. 3c does not show the actual pixel values.
In 203, the first template is downsampled by an affine transformation so that the resulting second template has 1/N times the number of pixels of the first template.
An affine transformation is a linear transformation from 2-D coordinates to 2-D coordinates. It can be realized by a series of atomic transformations: translation, scale, flip, rotation, and shear. The affine transformation involved in the embodiments of the present invention is the scaling transformation: suitable scaling parameters (i.e. affine parameters) are set to shrink the first template to 1/N of its size, for example to 1/4. During shrinking, the number of sample points, i.e. pixels, also drops to 1/N — put simply, the image has fewer pixels.
The larger N is, the less computation results; the smaller N is, the higher the processing quality. N can be weighed and chosen according to actual needs.
In 204, the contour points of the second template are expanded outward and inward by M pixels, obtaining the area to be smoothed.
This step prepares for edge smoothing by determining the area to be smoothed. The contour points of the face, eyes, eyebrows, and mouth in the second template may each be expanded inward and/or outward by M pixels, M being a preset positive integer such as 3, yielding band-shaped areas to be smoothed. The inward and outward expansion may be performed in both directions along the normal of the line connecting the contour points.
This is shown schematically in FIG. 3d, which shows only the area to be smoothed generated by the face contour; the areas generated by the eye, eyebrow, and mouth contours are similar.
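The normal-direction expansion can be sketched as follows. For each contour point, the normal is taken as that of the chord joining its two neighbours — one plausible reading of "the normal of the line connecting the contour points"; which of the two offset contours is "outward" depends on the contour's winding order.

```python
import numpy as np

def expand_contour(points, m):
    """Offset a closed contour by m pixels in both directions along the
    per-vertex normal (normal of the chord joining each vertex's two
    neighbours). The band between the two offset contours is the area
    to be smoothed."""
    pts = np.asarray(points, dtype=float)
    chord = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    normal = np.stack([-chord[:, 1], chord[:, 0]], axis=1)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    return pts + m * normal, pts - m * normal
```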
In 205, the predefined face smoothing template is affine-warped onto the area to be smoothed by triangulation, obtaining the smoothed second template.
The purpose of smoothing the face edge is to make the brightness of the face in the image change gradually, reducing abrupt gradients so that the face edge is softer and more natural, improving image quality.
To speed up the smoothing, this embodiment uses a smoothing template. Since the rough shape of a face is basically the same from face to face, a template whose edge has already been smoothed can be formed in advance from the face shape; in this template, the smoothed edge region can be made slightly larger. When smoothing the second template, the predefined smoothing template is affine-warped onto the area to be smoothed, completing the smoothing. This achieves edge smoothing quickly and avoids the computation and time overhead of blurring in real time.
The triangulation used in this step triangulates the smoothed region of the smoothing template and the area to be smoothed of the second template in the same way, yielding the same number of triangles. Taking the area to be smoothed as an example, its inner and outer edges are each divided into m points, so the area is divided into 2m triangles, m being a preset positive integer, as shown in FIG. 3e. The smoothed region of the smoothing template is divided the same way, and each triangle of the smoothing template is affine-warped onto the triangle at the corresponding position in the second template.
The affine mapping in this step refers to mapping pixel values to corresponding positions: the pixel value at a point of the smoothing template is mapped to the pixel at the corresponding position in the area to be smoothed of the second template. For example, if pixel A of the smoothed region of the smoothing template is mapped to pixel a of the area to be smoothed of the second template, pixel a takes the pixel value of pixel A; and if pixel A is a vertex of one of the triangles from the triangulation, then pixel a is the vertex of the triangle at the corresponding position in the second template.
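One triangle of the per-triangle warp can be sketched in plain NumPy: the affine map is fixed by the three vertex correspondences, and destination pixels inside the triangle pull their values from the source. This is a minimal illustrative rasterizer (nearest-neighbour lookup, no anti-aliasing), not the patent's implementation.

```python
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Copy pixel values from a triangle of the smoothing template
    (src) into the matching triangle of the second template (dst),
    filling destination pixels via a barycentric inside test."""
    src_tri = np.asarray(src_tri, dtype=float)   # 3 rows of (x, y)
    dst_tri = np.asarray(dst_tri, dtype=float)
    # Solve for X so that [x_d, y_d, 1] @ X = [x_s, y_s].
    M = np.hstack([dst_tri, np.ones((3, 1))])
    X = np.linalg.solve(M, src_tri)
    # Barycentric setup for the destination triangle.
    B = np.array([[dst_tri[0, 0] - dst_tri[2, 0], dst_tri[1, 0] - dst_tri[2, 0]],
                  [dst_tri[0, 1] - dst_tri[2, 1], dst_tri[1, 1] - dst_tri[2, 1]]])
    Binv = np.linalg.inv(B)
    x0, y0 = np.floor(dst_tri.min(axis=0)).astype(int)
    x1, y1 = np.ceil(dst_tri.max(axis=0)).astype(int)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            l1, l2 = Binv @ (np.array([x, y], dtype=float) - dst_tri[2])
            l3 = 1.0 - l1 - l2
            if min(l1, l2, l3) < -1e-9:
                continue                          # outside the triangle
            xs, ys = np.array([x, y, 1.0]) @ X    # pull source position
            si, sj = int(round(ys)), int(round(xs))
            if 0 <= si < src_img.shape[0] and 0 <= sj < src_img.shape[1]:
                dst_img[y, x] = src_img[si, sj]
```

Running this over all 2m corresponding triangle pairs transfers the whole smoothed band from the template onto the second template.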
In 206, brightness statistics are gathered over the face region of the image, and the smoothed second template is brightness-adjusted according to the statistics.
The purpose of this step is to make the brightness of the smoothed second template match the actual brightness of the face in the original image as closely as possible. For example, the mean brightness of the face region in the original image and the mean brightness of the smoothed second template can be computed and their difference determined; the difference is then added to the brightness value of each pixel of the smoothed second template. Other specific brightness adjustment methods may of course be used; they are not enumerated here.
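The difference-of-means adjustment of step 206 is a one-liner plus a clamp. Clipping to [0, 255] is an extra assumption for 8-bit data, not stated in the text.

```python
import numpy as np

def match_brightness(smoothed_template_luma, face_region_luma):
    """Add the (face mean - template mean) brightness difference to
    every pixel of the smoothed template, as step 206 describes."""
    diff = face_region_luma.mean() - smoothed_template_luma.mean()
    return np.clip(smoothed_template_luma + diff, 0.0, 255.0)
```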
In 207, the pixel values in the brightness-adjusted second template are normalized, obtaining the third template.
To reflect the characteristics of each pixel of the image during fusion, the weight coefficient is represented by the pixel value of each pixel of the second template. After the pixel values are normalized in this step, the pixel value of each pixel of the resulting third template can serve as that pixel's weight in the subsequent fusion.
Weight calculation is one of the major sources of computation in image fusion. In the embodiments of the present invention, the weights are first computed on a template with fewer pixels obtained by downsampling, and then upsampled back to obtain the weights of all pixels; compared with computing weights directly at the original image size, this greatly reduces computation cost and time cost.
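Putting steps 203 through 209 together, here is a minimal end-to-end sketch of the downsample → normalize → upsample → fuse pipeline. It is grayscale only, uses nearest-neighbour resampling in place of the (inverse) affine transform, and normalizes by the peak value — all simplifying assumptions.

```python
import numpy as np

def scale_nn(img, scale):
    """Nearest-neighbour resampling standing in for the scaling affine
    transform and its inverse."""
    h, w = img.shape
    nh, nw = max(1, int(round(h * scale))), max(1, int(round(w * scale)))
    ys = np.minimum((np.arange(nh) / scale).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) / scale).astype(int), w - 1)
    return img[np.ix_(ys, xs)]

N = 2
first = np.full((8, 8), 255.0)            # first template: face mask
second = scale_nn(first, 1.0 / N)         # step 203: downsample
third = second / second.max()             # step 207: normalize to weights
fourth = scale_nn(third, float(N))        # step 208: upsample back
face = np.full((8, 8), 200.0)             # fusion region (grayscale)
foundation = 100.0
fused = fourth * face + (1.0 - fourth) * foundation   # step 209: formula (1)
```

In this toy case every weight is 1, so the fused result equals the original face pixels; a real mask tapers toward 0 at the smoothed edge, blending in the foundation color there.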
In 208, using the affine parameters from 203, the third template is inversely affine-transformed, obtaining the fourth template.
Step 203 used a scaling transformation; this step also uses scaling, but its affine parameters are set according to those of 203 so as to realize the inverse transformation, raising the number of pixels of the resulting fourth template to that of the first template. In other words, this step upsamples back to the pixel count of the first template, restoring the resolution. During the inverse affine transformation the number of pixels increases, and the values of the added pixels can be obtained by interpolation.
In 209, the pixel values of the pixels in the fourth template are taken as the weights of the corresponding pixels in the image, and each pixel of the face region in the image is weight-fused with the foundation color.
Formula (1) above can be used for the weighted fusion, where Image_i_new is the pixel value of the i-th pixel of the face region after fusion, Image_i_old is the pixel value of the i-th pixel of the face region of the image (excluding the eyebrow, eye, and mouth regions), and Color_i is the pixel value of the foundation color; in this embodiment, the foundation-color pixel value may be the same for every pixel.
The pixel values involved in the embodiments of the present invention cover the three channels R, G, and B; the affine and fusion processing above is applied to the R, G, and B values of each pixel separately. This is well-known practice and only briefly noted here.
The above is a detailed description of the method provided by the present invention; the apparatus provided by the present invention is described in detail below with reference to FIG. 4. As shown in FIG. 4, the apparatus may comprise a template determining unit 01, a downsampling unit 02, a normalization unit 03, an upsampling unit 04, and a weighted fusion unit 05, and may further comprise an edge smoothing unit 06 and a brightness adjusting unit 07. The main functions of the units are as follows:
The template determining unit 01 determines the fusion region in the image, obtaining the first template. Specifically, the template determining unit 01 may first locate feature points, including contour points, of the fusion target in the image, and then use the located feature points to remove the regions of the image other than the fusion target, obtaining the first template.
The downsampling unit 02 downsamples the first template, obtaining the second template. There are many ways to downsample an image, such as nearest-neighbor downsampling and B-spline downsampling. In the embodiments of the present invention, the downsampling unit 02 downsamples the first template by an affine transformation so that the resulting second template has 1/N times the number of pixels of the first template, N being a positive integer of 2 or more.
The normalization unit 03 normalizes the pixel values of the pixels in the second template, obtaining the third template.
The upsampling unit 04 upsamples the third template, obtaining the fourth template, whose number of pixels equals that of the first template. There are likewise many upsampling methods, such as bilateral filtering, guided filtering, and bidirectional interpolation. In the embodiments of the present invention, the upsampling unit 04 may upsample the third template by an inverse affine transformation so that the resulting fourth template has N times the number of pixels of the third template.
The weighted fusion unit 05 takes the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image and performs weighted fusion of each pixel of the fusion region in the image with the fusion material.
Specifically, Image_i_new = weight_mask_i * Image_i_old + (1 - weight_mask_i) * Color_i may be used to determine the pixel value of each fused pixel;
where Image_i_new is the pixel value of the i-th pixel of the fusion region after fusion, weight_mask_i is the pixel value of the i-th pixel of the fourth template, Image_i_old is the pixel value of the i-th pixel of the fusion region in the image, and Color_i is the pixel value of the i-th pixel provided by the fusion material.
To make the edge of the fusion region change gradually, reducing abrupt gradients so that the result is more blended and natural, the edge smoothing unit 06 smooths the edge of the second template and outputs the smoothed second template to the normalization unit 03.
Specifically, the edge smoothing unit 06 may expand the contour points of the fusion region in the second template outward and/or inward by M pixels, M being a preset positive integer, take the region enclosed by the expanded pixels as the area to be smoothed, and affine-warp the predefined smoothing template onto the area to be smoothed, obtaining the smoothed second template. During the affine mapping, the pixel values of the pixels in the smoothing template may be mapped to the pixel values of the pixels at corresponding positions in the area to be smoothed.
A template whose edge has already been smoothed may be formed in advance from the shape of the fusion region; this template is the smoothing template. Affine-warping the predefined smoothing template onto the area to be smoothed to complete the smoothing achieves edge smoothing quickly, avoiding the computation and time overhead of blurring in real time.
Further, when affine-warping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit 06 may use triangulation: the smoothed region of the smoothing template and the area to be smoothed of the second template are triangulated in the same way, yielding the same number of triangles; each triangle of the smoothing template is then affine-warped onto the triangle at the corresponding position in the second template.
The brightness adjusting unit 07 acquires the second template output by the edge smoothing unit 06, gathers brightness statistics over the fusion region in the image, adjusts the brightness of the acquired second template according to the statistics, and outputs the brightness-adjusted second template to the normalization unit 03. The brightness adjusting unit 07 can make the brightness of the smoothed second template match the actual brightness of the face in the original image as closely as possible. For the adjustment, the difference between the mean brightness of the fusion region in the image and the mean brightness of the smoothed second template may be determined; the difference is then added to the brightness value of each pixel of the smoothed second template.
The apparatus can be applied to an image-processing APP or to a beautification APP, among others. It may take the form of an application, either one running natively on the device (native APP) or a web application (web APP) in the device's browser; it may also take the form of a plug-in or SDK inside an application.
Beyond scenarios such as foundation try-on in beautification APPs, the present invention can also be applied to other image fusion scenes, such as fusing a red apple in one image with a yellow apple in another image.
The method and apparatus provided by the embodiments of the present invention may be embodied as a computer program installed and running on a device. The device may include one or more processors, a memory, and one or more programs, as shown in FIG. 5. The one or more programs are stored in the memory and executed by the one or more processors to implement the method flows and/or device operations shown in the above embodiments of the present invention. For example, the method flow executed by the one or more processors may include:
determining the fusion region in the image to obtain a first template;
downsampling the first template to obtain a second template;
normalizing the pixel values of the pixels in the second template to obtain a third template;
upsampling the third template to obtain a fourth template whose number of pixels equals that of the first template;
taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, and performing weighted fusion of each pixel of the fusion region in the image with the fusion material.
As the description above shows, the method and apparatus provided by the present invention offer the following advantages:
1) The invention downsamples the image, computes the weights on the downsampled fusion region, and then upsamples back to the original image size to obtain the fusion weight of each pixel of the fusion region in the original image, greatly reducing the computation incurred by weight calculation and cutting time cost and resource consumption.
2) During the weight calculation, edge smoothing and/or brightness adjustment is performed on the downsampled fusion region, further reducing computation, time cost, and resource consumption.
3) Edge smoothing uses a predefined smoothing template, optionally combined with triangulation, achieving edge smoothing quickly and avoiding the computation and time overhead of real-time blur-based smoothing.
4) Real-time performance is maintained even at high resolution; the method applies not only to static images but also to video images.
5) When applied to foundation try-on, only the edge of the face region is smoothed, so the texture information of the face itself is effectively preserved. No manual user operation is needed: the fusion weights are adjusted automatically according to the brightness of the face skin, giving a more realistic try-on experience.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a division by logical function, and an actual implementation may divide them differently.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's solution.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. An integrated unit may be implemented in hardware, or in hardware plus software functional units.
An integrated unit implemented as a software functional unit may be stored in a computer-readable storage medium. Such a software functional unit is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods of the embodiments of the present invention. The storage medium may be any medium that can store program code: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and so on.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (23)

  1. An image fusion method, characterized in that the method comprises:
    determining the fusion region in an image to obtain a first template;
    downsampling the first template to obtain a second template;
    normalizing the pixel values of the pixels in the second template to obtain a third template;
    upsampling the third template to obtain a fourth template, the number of pixels of the fourth template being equal to that of the first template;
    taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, and performing weighted fusion of each pixel of the fusion region in the image with the fusion material.
  2. The method according to claim 1, characterized in that determining the fusion region in the image to obtain the first template comprises:
    locating feature points, including contour points, of the fusion target in the image;
    using the located feature points to remove the regions of the image other than the fusion target, obtaining the first template.
  3. The method according to claim 1, characterized in that downsampling the first template to obtain the second template comprises: downsampling the first template by an affine transformation so that the resulting second template has 1/N times the number of pixels of the first template, N being a positive integer of 2 or more;
    upsampling the third template to obtain the fourth template comprises: upsampling the third template by an inverse affine transformation so that the resulting fourth template has N times the number of pixels of the third template.
  4. The method according to claim 1, characterized in that before normalizing the pixel values of the pixels in the second template, the method further comprises:
    smoothing the edge of the second template.
  5. The method according to claim 4, characterized in that smoothing the edge of the second template comprises:
    expanding the contour points of the fusion region in the second template outward and/or inward by M pixels, M being a preset positive integer, and taking the region enclosed by the expanded pixels as the area to be smoothed;
    affine-warping a predefined smoothing template onto the area to be smoothed, obtaining a smoothed second template.
  6. The method according to claim 5, characterized in that affine-warping the predefined smoothing template onto the area to be smoothed comprises:
    affine-mapping the pixel values of the pixels in the smoothing template to the pixel values of the pixels at corresponding positions in the area to be smoothed.
  7. The method according to claim 5, characterized in that affine-warping the predefined smoothing template onto the area to be smoothed comprises:
    triangulating the smoothed region of the smoothing template and the area to be smoothed of the second template in the same way, obtaining the same number of triangular regions;
    affine-warping each triangular region of the smoothing template onto the triangular region at the corresponding position in the second template.
  8. The method according to claim 4, characterized in that before normalizing the pixel values of the pixels in the second template, the method further comprises:
    gathering brightness statistics over the fusion region in the image;
    adjusting the brightness of the smoothed second template according to the brightness statistics.
  9. The method according to claim 8, characterized in that adjusting the brightness of the smoothed second template according to the brightness statistics comprises:
    determining the difference between the mean brightness of the fusion region in the image and the mean brightness of the smoothed second template;
    adding the difference to the brightness value of each pixel in the smoothed second template.
  10. The method according to claim 1, characterized in that the weighted fusion of each pixel of the fusion region in the image with the fusion material comprises:
    determining the pixel value of each fused pixel as Image_i_new = weight_mask_i * Image_i_old + (1 - weight_mask_i) * Color_i;
    where Image_i_new is the pixel value of the i-th pixel of the fusion region after fusion, weight_mask_i is the pixel value of the i-th pixel of the fourth template, Image_i_old is the pixel value of the i-th pixel of the fusion region in the image, and Color_i is the pixel value of the i-th pixel provided by the fusion material.
  11. The method according to any one of claims 1 to 10, characterized in that the method is applied to a beautification APP;
    the fusion region is a face region, and the fusion material is a foundation color.
  12. An image fusion apparatus, characterized in that the apparatus comprises:
    a template determining unit, configured to determine the fusion region in an image, obtaining a first template;
    a downsampling unit, configured to downsample the first template, obtaining a second template;
    a normalization unit, configured to normalize the pixel values of the pixels in the second template, obtaining a third template;
    an upsampling unit, configured to upsample the third template, obtaining a fourth template whose number of pixels equals that of the first template;
    a weighted fusion unit, configured to take the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image and perform weighted fusion of each pixel of the fusion region in the image with the fusion material.
  13. The apparatus according to claim 12, characterized in that the template determining unit is specifically configured to:
    locate feature points, including contour points, of the fusion target in the image;
    use the located feature points to remove the regions of the image other than the fusion target, obtaining the first template.
  14. The apparatus according to claim 12, characterized in that the downsampling unit is specifically configured to downsample the first template by an affine transformation so that the resulting second template has 1/N times the number of pixels of the first template, N being a positive integer of 2 or more;
    the upsampling unit is specifically configured to upsample the third template by an inverse affine transformation so that the resulting fourth template has N times the number of pixels of the third template.
  15. The apparatus according to claim 12, characterized in that the apparatus further comprises:
    an edge smoothing unit, configured to smooth the edge of the second template and output the smoothed second template to the normalization unit.
  16. The apparatus according to claim 15, characterized in that the edge smoothing unit is specifically configured to:
    expand the contour points of the fusion region in the second template outward and/or inward by M pixels, M being a preset positive integer, and take the region enclosed by the expanded pixels as the area to be smoothed;
    affine-warp a predefined smoothing template onto the area to be smoothed, obtaining a smoothed second template.
  17. The apparatus according to claim 16, characterized in that when affine-warping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit specifically:
    affine-maps the pixel values of the pixels in the smoothing template to the pixel values of the pixels at corresponding positions in the area to be smoothed.
  18. The apparatus according to claim 16, characterized in that when affine-warping the predefined smoothing template onto the area to be smoothed, the edge smoothing unit specifically:
    triangulates the smoothed region of the smoothing template and the area to be smoothed of the second template in the same way, obtaining the same number of triangular regions;
    affine-warps each triangular region of the smoothing template onto the triangular region at the corresponding position in the second template.
  19. The apparatus according to claim 15, characterized in that the apparatus further comprises:
    a brightness adjusting unit, configured to acquire the second template output by the edge smoothing unit, gather brightness statistics over the fusion region in the image, adjust the brightness of the acquired second template according to the brightness statistics, and output the brightness-adjusted second template to the normalization unit.
  20. The apparatus according to claim 19, characterized in that when adjusting the brightness of the smoothed second template according to the brightness statistics, the brightness adjusting unit specifically:
    determines the difference between the mean brightness of the fusion region in the image and the mean brightness of the smoothed second template;
    adds the difference to the brightness value of each pixel in the smoothed second template.
  21. The apparatus according to claim 12, characterized in that the weighted fusion unit is specifically configured to:
    determine the pixel value of each fused pixel as Image_i_new = weight_mask_i * Image_i_old + (1 - weight_mask_i) * Color_i;
    where Image_i_new is the pixel value of the i-th pixel of the fusion region after fusion, weight_mask_i is the pixel value of the i-th pixel of the fourth template, Image_i_old is the pixel value of the i-th pixel of the fusion region in the image, and Color_i is the pixel value of the i-th pixel provided by the fusion material.
  22. The apparatus according to any one of claims 12 to 21, characterized in that the apparatus is applied to a beautification APP;
    the fusion region is a face region, and the fusion material is a foundation color.
  23. A device, comprising:
    one or more processors;
    a memory;
    one or more programs, the one or more programs being stored in the memory and executed by the one or more processors to perform the following operations:
    determining the fusion region in an image to obtain a first template;
    downsampling the first template to obtain a second template;
    normalizing the pixel values of the pixels in the second template to obtain a third template;
    upsampling the third template to obtain a fourth template, the number of pixels of the fourth template being equal to that of the first template;
    taking the pixel values of the pixels in the fourth template as the weights of the corresponding pixels in the image, and performing weighted fusion of each pixel of the fusion region in the image with the fusion material.
PCT/CN2016/106877 2015-12-03 2016-11-23 一种图像融合的方法、装置和设备 WO2017092592A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510881167.8A CN106846241B (zh) 2015-12-03 2015-12-03 一种图像融合的方法、装置和设备
CN201510881167.8 2015-12-03

Publications (1)

Publication Number Publication Date
WO2017092592A1 true WO2017092592A1 (zh) 2017-06-08

Family

ID=58796304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/106877 WO2017092592A1 (zh) 2015-12-03 2016-11-23 一种图像融合的方法、装置和设备

Country Status (2)

Country Link
CN (1) CN106846241B (zh)
WO (1) WO2017092592A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876718A (zh) * 2017-11-23 2018-11-23 北京旷视科技有限公司 图像融合的方法、装置及计算机存储介质
CN110033420A (zh) * 2018-01-12 2019-07-19 北京京东金融科技控股有限公司 一种图像融合的方法和装置
CN110211082A (zh) * 2019-05-31 2019-09-06 浙江大华技术股份有限公司 一种图像融合方法、装置、电子设备及存储介质
CN110956592A (zh) * 2019-11-14 2020-04-03 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备和存储介质
CN111311528A (zh) * 2020-01-22 2020-06-19 广州虎牙科技有限公司 图像融合优化方法、装置、设备和介质
CN111563552A (zh) * 2020-05-06 2020-08-21 浙江大华技术股份有限公司 图像融合方法以及相关设备、装置
CN111783647A (zh) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 人脸融合模型的训练方法、人脸融合方法、装置及设备
EP4125032A1 (en) * 2021-07-09 2023-02-01 Rockwell Collins, Inc. Configurable low resource subsample image mask for merging in a distorted image space
CN116402693A (zh) * 2023-06-08 2023-07-07 青岛瑞源工程集团有限公司 一种基于遥感技术的市政工程图像处理方法及装置

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680033B (zh) * 2017-09-08 2021-02-19 北京小米移动软件有限公司 图片处理方法及装置
CN108024010B (zh) * 2017-11-07 2018-09-14 赵敏 基于电量测量的手机监控系统
CN110060210B (zh) * 2018-01-19 2021-05-25 腾讯科技(深圳)有限公司 图像处理方法及相关装置
CN110390657B (zh) * 2018-04-20 2021-10-15 北京中科晶上超媒体信息技术有限公司 一种图像融合方法
CN108648148A (zh) * 2018-05-10 2018-10-12 东南大学 一种基于数字升采样再三次样条的数字图像任意点插值方法
CN110728618B (zh) * 2018-07-17 2023-06-27 淘宝(中国)软件有限公司 虚拟试妆的方法、装置、设备及图像处理方法
CN109712361A (zh) * 2019-01-14 2019-05-03 余海军 实时防暴力开启平台
CN113012054B (zh) * 2019-12-20 2023-12-05 舜宇光学(浙江)研究院有限公司 基于抠图的样本增强方法和训练方法及其系统和电子设备
CN113313661A (zh) * 2021-05-26 2021-08-27 Oppo广东移动通信有限公司 图像融合方法、装置、电子设备及计算机可读存储介质
CN117135288B (zh) * 2023-10-25 2024-02-02 钛玛科(北京)工业科技有限公司 图像拼接方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110110601A1 (en) * 2008-09-24 2011-05-12 Li Hong Method and device for image deblurring using joint bilateral filtering
US8670630B1 (en) * 2010-12-09 2014-03-11 Google Inc. Fast randomized multi-scale energy minimization for image processing
CN103839244A (zh) * 2014-02-26 2014-06-04 南京第五十五所技术开发有限公司 一种实时的图像融合方法及装置
CN103973958A (zh) * 2013-01-30 2014-08-06 阿里巴巴集团控股有限公司 图像处理方法及设备
US20150030242A1 (en) * 2013-07-26 2015-01-29 Rui Shen Method and system for fusing multiple images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070160134A1 (en) * 2006-01-10 2007-07-12 Segall Christopher A Methods and Systems for Filter Characterization
CN100555325C (zh) * 2007-08-29 2009-10-28 华中科技大学 一种基于非子采样轮廓波变换的图像融合方法
US20120269458A1 (en) * 2007-12-11 2012-10-25 Graziosi Danillo B Method for Generating High Resolution Depth Images from Low Resolution Depth Images Using Edge Layers
JP5146335B2 (ja) * 2009-01-22 2013-02-20 ソニー株式会社 画像処理装置および方法、並びにプログラム

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110110601A1 (en) * 2008-09-24 2011-05-12 Li Hong Method and device for image deblurring using joint bilateral filtering
US8670630B1 (en) * 2010-12-09 2014-03-11 Google Inc. Fast randomized multi-scale energy minimization for image processing
CN103973958A (zh) * 2013-01-30 2014-08-06 阿里巴巴集团控股有限公司 图像处理方法及设备
US20150030242A1 (en) * 2013-07-26 2015-01-29 Rui Shen Method and system for fusing multiple images
CN103839244A (zh) * 2014-02-26 2014-06-04 南京第五十五所技术开发有限公司 一种实时的图像融合方法及装置

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876718A (zh) * 2017-11-23 2018-11-23 北京旷视科技有限公司 图像融合的方法、装置及计算机存储介质
CN108876718B (zh) * 2017-11-23 2022-03-22 北京旷视科技有限公司 图像融合的方法、装置及计算机存储介质
CN110033420A (zh) * 2018-01-12 2019-07-19 北京京东金融科技控股有限公司 一种图像融合的方法和装置
CN110033420B (zh) * 2018-01-12 2023-11-07 京东科技控股股份有限公司 一种图像融合的方法和装置
CN110211082A (zh) * 2019-05-31 2019-09-06 浙江大华技术股份有限公司 一种图像融合方法、装置、电子设备及存储介质
CN110956592A (zh) * 2019-11-14 2020-04-03 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备和存储介质
CN111311528A (zh) * 2020-01-22 2020-06-19 广州虎牙科技有限公司 图像融合优化方法、装置、设备和介质
CN111563552B (zh) * 2020-05-06 2023-09-05 浙江大华技术股份有限公司 图像融合方法以及相关设备、装置
CN111563552A (zh) * 2020-05-06 2020-08-21 浙江大华技术股份有限公司 图像融合方法以及相关设备、装置
CN111783647A (zh) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 人脸融合模型的训练方法、人脸融合方法、装置及设备
CN111783647B (zh) * 2020-06-30 2023-11-03 北京百度网讯科技有限公司 人脸融合模型的训练方法、人脸融合方法、装置及设备
EP4125032A1 (en) * 2021-07-09 2023-02-01 Rockwell Collins, Inc. Configurable low resource subsample image mask for merging in a distorted image space
US11979679B2 (en) 2021-07-09 2024-05-07 Rockwell Collins, Inc. Configurable low resource subsample image mask for merging in a distorted image space
CN116402693B (zh) * 2023-06-08 2023-08-15 青岛瑞源工程集团有限公司 一种基于遥感技术的市政工程图像处理方法及装置
CN116402693A (zh) * 2023-06-08 2023-07-07 青岛瑞源工程集团有限公司 一种基于遥感技术的市政工程图像处理方法及装置

Also Published As

Publication number Publication date
CN106846241A (zh) 2017-06-13
CN106846241B (zh) 2020-06-02

Similar Documents

Publication Publication Date Title
WO2017092592A1 (zh) 一种图像融合的方法、装置和设备
EP3338217B1 (en) Feature detection and masking in images based on color distributions
US9547908B1 (en) Feature mask determination for images
US11544887B2 (en) Method for generating facial animation from single image
WO2016161553A1 (en) Avatar generation and animations
CN110390632B (zh) 基于妆容模板的图像处理方法、装置、存储介质及终端
CN111598998A (zh) 三维虚拟模型重建方法、装置、计算机设备和存储介质
EP3807839B1 (en) Deformity edge detection
KR101141643B1 (ko) 캐리커쳐 생성 기능을 갖는 이동통신 단말기 및 이를 이용한 생성 방법
WO2013189101A1 (zh) 一种基于单幅图像的头发建模和肖像编辑方法
WO2009067958A1 (fr) Système de génération de portrait et procédé de génération de portrait selon une image
CN111008935B (zh) 一种人脸图像增强方法、装置、系统及存储介质
KR102187143B1 (ko) 3차원 컨텐츠 생성 장치 및 그 3차원 컨텐츠 생성 방법
WO2017092593A1 (zh) 一种调整融合素材的方法、装置和设备
WO2023066120A1 (zh) 图像处理方法、装置、电子设备及存储介质
CN112036284B (zh) 图像处理方法、装置、设备及存储介质
WO2017173578A1 (zh) 一种图像增强方法及装置
US20150206344A1 (en) 3D Model Enhancement
Zhao et al. Extended non-local means filter for surface saliency detection
JP6731753B2 (ja) 画像処理装置、画像処理方法、画像処理システムおよびプログラム
TWI723123B (zh) 圖像融合的方法、裝置和設備
CN108109115B (zh) 人物图像的增强方法、装置、设备及存储介质
Gao et al. Multiscale phase congruency analysis for image edge visual saliency detection
JP6244885B2 (ja) 画像処理装置、画像処理方法、及びプログラム
CN111612712A (zh) 一种人脸端正度的确定方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869902

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869902

Country of ref document: EP

Kind code of ref document: A1