WO2012149772A1 - Method and apparatus for generating a gradient animation - Google Patents

Method and apparatus for generating a gradient animation

Info

Publication number
WO2012149772A1
WO2012149772A1 (PCT/CN2011/080199; CN 2011080199 W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
adjacent
difference
brightness
Prior art date
Application number
PCT/CN2011/080199
Other languages
English (en)
French (fr)
Inventor
董兰芳
夏泽举
吴媛
覃景繁
Original Assignee
Huawei Technologies Co., Ltd.
University of Science and Technology of China
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. and University of Science and Technology of China
Priority to ES11864760.1T priority Critical patent/ES2625263T3/es
Priority to KR1020127024480A priority patent/KR101388542B1/ko
Priority to CN201180002501.8A priority patent/CN102449664B/zh
Priority to EP11864760.1A priority patent/EP2706507B1/en
Priority to PCT/CN2011/080199 priority patent/WO2012149772A1/zh
Priority to JP2013512750A priority patent/JP5435382B2/ja
Priority to US13/627,700 priority patent/US8531484B2/en
Publication of WO2012149772A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present invention relates to an image processing technique, and in particular to a method and apparatus for generating a gradient animation.
  • The general approach to gradation animation is to deform the two images toward each other on the basis of image deformation.
  • The two images are called the source image and the target image, in order of playback.
  • The deformation is performed in both directions: from the source image to the target image, and from the target image to the source image.
  • Image gradation fusion is then performed on the two deformed images to produce a series of intermediate images, achieving a smooth gradation. The quality and related characteristics of the image deformation technique are therefore among the key factors affecting image gradation.
  • Image deformation technology has been widely used in film and television special effects and in advertising design. Extensive and in-depth research on image warping has produced a series of methods with spatial mapping as the core. With spatial mapping at the center, image deformation techniques can be roughly classified into three categories:
  • Block-based deformation. Typical algorithms include 2D mesh deformation algorithms and triangulation-based deformation algorithms. Their common idea is to divide the whole image into several blocks and then compose the deformation of the whole image from the deformations of the individual blocks. The significant advantage of this type of algorithm is its speed, but the preprocessing work of dividing the image into blocks is cumbersome, and the soundness of the block division directly affects the final deformation result.
  • a typical algorithm is a radial basis function based deformation algorithm.
  • the basic idea of this algorithm is to treat the image as a series of scattered points.
  • the spatial mapping of all points on the image is done by specifying the spatial mapping of special points and some suitable radial basis functions.
  • This algorithm is relatively straightforward, but because the radial basis function is generally a costly function such as a Gaussian, the deformation is very slow. In addition, it is difficult for this algorithm to guarantee stable boundaries in the deformed image.
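As background, the radial-basis-function style of deformation described above can be sketched roughly as follows. This is a simplified illustration, not the patent's method: it uses a Shepard-style normalized Gaussian weighting of control-point displacements rather than solving a full RBF linear system, and the `sigma` value is an arbitrary assumption.

```python
import numpy as np

def rbf_warp_offsets(points, ctrl_src, ctrl_dst, sigma=30.0):
    """Displace each point by a Gaussian-weighted blend of the
    control-point displacements (a simplified RBF-style warp)."""
    disp = ctrl_dst - ctrl_src                                   # control-point displacements
    d2 = ((points[:, None, :] - ctrl_src[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))                         # Gaussian radial basis
    w = w / w.sum(axis=1, keepdims=True)                         # normalize weights per point
    return w @ disp                                              # per-point displacement
```

A point coinciding with a control point receives exactly that control point's displacement; points farther away receive a smooth blend, which illustrates why every pixel must evaluate the (expensive) basis function for every control point.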
  • Embodiments of the present invention provide a method and apparatus for generating a gradation animation from a plurality of images to improve the gradation visual effect.
  • An embodiment of the present invention provides a method for generating a gradation animation, including: performing tone preprocessing on adjacent images in a plurality of images to reduce the difference in hue of the adjacent images; determining the number of intermediate frames between adjacent images according to the feature-point difference degree of the adjacent images after the tone preprocessing, where the feature-point difference degree is calculated from the pixel distances of corresponding feature points of the adjacent images; generating that number of intermediate frame images between adjacent images by an image deformation technique; inserting the intermediate frame images between the adjacent images; and generating a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images.
  • An embodiment of the present invention provides a gradation animation generating apparatus, including: a tone preprocessing module, configured to perform tone preprocessing on adjacent images in a plurality of images to reduce the difference in hue of the adjacent images; a frame generating module, configured to determine the number of intermediate frames according to the feature-point difference degree of the adjacent images processed by the tone preprocessing module, where the feature-point difference degree is calculated from the pixel distances of corresponding feature points of the adjacent images, to generate that number of intermediate frame images between the adjacent images by an image deformation technique, and to insert the intermediate frame images between adjacent images; and an animation generating module, configured to generate a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images.
  • An embodiment of the present invention provides a method for generating a music playing background, including: receiving a plurality of images for generating an animation; performing tone preprocessing on adjacent images of the plurality of images to reduce the difference in hue of the adjacent images; determining the number of intermediate frames according to the feature-point difference degree of the adjacent images after the tone preprocessing; generating that number of intermediate frame images between adjacent images by an image deformation technique; inserting the intermediate frame images between the adjacent images; generating a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images of the plurality of images; and using the gradation animation as the playing background of the music player.
  • An embodiment of the present invention provides a music player, including: a tone preprocessing module, configured to perform tone preprocessing on adjacent images in the plurality of images to reduce the difference in hue of the adjacent images; a frame generation module, configured to determine the number of intermediate frames according to the feature-point difference degree of the adjacent images processed by the tone preprocessing module, where the feature-point difference degree is calculated from the pixel distances of corresponding feature points of the adjacent images,
  • to generate that number of intermediate frame images between adjacent images by an image deformation technique, and to insert the intermediate frame images between adjacent images;
  • an animation generation module, configured to generate a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images;
  • and a play module, for playing a music file and playing the gradation animation on the video display interface of the music file when the remaining play time of the music file is greater than zero.
  • With the tone preprocessing, and with the intermediate frame images, whose number is determined by the feature-point difference degree, inserted between adjacent images before the gradation animation is generated, the resulting gradation animation is smooth and natural,
  • and the gradient effect of the gradation animation is improved.
  • FIG. 1 is a flow chart of an embodiment of a method for generating a gradual animation according to the present invention
  • FIG. 2 is a schematic diagram of another embodiment of a method for generating a gradation animation according to the present invention.
  • FIG. 3 is a flow chart of a tone gradation preprocessing according to an embodiment of the present invention.
  • FIG. 4 is a flow chart of luminance gradation preprocessing according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of determining the number of intermediate frames in an embodiment of the present invention.
  • FIG. 6 is a schematic structural view of an embodiment of a device for generating a gradation animation from a plurality of face images according to the present invention
  • FIG. 7 is a flow chart of an embodiment of a method for generating a background of a music player according to the present invention
  • FIG. 8 is a schematic structural diagram of a music player in an embodiment of the present invention.
  • An embodiment of the present invention provides a method for generating a gradation animation from a plurality of images, the method comprising: performing tone preprocessing on adjacent ones of the plurality of images to reduce the difference in hue of the adjacent images; determining the number of intermediate frames between adjacent images according to the feature-point difference degree of the adjacent images after the tone preprocessing, the feature-point difference degree being calculated from the pixel distances of corresponding feature points of the adjacent images;
  • generating that number of intermediate frame images between adjacent images by an image warping technique, inserting the intermediate frame images between adjacent images, and generating a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images. Please refer to FIG. 1.
  • FIG. 1 is a flowchart of an embodiment of a method for generating a gradation animation from multiple images according to the present invention, including: S101: Perform tone preprocessing on adjacent images in the plurality of images to reduce a difference in hue of the adjacent images, so that the generated animation is smoother from one play of the adjacent images to another;
  • S103 Determine an intermediate frame number according to a feature point difference degree of the adjacent image after the tone preprocessing, and generate the quantity of the intermediate frame image by using an image deformation technique between adjacent images;
  • S105 Generate a gradation animation from the plurality of images and the intermediate frame images inserted between all pairs of adjacent images in the plurality of images.
  • the image is a face image.
  • the performing tone preprocessing on adjacent ones of the plurality of images comprises: performing tone preprocessing on adjacent images in the plurality of face images.
  • Before performing the tone preprocessing on the adjacent images in the plurality of face images in S101, the method further includes: sorting the plurality of face images to reduce, overall, the difference between adjacent images.
  • Performing tone preprocessing on adjacent ones of the plurality of images refers to: performing tone preprocessing on adjacent images in the plurality of sorted face images.
  • FIG. 2 A flowchart of an embodiment of the present invention is shown in FIG. 2, and the method includes:
  • S205 Determine an intermediate frame number according to the similarity of the adjacent images, and generate an intermediate frame image of the intermediate frame number by using an image deformation technique between adjacent images.
  • the step of sorting the plurality of face images by S201 specifically includes: sorting according to a face size.
  • the specific steps are:
  • The image sizes are examined to find the smallest image size, or a picture size is given, and all the images are converted to that same image size;
  • the face size in each transformed image is then measured, and the plurality of images are sorted from small to large, or from large to small, according to the face size after transformation.
  • the face size may be a face area, a face width, a face length, and the like.
  • The gradient animation effect of adjacent face images is affected by the difference in face size between them. The larger the difference in face size, the less natural and smooth the animation effect under otherwise identical conditions; the smaller the difference, the smoother and more natural the effect. Therefore, compared with omitting this sorting step, ordering the images by face size yields a better overall gradation under the same subsequent processing.
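The ordering step above can be sketched as follows; `face_areas` is a hypothetical per-image measurement assumed to come from a separate face-detection stage, and ascending order is chosen arbitrarily (the text allows either direction).

```python
def sort_by_face_size(images, face_areas):
    """Order images by detected face area (ascending), so that adjacent
    images in the sequence differ as little as possible in face size."""
    return [img for _, img in sorted(zip(face_areas, images))]

# Example with hypothetical face areas (in pixels):
order = sort_by_face_size(["a.png", "b.png", "c.png"], [900, 400, 625])
# order is ["b.png", "c.png", "a.png"]
```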
  • the sorting the plurality of face images by S201 further includes sorting according to image brightness.
  • the specific steps are:
  • the gradient animation effect of adjacent face images is affected by the difference in brightness of adjacent images.
  • the overall brightness is smoother from dark to bright, or from light to dark, which can improve the visual effect of multiple image gradient animations in general.
  • The overall effect of generating animations from multiple face images sorted by image brightness is smoother and more natural than that achieved without the sorting, under the same subsequent processing.
  • The method further includes: calculating the hue difference of the adjacent images according to their hues, obtaining the absolute value of the hue difference, and, when the absolute value is greater than the first threshold, determining from the difference which image in the adjacent pair needs its hue adjusted and the manner of adjustment, and then adjusting the hue of that image according to the hue adjustment manner.
  • Calculating the hue difference of the adjacent images according to their hues comprises: subtracting the average hue value of the second image from the average hue value of the first image in the adjacent pair to obtain the hue difference of the adjacent images.
  • Adjusting the hue of the image to be adjusted according to the hue adjustment manner includes: if the hue difference is greater than zero, reducing the hue of each pixel of the first image or increasing the hue of each pixel of the second image; if the hue difference is less than zero, increasing the hue of each pixel of the first image or reducing the hue of each pixel of the second image.
  • FIG. 3 is a flowchart of preprocessing of the tone gradient animation in an embodiment, the process includes:
  • The process of calculating the hue difference between the first image and the second image of the adjacent pair in S301 specifically includes: first, converting the first image S and the second image D into the HIS color model, so that the hue value of any pixel in either image can be obtained;
  • the second image is scaled to the same size as the first image, whose width and height are W and H respectively, in pixels;
  • then the hue values of corresponding pixels on the first and second images are obtained, and the sum Hdt of the differences of the hue values of corresponding pixels is calculated as in formula (1); the image pixel average hue difference Hdm follows from formula (2):

    Hdt = Σ (Hue(S_ij) − Hue(D_ij))    (1)

    Hdm = Hdt / (W × H)    (2)
  • If Hdm is greater than the first threshold (a preferred value of the first threshold is 0.1), the average hue value of the second image is relatively low, and S503 increases the hue value of each pixel of the second image by 0.8 × |Hdm|;
  • if Hdm is less than the negative of the first threshold, the average hue value of the first image is relatively low, and S505 increases the hue value of each pixel of the first image by 0.8 × |Hdm|.
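Formulas (1)-(2) and the 0.1/0.8 adjustment rule above can be sketched as follows. The array-based hue representation is an assumption, and a real implementation would also wrap adjusted hues back into their valid range, which is omitted here.

```python
import numpy as np

def hue_preprocess(hue_s, hue_d, threshold=0.1, factor=0.8):
    """hue_s, hue_d: per-pixel hue arrays of the first and second image
    (same shape, after HIS conversion and rescaling). Returns the
    adjusted pair, raising the lower image's hues by factor * |Hdm|."""
    hdm = (hue_s - hue_d).mean()              # Hdt / (W*H): average hue difference
    if abs(hdm) > threshold:
        if hdm > 0:                           # second image's average hue is lower
            hue_d = hue_d + factor * abs(hdm)
        else:                                 # first image's average hue is lower
            hue_s = hue_s + factor * abs(hdm)
    return hue_s, hue_d
```

The factor of 0.8 narrows the hue gap without closing it entirely, which keeps some of each image's original character.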
  • the S203 method further includes: performing luminance pre-processing on adjacent ones of the plurality of images to reduce a luminance difference of the adjacent images;
  • the feature point difference degree determining the number of intermediate frames between adjacent images includes: determining the number of intermediate frames between adjacent images according to the feature point difference degree of the adjacent images after the tone preprocessing and the brightness preprocessing.
  • The brightness preprocessing specifically includes: calculating the brightness difference of the adjacent images according to their brightness, obtaining the absolute value of the brightness difference, and, when the absolute value is greater than the second threshold, determining from the difference which image in the adjacent pair needs its brightness adjusted and the manner of adjustment, and then adjusting the brightness of that image according to the brightness adjustment manner.
  • Calculating the brightness difference of the adjacent images according to their brightness comprises: subtracting the average brightness value of the second image from the average brightness value of the first image in the adjacent pair to obtain the brightness difference of the adjacent images.
  • Adjusting the brightness of the image to be adjusted according to the brightness adjustment manner includes: if the brightness difference is greater than zero, reducing the brightness of each pixel of the first image or increasing the brightness of each pixel of the second image; if the brightness difference is less than zero, increasing the brightness of each pixel of the first image or reducing the brightness of each pixel of the second image.
  • FIG. 4 is a flow chart showing the preprocessing of the brightness gradient animation in one embodiment of the present invention, the process includes:
  • S401 calculates the brightness similarity between the first image and the second image as follows:
  • the first image S and the second image D are respectively converted into HIS color models to obtain brightness values of arbitrary pixels in the image;
  • the second image is scaled to the same scale of the first image, where the width and height of the first image are set to W and H, respectively, and the width and height are in units of pixels;
  • A corresponding rectangular area is constructed on each of the first and second images, with rectangle width w (0 < w ≤ W) and height h (0 < h ≤ H), both in pixels;
  • then the brightness values of the grid-point pixels on the first and second images are obtained, and the sum of the differences of the brightness values of corresponding grid-point pixels (Intensity difference total, Idt) is calculated as in formula (3); the image pixel average brightness difference Idm follows from formula (4):

    Idt = Σ (Intensity(S_ij) − Intensity(D_ij))    (3)

    Idm = Idt / (w × h)    (4)

  • The luminance similarity of the first image and the second image is represented by the average luminance difference Idm of the first image and the second image.
  • If Idm is greater than the second threshold (a preferred value of the second threshold is 0.1), the average luminance value of the second image is relatively low, and S403 brings the second image and the first image closer by increasing the brightness value of each pixel of the second image by 0.8 × |Idm|;
  • if Idm is less than the negative of the second threshold, the average luminance value of the second image is relatively high, and the two images can be brought closer by increasing the brightness value of each pixel of the first image by 0.8 × |Idm|.
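Formulas (3)-(4) can be sketched as follows, taking the sample rectangle to be anchored at the top-left corner and sampling every pixel inside it; both are simplifying assumptions, since the text does not fix the rectangle's position or the grid spacing.

```python
import numpy as np

def brightness_difference(int_s, int_d, w, h):
    """int_s, int_d: per-pixel intensity arrays of the two images
    (second already rescaled to the first's size). Returns Idm, the
    average intensity difference over the w-by-h sample rectangle."""
    idt = (int_s[:h, :w] - int_d[:h, :w]).sum()   # Eq. (3): sum of differences
    return idt / (w * h)                          # Eq. (4): average difference Idm
```

A positive Idm means the first image is brighter on average; the adjustment step then raises the darker image's pixels by 0.8 × |Idm| when |Idm| exceeds the threshold, mirroring the hue case.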
  • The difference in hue and brightness of the adjacent images to be processed into a gradation animation is first evaluated automatically.
  • If the evaluation finds a large difference, the tone preprocessing is performed before the subsequent gradation animation processing; if the evaluation finds a small difference, the group of images proceeds directly to the subsequent gradation animation processing.
  • the determining, by the S205, the number of the intermediate frames according to the similarity of the adjacent images comprises: determining the number of the intermediate frames according to the feature point difference degree of the adjacent images.
  • the feature point extraction method includes:
  • A face image library is trained with the Active Shape Model (ASM) algorithm, and a feature point detection file is obtained as the result of the ASM training.
  • the Adaboost algorithm is used to obtain the face region in the image.
  • the Adaboost algorithm is the most commonly used face detection algorithm.
  • the feature point detection file output by the ASM training algorithm is used in the face region to perform face feature point localization.
  • The number of face feature points is set to 45.
  • the feature point difference uses a normalized absolute distance method.
  • the adjacent images are referred to as the source image and the target image, respectively, in the order in which they are played. Methods as below:
  • The present invention uses the relative difference of the features of the source image and the target image to represent the degree of feature difference between the source image and the target image.
  • The gradation process uses different numbers of intermediate frames depending on the feature-point difference between the source image and the target image. Determining the number of intermediate frames according to the feature-point difference value of the adjacent images includes: when the feature-point difference value of the adjacent images lies in the first interval, determining that the number of intermediate frames is the first quantity; when it lies in the second interval, determining that the number of intermediate frames is the second quantity; wherein the values of the first interval are smaller than those of the second interval, and the first quantity is less than the second quantity.
  • Adjacent images are referred to as a source image and a target image, respectively, in order of playback.
  • The greater the feature similarity between the source image and the target image, i.e. the smaller the relative difference of features, the fewer intermediate frames the gradation process requires; conversely, the smaller the feature similarity between the source image and the target image, the more intermediate frames are required.
  • FIG. 5 is a flowchart of determining the number of intermediate frames in an embodiment of the present invention, and the process includes:
  • the value of the first interval is (0, L), and the value of the second interval is (L, 2L), and a preferred value of the embodiment L of the present invention is 0.25.
  • A preferred value of the first quantity in the embodiment of the present invention is N, and a preferred value of the second quantity is 1.5 × N, where N is a natural number.
  • N can take any value between 16 and 24, and those skilled in the art can take other natural numbers according to actual needs.
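The interval rule above can be sketched as follows; L = 0.25 and N = 20 follow the preferred values in the text, while the behavior for differences at or beyond 2L (clamping to the second quantity) is an assumption, since the text only defines the two intervals.

```python
def intermediate_frame_count(diff, L=0.25, N=20):
    """First interval (0, L) -> N frames; second interval (L, 2L) ->
    1.5 * N frames; larger differences are clamped to the second
    quantity (an assumption)."""
    if diff < L:
        return N
    return int(1.5 * N)
```

With N = 20, a pair of very similar images gets 20 intermediate frames, while a more dissimilar pair gets 30, giving the larger transition more time to play out smoothly.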
  • an intermediate frame image is generated from the source image and the target image.
  • the process includes:
  • SCP: Source Control Points; DCP: Destination Control Points; ICP: Intermediate Control Points.

    ICP(t) = (1 − t) × SCP + t × DCP,  t ∈ [0, 1]    (11)
  • SCP and ICP(t) are used as the source and target control points, respectively, to warp the source image (Source Image, SI) and obtain the source warped image SWI(t); DCP and ICP(t) are used as the source and target control points, respectively, to warp the target image (Destination Image, DI) and obtain the destination warped image DWI(t); SWI(t) and DWI(t) are then fused according to formula (12) to obtain the intermediate image (Inter Image, INTER_I(t)).
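The steps above can be sketched as follows. The `warp` callable stands in for the image-warping routine (e.g. the mesh- or RBF-based deformation discussed earlier), and the linear cross-dissolve used for formula (12), whose body is not reproduced in this text, is an assumption based on the symmetric form of formula (11).

```python
import numpy as np

def morph_step(si, di, scp, dcp, t, warp):
    """One intermediate frame of the gradation: si/di are the source and
    destination images, scp/dcp their control point arrays, t in [0, 1],
    and warp(image, src_pts, dst_pts) an assumed warping routine."""
    icp = (1.0 - t) * scp + t * dcp    # Eq. (11): interpolated control points ICP(t)
    swi = warp(si, scp, icp)           # source warped toward ICP(t): SWI(t)
    dwi = warp(di, dcp, icp)           # destination warped toward ICP(t): DWI(t)
    return (1.0 - t) * swi + t * dwi   # assumed Eq. (12): cross-dissolve fusion
```

At t = 0 the result is the source image and at t = 1 the target image, so sweeping t through (0, 1) in N steps yields the N intermediate frames.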
  • N is the number of intermediate images; return to S603.
  • the gradation animation belongs to a gradation animation with a fixed playing time.
  • the method further includes determining whether the current remaining time of the playing duration is greater than zero;
  • Performing tone preprocessing on adjacent images in the method includes: performing tone preprocessing on adjacent images in the plurality of images if the current remaining time is greater than zero.
  • FIG. 6 is a schematic structural diagram of an embodiment of a gradual animation device generated by a plurality of images according to the present invention.
  • The apparatus includes: a 601 tone preprocessing module, configured to perform tone preprocessing on adjacent ones of the plurality of images to reduce the difference in hue of the adjacent images, so that the generated animation plays more smoothly from one of the adjacent images to the other;
  • a 604 intermediate frame generation module, configured to determine the number of intermediate frames according to the feature-point difference degree of the adjacent images processed by the tone preprocessing module, and to generate that number of intermediate frame images between adjacent images by an image warping technique;
  • a 605 animation generating module, configured to generate a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images of the plurality of images.
  • The plurality of images are a plurality of face images; the tone preprocessing module is configured to perform tone preprocessing on adjacent images in the plurality of face images to reduce the difference in hue of the adjacent images.
  • Another embodiment of the present invention further includes a sorting module for sorting the plurality of face images to reduce, overall, the difference between adjacent images, so that the generated animation plays more smoothly from one of the adjacent images to the other.
  • the tone preprocessing module is configured to perform tone preprocessing on adjacent ones of the plurality of images processed by the sorting module.
  • the sorting module is configured to sort the plurality of face images according to a face size.
  • the sorting module is configured to sort the plurality of face images according to image brightness
  • The 601 tone preprocessing module is configured to calculate the hue difference of the adjacent images according to their hues and to obtain the absolute value of the hue difference; when the absolute value is greater than the first threshold, the image whose hue needs adjustment and the manner of adjustment are determined from the difference, and the hue of that image is adjusted according to the hue adjustment manner.
  • the determining, by the 603 intermediate frame generating module, the number of the intermediate frames according to the similarity of the adjacent images comprises: determining the number of intermediate frames according to the feature point difference degree of the adjacent images.
  • The determining, by the 603 intermediate frame generating module, of the number of intermediate frames according to the feature-point difference degree of the adjacent images specifically includes: when the feature-point difference value of the adjacent images lies in the first interval, determining that the number of intermediate frames is the first quantity; when it lies in the second interval, determining that the number of intermediate frames is the second quantity; wherein the values of the first interval are smaller than those of the second interval, and the first quantity is less than the second quantity.
  • The device further includes a brightness preprocessing module, configured to perform brightness preprocessing on adjacent images in the plurality of face images; the intermediate frame generating module is configured to generate the gradation animation from the adjacent images after the tone preprocessing and the brightness preprocessing.
  • the brightness pre-processing module is specifically configured to: obtain a brightness difference of the adjacent image according to the brightness of the adjacent image, and obtain an absolute value of the brightness difference according to the brightness difference, when the absolute value of the difference is greater than the second threshold And determining an image and brightness adjustment mode in the adjacent image that needs to be adjusted according to the difference, and performing brightness adjustment on the image to be adjusted according to the brightness adjustment mode.
  • the gradation animation belongs to the animation with a fixed playing duration
  • The device further includes a determining module, configured to determine whether the current remaining time of the playing duration is greater than zero; the tone preprocessing module performs tone preprocessing on the adjacent images of the plurality of images when the current remaining time of the playback duration is greater than zero.
  • The embodiment of the invention provides a method for generating a background for a music player. Please refer to FIG. 7, a flowchart of an embodiment of the present invention, which includes:
  • S701 receives a plurality of images for generating an animation
  • S703 performs tone preprocessing on adjacent ones of the plurality of images to reduce a difference in hue of the adjacent images
  • S705 determines the number of intermediate frames according to the feature-point difference degree of the adjacent images after the tone preprocessing, generates that number of intermediate frame images between adjacent images by an image deformation technique, inserts the intermediate frame images between adjacent images, and generates a gradation animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images;
  • S707 uses the gradient animation generated by the plurality of images as the playback background of the music player.
  • the plurality of images are face images; performing tone pre-processing on adjacent images among the plurality of images includes: performing tone pre-processing on adjacent images among the plurality of face images.
  • the tone pre-processing module is configured to perform tone pre-processing on adjacent images among the plurality of face images to reduce their hue difference.
  • before the number of intermediate frames is determined according to the feature-point difference of the adjacent images, the method includes: locating feature points on the face images among the plurality of images.
  • the feature point locating method includes: locating the feature points of a face by automatic detection.
  • locating the feature points of a face by automatic detection means that, for a given picture and on the basis of face detection, the key feature points of the face are detected automatically without much manual operation by the user, so face locating is convenient and fast.
  • the automatic detection locates the feature points of the face using an active contour model algorithm.
  • locating feature points on a face image includes: positioning the feature points by whole-group dragging or single-point dragging.
  • the whole-group drag method divides the feature points of the face image into five groups: face contour, eyebrows, eyes, nose, and mouth; the feature points of each of these five groups are dragged as a whole.
  • the left and right eyebrows, and the left and right eyes, are also dragged separately.
  • whole-group dragging avoids the tedium of moving feature points one by one when, in manual positioning mode, the automatically detected feature points are far from the actual feature-point template of the face.
  • the single-point drag method selects and drags feature points one at a time, enabling precise positioning of facial features.
  • embodiments of the invention mainly use the automatic detection method to locate feature points; when the automatically detected feature points are unsatisfactory, they are adjusted by whole-group or single-point dragging.
  • the method further includes: obtaining the current remaining time of the music file by reading the music file's timestamp, and determining whether the current remaining time is greater than zero; tone pre-processing is performed on adjacent images among the plurality of images only when the current remaining time is greater than zero.
  • photos are loaded dynamically while the music plays: only the two photos about to undergo the face transform are kept in memory at any time, and they are destroyed once the transform completes before the next two images are loaded, so no extra memory is consumed.
  • the time interval between photo loads affects the smoothness of the playback background; if it is too small, the display is too dazzling: the original faces of the two images cannot be discerned, and the frames are always mid-transition.
  • the preferred time interval in embodiments of the present invention is 3 to 5 seconds, but the interval is not limited to these values.
  • an embodiment of the present invention provides a music player, including: an 801 tone pre-processing module, configured to perform tone pre-processing on adjacent images among the plurality of images to reduce their hue difference; an 803 intermediate frame generating module, configured to determine the number of intermediate frames according to the feature-point difference of the adjacent images processed by the tone pre-processing module, where the feature-point difference is computed from the pixel distances between corresponding feature points of the adjacent images, to generate that number of intermediate frame images between adjacent images by image warping, and to insert the intermediate frames between the adjacent images; an 805 animation generating module, configured to generate the morphing animation from the plurality of images and the intermediate frames inserted between all adjacent images; and a playing module, configured to play a music file and, while the remaining playing time of the music file is greater than zero, to display the morphing animation on the video display interface of the music file.
  • the music player provided by the embodiment of the present invention further includes: an 807 storage module, configured to store the music file and the plurality of images.
  • the music player provided by the embodiment of the present invention further includes: an 809 display module, configured to present the video display interface of the music file.
  • the modules of the apparatus in the embodiments may be distributed in the apparatus as described, or may be relocated, with corresponding changes, into one or more apparatuses different from that of the embodiment.
  • the modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present invention provide a method and device for generating a morphing animation from a plurality of images, including: performing tone pre-processing on adjacent images among the plurality of images; determining the number of intermediate frames between adjacent images according to the feature-point difference of the adjacent images after tone pre-processing; generating that number of intermediate frame images between adjacent images by image warping; inserting the intermediate frames between the adjacent images; and generating the morphing animation from the plurality of images and the intermediate frames inserted between all adjacent images. The morphing animation generated by the invention is smooth and natural, improving the morphing effect.

Description

一种渐变动画的生成方法和装置
技术领域
本发明涉及一种图像处理技术, 具体涉及一种渐变动画的生成方法和 装置。
背景技术
由多幅图像生成渐变动画的方法和装置目前有着广泛的应用。 渐变动画的普遍方法是在实现图像变形的基础上, 先分别对两幅图像进行两个方向的变形, 两幅图像按照播放的先后时间顺序分别称为源图像和目标图像, 两个方向的变形包括源图像到目标图像、 目标图像到源图像两种变形。 再对两幅变形图像进行图像灰度融合, 产生一系列的中间图像, 从而实现图像的平滑渐变。 因此, 图像变形技术的好坏及相关特性是影响图像渐变的关键因素之一。
图像变形技术目前已广泛应用于影视特效和广告设计当中。 人们通过 对图像变形技术广泛而深入的研究, 形成了以空间映像为核心的一系列方 法。 在图像变形中, 空间映射是核心, 据此可将图像变形技术大致分为 3 类:
(1)基于块的变形。 典型的算法包括 2次网状变形算法和基于三角剖分 的变形算法。 它们的共同思想是先将整幅图像分成若干块, 再将整幅图像 的变形用每一小块的变形来结合实现。 这类算法的显著优点是变形速度快, 但是将图像分成小块这一预处理工作比较繁瑣, 而且分块的合理有效性将 直接影响最终的变形效果。
(2)基于线的变形。 这种算法的思想是在图像上构造一系列的特征线, 图像上每个像素的偏移量由该像素与这些特征线距离的综合来决定。 这种 方法仍然存在变形速度较慢的问题, 且不太直观。
(3)基于点的变形, 典型的算法是基于径向基函数的变形算法。 这种算 法的基本思想是将图像看成是众多散乱的点构成, 通过一些指定特殊点的 空间映射关系和某种合适的径向基函数来完成图像上所有点的空间映射。 这种算法比较直观, 但是由于径向基函数一般为高斯函数等较为复杂的函 数, 故变形速度很慢, 此外, 这种算法难以保证变形图像的稳定边界。
人们对动画渐变的效果要求越来越高, 但目前的图像变形技术实现的由多幅图像生成渐变动画的渐变质量却难以控制, 有待进一步提高。
发明内容
本发明实施例提供了一种由多幅图像生成渐变动画的方法和装置, 以改 善其渐变视觉效果。
本发明实施例提供了一种渐变动画的生成方法, 包括: 对多幅图像中的 相邻图像进行色调预处理,以减小所述相邻图像的色调差;根据经过色调预 处理后相邻图像的特征点差异度确定相邻图像间的中间帧数量, 所述特征 点差异度根据所述相邻图像对应特征点的像素距离计算得到, 在相邻图像 间通过图像变形技术生成所述中间帧数量的中间帧图像, 在相邻图像间插 入所述中间帧图像, 由所述多幅图像及所述多幅图像中所有相邻图像间插 入的中间帧图像生成渐变动画。
本发明实施例提供了一种渐变动画的生成装置, 包括: 色调预处理模块, 用于对多幅图像中的相邻图像进行色调预处理, 以减小所述相邻图像的色 调差; 中间帧生成模块, 用于根据经过色调预处理模块进行色调预处理后 的相邻图像的特征点差异度确定中间帧数量, 所述特征点差异度根据所述 相邻图像对应特征点的像素距离计算得到, 在相邻图像间通过图像变形技 术生成所述数量的中间帧图像, 在相邻图像间插入中间帧图像; 动画生成 模块, 用于由多幅图像及所述多幅图像中所有相邻图像间插入的中间帧图 像生成渐变动画。
本发明实施例提供了一种音乐播放背景的生成方法,其特征在于, 包括: 接收用于生成动画的多幅图像; 对所述多幅图像中的相邻图像进行色调预 处理, 以减小所述相邻图像的色调差; 根据经过色调预处理后相邻图像的 特征点差异度确定中间帧数量, 在相邻图像间通过图像变形技术生成所述 数量的中间帧图像, 在相邻图像间插入中间帧图像, 由多幅图像及所述多 幅图像中所有相邻图像间插入的中间帧图像生成渐变动画; 将所述渐变动 画作为所述音乐播放器的播放背景。
本发明实施例提供了一种音乐播放器, 包括: 色调预处理模块, 用于对 所述多幅图像中的相邻图像进行色调预处理, 以减小所述相邻图像的色调 差; 中间帧生成模块, 用于根据经过色调预处理模块处理后的相邻图像的 特征点差异度确定中间帧数量, 所述特征点差异度根据所述相邻图像对应 特征点的像素距离计算得到, 在相邻图像间通过图像变形技术生成所述数 量的中间帧图像, 在相邻图像间插入中间帧图像; 动画生成模块, 根据多 幅图像及所述多幅图像中所有相邻图像间插入的中间帧图像生成渐变动 画; 播放模块: 用于播放音乐文件, 并且在所述音乐文件的剩余播放时间 大于零时, 将所述渐变动画在所述音乐文件的视频显示界面上播放。
本发明实施例通过色调预处理、 在相邻图像之间插入根据特征点差异度确定的中间帧数量的中间帧图像, 进而生成渐变动画, 生成的渐变动画平滑、 自然, 改善了渐变动画的渐变效果。
附图说明 为了更清楚地说明本发明实施例或现有技术中的技术方案, 下面将对 实施例或现有技术描述中所需要使用的附图作一简单地介绍, 显而易见, 下面描述中的附图是本发明的一些实施例 , 对于本领域普通技术人员来讲, 在不付出创造性劳动的前提下, 还可以根据这些附图获得其他的附图。
图 1为本发明渐变动画生成方法一个实施例的流程图;
图 2为本发明渐变动画生成方法另一个实施例的示意图;
图 3为本发明一个实施例中色调渐变预处理的流程图;
图 4为本发明一个实施例中亮度渐变预处理的流程图;
图 5为本发明一个实施例中中间帧数量确定的流程图;
图 6为本发明由多幅人脸图像生成渐变动画装置的一个实施例的结构 示意图;
图 7为本发明音乐播放器播放背景的生成方法一个实施例的流程图; 图 8为本发明实施例中的音乐播放器的结构示意图。
具体实施方式
为使本发明实施例的目的、 技术方案和优点更加清楚, 下面将结合本发明实施例中的附图, 对本发明实施例中的技术方案进行清楚、 完整地描述, 显然, 所描述的实施例是本发明一部分实施例, 而不是全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例, 都属于本发明保护的范围。
本发明实施例提供了一种由多幅图像生成渐变动画的方法, 方法包括: 对所述多幅图像中的相邻图像进行色调预处理, 以减小所述相邻图像的色 调差; 根据经过色调预处理后相邻图像的特征点差异度确定相邻图像间的 中间帧数量, 所述特征点差异度根据所述相邻图像对应特征点的像素距离 计算得到, 在相邻图像间通过图像变形技术生成所述中间帧数量的中间帧 图像, 在相邻图像间插入所述中间帧图像, 由所述多幅图像及所述多幅图 像中所有相邻图像间插入的中间帧图像生成渐变动画。 请参考图 1 , 图 1提 供了本发明由多幅图像生成渐变动画方法一个实施例的流程图, 包括: S101 , 对多幅图像中的相邻图像进行色调预处理, 以减小所述相邻图 像的色调差, 使生成的动画从所述相邻图像的一张播放到另一张时更加平 滑;
S103 , 根据经过色调预处理后所述相邻图像的特征点差异度确定中间 帧数量, 在相邻图像间通过图像变形技术生成所述数量的中间帧图像;
S 105 , 由多幅图像及所述多幅图像中所有两幅相邻图像插入的中间帧 图像生成渐变动画。
在本发明的一种实施方式中, 所述图像是人脸图像。 所述对所述多幅 图像中的相邻图像进行色调预处理包括: 对所述多幅人脸图像中的相邻图 像进行色调预处理。
本发明的另一种实现方式中, 在 S101对所述多幅人脸图像中的相邻图 像进行色调预处理之前还包括: 对所述多幅人脸图像排序, 以在总体上减 少相邻图像的差异。 所述对所述多幅图像中的相邻图像进行色调预处理指: 对排序后的多幅人脸图像中的相邻图像进行色调预处理。
本发明实施例的流程图如附图 2所示, 所述方法包括:
S201 , 对所述多幅人脸图像排序, 以在总体上减少相邻图像的差异, 使生成的动画更加平滑自然;
S203 , 对多幅人脸图像中的相邻图像进行图像色调预处理, 以减小所 述相邻图像的色调差, 使生成的动画从所述相邻图像的一张播放到另一张 时更加平滑;
S205 , 根据所述相邻图像的相似度确定中间帧数量, 在相邻图像间通 过图像变形技术生成所述中间帧数量的中间帧图像;
S 207 , 由多幅人脸图像及所述多幅人脸图像中所有两幅相邻图像插入 的中间帧图像生成渐变动画。
进一步的, S201的所述对所述多幅人脸图像排序具体包括: 根据人脸大 小排序。 具体步骤是:
在读取完所有图片后, 对图片大小进行统计, 找出最小的图片尺寸, 或 给定一个图片尺寸, 将所有图片都变换到同一图片尺寸下;
统计在图像变换后尺寸下的人脸尺寸, 根据变换尺寸下的人脸尺寸对多 幅图像进行从小到大或从大到小的排序;
再对排序后的图片序列进行下一步处理。
在具体实施例中, 人脸尺寸可以是人脸面积、 人脸宽度、 人脸长度等。 相邻人脸图像的渐变动画效果受到相邻图像中人脸尺寸差异的影响。 人 脸尺寸差异越大, 在同等条件下实现的动画效果就越不自然、 平滑; 人脸 尺寸差异越小, 在同等条件下实现的动画效果就越平滑、 自然。 因此, 相 比于没有此排序过程的动画效果, 基于人脸尺寸排序的多幅人脸图片形成 渐变的整体效果在同等后续渐变处理方法下实现的渐变效果更好。
S201的所述对所述多幅人脸图像排序还包括, 根据图像亮度排序。 具体步骤是:
计算图像所有采样点的平均亮度值, 并把它作为图像的亮度值。
按照上面的方法, 在分别计算出多幅人脸图片的平均亮度值后, 根据平 均亮度值对多幅图像进行从小到大或者从大到小的排序;
再对排序后的图片序列进行下一步处理。
相邻人脸图像的渐变动画效果受相邻图像的亮度差异的影响。 亮度差异 越大, 在同等条件下实现的动画效果就越不平滑、 自然; 亮度差异越小, 在同等条件下实现的动画效果就越平滑、 自然。 对于排序后的多幅图片生 成的动画, 在总体上亮度从暗到明, 或从明到暗的过渡更加平滑, 能够在 总体上改善多幅图片渐变动画的视觉效果。 相比于没有此排序过程的动画 效果, 基于人脸尺寸排序的由多幅人脸图片生成动画的整体效果比在同等 后续处理方法下实现的动画效果更平滑、 自然。
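The brightness-based ordering step above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation: images are stand-in 2-D lists of sampled intensity values, and `avg_brightness` / `sort_by_brightness` are hypothetical helper names.

```python
def avg_brightness(image):
    """Mean intensity over all sampled pixels; `image` is a 2-D list of
    intensity values in [0, 255] standing in for real sample points."""
    total = sum(sum(row) for row in image)
    count = sum(len(row) for row in image)
    return total / count

def sort_by_brightness(images, ascending=True):
    """Order images from dark to bright (or bright to dark) so that
    adjacent images differ as little as possible in average brightness."""
    return sorted(images, key=avg_brightness, reverse=not ascending)

# Example: three tiny 2x2 "images" with increasing mean intensity.
dark   = [[10, 20], [30, 40]]        # mean 25
medium = [[80, 90], [100, 110]]      # mean 95
bright = [[200, 210], [220, 230]]    # mean 215
ordered = sort_by_brightness([bright, dark, medium])  # dark, medium, bright
```

Sorting by face size works the same way, with the sort key replaced by the measured face size in the common image scale.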
具体的, S203所述对所述多幅人脸图像中的相邻图像进行图像色调预处 理具体包括: 根据所述相邻图像的色调进行计算得到色调差异, 根据色调 差异得到色调差异绝对值, 当差异绝对值大于第一阔值时, 根据差异确定 所述相邻图像中的色调需要调整的图像和色调调整方式, 再按照色调调整 方式对所述色调需要调整的图像进行色调调整。
所述根据所述相邻图像的色调进行计算得到所述相邻图像的色调差异 包括: 由相邻图像中第一图像的平均色调值减去第二图像的平均色调值得 到所述相邻图像的色调差异; 所述按照色调调整方式对所述需要调整的图 像进行色调调整包括: 如果所述色调差异大于零, 降低第一图像每个像素 的色调或提高第二图像每个像素的色调; 如果所述色调差异小于零, 提高 第一图像每个像素的色调或降低第二图像每个像素的色调。
请参考图 3, 图 3提供了一个实施例中色调渐变动画预处理的流程图, 过 程包括:
S301 ,计算相邻图像中第一图像减去第二图像像素平均色调得到的差值 Hdm;
若 Hdm绝对值大于第一阈值, 且 Hdm大于 0,
S303, 适当提高第二图像每个像素的色调值;
若 Hdm绝对值大于第一阈值, 且 Hdm小于 0,
S305, 适当提高第一图像每个像素的色调值。
S301中计算相邻图像第一图像和第二图像的色调差异的过程具体包括: 首先, 把第一图像 S和第二图像 D分别转换为 HIS颜色模型, 以便获取图 像中任意像素的色调值;
其次, 把第二图像缩放到第一图像的相同尺度下, 设第一图像的宽度和 高度分别为 W和 H, 宽度和高度以像素个数为单位;
然后, 在第一图像和第二图像上分别构造相应的矩形区域,矩形宽度为 w(0< w <=W), 矩形高度为 h(0< h <=H) , 矩形宽度和高度以像素个数为单 位; 之后, 分别获取第一图像和第二图像上对应像素的色调值, 计算第一图 像和第二图像上对应像素的色调值的差异之和 Hdt, 如公式 (1) 所示;
Hdt = Σ_{i,j} ( Hue(S_ij) − Hue(D_ij) )    (1)

最后, 把 Hdt除以所有网格点的个数, 获得图像像素平均色调差值 Hdm, 如公式 (2) 所示:

Hdm = Hdt / (w × h)    (2)

我们用第一图像和第二图像的平均色调差值 Hdm表示第一图像和第二图像的色调相似度。 在具体实施例中, 所述矩形宽度和高度分别为 W和 H。
如果当前 Hdm是正值, 并且大于第一阈值, 那么说明第二图像像素平均色调值比较低, S303适当提高第二图像所有像素的色调值, 在具体实施例中, 第一阈值取值 0.1, 第二图像每个像素的色调值自加 0.8 × |Hdm|;
如果当前 Hdm是负值, 并且其绝对值大于第一阈值, 那么说明第一图像像素平均色调值比较低, S305适当提高第一图像所有像素的色调值, 在具体实施例中, 第一图像每个像素的色调值自加 0.8 × |Hdm|;
如果当前 Hdm接近零, 那么说明第一图像和第二图像的色调近似, 不需 要进行色调调节。
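A minimal sketch of the tone pre-processing rule above (formulas (1)-(2) plus the threshold adjustment), under the assumption that hue values have already been extracted into equally sized 2-D lists normalized to [0, 1]; the function names are illustrative, while the 0.1 threshold and 0.8 factor are the values named in the embodiment.

```python
def mean_hue_difference(hue_a, hue_b):
    """Hdm: mean of per-pixel hue differences between two equally sized
    hue maps (formulas (1)-(2)); hues assumed normalized to [0, 1]."""
    w, h = len(hue_a), len(hue_a[0])
    hdt = sum(hue_a[i][j] - hue_b[i][j] for i in range(w) for j in range(h))
    return hdt / (w * h)

def tone_preprocess(hue_a, hue_b, threshold=0.1, factor=0.8):
    """If |Hdm| exceeds the threshold, raise every hue of the image whose
    average hue is lower by factor * |Hdm|; otherwise leave both alone."""
    hdm = mean_hue_difference(hue_a, hue_b)
    if abs(hdm) <= threshold:
        return hue_a, hue_b               # hues already similar
    delta = factor * abs(hdm)
    if hdm > 0:                           # image B has the lower average hue
        hue_b = [[v + delta for v in row] for row in hue_b]
    else:                                 # image A has the lower average hue
        hue_a = [[v + delta for v in row] for row in hue_a]
    return hue_a, hue_b
```

After one adjustment the residual mean difference shrinks to 0.2 × |Hdm|, which is the intended "bring the tones closer" effect; the brightness pre-processing of formulas (3)-(4) follows the same pattern with intensity in place of hue.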
在本发明实施例中, S203方法还包括: 对所述多幅图像中的相邻图像进 行亮度预处理, 以減少所述相邻图像的亮度差; 所述根据经过色调预处理 后相邻图像的特征点差异度确定相邻图像间的中间帧数量包括: 根据经过 色调预处理和亮度预处理后相邻图像的特征点差异度确定相邻图像间的中 间帧数量。
所述亮度预处理具体包括: 根据所述相邻图像的亮度进行计算得到所述相邻图像的亮度差异, 根据所述亮度差异计算得到亮度差异绝对值, 当差异绝对值大于第二阈值时, 先根据差异确定所述相邻图像中的亮度需要调整的图像和亮度调整方式, 再按照亮度调整方式对所述亮度需要调整的图像进行亮度调整。
所述根据所述相邻图像的亮度进行计算得到所述相邻图像的亮度差异 包括: 由相邻图像中第一图像的平均亮度值减去第二图像的平均亮度值得 到所述相邻图像的亮度差异; 所述按照亮度调整方式对所述需要调整的图 像进行亮度调整包括: 如果所述亮度差异大于零, 降低第一图像每个像素 的亮度或提高第二图像每个像素的亮度; 如果所述亮度差异小于零, 提高 第一图像每个像素的亮度或降低第二图像每个像素的亮度。
请参考图 4,图 4提供了本发明一个实施例中亮度渐变动画预处理的流程 图, 过程包括:
S401 ,计算相邻图像中第一图像减去第二图像像素平均亮度得到的差值 Idm;
若 Idm绝对值大于第二阈值, 且 Idm大于 0,
S403, 适当提高第二图像每个像素的亮度值;
若 Idm绝对值大于第二阈值, 且 Idm小于 0,
S405, 适当提高第一图像每个像素的亮度值。
S401计算第一图像和第二图像亮度相似度的过程如下:
首先, 把第一图像 S和第二图像 D分别转换为 HIS颜色模型, 以便获取图 像中任意像素的亮度值;
其次, 把第二图像缩放到第一图像的相同尺度下, 这里设定第一图像的 宽度和高度分别为 W和 H, 宽度和高度均以像素个数为单位;
然后, 在第一图像和第二图像上分别构造相应的矩形区域, 矩形宽度为 w(0< w <=W), 矩形高度为 h(0< h <=H), 矩形宽度和高度均以像素个数为单位; 之后, 分别获取第一图像和第二图像上网格点像素的亮度值, 计算网格点对应的第一图像和第二图像上像素的亮度值的差异之和 (Intensity difference total, Idt), 如公式 (3) 所示:

Idt = Σ_{i,j} ( Intensity(S_ij) − Intensity(D_ij) )    (3)

接着, 把 Idt除以所有网格点的个数, 获得图像像素平均亮度差值 Idm (Intensity difference mean), 如公式 (4) 所示:

Idm = Idt / (w × h)    (4)

用第一图像和第二图像的平均亮度差值 Idm表示第一图像和第二图像的亮度相似度。 在具体实施例中, 所述矩形宽度和高度分别为 W和 H。
如果当前 Idm是正值, 并且大于第二阈值, 那么说明第二图像像素平均亮度值比较小, S403通过适当增大第二图像的所有像素的亮度值, 来获得第二图像与第一图像更好的相似性, 在具体实施例中, 第二阈值取值 0.1, 第二图像每个像素的亮度值自加 0.8 × |Idm|;
如果当前 Idm是负值, 并且其绝对值大于第二阈值, 那么说明第二图像像素平均亮度值比较大, S405通过适当增加第一图像的所有像素的亮度值, 来获得第一图像与第二图像更好的相似性, 在具体实施例中, 第一图像每个像素的亮度值自加 0.8 × |Idm|;
如果当前 Idm接近零, 那么说明第一图像和第二图像的亮度比较近似, 不需要进行亮度调节。
在相邻图像的色调差异较大的情况下,实现的彩色图像渐变动画效果一 般很难保证。 所以, 本发明实施例先对待渐变动画处理的相邻图像的色调 和亮度差异进行评价, 当差异较大时, 进行色调预处理, 然后再进行后续 的渐变动画处理; 如果自动评价结果为差异较小时, 直接对该组图片进行 后续的渐变动画处理。
S205所述根据相邻图像的相似度确定中间帧数量包括: 根据所述相邻图 像的特征点差异度确定中间帧数量。
在本发明的一个实施例中, 所述特征点提取方法包括:
先通过主动轮廓模型 (ASM)算法对人脸图像库进行训练, 有 ASM训练 结果获得特征点检测文件;
再对于输入的含有人脸的图像, 使用 Adaboost算法来获取图像中的人 脸区域, Adaboost算法是目前最为普遍使用的人脸检测算法;
最后, 在人脸区域中使用 ASM训练算法输出的特征点检测文件进行人 脸特征点定位。
在本发明的一个实施例中, 人脸特征点的数目选择 45。
在本发明的一个具体实施例中, 特征点差异度采用一种基于归一化的绝 对距离方法。 相邻幅图像按照播放的先后时间顺序分别称为源图像和目标 图像。 方法如下:
先定义缩放系数 xScale和 yScale, 设源图像宽、高分别为 Sx、Sy, 目标图像的宽、高分别为 Dx、Dy, 计算方法如公式 (5)(6) 所示:

xScale = Dx / Sx    (5)
yScale = Dy / Sy    (6)

再把 N个特征点在目标图像中的位置 Di (1≤i≤N) 映射转换到源图像尺度下, 得到位置 D'i (1≤i≤N), 计算方法如公式 (7)(8) 所示:

(D'i)x = (Di)x / xScale    (7)
(D'i)y = (Di)y / yScale    (8)

设 Si (1≤i≤N) 是源图像 N个特征点的位置。 接下来计算源图像和目标图像的特征绝对差异 Re, 即对应特征点像素距离的平均值, 如公式 (9) 所示:

Re = (1/N) Σ_{i=1}^{N} ‖Si − D'i‖    (9)

最后计算源图像和目标图像的特征相对差异 aRe, 如公式 (10) 所示, 其中 Sf 为源图像的人脸宽度:

aRe = Re / Sf    (10)
本发明使用源图像和目标图像的特征相对差异 aRe 表示源图像和目标图像的特征差异度。
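Formulas (5)-(10) can be sketched as below. One assumption is made explicit: formula (9), whose rendering was lost in extraction, is taken here to be the mean Euclidean pixel distance between corresponding feature points, consistent with the text's definition of feature-point difference; the function and parameter names are illustrative.

```python
from math import hypot

def relative_feature_difference(src_pts, dst_pts, src_size, dst_size, src_face_width):
    """aRe per formulas (5)-(10): map the destination feature points into
    the source image's scale, average the point-to-point pixel distances
    (Re), and normalize by the source face width Sf."""
    sx, sy = src_size
    dx, dy = dst_size
    x_scale, y_scale = dx / sx, dy / sy                         # (5)(6)
    mapped = [(px / x_scale, py / y_scale) for px, py in dst_pts]  # (7)(8)
    # (9): mean pixel distance between corresponding feature points
    re = sum(hypot(sxp - dxp, syp - dyp)
             for (sxp, syp), (dxp, dyp) in zip(src_pts, mapped)) / len(src_pts)
    return re / src_face_width                                  # (10)
```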
在源图像和目标图像不同的人脸特征差异情况下, 渐变动画处理中对于 渐变动画源图像和目标图像的中间帧数有不同的选择。 根据相邻图像的特 征点差异度值的大小确定所述中间帧数量, 包括: 当所述相邻图像的特征 点差异度值位于第一区间时, 确定所述中间帧的数量为第一数量; 当所述 相邻图像的相似度值位于第二区间时, 确定所述中间帧的数量为第二数量; 其中, 所述第一区间的取值小于第二区间的取值, 第一数量小于第二数量。
在本发明的实施例中, 相邻幅图像按照播放的先后时间顺序分别称为源图像和目标图像。 从直观上评价, 要实现比较自然的渐变动画, 源图像和目标图像的特征相似度这个值越大, 特征相对差异越小, 渐变动画过程需要的中间帧越少; 源图像和目标图像的特征相似度这个值越小, 特征相对差异越大, 渐变过程需要的中间帧越多。 对特定的源图像和目标图像进行评价后, 需要根据评价结果, 选择不同的中间帧数目进行渐变处理。
请参考图 5 , 图 5提供了本发明一个实施例中中间帧数量确定的流程图, 过程包括:
当 aRe小于 L时, 插入中间帧数目为 N个;
当 aRe大于 L且小于 2L时, 插入中间帧数目为 1.5*N个;
当 aRe大于 2L时, 插入中间帧数目为 2*N个。
本发明实施例中所述第一区间的取值为(0, L ),第二区间的取值为(L, 2L ), 本发明实施例 L的一个较优取值为 0.25 , 本领域技术人员可以根据实 际需要取其它数值; 本发明实施例所述的第一数值为 N, 本发明实施例所述 的第二数值为 1.5 *N, N取自然数,本发明的一个较佳实施例中 N可以取 16 ~ 24之间的任意一个数值, 本领域技术人员可以根据实际需要取其它自然数。
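The three-interval rule of Fig. 5 can be sketched as follows, using the preferred values L = 0.25 and N = 16 named above as defaults (both are tunable per the text):

```python
def intermediate_frame_count(a_re, L=0.25, base=16):
    """Choose the number of in-between frames from the relative feature
    difference aRe, per the three intervals of Fig. 5."""
    if a_re < L:
        return base              # small difference: N frames
    if a_re < 2 * L:
        return int(1.5 * base)   # medium difference: 1.5*N frames
    return 2 * base              # large difference: 2*N frames
```

With the defaults, an aRe of 0.1 yields 16 frames, 0.3 yields 24, and 0.6 yields 32: larger feature differences get more in-between frames so the transition stays smooth.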
在得到中间帧数目之后, 由源图像和目标图像生成中间帧图像。 过程包 括:
对源图像和目标图像进行特征点选择, 分别产生源控制点 (Source Control Points, SCP)和目标控制点 (Destination Control Points , DCP);
由 SCP和 DCP产生中间控制点 (Inter Control Points, ICP), 将 t时刻的 ICP表示为 ICP(t)。 本文为线性过渡过程, 初始令 t=0, ICP(t)的计算公式如 (11) 所示:

ICP(t) = (1−t)·SCP + t·DCP, t∈[0,1]    (11)

将 SCP和 ICP(t)分别作为源控制点和目标控制点对源图像 (Source Image, SI)进行图像变形, 得到源变形图像 (Source Warped Image, SWI(t)); 将 DCP和 ICP(t)分别作为源控制点和目标控制点对目标图像 (Destination Image, DI)进行图像变形, 得到目标变形图像 (Destination Warped Image, DWI(t)); 将 SWI(t)和 DWI(t)按公式 (12) 进行图像融合, 得到中间图像 (Inter Image, INTER_I(t)):

INTER_I(t) = (1−t)·SWI(t) + t·DWI(t)    (12)

Δt = 1/N

最后, 将 t增加一个变形步长 Δt, 其中 N为中间图像的张数, 重复上述过程。 综上, 每经过一个 Δt就得到一张介于源图像和目标图像之间的中间过渡图像 INTER_I(t), 经过 N个 Δt就完成了图像渐变。
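The interpolation loop of formulas (11)-(12) can be sketched as follows. The control-point warping step is abstracted into a caller-supplied `warp(image, from_pts, to_pts)` placeholder, since the text leaves the warping technique open; images are stand-in 2-D grayscale lists, and the blend is the cross-fade convention under which t=0 yields the source and t=1 the destination.

```python
def lerp_points(scp, dcp, t):
    """ICP(t) = (1 - t) * SCP + t * DCP, per formula (11)."""
    return [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
            for (sx, sy), (dx, dy) in zip(scp, dcp)]

def blend(swi, dwi, t):
    """Per-pixel cross-fade of two equally sized grayscale images,
    per formula (12): t=0 keeps SWI, t=1 keeps DWI."""
    return [[(1 - t) * s + t * d for s, d in zip(r1, r2)]
            for r1, r2 in zip(swi, dwi)]

def morph_frames(src, dst, scp, dcp, n, warp):
    """Generate n in-between frames, stepping t by dt = 1/n each time.
    `warp(image, from_pts, to_pts)` is a placeholder for a real
    control-point image-warping routine."""
    frames, dt, t = [], 1.0 / n, 1.0 / n
    for _ in range(n):
        icp = lerp_points(scp, dcp, t)
        swi = warp(src, scp, icp)     # source warped toward ICP(t)
        dwi = warp(dst, dcp, icp)     # destination warped toward ICP(t)
        frames.append(blend(swi, dwi, t))
        t += dt
    return frames
```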
所述渐变动画属于播放时长固定的渐变动画, 在 S101对排序后的相邻图 像进行图像渐变预处理之前还包括判断所述播放时长的当前剩余时间是否 大于零; 所述对所述多幅图像中的相邻图像进行色调预处理包括: 若所述 当前剩余时间大于零, 对所述多幅图像中的相邻图像进行色调预处理。
本发明提出了一种由多幅图像生成渐变动画的装置。 请参考图 6, 图 6提 供了本发明由多幅图像生成渐变动画装置一个实施例的结构示意图。 装置 包括: 601色调预处理模块, 用于对所述多幅图像中的相邻图像进行色调预 处理, 以减小所述相邻图像的色调差, 使生成的动画从所述相邻图像的一 张切换到另一张时更加平滑; 603中间帧生成模块, 用于根据经过色调预处 理模块进行色调预处理后的相邻图像的特征点差异度确定中间帧数量, 在 相邻图像间通过图像变形技术生成所述数量的中间帧图像, 在相邻图像间 通过图像变形技术生成所述数量的中间帧图像; 605动画生成模块, 用于由 多幅图像及所述多幅图像中所有相邻图像间插入的中间帧图像生成渐变动 画。
在本发明的一个实施例中, 所述多幅图像为多幅人脸图像; 所述色调预 处理模块, 用于对所述多幅人脸图像中的相邻图像进行色调预处理, 以减 小所述相邻图像的色调差。
本发明的另一实施例还包括排序模块, 用于对所述多幅人脸图像排序, 以在总体上减少相邻图像的差异, 使生成的动画从相邻图像的一张播放到 另一张时更加平滑; 所述色调预处理模块用于对排序模块处理后的所述多 幅图像中的相邻图像进行色调预处理。
所述排序模块用于根据人脸大小对所述多幅人脸图像排序。
所述排序模块用于根据图像亮度对所述多幅人脸图像排序
601色调预处理模块用于根据所述相邻图像的色调进行计算得到所述相邻图像的色调差异, 根据所述的色调差异计算得到色调差异绝对值, 当差异绝对值大于第一阈值时, 根据差异确定所述相邻图像中的需要调整色调的图像和色调调整方式, 再按照色调调整方式对所述色调需要调整的图像进行色调调整。
603中间帧生成模块用于根据相邻图像的相似度确定中间帧数量包括: 根据所述相邻图像的特征点差异度确定中间帧数量。
603中间帧生成模块用于根据相邻图像的特征点差异度确定所述中间帧 数量具体包括: 当所述相邻图像的相似度值位于第一区间时, 确定所述中 间帧的数量为第一数量; 当所述相邻图像的相似度值位于第二区间时, 确 定所述中间帧的数量为第二数量; 其中, 所述第一区间的取值小于第二区 间的取值, 第一数量小于第二数量。
所述装置还包括亮度预处理模块: 用于对所述多幅人脸图像中的相邻图 像进行亮度预处理; 所述中间帧生成模块, 用于根据经过色调预处理和亮 度预处理后的相邻图像生成渐变动画。
所述亮度预处理模块具体用于: 根据所述相邻图像的亮度进行计算得到 所述相邻图像的亮度差异, 根据所述亮度差异得到亮度差异绝对值, 当差 异绝对值大于第二阈值时, 根据差异确定所述相邻图像中需要调整亮度的 图像和亮度调整方式, 再按照亮度调整方式对所述需要调整的图像进行亮 度调整。
在本发明的一个实施例中, 渐变动画属于播放时长固定的动画, 所述装置 还包括: 判断模块, 用于判断所述播放时长的当前剩余时间是否大于零; 所述色调预处理模块, 用于在所述播放时长的当前剩余时间大于零时, 对 所述多幅图像中的相邻图像进行色调预处理。
本发明实施例提供了一种音乐播放器播放背景的生成方法, 其特征在于。 请参考图 7, 图 7提供了本发明实施例的结构图, 包括:
S701接收用于生成动画的多幅图像;
S703对所述多幅图像中的相邻图像进行色调预处理, 以减小所述相邻 图像的色调差;
S705根据经过色调预处理后相邻图像的特征点差异度确定中间帧数 量, 在相邻图像间通过图像变形技术生成所述数量的中间帧图像, 在相邻 图像间插入中间帧图像, 由多幅图像及所述多幅图像中所有相邻图像间插 入的中间帧图像生成渐变动画;
S707把由多幅图像生成的渐变动画作为所述音乐播放器的播放背景。 本发明的一个实施例中, 所述多幅图像是多幅人脸图像; 所述对所述 多幅图像中的相邻图像进行色调预处理包括: 对所述多幅人脸图像中的相 邻图像进行色调预处理。
所述色调预处理模块, 用于对所述多幅人脸图像中的相邻图像进行色调 预处理, 以减小所述相邻图像的色调差。
本发明实施例中, 在所述根据所述相邻图像的特征点差异度确定中间 帧数量之前包括: 对所述多幅图象中的人脸图像进行所述特征点定位。 所述特征点定位方法包括: 通过自动检测定位出人脸的特征点。
所述通过自动检测定位出人脸的特征点就是: 对给定的一张图片, 在 人脸检测的基础上, 不需要用户进行较多的手工操作自动检测出人脸的关 键特征点, 人脸定位方便快捷。 所述自动检测定位出人脸的特征点通过主 动轮廓模型算法检测定位出人脸的特征点。
所述人脸图像进行所述特征点定位包括: 通过整体拖动或单点拖动对人 脸图像进行特征点定位。 整体拖动方法把人脸图像特征点划分为人脸轮廓、 眉毛、 眼睛、 鼻子、 嘴巴五个部分的特征点; 以人脸轮廓、 眉毛、 眼睛、 鼻子、 嘴巴五个部分的特征点作为整体分别进行拖动。 其中对于眉毛、 眼 睛这个两部分, 左右眉毛、 左右眼睛也是分开拖动的。 整体拖动可避免在 手工定位模式下, 自动检测定位特征点与人脸实际特征点模板距离较远而 导致逐个移动特征点太繁琐。
所述单点拖动方法通过逐个拖动选中特征点, 实现精确的人脸特征定 位操作。
本发明实施例以自动检测定位特征点定位方法为主, 在自动检测定位 方法的效果不佳时, 通过整体拖动或单点拖动对人脸图像进行特征点定位, 对我们不满意的自动检测定位的特征点进行调整。
在所述对所述图像中的相邻图像进行图像渐变预处理之前还包括: 通 过捕捉音乐文件的时间戳来获取音乐文件的当前剩余时间。 判断当前剩余 时间是否大于零; 所述对所述图像中的相邻图像进行图像色调预处理指, 在所述当前剩余时间大于零时对, 所述多幅图像中的相邻图像进行色调预 处理。
本发明实施例在音乐播放的同时动态加载照片, 每次在内存中只需加 载两幅将要进行人脸变换的照片, 在变换过后随即销毁。 再对新的两幅进 行加载, 对内存无消耗。 加载照片的时间间隔过大会影响播放背景的流畅性, 过小则太过眼花缭 乱, 无法辨清两幅图像的真正人脸原图, 会一直处于变化中的各个帧。 本 发明实施例采用的最优时间间隔是 3-5秒, 但不限于此时间间隔取值。
本发明实施例提供了一种音乐播放器, 其特征在于, 所述音乐播放器包 括: 801色调预处理模块, 用于对所述多幅图像中的相邻图像进行色调预处 理, 以减小所述相邻图像的色调差; 803中间帧生成模块, 用于根据经过色 调预处理模块处理后的相邻图像的特征点差异度确定中间帧数量, 所述特 征点差异度根据所述相邻图像对应特征点的像素距离计算得到, 在相邻图 像间通过图像变形技术生成所述数量的中间帧图像, 在相邻图像间插入中 间帧图像; 805动画生成模块, 根据多幅图像及所述多幅图像中所有相邻图 像间插入的中间帧图像生成渐变动画; 播放模块: 用于播放音乐文件, 并 且在所述音乐文件的剩余播放时间大于零时, 将所述渐变动画在所述音乐 文件的视频显示界面上播放。
本发明实施例提供的音乐播放器还包括: 807存储模块, 用于存储所述 音乐文件及所述多幅图像。
本发明实施例提供的音乐播放器还包括: 809显示模块, 用于呈现所述 音乐文件的视频显示界面。
本领域技术人员可以理解附图只是一个优选实施例的示意图, 附图中 的模块或流程并不一定是实施本发明所必须的。
本领域技术人员可以理解实施例中的装置中的模块可以按照实施例描 述进行分布于实施例的装置中, 也可以进行相应变化位于不同于本实施例 的一个或多个装置中。 上述实施例的模块可以合并为一个模块, 也可以进 一步拆分成多个子模块。
最后应说明的是: 以上实施例仅用以说明本发明的技术方案, 而非对 其限制; 尽管参照前述实施例对本发明进行了详细的说明, 本领域的普通 技术人员应当理解: 其依然可以对前述各实施例所记载的技术方案进行修 改, 或者对其中部分技术特征进行等同替换; 而这些修改或者替换, 并不 使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims

权利要求
、 一种渐变动画的生成方法, 其特征在于,所述方法包括: 对多幅图像中的 相邻图像进行色调预处理, 以减小所述相邻图像的色调差; 根据经过 色调预处理后相邻图像的特征点差异度确定相邻图像间的中间帧数 量, 所述特征点差异度根据所述相邻图像对应特征点的像素距离计算 得到, 在相邻图像间通过图像变形技术生成所述中间帧数量的中间帧 图像, 在相邻图像间插入所述中间帧图像, 由所述多幅图像及所述多 幅图像中所有相邻图像间插入的中间帧图像生成渐变动画。
、 根据权利要求 1所述的方法, 其特征在于, 所述多幅图像是多幅人脸图 像; 所述对所述多幅图像中的相邻图像进行色调预处理包括: 对所述 多幅人脸图像中的相邻图像进行色调预处理。
、 根据权利要求 2所述的方法, 其特征在于, 所述对所述多幅人脸图像中 的相邻图像进行色调预处理之前还包括: 对所述多幅人脸图像排序; 所述对所述多幅图像中的相邻图像进行色调预处理指: 对排序后的多 幅人脸图像中的相邻图像进行色调预处理。
、 根据权利要求 3所述的方法, 其特征在于, 所述对所述多幅人脸图像排 序包括, 根据人脸大小排序。
、 根据权利要求 3所述的方法, 其特征在于, 所述对所述多幅人脸图像排 序包括, 根据图像亮度排序。
、 根据权利要求 2、 3、 4或 5所述的方法, 其特征在于, 所述色调预处理具 体包括, 根据所述相邻图像的色调进行计算得到色调差异, 根据色调 差异得到色调差异绝对值, 当所述差异绝对值大于第一阈值时, 根据 所述差异确定所述相邻图像中的需要调整的图像和色调调整方式, 再 按照色调调整方式对所述需要调整的图像进行色调调整。
、 根据权利要求 6所述的方法, 其特征在于, 所述根据所述相邻图像的色 调进行计算得到所述相邻图像的色调差异包括: 由相邻图像中第一图 像的平均色调值减去第二图像的平均色调值得到所述相邻图像的色 调差异; 所述按照色调调整方式对所述需要调整的图像进行色调调整 包括: 如果所述色调差异大于零, 降低第一图像每个像素的色调或提 高第二图像每个像素的色调, 如果所述色调差异小于零, 提高第一图 像每个像素的色调或降低第二图像每个像素的色调。
、 根据权利要求 2或 4所述的方法, 其特征在于, 所述方法还包括: 对所述 多幅图像中的相邻图像进行亮度预处理, 以减少所述相邻图像的亮度 差; 所述根据经过色调预处理后相邻图像的特征点差异度确定相邻图 像间的中间帧数量包括: 根据经过色调预处理和亮度预处理后相邻图 像的特征点差异度确定相邻图像间的中间帧数量。
、 根据权利要求 8所述的方法, 其特征在于, 所述亮度预处理具体包括: 根据所述相邻图像的亮度进行计算得到所述相邻图像的亮度差异, 根 据所述亮度差异得到亮度差异绝对值, 当所述差异绝对值大于第二阈 值时, 先根据所述亮度差异确定所述相邻图像中的亮度需要调整的图 像和亮度调整方式, 再按照亮度调整方式对所述亮度需要调整的图像 进行亮度调整。
、 根据权利要求 9所述的方法, 其特征在于, 所述根据所述相邻图像的 亮度进行计算得到所述相邻图像的亮度差异包括: 由相邻图像中第一 图像的平均亮度值减去第二图像的平均亮度值得到所述相邻图像的 亮度差异; 所述按照亮度调整方式对所述需要调整的图像进行亮度调 整包括: 如果所述亮度差异大于零, 降低第一图像每个像素的亮度或 提高第二图像每个像素的亮度, 如果所述亮度差异小于零, 提高第一 图像每个像素的亮度或降低第二图像每个像素的亮度。
11、 根据权利要求 1或 2所述的方法, 其特征在于, 所述根据所述经过色调预处理后相邻图像的特征点差异度确定所述中间帧数量, 包括: 当所述相邻图像的特征点差异度值位于第一区间时, 确定所述中间帧的数量为第一数量; 当所述相邻图像的特征点差异度值位于第二区间时, 确定所述中间帧的数量为第二数量; 其中, 所述第一区间的取值小于第二区间的取值, 第一数量小于第二数量。
12、 根据权利要求 1所述的方法, 其特征在于, 所述方法还包括: 所述渐 变动画属于播放时长固定的渐变动画, 所述对所述多幅图像中的相邻 图像进行色调预处理之前还包括: 判断所述播放时长的当前剩余时间 是否大于零; 所述对所述多幅图像中的相邻图像进行色调预处理包 括: 若所述当前剩余时间大于零, 对所述多幅图像中的相邻图像进行 色调预处理。
13、 一种渐变动画的生成装置, 其特征包括: 色调预处理模块, 用于对多 幅图像中的相邻图像进行色调预处理, 以减小所述相邻图像的色调 差; 中间帧生成模块, 用于根据经过色调预处理模块进行色调预处理 后的相邻图像的特征点差异度确定中间帧数量, 所述特征点差异度根 据所述相邻图像对应特征点的像素距离计算得到, 在相邻图像间通过 图像变形技术生成所述数量的中间帧图像, 在相邻图像间插入中间帧 图像; 动画生成模块, 用于由多幅图像及所述多幅图像中所有相邻图 像间插入的中间帧图像生成渐变动画。
14、 根据权利要求 13所述的装置, 其特征在于, 所述多幅图像为多幅人脸 图像; 所述色调预处理模块, 用于对所述多幅人脸图像中的相邻图像 进行色调预处理, 以减小所述相邻图像的色调差。
15、 根据权利要求 14所述的装置, 其特征在于, 所述装置还包括, 排序模 块, 用于对所述多幅人脸图像排序; 所述色调预处理模块用于对所述 排序模块排序后的所述多幅图像中的相邻图像进行色调预处理。
16、 根据权利要求 15所述的装置, 其特征在于, 所述排序模块用于根据人 脸大小对所述多幅人脸图像排序。
17、 根据权利要求 15所述的装置, 其特征在于, 所述排序模块用于根据图 像亮度对所述多幅人脸图像排序。
、 根据权利要求 14, 15, 16或 17所述的装置, 其特征在于, 所述色调预 处理模块用于根据所述相邻图像的色调进行计算得到所述相邻图像 的色调差异, 根据所述的色调差异得到色调差异绝对值, 当所述差异 绝对值大于第一阈值时, 根据所述差异确定所述相邻图像中的色调需 要调整的图像和色调调整方式, 再按照色调调整方式对所述色调需要 调整的图像进行色调调整。
、 根据权利要求 14或 16所述的装置, 其特征在于, 所述装置还包括亮度预处理模块: 用于对所述多幅人脸图像中的相邻图像进行亮度预处理; 所述中间帧生成模块, 用于根据经过色调预处理和亮度预处理的相邻图像的特征点差异度确定中间帧数量。
、 根据权利要求 19所述的装置, 其特征在于, 所述亮度预处理模块具体 用于: 根据所述相邻图像的亮度进行计算得到所述相邻图像的亮度差 异, 根据所述亮度差异得到亮度差异绝对值, 当所述差异绝对值大于 第二阈值时, 根据差异确定所述相邻图像中亮度需要调整的图像和亮 度调整方式, 再按照亮度调整方式对所述亮度需要调整的图像进行亮 度调整。
、 根据权利要求 14所述的装置, 其特征在于, 所述中间帧生成模块具体 用于: 当所述相邻图像的相似度值位于第一区间时, 确定所述中间帧 的数量为第一数量; 当所述相邻图像的相似度值位于第二区间时, 确 定所述中间帧的数量为第二数量; 其中, 所述第一区间的取值小于第 二区间的取值, 第一数量小于第二数量。
、 根据权利要求 14所述的装置, 其特征在于, 渐变动画属于播放时长固 定的动画, 所述装置还包括: 判断模块, 用于判断所述播放时长的当 前剩余时间是否大于零; 所述色调预处理模块, 用于在所述播放时长 的当前剩余时间大于零时, 对所述多幅图像中的相邻图像进行色调预 处理。
23、 一种音乐播放背景的生成方法, 其特征在于, 包括:
接收用于生成动画的多幅图像;
对所述多幅图像中的相邻图像进行色调预处理, 以减小所述相邻图像 的色调差;
根据经过色调预处理后相邻图像的特征点差异度确定中间帧数量, 在 相邻图像间通过图像变形技术生成所述数量的中间帧图像, 在相邻图像间 插入中间帧图像, 由多幅图像及所述多幅图像中所有相邻图像间插入的中 间帧图像生成渐变动画;
将所述渐变动画作为所述音乐播放器的播放背景。
24、 根据权利要求 23所述的方法, 所述多幅图像是多幅人脸图像; 所述对所述多幅图像中的相邻图像进行色调预处理包括: 对所述多幅人脸图像中的相邻图像进行色调预处理。
25、 根据权利要求 24所述的方法, 其特征在于, 在所述根据经过色调预处理后相邻图像的特征点差异度确定中间帧数量之前包括: 对所述相邻图像进行所述特征点定位。
26、 根据权利要求 25所述的方法, 其特征在于, 对所述人脸图像进行所述特征点定位包括: 通过自动检测定位出人脸的特征点。
27、 根据权利要求 26所述的方法, 其特征在于, 所述通过自动检测定位出人脸的特征点包括: 通过主动轮廓模型算法自动检测定位出人脸的特征点。
28、 根据权利要求 25所述的方法, 其特征在于, 对所述人脸图像进行所述 特征点定位包括: 通过整体拖动或单点拖动对人脸图像进行特征点定 位, 所述整体拖动包括: 把人脸图像特征点划分为人脸轮廓、 眉毛、 眼睛、 鼻子、 嘴巴五个部分的特征点; 以人脸轮廓、 眉毛、 眼睛、 鼻 子、 嘴巴五个部分的特征点作为整体分别进行拖动; 所述单点拖动包 括: 单独拖动各个特征点。
、 根据权利要求 23所述的方法, 其特征在于, 在所述对所述图像中的相 邻图像进行图像色调预处理之前还包括: 通过捕捉音乐文件的时间戳 获取音乐文件的当前剩余时间; 判断当前剩余时间是否大于零, 所述 对所述图像中的相邻图像进行图像色调预处理指, 在所述当前剩余时 间大于零时对, 所述多幅图像中的相邻图像进行色调预处理。
、 一种音乐播放器, 其特征在于, 所述音乐播放器包括: 色调预处理模 块, 用于对所述多幅图像中的相邻图像进行色调预处理, 以减小所述 相邻图像的色调差; 中间帧生成模块, 用于根据经过色调预处理模块 处理后的相邻图像的特征点差异度确定中间帧数量, 所述特征点差异 度根据所述相邻图像对应特征点的像素距离计算得到, 在相邻图像间 通过图像变形技术生成所述数量的中间帧图像, 在相邻图像间插入中 间帧图像; 动画生成模块, 根据多幅图像及所述多幅图像中所有相邻 图像间插入的中间帧图像生成渐变动画; 播放模块: 用于播放音乐文 件, 并且在所述音乐文件的剩余播放时间大于零时, 将所述渐变动画 在所述音乐文件的视频显示界面上播放。
、 如权利要求 29所述的音乐播放器, 其特征在于, 所述音乐播放器还包 括: 存储模块, 用于存储所述音乐文件及所述多幅图像。
、 如权利要求 28所述的音乐播放器, 其特征在于, 所述音乐播放器还包 括显示模块, 用于呈现所述音乐文件的视频显示界面。
PCT/CN2011/080199 2011-09-27 2011-09-27 一种渐变动画的生成方法和装置 WO2012149772A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
ES11864760.1T ES2625263T3 (es) 2011-09-27 2011-09-27 Procedimiento y aparato para generar animación de metamorfosis
KR1020127024480A KR101388542B1 (ko) 2011-09-27 2011-09-27 모핑 애니메이션을 생성하기 위한 방법 및 장치
CN201180002501.8A CN102449664B (zh) 2011-09-27 2011-09-27 一种渐变动画的生成方法和装置
EP11864760.1A EP2706507B1 (en) 2011-09-27 2011-09-27 Method and apparatus for generating morphing animation
PCT/CN2011/080199 WO2012149772A1 (zh) 2011-09-27 2011-09-27 一种渐变动画的生成方法和装置
JP2013512750A JP5435382B2 (ja) 2011-09-27 2011-09-27 モーフィングアニメーションを生成するための方法および装置
US13/627,700 US8531484B2 (en) 2011-09-27 2012-09-26 Method and device for generating morphing animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/080199 WO2012149772A1 (zh) 2011-09-27 2011-09-27 一种渐变动画的生成方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/627,700 Continuation US8531484B2 (en) 2011-09-27 2012-09-26 Method and device for generating morphing animation

Publications (1)

Publication Number Publication Date
WO2012149772A1 true WO2012149772A1 (zh) 2012-11-08

Family

ID=46010200

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/080199 WO2012149772A1 (zh) 2011-09-27 2011-09-27 一种渐变动画的生成方法和装置

Country Status (7)

Country Link
US (1) US8531484B2 (zh)
EP (1) EP2706507B1 (zh)
JP (1) JP5435382B2 (zh)
KR (1) KR101388542B1 (zh)
CN (1) CN102449664B (zh)
ES (1) ES2625263T3 (zh)
WO (1) WO2012149772A1 (zh)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014017114A (ja) * 2012-07-09 2014-01-30 Panasonic Corp 照明システム
US9792714B2 (en) * 2013-03-20 2017-10-17 Intel Corporation Avatar-based transfer protocols, icon generation and doll animation
US9286710B2 (en) * 2013-05-14 2016-03-15 Google Inc. Generating photo animations
CN104182718B (zh) * 2013-05-21 2019-02-12 深圳市腾讯计算机系统有限公司 一种人脸特征点定位方法及装置
CN103413342B (zh) * 2013-07-25 2016-06-15 南京师范大学 一种基于像素点的图像文字渐变方法
CN104424295B (zh) * 2013-09-02 2019-09-24 联想(北京)有限公司 一种信息处理方法及电子设备
CN103927175A (zh) * 2014-04-18 2014-07-16 深圳市中兴移动通信有限公司 背景界面随音频动态变化的方法和终端设备
US10049141B2 (en) * 2014-10-10 2018-08-14 salesforce.com,inc. Declarative specification of visualization queries, display formats and bindings
CN104299252B (zh) * 2014-10-17 2018-09-07 惠州Tcl移动通信有限公司 一种图片显示切换的过渡方法及其系统
CN104992462B (zh) * 2015-07-20 2018-01-30 网易(杭州)网络有限公司 一种动画播放方法、装置及终端
CN106651998B (zh) * 2015-10-27 2020-11-24 北京国双科技有限公司 基于Canvas的动画播放速度调整方法及装置
CN106887030B (zh) * 2016-06-17 2020-03-06 阿里巴巴集团控股有限公司 一种动画生成方法和装置
CN106447754B (zh) * 2016-08-31 2019-12-24 和思易科技(武汉)有限责任公司 病理动画的自动生成方法
CN106297479A (zh) * 2016-08-31 2017-01-04 武汉木子弓数字科技有限公司 一种基于ar增强现实涂鸦技术的歌曲教学方法及系统
CN106445332A (zh) * 2016-09-05 2017-02-22 深圳Tcl新技术有限公司 图标显示方法及系统
US10395412B2 (en) 2016-12-30 2019-08-27 Microsoft Technology Licensing, Llc Morphing chart animations in a browser
US10304225B2 (en) 2016-12-30 2019-05-28 Microsoft Technology Licensing, Llc Chart-type agnostic scene graph for defining a chart
US11086498B2 (en) 2016-12-30 2021-08-10 Microsoft Technology Licensing, Llc. Server-side chart layout for interactive web application charts
JP6796015B2 (ja) * 2017-03-30 2020-12-02 キヤノン株式会社 シーケンス生成装置およびその制御方法
CN107316236A (zh) * 2017-07-07 2017-11-03 深圳易嘉恩科技有限公司 基于flex的票据图片预处理编辑器
CN107341841B (zh) * 2017-07-26 2020-11-27 厦门美图之家科技有限公司 一种渐变动画的生成方法及计算设备
CN107734322B (zh) * 2017-11-15 2020-09-22 深圳超多维科技有限公司 用于裸眼3d显示终端的图像显示方法、装置及终端
CN108769361B (zh) * 2018-04-03 2020-10-27 华为技术有限公司 一种终端壁纸的控制方法、终端以及计算机可读存储介质
CN109068053B (zh) * 2018-07-27 2020-12-04 香港乐蜜有限公司 图像特效展示方法、装置和电子设备
CN109947338B (zh) * 2019-03-22 2021-08-10 腾讯科技(深圳)有限公司 图像切换显示方法、装置、电子设备及存储介质
CN110049351B (zh) * 2019-05-23 2022-01-25 北京百度网讯科技有限公司 视频流中人脸变形的方法和装置、电子设备、计算机可读介质
CN110942501B (zh) * 2019-11-27 2020-12-22 深圳追一科技有限公司 虚拟形象切换方法、装置、电子设备及存储介质
CN111524062B (zh) * 2020-04-22 2023-11-24 北京百度网讯科技有限公司 图像生成方法和装置
CN112508773B (zh) 2020-11-20 2024-02-09 小米科技(武汉)有限公司 图像处理方法及装置、电子设备、存储介质
CN113313790A (zh) * 2021-05-31 2021-08-27 北京字跳网络技术有限公司 视频生成方法、装置、设备及存储介质
CN113411581B (zh) * 2021-06-28 2022-08-05 展讯通信(上海)有限公司 视频序列的运动补偿方法、系统、存储介质及终端
CN114173067B (zh) * 2021-12-21 2024-07-12 科大讯飞股份有限公司 一种视频生成方法、装置、设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07200865A (ja) * 1993-12-27 1995-08-04 Casio Comput Co Ltd 画像変形方法およびその装置
US20060077206A1 (en) * 2004-09-13 2006-04-13 Denny Jaeger System and method for creating and playing a tweening animation using a graphic directional indicator
JP2007034724A (ja) * 2005-07-27 2007-02-08 Glory Ltd 画像処理装置、画像処理方法および画像処理プログラム
KR20080018407A (ko) * 2006-08-24 2008-02-28 한국문화콘텐츠진흥원 3차원 캐릭터의 변형을 제공하는 캐릭터 변형 프로그램을기록한 컴퓨터 판독가능 기록매체
CN101236598A (zh) * 2007-12-28 2008-08-06 北京交通大学 基于多尺度总体变分商图像的独立分量分析人脸识别方法
CN101242476A (zh) * 2008-03-13 2008-08-13 北京中星微电子有限公司 图像颜色自动校正方法及数字摄像系统
CN101295354A (zh) * 2007-04-23 2008-10-29 索尼株式会社 图像处理装置、成像装置、图像处理方法和计算机程序
CN101923726A (zh) * 2009-06-09 2010-12-22 华为技术有限公司 一种语音动画生成方法及系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6828972B2 (en) * 2002-04-24 2004-12-07 Microsoft Corp. System and method for expression mapping
JP2005135047A (ja) 2003-10-29 2005-05-26 Kyocera Mita Corp 動画生成機能を有する通信装置
JP4339675B2 (ja) * 2003-12-24 2009-10-07 オリンパス株式会社 グラデーション画像作成装置及びグラデーション画像作成方法
JP5078334B2 (ja) 2005-12-28 2012-11-21 三洋電機株式会社 非水電解質二次電池
JP2011181996A (ja) 2010-02-26 2011-09-15 Casio Computer Co Ltd 表示順序決定装置、画像表示装置及びプログラム

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07200865A (ja) * 1993-12-27 1995-08-04 Casio Comput Co Ltd 画像変形方法およびその装置
US20060077206A1 (en) * 2004-09-13 2006-04-13 Denny Jaeger System and method for creating and playing a tweening animation using a graphic directional indicator
JP2007034724A (ja) * 2005-07-27 2007-02-08 Glory Ltd 画像処理装置、画像処理方法および画像処理プログラム
KR20080018407A (ko) * 2006-08-24 2008-02-28 한국문화콘텐츠진흥원 3차원 캐릭터의 변형을 제공하는 캐릭터 변형 프로그램을기록한 컴퓨터 판독가능 기록매체
CN101295354A (zh) * 2007-04-23 2008-10-29 索尼株式会社 图像处理装置、成像装置、图像处理方法和计算机程序
CN101236598A (zh) * 2007-12-28 2008-08-06 北京交通大学 基于多尺度总体变分商图像的独立分量分析人脸识别方法
CN101242476A (zh) * 2008-03-13 2008-08-13 北京中星微电子有限公司 图像颜色自动校正方法及数字摄像系统
CN101923726A (zh) * 2009-06-09 2010-12-22 华为技术有限公司 一种语音动画生成方法及系统

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIA, ZEJU: "Study on the Morphing of Color Facial Images Based on Improved MR-ASM", MASTER'S DISSERTATION OF UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA, CHINA MASTER'S THESES FULL-TEXT DATABASE (E-JOURNAL), ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, 15 January 2011 (2011-01-15), pages 6 - 16,43-46, AND 53-58, XP008168187 *
ZHANG, YI: "Expressive Facial Animation Based on Visual Feature Extraction", MASTER'S DISSERTATION OF ZHEJIANG UNIVERSITY, CHINA MASTER'S THESES FULL-TEXT DATABASE (E-JOURNAL), ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, 15 August 2008 (2008-08-15), pages 11 - 12 AND 54-55, XP008167916 *

Also Published As

Publication number Publication date
KR20130045242A (ko) 2013-05-03
JP2013531290A (ja) 2013-08-01
CN102449664B (zh) 2017-04-12
EP2706507A1 (en) 2014-03-12
US20130079911A1 (en) 2013-03-28
CN102449664A (zh) 2012-05-09
US8531484B2 (en) 2013-09-10
JP5435382B2 (ja) 2014-03-05
KR101388542B1 (ko) 2014-04-23
ES2625263T3 (es) 2017-07-19
EP2706507B1 (en) 2017-03-01
EP2706507A4 (en) 2016-02-17

Similar Documents

Publication Publication Date Title
WO2012149772A1 (zh) 一种渐变动画的生成方法和装置
US11595737B2 (en) Method for embedding advertisement in video and computer device
CN104834898B (zh) 一种人物摄影图像的质量分类方法
CN108537782B (zh) 一种基于轮廓提取的建筑物图像匹配与融合的方法
KR101670282B1 (ko) 전경-배경 제약 조건 전파를 기초로 하는 비디오 매팅
CN103262119B (zh) 用于对图像进行分割的方法和系统
TWI607409B (zh) 影像優化方法以及使用此方法的裝置
WO2021169396A1 (zh) 一种媒体内容植入方法以及相关装置
Guo et al. Improving photo composition elegantly: Considering image similarity during composition optimization
WO2007074844A1 (ja) 顔パーツの位置の検出方法及び検出システム
CN111160291B (zh) 基于深度信息与cnn的人眼检测方法
CN109191444A (zh) 基于深度残差网络的视频区域移除篡改检测方法及装置
CN111127476A (zh) 一种图像处理方法、装置、设备及存储介质
CN108510500A (zh) 一种基于人脸肤色检测的虚拟人物形象的头发图层处理方法及系统
WO2022156214A1 (zh) 一种活体检测方法及装置
CN111242074A (zh) 一种基于图像处理的证件照背景替换方法
KR20190080388A (ko) Cnn을 이용한 영상 수평 보정 방법 및 레지듀얼 네트워크 구조
KR101124560B1 (ko) 동영상 내의 자동 객체화 방법 및 객체 서비스 저작 장치
CN103618846A (zh) 一种视频分析中抑制光线突然变化影响的背景去除方法
CN116580445A (zh) 一种大语言模型人脸特征分析方法、系统及电子设备
TWI373961B (en) Fast video enhancement method and computer device using the method
TWI313136B (zh)
Nguyen et al. Novel evaluation metrics for seam carving based image retargeting
CN115100312B (zh) 一种图像动漫化的方法和装置
US9135687B2 (en) Threshold setting apparatus, threshold setting method and recording medium in which program for threshold setting method is stored

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180002501.8

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 20127024480

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2013512750

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11864760

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2011864760

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2011864760

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE