WO2012149772A1 - Method and apparatus for generating a gradient animation - Google Patents
Method and apparatus for generating a gradient animation
- Publication number
- WO2012149772A1 PCT/CN2011/080199 CN2011080199W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- adjacent
- difference
- brightness
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Definitions
- the present invention relates to an image processing technique, and in particular to a method and apparatus for generating a gradient animation.
- the general approach to gradient animation is to deform two images toward each other on the basis of image warping.
- in playback order, the two images are called the source image and the target image, respectively.
- the deformation comprises two warps: from the source image to the target image, and from the target image to the source image.
- gradual image fusion is then performed on the two warped images to produce a series of intermediate images, achieving a smooth transition between them. The quality and characteristics of the image warping technique are therefore among the key factors affecting the gradient result.
- Image warping technology has been widely used in film and television special effects and in advertising design. Extensive and in-depth research on image warping has produced a series of methods with spatial mapping at their core. Taking spatial mapping as the organizing principle, image warping techniques fall roughly into three categories:
- Block-based deformation. Typical algorithms include 2D mesh deformation and triangulation-based deformation. Their common idea is to divide the whole image into several blocks and compose the deformation of the whole image from the deformations of the individual blocks. The significant advantage of this class of algorithms is that deformation is fast, but the preprocessing work of partitioning the image into small blocks is cumbersome, and the reasonableness of the partition directly affects the final deformation result.
- another category is deformation based on radial basis functions.
- the basic idea of this algorithm is to treat the image as a collection of scattered points.
- the spatial mapping of all points on the image is derived from the specified mapping of a set of special points together with suitable radial basis functions.
- this algorithm is conceptually straightforward, but since the radial basis function is generally a complex function such as a Gaussian, deformation is very slow. In addition, it is difficult to guarantee a stable boundary for the deformed image.
- Embodiments of the present invention provide a method and apparatus for generating a gradient animation from a plurality of images, so as to improve the visual effect of the gradient.
- An embodiment of the present invention provides a method for generating a gradient animation, including: performing tone preprocessing on adjacent images in a plurality of images to reduce the difference in hue between the adjacent images;
- determining the number of intermediate frames between adjacent images according to the feature point difference degree of the tone-preprocessed adjacent images, where the feature point difference degree is calculated from the pixel distances between corresponding feature points of the adjacent images;
- generating that number of intermediate frame images between adjacent images by an image warping technique, inserting the intermediate frame images between adjacent images, and generating a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images.
- An embodiment of the present invention provides a gradient animation generating apparatus, including: a tone preprocessing module, configured to perform tone preprocessing on adjacent images in a plurality of images to reduce the difference in hue between the adjacent images; a frame generating module, configured to determine the number of intermediate frames according to the feature point difference degree of the adjacent images tone-preprocessed by the tone preprocessing module, where the feature point difference degree is calculated from the pixel distances between corresponding feature points of the adjacent images, to generate that number of intermediate frame images between adjacent images by an image warping technique, and to insert the intermediate frame images between adjacent images; and an animation generating module, configured to generate a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images of the plurality of images.
- An embodiment of the present invention provides a method for generating a music playing background, including: receiving a plurality of images for generating an animation; performing tone preprocessing on adjacent images of the plurality of images to reduce the difference in hue between the adjacent images; determining the number of intermediate frames according to the feature point difference degree of the tone-preprocessed adjacent images; generating that number of intermediate frame images between adjacent images by an image warping technique and inserting the intermediate frame images between the adjacent images; generating a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images of the plurality of images; and using the gradient animation as the playing background of the music player.
- An embodiment of the present invention provides a music player, including: a tone preprocessing module, configured to perform tone preprocessing on adjacent images in the plurality of images to reduce the difference in hue between the adjacent images; a frame generation module, configured to determine the number of intermediate frames according to the feature point difference degree of the adjacent images processed by the tone preprocessing module, where the feature point difference degree is calculated from the pixel distances between corresponding feature points of the adjacent images,
- to generate the intermediate frame images between adjacent images by an image warping technique, and to insert the intermediate frame images between adjacent images;
- an animation generation module, configured to generate a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images;
- and a play module, for playing a music file and playing the gradient animation on the video display interface of the music file while the remaining play time of the music file is greater than zero.
- by inserting between tone-preprocessed adjacent images the number of intermediate frame images determined from the feature point difference degree, and then generating the gradient animation, the resulting animation is smooth and natural.
- the visual effect of the gradient animation is thus improved.
- FIG. 1 is a flowchart of an embodiment of a method for generating a gradient animation according to the present invention.
- FIG. 2 is a schematic diagram of another embodiment of a method for generating a gradient animation according to the present invention.
- FIG. 3 is a flowchart of the hue preprocessing in an embodiment of the present invention.
- FIG. 4 is a flowchart of the luminance preprocessing in an embodiment of the present invention.
- FIG. 5 is a flowchart of determining the number of intermediate frames in an embodiment of the present invention.
- FIG. 6 is a schematic structural view of an embodiment of a device for generating a gradient animation from a plurality of face images according to the present invention.
- FIG. 7 is a flowchart of an embodiment of a method for generating a background for a music player according to the present invention.
- FIG. 8 is a schematic structural diagram of a music player in an embodiment of the present invention.
- An embodiment of the present invention provides a method for generating a gradient animation from a plurality of images, the method comprising: performing tone preprocessing on adjacent ones of the plurality of images to reduce the difference in hue between the adjacent images; determining the number of intermediate frames between adjacent images according to the feature point difference degree of the tone-preprocessed adjacent images, the feature point difference degree being calculated from the pixel distances between corresponding feature points of the adjacent images;
- generating that number of intermediate frame images between adjacent images by an image warping technique, inserting the intermediate frame images between adjacent images, and generating a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images. Please refer to FIG. 1.
- FIG. 1 is a flowchart of an embodiment of a method for generating a gradient animation from multiple images according to the present invention, including: S101: perform tone preprocessing on adjacent images in the plurality of images to reduce the difference in hue between the adjacent images, so that the generated animation transitions more smoothly from one of the adjacent images to the other;
- S103: determine the number of intermediate frames according to the feature point difference degree of the tone-preprocessed adjacent images, and generate that number of intermediate frame images between adjacent images by an image warping technique;
- S105: generate a gradient animation from the plurality of images and the intermediate frame images inserted between all pairs of adjacent images in the plurality of images.
- in one embodiment, the images are face images.
- performing tone preprocessing on adjacent ones of the plurality of images then comprises: performing tone preprocessing on adjacent images in the plurality of face images.
- in one embodiment, before the tone preprocessing of S101 is performed on adjacent images in the plurality of face images, the method further includes: sorting the plurality of face images to reduce the overall difference between adjacent images.
- performing tone preprocessing on adjacent ones of the plurality of images then refers to: performing tone preprocessing on adjacent images in the sorted plurality of face images.
- FIG. 2 shows a flowchart of an embodiment of the present invention, and the method includes:
- S205: determine the number of intermediate frames according to the similarity of the adjacent images, and generate that number of intermediate frame images between adjacent images by an image warping technique.
- the step S201 of sorting the plurality of face images specifically includes: sorting according to face size.
- the specific steps are:
- count the image sizes and find the smallest, or take a given picture size, and convert all the images to that same size;
- measure the face size of each image at the common size, and sort the plurality of images from small to large, or from large to small, by that face size;
- the face size may be the face area, face width, face length, and the like.
- the gradient animation effect between adjacent face images is affected by the difference in their face sizes: the larger the difference, the less natural and smooth the animation under otherwise equal conditions; the smaller the difference, the smoother and more natural the animation. Therefore, compared with omitting this sorting step, ordering by face size yields a better overall gradient under the same subsequent processing.
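The face-size ordering step above can be sketched as follows. This is a minimal illustration, assuming a face bounding box `(x, y, w, h)` is already known for each image (e.g. from a detector); all boxes are rescaled to a common target size (the smallest image size, as in the text) and the images are ordered by face area at that scale.

```python
# Sketch of the face-size sorting step (S201). The dict layout for `images`
# ('size' and 'face' keys) is an assumption made for this illustration.

def face_area_at_common_size(img_size, face_box, target_size):
    """Scale a face box from its image's size to target_size; return its area."""
    (iw, ih), (tw, th) = img_size, target_size
    x, y, w, h = face_box
    return (w * tw / iw) * (h * th / ih)

def sort_by_face_size(images, ascending=True):
    """images: list of dicts with 'size' (w, h) and 'face' (x, y, w, h).
    The common size is the smallest image size, as in the patent text."""
    target = min((img['size'] for img in images), key=lambda s: s[0] * s[1])
    return sorted(
        images,
        key=lambda img: face_area_at_common_size(img['size'], img['face'], target),
        reverse=not ascending)
```

With `ascending=True` the sequence runs from the smallest face to the largest, which is one of the two orderings the text describes.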
- the step S201 of sorting the plurality of face images may instead sort according to image brightness.
- the specific steps are:
- the gradient animation effect between adjacent face images is also affected by the difference in their brightness.
- ordering the overall brightness from dark to bright, or from bright to dark, generally improves the visual effect of a multi-image gradient animation.
- the overall effect of generating an animation from multiple face images after brightness sorting is smoother and more natural than that achieved without it under the same subsequent processing.
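The brightness ordering variant can be sketched in the same way: sort the images by mean intensity so the sequence runs dark-to-bright (or the reverse). The nested-list pixel representation is an assumption made for this illustration.

```python
# Sketch of the brightness sorting variant of S201. Images are assumed to be
# 2D grids of intensity values (nested lists).

def mean_intensity(pixels):
    """Average intensity of a 2D pixel grid."""
    total = sum(sum(row) for row in pixels)
    count = sum(len(row) for row in pixels)
    return total / count

def sort_by_brightness(images, dark_to_bright=True):
    """Order images so overall brightness changes monotonically."""
    return sorted(images, key=mean_intensity, reverse=not dark_to_bright)
```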
- the tone preprocessing includes: calculating the hue difference from the hues of the adjacent images, taking the absolute value of the hue difference, and, when the absolute value exceeds the first threshold, determining from the sign of the difference which image's hue needs
- to be adjusted and the hue adjustment to apply, and then adjusting the hue of that image accordingly.
- calculating the hue difference of the adjacent images comprises: subtracting the average hue value of the second image from the average hue value of the first image of the adjacent pair to obtain the hue difference of the adjacent images.
- adjusting the image according to the hue adjustment includes: if the hue difference is greater than zero, reducing the hue of each pixel of the first image or increasing the hue of each pixel of the second image; if the hue difference is less than zero, increasing the hue of each pixel of the first image or reducing the hue of each pixel of the second image.
- FIG. 3 is a flowchart of the hue preprocessing for the gradient animation in an embodiment; the process includes:
- S301: calculate the hue difference between the first image and the second image of the adjacent pair. Specifically: first, convert the first image S and the second image D to the HSI color model, so that the hue value of any pixel in the image can be obtained;
- scale the second image to the same size as the first image, the width and height of the first image being W and H respectively, in pixels;
- then obtain the hue values of the corresponding pixels on the first and second images and compute the sum of the differences of those hue values, Hdt, as in formula (1):
- Hdt = Σ_ij ( Hue(S_ij) − Hue(D_ij) )    (1)
- Hdm = Hdt / (w × h)    (2)
- if Hdm is greater than zero, the average hue of the second image's pixels is relatively low, and S303 appropriately increases the hue values of all pixels of the second image.
- with the first threshold set to 0.1,
- the hue value of each pixel of the second image is increased by 0.8 × |Hdm|;
- if Hdm is less than zero, the average hue of the first image's pixels is relatively low, and S305 appropriately increases the hue values of all pixels of the first image:
- the hue value of each pixel of the first image
- is increased by 0.8 × |Hdm|.
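The hue preprocessing of formulas (1) and (2) can be sketched as below: compute the mean hue difference Hdm between the two images and, when |Hdm| exceeds the 0.1 threshold, raise the lower-hued image by 0.8 × |Hdm|. The nested-list hue grids (already converted to the HSI model and equally sized) are assumptions made for this illustration.

```python
# Sketch of the hue preprocessing (formulas (1) and (2)).

THRESHOLD = 0.1   # first threshold from the embodiment

def mean_hue_difference(hue_s, hue_d):
    """Hdm = Hdt / (w * h), with Hdt the sum of per-pixel hue differences."""
    hdt = sum(s - d for row_s, row_d in zip(hue_s, hue_d)
                    for s, d in zip(row_s, row_d))
    w, h = len(hue_s[0]), len(hue_s)
    return hdt / (w * h)

def preprocess_hue(hue_s, hue_d):
    """Return adjusted copies of the two hue grids."""
    hdm = mean_hue_difference(hue_s, hue_d)
    if abs(hdm) <= THRESHOLD:
        return hue_s, hue_d
    shift = 0.8 * abs(hdm)
    if hdm > 0:   # second image's average hue is lower: raise it
        hue_d = [[v + shift for v in row] for row in hue_d]
    else:         # first image's average hue is lower: raise it
        hue_s = [[v + shift for v in row] for row in hue_s]
    return hue_s, hue_d
```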
- in an embodiment, the method of S203 further includes: performing luminance preprocessing on adjacent ones of the plurality of images to reduce the luminance difference of the adjacent images;
- determining the number of intermediate frames between adjacent images then includes: determining the number according to the feature point difference degree of the adjacent images after both tone preprocessing and luminance preprocessing.
- the luminance preprocessing specifically includes: calculating the luminance difference of the adjacent images from their luminances, taking the absolute value of the luminance difference, and, when the absolute value exceeds the second threshold, determining from the sign of the difference which image's luminance needs to be adjusted and the adjustment to apply, and then adjusting the luminance of that image accordingly.
- calculating the luminance difference of the adjacent images comprises: subtracting the average luminance value of the second image from the average luminance value of the first image of the adjacent pair to obtain the luminance difference of the adjacent images.
- adjusting the image according to the luminance adjustment includes: if the luminance difference is greater than zero, reducing the luminance of each pixel of the first image or increasing the luminance of each pixel of the second image; if the luminance difference is less than zero, increasing the luminance of each pixel of the first image or reducing the luminance of each pixel of the second image.
- FIG. 4 is a flowchart of the luminance preprocessing for the gradient animation in an embodiment of the present invention; the process includes:
- S401: calculate the luminance similarity between the first image and the second image as follows:
- convert the first image S and the second image D to the HSI color model to obtain the luminance value of any pixel in the image;
- scale the second image to the same size as the first image, the width and height of the first image being W and H respectively, in pixels;
- construct corresponding rectangular areas on the first and second images, the rectangle having width w (0 < w ≤ W) and height h (0 < h ≤ H), in pixels;
- then obtain the luminance values of the grid point pixels on the two images and compute the sum of the differences of the luminance values at corresponding grid points (intensity difference total, Idt), as in formula (3):
- Idt = Σ_ij ( Intensity(S_ij) − Intensity(D_ij) )    (3)
- Idm = Idt / (w × h), the image pixel average luminance difference
- the luminance similarity of the first and second images is represented by their average luminance difference Idm,
- where w and h are the rectangle's width and height, respectively.
- if Idm is greater than zero, the average luminance of the second image's pixels is relatively small, and S403 brings the second image and the first image closer by appropriately increasing the luminance values of all pixels of the second image:
- with the second threshold set to 0.1, the luminance value of each pixel of the second image is increased by 0.8 × |Idm|;
- if Idm is less than zero, the average luminance of the second image's pixels is relatively large, and the two images can be brought closer by appropriately increasing the luminance values of all pixels of the first image:
- with the second threshold set to 0.1, the luminance value of each pixel of the first image is increased by 0.8 × |Idm|.
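The luminance preprocessing of formulas (3) and (4) follows the same pattern, sampled on grid points. In this sketch, the grid step, the nested-list intensity grids, and averaging over the sampled count (which equals Idt/(w × h) when every pixel is sampled) are assumptions made for illustration.

```python
# Sketch of the luminance preprocessing (formulas (3) and the Idm average):
# sample intensities on grid points, compute the mean difference Idm, and
# raise the darker image by 0.8 * |Idm| when |Idm| exceeds the threshold.

THRESHOLD = 0.1   # second threshold from the embodiment

def mean_intensity_difference(int_s, int_d, step=1):
    """Average intensity difference over grid points spaced `step` apart."""
    idt, count = 0.0, 0
    for i in range(0, len(int_s), step):
        for j in range(0, len(int_s[0]), step):
            idt += int_s[i][j] - int_d[i][j]
            count += 1
    return idt / count

def preprocess_intensity(int_s, int_d):
    """Return adjusted copies of the two intensity grids."""
    idm = mean_intensity_difference(int_s, int_d)
    if abs(idm) <= THRESHOLD:
        return int_s, int_d
    shift = 0.8 * abs(idm)
    if idm > 0:   # second image darker on average: brighten it
        int_d = [[v + shift for v in row] for row in int_d]
    else:         # first image darker on average: brighten it
        int_s = [[v + shift for v in row] for row in int_s]
    return int_s, int_d
```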
- in an embodiment, the differences in hue and luminance of the adjacent images to be processed are first evaluated automatically.
- if the evaluated difference is large, the tone preprocessing is performed before the subsequent gradient animation processing; if the evaluated difference is small, the subsequent gradient animation processing is applied to the group of images directly.
- S205, determining the number of intermediate frames according to the similarity of the adjacent images, comprises: determining the number of intermediate frames according to the feature point difference degree of the adjacent images.
- the feature point extraction method includes:
- training on a face image library with the Active Shape Model (ASM) algorithm; the training result yields a feature point detection file;
- ASM: Active Shape Model;
- the Adaboost algorithm is used to obtain the face region in the image;
- Adaboost is among the most commonly used face detection algorithms;
- the feature point detection file output by ASM training is then applied within the face region to locate the face feature points;
- in this embodiment, 45 face feature points are used.
- the feature point difference degree uses a normalized absolute distance method.
- the adjacent images are referred to as the source image and the target image, respectively, in the order in which they are played. The method is as follows:
- the present invention uses the relative difference between the features of the source image and the target image
- to represent the feature difference degree of the two images.
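The feature point difference degree can be sketched as the average pixel distance between corresponding feature points. The excerpt names a "normalized absolute distance" but does not spell out the normalizer; dividing by the image diagonal is one plausible choice, assumed here for illustration.

```python
# Sketch of the feature point difference degree between source and target
# feature points (e.g. the 45 ASM face points). Normalization by the image
# diagonal is an assumption.
import math

def feature_difference(src_pts, dst_pts, image_size):
    """src_pts/dst_pts: lists of (x, y) pairs; returns a scale-free value."""
    w, h = image_size
    diag = math.hypot(w, h)
    dists = [math.hypot(sx - dx, sy - dy)
             for (sx, sy), (dx, dy) in zip(src_pts, dst_pts)]
    return sum(dists) / (len(dists) * diag)
```

A value near 0 means the two faces' feature layouts nearly coincide; larger values indicate a larger relative difference.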
- the gradient animation process chooses different numbers of intermediate frames for different source/target pairs. Determining the number of intermediate frames according to the feature point difference value of the adjacent images includes: when the feature point difference value lies in the first interval, setting the number of intermediate frames to the first quantity; when it lies in the second interval, setting it to the second quantity; where the values of the first interval are smaller than those of the second interval, and the first quantity is less than the second quantity.
- the adjacent images are referred to as the source image and the target image, respectively, in order of playback.
- the greater the feature similarity between the source image and the target image (the smaller the relative difference of their features), the fewer intermediate frames the gradient animation needs; the greater the feature difference between the source image and the target image, the more intermediate frames are needed.
- FIG. 5 is a flowchart of determining the number of intermediate frames in an embodiment of the present invention; the process includes:
- the value range of the first interval is (0, L) and that of the second interval is (L, 2L); a preferred value of L in this embodiment is 0.25.
- the first quantity in this embodiment is N and the second quantity is 1.5 × N, N being a natural number.
- N can take any value between 16 and 24, and those skilled in the art can choose other natural numbers according to actual needs.
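The frame-count rule from FIG. 5 can be sketched as below, with L = 0.25 and N between 16 and 24 as in the embodiment. Behavior outside (0, 2L) is not specified in the excerpt; mapping every value at or above L to the second quantity is an assumption made here.

```python
# Sketch of the interval-based frame-count rule: the first interval (0, L)
# yields N intermediate frames, the second interval (L, 2L) yields 1.5 * N.

L = 0.25

def intermediate_frame_count(difference, n=20):
    """Map a feature point difference degree to a number of frames."""
    if difference < L:
        return n          # first quantity
    return int(1.5 * n)   # second quantity
```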
- an intermediate frame image is generated from the source image and the target image.
- the process includes:
- SCP: Source Control Points;
- DCP: Destination Control Points;
- ICP: Intermediate Control Points.
- ICP(t) = (1 − t) × SCP + t × DCP,  t ∈ [0, 1]    (11)
- SCP and ICP(t) are used as the source and target control points to warp the source image (Source Image, SI), obtaining the source warped image SWI(t); DCP and ICP(t) are likewise used as the source and target control points to warp the target image (Destination Image, DI), obtaining the destination warped image DWI(t); SWI(t) and DWI(t) are then fused according to formula (12) to obtain the intermediate image (Inter Image, INTER_I(t)).
- here N is the number of intermediate images; return to S603.
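The intermediate frame generation can be sketched as follows. Formula (11) interpolates the intermediate control points; the warp itself is outside this excerpt, and formula (12) is not reproduced in the text, so the linear cross-dissolve used for the fusion step below is an assumption made for illustration.

```python
# Sketch of intermediate frame generation around formula (11).

def interpolate_control_points(scp, dcp, t):
    """ICP(t) = (1 - t) * SCP + t * DCP, for t in [0, 1]  -- formula (11)."""
    return [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
            for (sx, sy), (dx, dy) in zip(scp, dcp)]

def fuse(swi, dwi, t):
    """Cross-dissolve of the two warped images SWI(t) and DWI(t)
    (an assumed form of formula (12)); images as 2D intensity grids."""
    return [[(1 - t) * s + t * d for s, d in zip(rs, rd)]
            for rs, rd in zip(swi, dwi)]
```

For N intermediate frames, t would step through k / (N + 1) for k = 1 .. N, producing one fused frame per step.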
- in an embodiment, the gradient animation is one with a fixed playing duration.
- the method further includes determining whether the current remaining time of the playing duration is greater than zero;
- performing tone preprocessing on adjacent images then includes: performing tone preprocessing on adjacent images in the plurality of images only if the current remaining time is greater than zero.
- FIG. 6 is a schematic structural diagram of an embodiment of a device for generating a gradient animation from a plurality of images according to the present invention.
- the apparatus includes: a tone preprocessing module 601, configured to perform tone preprocessing on adjacent ones of the plurality of images to reduce the difference in hue between the adjacent images, so that the generated animation transitions more smoothly from one of the adjacent images to the other;
- an intermediate frame generation module 604, configured to determine the number of intermediate frames according to the feature point difference degree of the adjacent images tone-preprocessed by the tone preprocessing module, and to generate that number of intermediate frame images between adjacent images by an image warping technique;
- and an animation generating module 605, configured to generate a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images of the plurality of images.
- in an embodiment, the plurality of images are face images; the tone preprocessing module is configured to perform tone preprocessing on adjacent images in the plurality of face images to reduce the difference in hue between the adjacent images.
- another embodiment of the present invention further includes a sorting module for sorting the plurality of face images to reduce the overall difference between adjacent images, so that the generated animation transitions more smoothly from one of the adjacent images to another;
- the tone preprocessing module is then configured to perform tone preprocessing on adjacent ones of the plurality of images processed by the sorting module.
- the sorting module is configured to sort the plurality of face images according to a face size.
- the sorting module is configured to sort the plurality of face images according to image brightness
- the tone preprocessing module 601 is configured to calculate the hue difference of the adjacent images from their hues and take its absolute value; when the absolute value exceeds the first threshold, the module determines from the sign of the difference which image's hue needs to be adjusted and the adjustment to apply, and adjusts the hue of that image accordingly.
- the intermediate frame generating module 603 determining the number of intermediate frames according to the similarity of the adjacent images comprises: determining the number of intermediate frames according to the feature point difference degree of the adjacent images.
- specifically: when the feature point difference value of the adjacent images lies in the first interval, the number of intermediate frames is the first quantity; when it lies in the second interval, the number is the second quantity; where the values of the first interval are smaller than those of the second interval, and the first quantity is less than the second quantity.
- in an embodiment, the device further includes a brightness preprocessing module, configured to perform brightness preprocessing on adjacent images in the plurality of face images; the intermediate frame generating module is then configured to generate the gradient animation from the adjacent images subjected to both tone preprocessing and brightness preprocessing.
- the brightness preprocessing module is specifically configured to: obtain the brightness difference of the adjacent images from their brightnesses and take its absolute value; when the absolute value exceeds the second threshold, determine from the sign of the difference which image's brightness needs to be adjusted and the adjustment to apply, and adjust the brightness of that image accordingly.
- in an embodiment, the gradient animation is an animation with a fixed playing duration.
- the device further includes: a determining module, configured to determine whether the current remaining time of the playing duration is greater than zero; the tone preprocessing module performs tone preprocessing on the adjacent images of the plurality of images when the current remaining time of the playing duration is greater than zero.
- an embodiment of the invention provides a method for generating a background for a music player. Please refer to FIG. 7, a flowchart of an embodiment of the present invention, including:
- S701: receive a plurality of images for generating an animation;
- S703: perform tone preprocessing on adjacent ones of the plurality of images to reduce the difference in hue between the adjacent images;
- S705: determine the number of intermediate frames according to the feature point difference degree of the tone-preprocessed adjacent images, generate that number of intermediate frame images between adjacent images by an image warping technique, insert the intermediate frame images between adjacent images, and generate a gradient animation from the plurality of images and the intermediate frame images inserted between all adjacent images in the plurality of images;
- S707: use the gradient animation generated from the plurality of images as the playback background of the music player.
- in an embodiment, the plurality of images are face images, and performing tone preprocessing on adjacent ones of the plurality of images includes: performing tone preprocessing on adjacent images in the plurality of face images.
- the tone preprocessing module is configured to perform tone preprocessing on adjacent images in the plurality of face images to reduce the difference in hue between the adjacent images.
- before the number of intermediate frames is determined according to the feature point difference degree of the adjacent images,
- the method includes: performing feature point positioning on the face images in the plurality of images.
- the feature point positioning method includes: locating the feature points of a face by automatic detection.
- locating the feature points of a face by automatic detection means: for a given picture, on the basis of face detection, the key feature points of the face are detected automatically without further manual operation by the user, making face positioning convenient and fast.
- the automatic detection of the face feature points is performed with the Active Shape Model algorithm.
- Performing feature point positioning on a face image includes: positioning the feature points by overall dragging or single-point dragging.
- The overall dragging method divides the feature points of the face image into feature points of the face contour, the eyebrows, the eyes, the nose, and the mouth; the feature points of each of these five parts are dragged as a whole.
- The left and right eyebrows and the left and right eyes may also be dragged separately.
- Overall dragging avoids two problems of manual positioning: automatically detected feature points that lie far from the actual feature positions of the face, and the tedium of moving feature points one by one.
- The single-point dragging method selects feature points one by one to achieve accurate positioning of facial features.
- The embodiment of the invention mainly adopts the automatic detection method for positioning feature points.
- When the automatically detected feature points are unsatisfactory, they are adjusted by overall dragging or single-point dragging on the face image.
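The intermediate frame generating module (described below) computes the feature-point difference degree from the pixel distances between corresponding feature points of adjacent images. The sketch below assumes the difference degree is the mean Euclidean distance and that the frame count grows linearly with it; both the averaging and the 2-pixels-per-frame step are illustrative assumptions, not values stated in the patent.

```python
import math


def feature_point_difference(points_a, points_b):
    """Mean Euclidean pixel distance between corresponding feature
    points of two adjacent face images. Averaging is an assumption:
    the text only states the degree is computed from these distances."""
    assert len(points_a) == len(points_b) and points_a
    dists = [math.dist(p, q) for p, q in zip(points_a, points_b)]
    return sum(dists) / len(dists)


def intermediate_frame_count(diff, pixels_per_frame=2.0):
    """Map the difference degree to a frame count: a larger facial
    displacement yields more intermediate frames for a smooth morph.
    The 2-pixels-per-frame step is an illustrative choice."""
    return max(1, math.ceil(diff / pixels_per_frame))
```

For example, if one of two feature points moves by 5 pixels and the other is unchanged, the difference degree is 2.5 and two intermediate frames are generated.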
- The method further comprises: acquiring the current remaining time of the music file by capturing the timestamp of the music file, and determining whether the current remaining time is greater than zero; tone preprocessing is performed on adjacent images among the plurality of images only when the current remaining time is greater than zero.
- Photos are loaded dynamically while the music is playing: at any moment only the two photos to be face-morphed are held in memory, and they are destroyed after the transformation. The next two images are then loaded, so memory consumption stays low.
- The time interval between loading photos affects the fluency of the playback background. If it is too small, the effect is too dazzling: the original face images of the two photos cannot be discerned, and the frames are always in the middle of a change.
- The preferred time interval adopted in the embodiment of the present invention is 3-5 seconds, but the invention is not limited to this value.
- The embodiment of the present invention provides a music player, comprising: an 801 tone preprocessing module, configured to perform tone preprocessing on adjacent images among the plurality of images to reduce the difference in hue between the adjacent images; an 803 intermediate frame generating module, configured to determine the number of intermediate frames according to the feature-point difference degree of the adjacent images processed by the tone preprocessing module, where the feature-point difference degree is calculated from the pixel distances between corresponding feature points of the adjacent images, to generate that number of intermediate frame images between the adjacent images by image deformation technology, and to insert the intermediate frame images between the adjacent images; an 805 animation generating module, configured to generate a morphing animation from the plurality of images and the intermediate frame images inserted between all adjacent images; and a playing module, configured to play a music file and, when the remaining playing time of the music file is greater than zero, to display the morphing animation on the video display interface of the music file.
- The music player provided by the embodiment of the present invention further includes: an 807 storage module, configured to store the music file and the plurality of images.
- The music player provided by the embodiment of the present invention further includes: an 809 display module, configured to present the video display interface of the music file.
- The modules in the apparatus of the embodiments may be distributed in the apparatus as described in the embodiment, or may be located, with corresponding changes, in one or more apparatuses different from the one in the embodiment.
- The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.
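The wiring of modules 801–809 can be sketched as follows. The class and attribute names are hypothetical, and each module is reduced to a callable; only the data flow between modules is illustrated, not their internal algorithms.

```python
class MusicPlayer:
    """Sketch of the module layout (801-809); bodies are placeholders."""

    def __init__(self, tone_preprocessor, frame_generator,
                 animation_generator, storage, display):
        self.tone_preprocessor = tone_preprocessor      # 801
        self.frame_generator = frame_generator          # 803
        self.animation_generator = animation_generator  # 805
        self.storage = storage                          # 807
        self.display = display                          # 809

    def play(self, music_file, images, remaining_time):
        # 801 -> 803 -> 805: preprocess, insert frames, build animation.
        animation = self.animation_generator(
            self.frame_generator(self.tone_preprocessor(images)))
        # Playing module: show the animation while time remains.
        if remaining_time() > 0:
            self.display(animation)
        return animation
```

Because each module is injected as a callable, the same skeleton can host either the distributed or the combined module arrangements mentioned above.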
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
ES11864760.1T ES2625263T3 (es) | 2011-09-27 | 2011-09-27 | Procedimiento y aparato para generar animación de metamorfosis |
KR1020127024480A KR101388542B1 (ko) | 2011-09-27 | 2011-09-27 | 모핑 애니메이션을 생성하기 위한 방법 및 장치 |
CN201180002501.8A CN102449664B (zh) | 2011-09-27 | 2011-09-27 | 一种渐变动画的生成方法和装置 |
EP11864760.1A EP2706507B1 (en) | 2011-09-27 | 2011-09-27 | Method and apparatus for generating morphing animation |
PCT/CN2011/080199 WO2012149772A1 (zh) | 2011-09-27 | 2011-09-27 | 一种渐变动画的生成方法和装置 |
JP2013512750A JP5435382B2 (ja) | 2011-09-27 | 2011-09-27 | モーフィングアニメーションを生成するための方法および装置 |
US13/627,700 US8531484B2 (en) | 2011-09-27 | 2012-09-26 | Method and device for generating morphing animation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/080199 WO2012149772A1 (zh) | 2011-09-27 | 2011-09-27 | 一种渐变动画的生成方法和装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/627,700 Continuation US8531484B2 (en) | 2011-09-27 | 2012-09-26 | Method and device for generating morphing animation |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012149772A1 true WO2012149772A1 (zh) | 2012-11-08 |
Family
ID=46010200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2011/080199 WO2012149772A1 (zh) | 2011-09-27 | 2011-09-27 | 一种渐变动画的生成方法和装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US8531484B2 (zh) |
EP (1) | EP2706507B1 (zh) |
JP (1) | JP5435382B2 (zh) |
KR (1) | KR101388542B1 (zh) |
CN (1) | CN102449664B (zh) |
ES (1) | ES2625263T3 (zh) |
WO (1) | WO2012149772A1 (zh) |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014017114A (ja) * | 2012-07-09 | 2014-01-30 | Panasonic Corp | 照明システム |
US9792714B2 (en) * | 2013-03-20 | 2017-10-17 | Intel Corporation | Avatar-based transfer protocols, icon generation and doll animation |
US9286710B2 (en) * | 2013-05-14 | 2016-03-15 | Google Inc. | Generating photo animations |
CN104182718B (zh) * | 2013-05-21 | 2019-02-12 | 深圳市腾讯计算机系统有限公司 | 一种人脸特征点定位方法及装置 |
CN103413342B (zh) * | 2013-07-25 | 2016-06-15 | 南京师范大学 | 一种基于像素点的图像文字渐变方法 |
CN104424295B (zh) * | 2013-09-02 | 2019-09-24 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
CN103927175A (zh) * | 2014-04-18 | 2014-07-16 | 深圳市中兴移动通信有限公司 | 背景界面随音频动态变化的方法和终端设备 |
US10049141B2 (en) * | 2014-10-10 | 2018-08-14 | salesforce.com,inc. | Declarative specification of visualization queries, display formats and bindings |
CN104299252B (zh) * | 2014-10-17 | 2018-09-07 | 惠州Tcl移动通信有限公司 | 一种图片显示切换的过渡方法及其系统 |
CN104992462B (zh) * | 2015-07-20 | 2018-01-30 | 网易(杭州)网络有限公司 | 一种动画播放方法、装置及终端 |
CN106651998B (zh) * | 2015-10-27 | 2020-11-24 | 北京国双科技有限公司 | 基于Canvas的动画播放速度调整方法及装置 |
CN106887030B (zh) * | 2016-06-17 | 2020-03-06 | 阿里巴巴集团控股有限公司 | 一种动画生成方法和装置 |
CN106447754B (zh) * | 2016-08-31 | 2019-12-24 | 和思易科技(武汉)有限责任公司 | 病理动画的自动生成方法 |
CN106297479A (zh) * | 2016-08-31 | 2017-01-04 | 武汉木子弓数字科技有限公司 | 一种基于ar增强现实涂鸦技术的歌曲教学方法及系统 |
CN106445332A (zh) * | 2016-09-05 | 2017-02-22 | 深圳Tcl新技术有限公司 | 图标显示方法及系统 |
US10395412B2 (en) | 2016-12-30 | 2019-08-27 | Microsoft Technology Licensing, Llc | Morphing chart animations in a browser |
US10304225B2 (en) | 2016-12-30 | 2019-05-28 | Microsoft Technology Licensing, Llc | Chart-type agnostic scene graph for defining a chart |
US11086498B2 (en) | 2016-12-30 | 2021-08-10 | Microsoft Technology Licensing, Llc. | Server-side chart layout for interactive web application charts |
JP6796015B2 (ja) * | 2017-03-30 | 2020-12-02 | キヤノン株式会社 | シーケンス生成装置およびその制御方法 |
CN107316236A (zh) * | 2017-07-07 | 2017-11-03 | 深圳易嘉恩科技有限公司 | 基于flex的票据图片预处理编辑器 |
CN107341841B (zh) * | 2017-07-26 | 2020-11-27 | 厦门美图之家科技有限公司 | 一种渐变动画的生成方法及计算设备 |
CN107734322B (zh) * | 2017-11-15 | 2020-09-22 | 深圳超多维科技有限公司 | 用于裸眼3d显示终端的图像显示方法、装置及终端 |
CN108769361B (zh) * | 2018-04-03 | 2020-10-27 | 华为技术有限公司 | 一种终端壁纸的控制方法、终端以及计算机可读存储介质 |
CN109068053B (zh) * | 2018-07-27 | 2020-12-04 | 香港乐蜜有限公司 | 图像特效展示方法、装置和电子设备 |
CN109947338B (zh) * | 2019-03-22 | 2021-08-10 | 腾讯科技(深圳)有限公司 | 图像切换显示方法、装置、电子设备及存储介质 |
CN110049351B (zh) * | 2019-05-23 | 2022-01-25 | 北京百度网讯科技有限公司 | 视频流中人脸变形的方法和装置、电子设备、计算机可读介质 |
CN110942501B (zh) * | 2019-11-27 | 2020-12-22 | 深圳追一科技有限公司 | 虚拟形象切换方法、装置、电子设备及存储介质 |
CN111524062B (zh) * | 2020-04-22 | 2023-11-24 | 北京百度网讯科技有限公司 | 图像生成方法和装置 |
CN112508773B (zh) | 2020-11-20 | 2024-02-09 | 小米科技(武汉)有限公司 | 图像处理方法及装置、电子设备、存储介质 |
CN113313790A (zh) * | 2021-05-31 | 2021-08-27 | 北京字跳网络技术有限公司 | 视频生成方法、装置、设备及存储介质 |
CN113411581B (zh) * | 2021-06-28 | 2022-08-05 | 展讯通信(上海)有限公司 | 视频序列的运动补偿方法、系统、存储介质及终端 |
CN114173067B (zh) * | 2021-12-21 | 2024-07-12 | 科大讯飞股份有限公司 | 一种视频生成方法、装置、设备及存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07200865A (ja) * | 1993-12-27 | 1995-08-04 | Casio Comput Co Ltd | 画像変形方法およびその装置 |
US20060077206A1 (en) * | 2004-09-13 | 2006-04-13 | Denny Jaeger | System and method for creating and playing a tweening animation using a graphic directional indicator |
JP2007034724A (ja) * | 2005-07-27 | 2007-02-08 | Glory Ltd | 画像処理装置、画像処理方法および画像処理プログラム |
KR20080018407A (ko) * | 2006-08-24 | 2008-02-28 | 한국문화콘텐츠진흥원 | 3차원 캐릭터의 변형을 제공하는 캐릭터 변형 프로그램을기록한 컴퓨터 판독가능 기록매체 |
CN101236598A (zh) * | 2007-12-28 | 2008-08-06 | 北京交通大学 | 基于多尺度总体变分商图像的独立分量分析人脸识别方法 |
CN101242476A (zh) * | 2008-03-13 | 2008-08-13 | 北京中星微电子有限公司 | 图像颜色自动校正方法及数字摄像系统 |
CN101295354A (zh) * | 2007-04-23 | 2008-10-29 | 索尼株式会社 | 图像处理装置、成像装置、图像处理方法和计算机程序 |
CN101923726A (zh) * | 2009-06-09 | 2010-12-22 | 华为技术有限公司 | 一种语音动画生成方法及系统 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6828972B2 (en) * | 2002-04-24 | 2004-12-07 | Microsoft Corp. | System and method for expression mapping |
JP2005135047A (ja) | 2003-10-29 | 2005-05-26 | Kyocera Mita Corp | 動画生成機能を有する通信装置 |
JP4339675B2 (ja) * | 2003-12-24 | 2009-10-07 | オリンパス株式会社 | グラデーション画像作成装置及びグラデーション画像作成方法 |
JP5078334B2 (ja) | 2005-12-28 | 2012-11-21 | 三洋電機株式会社 | 非水電解質二次電池 |
JP2011181996A (ja) | 2010-02-26 | 2011-09-15 | Casio Computer Co Ltd | 表示順序決定装置、画像表示装置及びプログラム |
- 2011
- 2011-09-27 JP JP2013512750A patent/JP5435382B2/ja active Active
- 2011-09-27 ES ES11864760.1T patent/ES2625263T3/es active Active
- 2011-09-27 CN CN201180002501.8A patent/CN102449664B/zh active Active
- 2011-09-27 KR KR1020127024480A patent/KR101388542B1/ko active IP Right Grant
- 2011-09-27 EP EP11864760.1A patent/EP2706507B1/en active Active
- 2011-09-27 WO PCT/CN2011/080199 patent/WO2012149772A1/zh active Application Filing
- 2012
- 2012-09-26 US US13/627,700 patent/US8531484B2/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07200865A (ja) * | 1993-12-27 | 1995-08-04 | Casio Comput Co Ltd | 画像変形方法およびその装置 |
US20060077206A1 (en) * | 2004-09-13 | 2006-04-13 | Denny Jaeger | System and method for creating and playing a tweening animation using a graphic directional indicator |
JP2007034724A (ja) * | 2005-07-27 | 2007-02-08 | Glory Ltd | 画像処理装置、画像処理方法および画像処理プログラム |
KR20080018407A (ko) * | 2006-08-24 | 2008-02-28 | 한국문화콘텐츠진흥원 | 3차원 캐릭터의 변형을 제공하는 캐릭터 변형 프로그램을기록한 컴퓨터 판독가능 기록매체 |
CN101295354A (zh) * | 2007-04-23 | 2008-10-29 | 索尼株式会社 | 图像处理装置、成像装置、图像处理方法和计算机程序 |
CN101236598A (zh) * | 2007-12-28 | 2008-08-06 | 北京交通大学 | 基于多尺度总体变分商图像的独立分量分析人脸识别方法 |
CN101242476A (zh) * | 2008-03-13 | 2008-08-13 | 北京中星微电子有限公司 | 图像颜色自动校正方法及数字摄像系统 |
CN101923726A (zh) * | 2009-06-09 | 2010-12-22 | 华为技术有限公司 | 一种语音动画生成方法及系统 |
Non-Patent Citations (2)
Title |
---|
XIA, ZEJU: "Study on the Morphing of Color Facial Images Based on Improved MR-ASM", MASTER'S DISSERTATION OF UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA, CHINA MASTER'S THESES FULL-TEXT DATABASE (E-JOURNAL), ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, 15 January 2011 (2011-01-15), pages 6 - 16,43-46, AND 53-58, XP008168187 * |
ZHANG, YI: "Expressive Facial Animation Based on Visual Feature Extraction", MASTER'S DISSERTATION OF ZHEJIANG UNIVERSITY, CHINA MASTER'S THESES FULL-TEXT DATABASE (E-JOURNAL), ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, 15 August 2008 (2008-08-15), pages 11 - 12 AND 54-55, XP008167916 * |
Also Published As
Publication number | Publication date |
---|---|
KR20130045242A (ko) | 2013-05-03 |
JP2013531290A (ja) | 2013-08-01 |
CN102449664B (zh) | 2017-04-12 |
EP2706507A1 (en) | 2014-03-12 |
US20130079911A1 (en) | 2013-03-28 |
CN102449664A (zh) | 2012-05-09 |
US8531484B2 (en) | 2013-09-10 |
JP5435382B2 (ja) | 2014-03-05 |
KR101388542B1 (ko) | 2014-04-23 |
ES2625263T3 (es) | 2017-07-19 |
EP2706507B1 (en) | 2017-03-01 |
EP2706507A4 (en) | 2016-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2012149772A1 (zh) | 一种渐变动画的生成方法和装置 | |
US11595737B2 (en) | Method for embedding advertisement in video and computer device | |
CN104834898B (zh) | 一种人物摄影图像的质量分类方法 | |
CN108537782B (zh) | 一种基于轮廓提取的建筑物图像匹配与融合的方法 | |
KR101670282B1 (ko) | 전경-배경 제약 조건 전파를 기초로 하는 비디오 매팅 | |
CN103262119B (zh) | 用于对图像进行分割的方法和系统 | |
TWI607409B (zh) | 影像優化方法以及使用此方法的裝置 | |
WO2021169396A1 (zh) | 一种媒体内容植入方法以及相关装置 | |
Guo et al. | Improving photo composition elegantly: Considering image similarity during composition optimization | |
WO2007074844A1 (ja) | 顔パーツの位置の検出方法及び検出システム | |
CN111160291B (zh) | 基于深度信息与cnn的人眼检测方法 | |
CN109191444A (zh) | 基于深度残差网络的视频区域移除篡改检测方法及装置 | |
CN111127476A (zh) | 一种图像处理方法、装置、设备及存储介质 | |
CN108510500A (zh) | 一种基于人脸肤色检测的虚拟人物形象的头发图层处理方法及系统 | |
WO2022156214A1 (zh) | 一种活体检测方法及装置 | |
CN111242074A (zh) | 一种基于图像处理的证件照背景替换方法 | |
KR20190080388A (ko) | Cnn을 이용한 영상 수평 보정 방법 및 레지듀얼 네트워크 구조 | |
KR101124560B1 (ko) | 동영상 내의 자동 객체화 방법 및 객체 서비스 저작 장치 | |
CN103618846A (zh) | 一种视频分析中抑制光线突然变化影响的背景去除方法 | |
CN116580445A (zh) | 一种大语言模型人脸特征分析方法、系统及电子设备 | |
TWI373961B (en) | Fast video enhancement method and computer device using the method | |
TWI313136B (zh) | ||
Nguyen et al. | Novel evaluation metrics for seam carving based image retargeting | |
CN115100312B (zh) | 一种图像动漫化的方法和装置 | |
US9135687B2 (en) | Threshold setting apparatus, threshold setting method and recording medium in which program for threshold setting method is stored |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180002501.8 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 20127024480 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2013512750 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11864760 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2011864760 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011864760 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |