WO2016107356A1 - Dynamic interaction method and device based on static pictures - Google Patents

Dynamic interaction method and device based on static pictures

Info

Publication number
WO2016107356A1
WO2016107356A1 PCT/CN2015/095933
Authority
WO
WIPO (PCT)
Prior art keywords
picture
feature
feature area
points
mapping
Prior art date
Application number
PCT/CN2015/095933
Other languages
English (en)
French (fr)
Inventor
胡金辉
韩玉刚
唐雨
闫杨
任纪海
何振科
Original Assignee
北京奇虎科技有限公司
奇智软件(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201410854768.5A external-priority patent/CN104574473B/zh
Priority claimed from CN201410855538.0A external-priority patent/CN104571887B/zh
Priority claimed from CN201410854766.6A external-priority patent/CN104574483A/zh
Priority claimed from CN201410854767.0A external-priority patent/CN104574484B/zh
Application filed by 北京奇虎科技有限公司, 奇智软件(北京)有限公司 filed Critical 北京奇虎科技有限公司
Publication of WO2016107356A1 publication Critical patent/WO2016107356A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation

Definitions

  • The present invention relates to the field of image processing technologies, and in particular to a dynamic interaction method and device based on static pictures, a method and device for generating dynamic effects based on static pictures, a method and device for generating picture dynamic effects based on interaction operations, and a method and device for generating customizable dynamic pictures.
  • Commonly used dynamic pictures generally adopt the GIF format.
  • A GIF picture is composed of a plurality of pre-existing pictures, synthesized according to a specified interval between frames.
  • The file size grows geometrically with the number of playback frames, and the playback form is fixed.
  • Generating a GIF image requires a specialized application.
  • Alternatively, development kits or tools such as the OpenGL (Open Graphics Library) interface are used to generate dynamic effects by rendering.
  • This method relies on third-party development packages, library files, or tools; on the other hand, it places special requirements on configuration and resources, so it consumes substantial resources, renders slowly, and is not conducive to cross-platform porting.
  • It is difficult for average users to edit; the technical threshold is high and operation is difficult.
  • Moreover, generating a GIF image is usually very time-consuming; GIF files are relatively large, and impose high network bandwidth requirements during transmission.
  • Dynamic effects produced by per-pixel rendering are relatively inefficient; rendering takes a long time and is prone to lag.
  • In view of the above problems, the present invention has been made in order to provide, overcoming the above problems or at least partially solving or alleviating them, a dynamic interaction method and device based on static pictures, a method and device for generating dynamic effects based on static pictures, a method and device for generating picture dynamic effects based on interaction operations, and a method and device for generating customizable dynamic pictures.
  • a dynamic interaction method based on a static picture including:
  • when a specified interaction operation event is monitored, determining a mapping reference object according to the specified interaction operation event;
  • mapping at least part of the pixel points in the feature area to one or more frame distortion pictures according to the mapping reference object to drive the static picture to change frame by frame.
  • a method for generating a dynamic effect based on a still picture including:
  • the feature area has feature points
  • a dynamic effect is generated based on the one or more frames of distorted pictures.
  • a method for generating a picture dynamic effect based on an interaction operation including:
  • At least part of the pixel points in the feature area are texture mapped according to a preset mode, and a dynamic effect including a one-frame or multi-frame warped picture change is generated.
  • a method of generating a customizable dynamic graph comprising:
  • reading dynamic information from a dynamic interaction file; the dynamic interaction file includes a script object and a static picture;
  • mapping by the script object, at least part of the pixel points in the static picture to one or more frame distortion pictures according to the dynamic information, to drive the static picture to change frame by frame.
  • a dynamic interactive device based on a static picture including:
  • a determining module configured to determine a mapping reference object according to the specified interaction event when the specified interaction event is monitored
  • mapping module configured to map at least part of the pixel points in the feature area into one or more frame distortion pictures according to the mapping reference object, to drive the static picture to change frame by frame.
  • an apparatus for generating a dynamic effect based on a still picture including:
  • a selection module configured to select a feature area in the static picture; the feature area has a feature point;
  • mapping module configured to map pixel points of the static picture into one or more frame distortion pictures according to the feature point and the one or more reference points;
  • a generating module configured to generate a dynamic effect based on the one or more frames of the distorted picture.
  • an apparatus for generating a picture dynamic effect based on an interaction operation including:
  • a determining module configured to determine a moving direction of at least a part of the pixel points in the feature area according to the specified operation event when the specified interactive operation event is monitored;
  • a mapping module configured to perform texture mapping on at least part of the pixel points in the feature area according to the preset mode in the moving direction, to generate a dynamic effect including a one-frame or multi-frame warped picture change.
  • an apparatus for generating a customizable dynamic graph comprising:
  • a reading module adapted to read dynamic information from a dynamic interaction file;
  • the dynamic interaction file includes a script object and a static image;
  • mapping module configured to map, by the script object, at least part of the pixels in the static picture into one or more frame distortion pictures according to the dynamic information, to drive the static picture to change frame by frame.
  • a computer program comprising computer-readable code which, when run on a computing device, causes the computing device to perform the above static-picture-based dynamic interaction method, or the above method for generating a dynamic effect based on a static picture, or the above method for generating a picture dynamic effect based on an interaction operation, or the above method for generating a customizable dynamic picture.
  • a computer readable medium wherein the computer program described above is stored.
  • the embodiment of the present invention determines a mapping reference object by listening to the specified interaction operation event, so as to map at least part of the pixel points in the feature area of the static picture into one or more frame distortion pictures to drive the static picture to change frame by frame. No need to use special applications to generate dynamic effects, which reduces the technical threshold and improves the ease of operation. In addition, through the feedback of the user's interaction, the dynamic interaction of static images is realized, enriching the form of dynamic effects.
  • the embodiment of the present invention maps pixel points of a static picture to one or more frame distortion pictures based on one or more reference points to generate a dynamic effect, which is simple to calculate, does not depend on a third-party development package, a library file or a tool, and generates a rendering speed. Fast, less resource consumption, easy to cross-platform.
  • the invention performs texture mapping on at least some pixel points in the feature region based on the motion direction, and generates a dynamic effect including a one-frame or multi-frame warped picture change.
  • A dynamic effect is generated only for the feature region, which reduces the volume of the dynamic effect and the bandwidth occupied during transmission, making transmission convenient.
  • The time consumed in generating dynamic effects is reduced, so pictures from the network or the system album can quickly and vividly present a dynamic effect; the dynamic effect is generated quickly and conveniently, realizing real-time interaction between the dynamic effect and the user's interaction behavior.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a dynamic interaction method based on a static picture according to an embodiment of the present invention
  • FIG. 2 is a view schematically showing an example of a still picture according to an embodiment of the present invention
  • FIG. 3 is a view schematically showing an example of selecting feature regions in a still picture according to an embodiment of the present invention
  • FIGS. 4A and 4B are schematic diagrams showing an example of mapping of pixel points according to an embodiment of the present invention;
  • FIGS. 5A and 5B schematically illustrate exemplary diagrams of a warped picture in accordance with one embodiment of the present invention;
  • FIG. 6 is a flow chart showing the steps of an embodiment of a method for generating a dynamic effect based on a still picture according to an embodiment of the present invention
  • FIG. 7 is a flow chart showing the steps of an embodiment of a method for generating a picture dynamic effect based on an interaction operation according to an embodiment of the present invention
  • FIG. 8 is a flow chart showing the steps of an embodiment of a method for generating a customizable dynamic graph, in accordance with one embodiment of the present invention.
  • FIG. 9 is a block diagram showing the structure of an embodiment of a dynamic interaction device based on a static picture according to an embodiment of the present invention.
  • FIG. 10 is a block diagram showing the structure of an apparatus for generating a dynamic effect based on a still picture according to an embodiment of the present invention.
  • FIG. 11 is a block diagram showing the structure of an apparatus for generating a picture dynamic effect based on an interaction operation according to an embodiment of the present invention.
  • FIG. 12 is a block diagram showing the structure of an embodiment of a dynamic interaction device based on a static picture according to an embodiment of the present invention
  • Figure 13 is a schematic block diagram showing a computing device for performing the method according to the present invention.
  • Fig. 14 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • Referring to FIG. 1, a flow chart of the steps of a static-picture-based dynamic interaction method according to an embodiment of the present invention is shown, which may specifically include the following steps:
  • Step 101 Select a feature area in the static picture.
  • The embodiment of the present invention can be applied to a mobile device, for example, a mobile phone, a PDA (Personal Digital Assistant), a laptop computer, a palmtop computer, etc., and can also be applied to a fixed device, for example, a personal computer (PC), a notebook computer, and the like; this is not limited in the embodiment of the present invention.
  • These mobile devices or fixed devices can generally support operating systems including Android, iOS, Windows Phone, or Windows, and can usually store static pictures.
  • A static picture can be understood relative to a dynamic picture, that is, a picture without a dynamic effect.
  • The static picture may be in a format such as JPG, JPEG, PNG, or BMP, which is not limited by the embodiment of the present invention.
  • A certain area may be selected as a feature area in the static picture; the shape of the selected feature area may be a polygon, a circle, an ellipse, or the like, and a dynamic effect is generated for the image data in the feature area.
  • For example, an elliptical selection box as shown in FIG. 3 may be provided; the user may change the shape of the elliptical selection box and place it on the static picture, and the covered position may be determined as the feature area.
  • Step 102 When listening to the specified interaction operation event, determining a mapping reference object according to the specified interaction operation event;
  • The interaction operation event may be an event caused by an interaction operation performed by the user.
  • the mapping reference object may be an object that is referenced by the mapping location when mapping the pixel points in the feature area.
  • The feature area can exhibit an effect similar to a jiggling physical water ball (approximating a balloon filled with water), with the direction and manner of the jiggling changing according to different user interaction operations, such as shaking the mobile phone.
  • The specified interaction operation event may include a shaking event;
  • the mapping reference object may include one or more reference points
  • step 102 may include the following sub-steps:
  • Sub-step S11 one or more reference points are selected in the feature area of the still picture according to the shaking direction of the shaking event.
  • one or more reference points may be determined according to the specified interaction event when the specified interaction event is monitored.
  • the user can perform an interactive operation by shaking.
  • Through an acceleration sensor (such as a three-axis acceleration sensor) supported by an operating system such as Android, the accelerations of the device in the horizontal, vertical, and spatial vertical directions are respectively obtained; the sum of the squares of the accelerations in each direction is calculated, and its square root is taken as the comprehensive acceleration of the device's movement.
  • If the comprehensive acceleration is greater than a set acceleration threshold, it can be determined that a shaking event has been monitored, that is, that the user is interacting by shaking.
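The shake-detection rule above can be sketched in a few lines of Python. This is a minimal illustration, assuming the per-axis accelerations have already been read from a three-axis sensor; the value of `ACCEL_THRESHOLD` is our assumption, since the text only mentions "a set acceleration threshold".

```python
import math

# Assumed threshold in m/s^2 (illustrative; tune per device).
ACCEL_THRESHOLD = 15.0

def is_shake(ax, ay, az):
    """Combine the three per-axis accelerations into one magnitude
    (square root of the sum of squares) and compare it against the
    threshold; True means a shaking event is detected."""
    combined = math.sqrt(ax * ax + ay * ay + az * az)
    return combined > ACCEL_THRESHOLD
```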
  • According to the shaking direction of the shaking event, either in the same direction as the shaking or opposite to it, one or more continuously distributed reference points are selected in the feature area of the static picture.
  • The specified interaction operation event may include a screen click event;
  • the mapping reference object may include one or more reference points
  • step 102 may include the following sub-steps:
  • Sub-step S12 selecting one or more reference points in the feature area of the still picture according to the direction pointing to the occurrence of the screen click event.
  • the user can perform an interactive operation by clicking a screen (such as a feature area).
  • The direction pointing toward the occurrence of the screen click event may be the direction from the center point/center of gravity of the feature area toward the position where the screen click event occurs, or the opposite direction; one or more continuously distributed reference points are selected in the feature area of the static picture accordingly.
  • the determination manner of the above reference point is only an example.
  • the determination manner of the other reference points may be set according to the actual situation, for example, the position of the reference point is directly specified, which is not limited by the embodiment of the present invention.
  • those skilled in the art may also adopt other reference point determination manners according to actual needs, which is not limited by the embodiment of the present invention.
  • The specified interaction operation event may include a shaking event;
  • the mapping reference object may include a moving direction of at least a part of the pixel points in the feature region
  • step 102 may include the following sub-steps:
  • Sub-step S13 setting the shaking direction of the shaking event to the moving direction of at least part of the pixel points in the feature area.
  • the position of the vertices of the drawn graphic can be moved, and the movement of the position of the vertices can depend on the user's interaction.
  • the vertices of the drawn graphic can move in the direction of shaking, and the shape will change accordingly.
  • the vertices of the drawn graphic may move toward the touch point of the user, and the shape thereof also changes.
  • The feature area can exhibit an effect similar to a jiggling physical water ball (approximating a balloon filled with water), with the direction and manner of the jiggling changing according to different user interaction operations, such as shaking the mobile phone.
  • the user can perform an interactive operation by shaking.
  • According to the shaking direction of the shaking event, either in the same direction as the shaking or opposite to it, the moving direction of at least part of the pixel points in the feature area of the static picture is selected.
  • direction of motion may include acceleration.
  • The specified interaction operation event may include a screen click event;
  • the mapping reference object may include a moving direction of at least a part of the pixel points in the feature area
  • step 102 may include the following sub-steps:
  • Sub-step S14: setting the direction pointing toward the occurrence of the screen click event as the moving direction of at least part of the pixel points in the feature area.
  • the user can perform an interactive operation by clicking a screen (such as a feature area).
  • The direction pointing toward the occurrence of the screen click event may be the direction from the center point/center of gravity of the feature area toward the position where the screen click event occurs, or the opposite direction; this direction is taken as the moving direction of at least part of the pixel points in the feature area.
  • the manner of determining the direction of motion is only an example.
  • the manner of determining other directions of motion may be set according to an actual situation, which is not limited by the embodiment of the present invention.
  • those skilled in the art may also adopt other manners of determining the direction of motion according to actual needs, which is not limited by the embodiment of the present invention.
  • Step 103 Map at least part of the pixel points in the feature area to one or more frame distortion pictures according to the mapping reference object, to drive the static picture to change frame by frame.
  • The mapping reference object may serve as the amplitude reference of the distortion, and the static picture is mapped accordingly to generate a distorted picture.
  • the embodiment of the present invention determines a mapping reference object by listening to the specified interaction operation event, so as to map at least part of the pixel points in the feature area of the static picture into one or more frame distortion pictures to drive the static picture to change frame by frame. No need to use special applications to generate dynamic effects, which reduces the technical threshold and improves the ease of operation. In addition, through the feedback of the user's interaction, the dynamic interaction of static images is realized, enriching the form of dynamic effects.
  • step 103 may include the following sub-steps:
  • Sub-step S21 mapping pixel points of the still picture into a one-frame or multi-frame warp picture according to the feature point and the one or more reference points.
  • the feature area may have feature points for generating a dynamic effect.
  • the static picture may be mapped with the reference point as the reference of the feature point to generate a distorted picture.
  • The pixel points in the feature area of the static picture may be mapped along the direction in which the feature point points to the reference point, causing distortion of the static picture.
  • The farther the reference point is from the feature point, the larger the amplitude of the distortion; the closer it is, the smaller the amplitude.
  • When the reference point coincides with the feature point, the mapped picture may exhibit no distortion.
  • the feature area may include a convex area
  • The feature point may include the center of gravity.
  • Geometrically, a convex area means that the figure bulges outward and has no recess.
  • Algebraically, a convex region can be defined as follows: for any two points a and b in the set, t*a + (1-t)*b still belongs to the set, where 0 ≤ t ≤ 1; this means that the straight line segment connecting the two points a and b lies entirely within the set.
  • The geometric center of gravity, also known as the geometric center: when the object is homogeneous (its density is constant), the center of mass coincides with the centroid, for example, the intersection of the three medians of a triangle.
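As a small illustration of the geometric center described above, the centroid of a set of vertices can be computed as the mean of their coordinates (the function name is ours; points are plain (x, y) tuples):

```python
def centroid(points):
    """Geometric center (center of gravity) of a set of vertices;
    for a triangle this is the intersection of the three medians."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)
```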
  • sub-step S21 may comprise the following sub-steps:
  • Sub-step S211 generating a distorted picture
  • the initial state of the warped picture may be blank.
  • Sub-step S212 mapping pixel points on the first connection line in the feature area to the second connection line;
  • the first connection may be a line connecting the feature point and the edge point
  • the second connection may be a line connecting the current reference point and the edge point
  • The edge point may be a point on the edge of the feature area.
  • mapping may be performed according to the reference point.
  • The mapped pixel point may be located in the feature area; that is, pixel points inside the feature area are mapped within the feature area, rather than outside it.
  • C0 is a feature point (such as a center of gravity)
  • C1 can be a reference point
  • E can be an edge point
  • C0E can be a first connection
  • C1E can be a second connection.
  • The pixel point P0 on the first connection line C0E can be mapped onto the second connection line C1E to obtain the mapped point P1.
  • sub-step S212 may include the following sub-steps:
  • Sub-step S2121 calculating a relative position of the pixel on the first connection line in the feature area on the first connection line;
  • Sub-step S2122 copying the pixel points to the second connection line according to the relative position.
  • the relative positions of the pixel points can be expressed in a proportional relationship.
  • For example, the ratio R of the line segment C0P0 to the line segment C0E can be used as the relative position of the pixel point P0 on the first connection line C0E.
  • the point P1 on the line segment C1E is obtained such that the ratio of the line segment C1P1 to the second line C1E is R.
  • the relative position between the line segment including the pixel point and the first line may be used, which is not limited in this embodiment of the present invention.
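Sub-steps S2121 and S2122 can be sketched as follows; `map_point` is a hypothetical name, and points are plain (x, y) tuples:

```python
import math

def map_point(c0, c1, e, p0):
    """Map pixel point P0 on the first connection line C0E onto the
    second connection line C1E, preserving its relative position
    R = |C0P0| / |C0E| (sub-step S2121), then placing P1 on C1E so
    that |C1P1| / |C1E| = R (sub-step S2122)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    r = dist(c0, p0) / dist(c0, e)        # relative position on C0E
    return (c1[0] + r * (e[0] - c1[0]),   # point P1 on C1E
            c1[1] + r * (e[1] - c1[1]))
```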
  • Sub-step S213, copying the pixel points on the second connection line to the same position in the twisted picture;
  • Once the position of the pixel on the second connection line is determined, it can be copied to the same position in the distorted picture to perform distortion mapping of the image.
  • the sub-step S21 may further include the following sub-steps:
  • Sub-step S214 the pixel points outside the feature area are mapped to the same position in the warped picture.
  • the pixel on the static picture if it is outside the convex feature area, it can be directly copied to the corresponding position on the twisted picture without distortion.
  • the embodiment of the present invention may not map the pixel points outside the feature area, and only perform the mapping based on the pixel points in the feature area, which is not limited by the embodiment of the present invention.
  • the sub-step S21 may further include the following sub-steps:
  • Sub-step S215 performing pixel point superimposition processing on the pixel points whose positions overlap in the warped picture.
  • the embodiment of the present invention can perform pixel point superposition processing.
  • For example, if the RGB color mode is applied, various colors can be obtained by superimposing the red (R), green (G), and blue (B) color channels of the overlapping pixels on each other.
  • Alternatively, one pixel point may be selected as the pixel of that position (for example, a randomly selected pixel, or the pixel last copied to the position); other selection methods may also be used, which is not limited in this embodiment of the present invention.
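One possible superposition rule, averaging the channels of all pixels mapped to the same position, could look like this (the text equally allows simply choosing one of the pixels; the function name is illustrative):

```python
def superimpose(pixels):
    """Average the R, G, B channels of all pixels mapped to the same
    position; `pixels` is a non-empty list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))
```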
  • the sub-step S21 may further include the following sub-steps:
  • Sub-step S216 performing pixel point interpolation processing on the blank position in the warped picture.
  • When the static picture is mapped to generate distortion, in regions of the distorted picture where mapped pixel points are sparse, some pixels may not be assigned (that is, no pixel is mapped to the position, and the pixel there remains in its original state, such as white), which produces blank positions.
  • the embodiment of the present invention can perform pixel point interpolation processing to complete the distorted picture.
  • For a blank pixel point Px, the nearest assigned pixel point Py (such as the pixel above, below, to the left of, or to the right of Px) is selected, and the value of Py is assigned to Px.
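A minimal sketch of this interpolation, filling each blank pixel from the first assigned four-neighbour, might look like the following; the image is assumed to be a list of rows of RGB tuples, with white as the unassigned (blank) value:

```python
def fill_blanks(img, blank=(255, 255, 255)):
    """Fill each unassigned (blank) pixel Px from the nearest of its
    four neighbours Py that already holds a mapped value."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] == blank:
                # check the pixel above, below, left, right of Px
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx] != blank:
                        out[y][x] = img[ny][nx]   # assign Py's value to Px
                        break
    return out
```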
  • A distorted picture can be mapped according to the position of the reference point (C1 shown in FIG. 4A); as the reference point moves to different positions, the distorted picture also changes, and the distorted pictures can be played continuously to form a dynamic effect.
  • As shown in FIG. 5A, if the reference point is on the left side of the feature point (C0 as shown in FIG. 4A), the whole of the feature image can be distorted to the left side; as shown in FIG. 5B, if the reference point is on the right side of the feature point, the whole of the feature image can be distorted to the right side.
  • The reference point may start from the position of the feature point, move along the direction corresponding to the specified interaction operation event (such as the shaking direction of a shaking event, or the direction pointing toward the occurrence of a screen click event), be distributed on both sides of the feature point, and finally coincide with the feature point; the distorted picture mapped from the feature area is thus twisted back and forth along that direction, producing a jitter effect, and finally comes to rest.
  • For example, the reference point C1 oscillates along the X-axis direction passing through the center of gravity C0; each time C1 moves to a position, a distorted picture is generated, and playing the distorted pictures frame by frame produces the dynamic effect of the images in the feature areas shown in FIGS. 5A and 5B vibrating left and right.
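The oscillating reference point can be sketched as below. The decaying-sine profile is our assumption for "distributed on both sides of the feature point and finally coinciding with it"; the text does not prescribe a specific trajectory.

```python
import math

def reference_positions(c0, amplitude, frames):
    """Oscillate the reference point C1 along the X axis through the
    center of gravity C0, one position per frame; the amplitude decays
    so the final frame coincides with C0 and the picture comes to rest."""
    positions = []
    for i in range(frames):
        t = i / (frames - 1)                  # normalized time in [0, 1]
        # decaying sine: swings to both sides of C0, fades to zero
        offset = amplitude * (1.0 - t) * math.sin(2 * math.pi * 2 * t)
        positions.append((c0[0] + offset, c0[1]))
    positions[-1] = c0                        # end exactly at the feature point
    return positions
```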
  • the embodiment of the present invention maps pixel points of a static picture to one or more frame distortion pictures based on one or more reference points to generate a dynamic effect, which is simple to calculate, does not depend on a third-party development package, a library file or a tool, and generates a rendering speed. Fast, less resource consumption, easy to cross-platform.
  • step 103 may include the following sub-steps:
  • Sub-step S31 in the moving direction, at least part of the pixel points in the feature area are texture-mapped according to a preset mode, and a dynamic effect including a one-frame or multi-frame warped picture change is generated.
  • the static picture may be mapped with the motion direction (including acceleration) as the amplitude reference of the distortion to generate a distorted picture.
  • Pixels in the feature area in the still picture can be mapped along the direction of motion, causing distortion of the still picture.
  • The greater the magnitude associated with the moving direction (for example, the greater the acceleration, or the farther the clicked position is from the center of the feature area), the greater the amplitude of the distortion; conversely, the smaller the magnitude, the smaller the amplitude of the distortion.
  • sub-step S31 may comprise the following sub-steps:
  • Sub-step S311 dividing the feature area into one or more drawing patterns
  • a graphical manner may be applied, that is, a dynamic effect may be generated based on a grid.
  • The feature area may be divided into one or more drawing graphics, which may be triangles or meshes of other shapes; triangles are typically used because graphic drawing interfaces, such as OpenGL (Open Graphics Library), commonly render meshes as triangles.
  • Each drawing graphic may have multiple vertices; for example, a triangle may be represented by its three vertices, and each vertex may carry, in addition to its corresponding two-dimensional coordinates, the texture coordinates of the static picture.
  • One or more drawing graphics may be divided in the central region of the feature area to simulate a jiggling effect similar to a physical water ball (a balloon filled with water).
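Dividing a rectangular feature area into triangle drawing graphics, each vertex carrying a 2-D position plus texture coordinates, might be sketched as below; the function name and the two-triangles-per-cell layout are illustrative assumptions.

```python
def triangulate_grid(x0, y0, width, height, nx, ny):
    """Split a rectangular feature area into an nx-by-ny grid of cells,
    two triangles per cell; each vertex is (position, texture_coords)
    with texture coordinates normalized to [0, 1]."""
    def vertex(i, j):
        return ((x0 + i * width / nx, y0 + j * height / ny),  # 2-D position
                (i / nx, j / ny))                             # texture coords
    triangles = []
    for j in range(ny):
        for i in range(nx):
            a, b = vertex(i, j), vertex(i + 1, j)
            c, d = vertex(i, j + 1), vertex(i + 1, j + 1)
            triangles.append((a, b, c))   # upper-left triangle of the cell
            triangles.append((b, d, c))   # lower-right triangle of the cell
    return triangles
```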
  • Sub-step S312 in the moving direction, moving the vertices of each of the drawn images at one or more time points according to the preset mode;
  • the vertices of each of the drawn figures may be moved in the same direction and in the opposite direction of the moving direction according to the preset mode.
  • the preset mode may include a simple harmonic motion mode and/or a damped vibration mode.
  • the sub-step S312 may include the following sub-steps:
  • Sub-step S3121 in the moving direction, moving the vertices of each of the drawn images at one or more time points according to the simple harmonic motion mode and/or the damped vibration mode.
  • Simple harmonic motion (SHM): when an object (such as a vertex of a drawing graphic) undergoes simple harmonic motion, the force on the object is proportional to its displacement and always points toward the equilibrium position.
• Damped vibration can refer to vibration under the action of resistance; when the resistance is negligible, the motion reduces to simple harmonic motion. Affected by the resistance, the amplitude gradually decreases and energy is gradually lost during vibration, until the vibration stops.
  • the sub-step S3121 may include the following sub-steps:
  • Sub-step S31211 determining an acceleration of a vertex of each drawn image
• If the specified interaction event is a shaking event, the initial acceleration at the time of shaking can be extracted from the shaking event and used as the acceleration of the vertices of each drawn graphic: the greater the amplitude of the shaking, the greater the acceleration of the vertices.
• Otherwise, a preset acceleration may be used as the initial acceleration of the vertices of each drawn graphic.
  • Sub-step S31212 calculating a moving distance of moving the vertices of each of the drawn images along the moving direction in one or more time points according to the acceleration and/or the preset damping coefficient;
• The force applied to the vertices of each drawn graphic can be simulated according to the acceleration; the force is directed toward the equilibrium position, so a spring model can be constructed to simulate simple harmonic motion of the vertices along the moving direction.
• A damping force derived from the preset damping coefficient can further simulate the resistance on the vertices, so that the vertex of each drawn graphic undergoes damped vibration along the moving direction.
• The moving distance at one or more time points can then be calculated by kinematic formulas.
  • Sub-step S31213 the target coordinates of the vertices of each of the drawn images are calculated from the original coordinates and the moving distance.
• The vertices of each drawn graphic have original coordinates, that is, their original two-dimensional coordinates in the static picture; adding the moving distance along the moving direction yields the position of each vertex after movement, i.e., the target coordinates.
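The spring-plus-damping model described above can be sketched as follows (the coefficients k and c, the time step, and the initial velocity are illustrative choices, not values from the patent); each displacement x is added to the vertex's original coordinate along the moving direction to obtain the target coordinate at that time point:

```python
def vertex_displacements(v0, k=40.0, c=4.0, dt=0.01, steps=400):
    # Damped simple harmonic motion: a restoring force proportional to the
    # displacement from the original coordinate (the equilibrium position),
    # plus a damping force proportional to the velocity.
    x, v = 0.0, v0
    path = []
    for _ in range(steps):
        a = -k * x - c * v   # spring force toward equilibrium + damping
        v += a * dt          # simple Euler integration per time point
        x += v * dt
        path.append(x)
    return path

path = vertex_displacements(v0=50.0)
original_x = 120.0                        # hypothetical original coordinate
targets = [original_x + x for x in path]  # target coordinates over time
```

The displacement oscillates through the equilibrium position and decays toward rest, matching the "jitter and finally still" behaviour described in the text.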
  • Sub-step S313 for each drawing graphic, using a graphic drawing interface to perform texture mapping on the pixel points in the drawing graphic according to the texture coordinates of each vertex, to generate a dynamic effect including one or more frames of distorted picture changes.
  • the graphics rendering interface may adopt OpenGL, which may provide texture mapping, that is, a process of mapping texture pixels in the texture space to pixels in the screen space.
  • the steps to use texture mapping can be as follows:
• Step 1: define the texture object.
• Step 2: generate an array of texture objects.
• Step 3: complete the definition of the texture object by selecting it with glBindTexture, for example:
• glTexImage2D(GL_TEXTURE_2D, 0, 3, m_Texmap1.GetWidth(), m_Texmap1.GetHeight(), 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_Texmap1.GetDibBitsPtr());
• Step 4: before drawing the scene, load the corresponding texture for it with glBindTexture.
• Step 5: before the program ends, call glDeleteTextures to delete the texture objects.
• For example, suppose the drawn graphic is a triangle comprising one or more pixel points, whose vertices have texture coordinates in texture space: vertex a at (0.2, 0.8), vertex b at (0.4, 0.2), and vertex c at (0.8, 0.4). Moving the vertices deforms the drawn graphic; OpenGL then maps the texture into the resulting object space, and after rendering the drawn graphic exhibits effects such as stretching and compression, so the feature area appears to move.
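What the graphic drawing interface does during this texture mapping can be approximated by barycentric interpolation of the vertices' texture coordinates. The sketch below reuses the example's texture coordinates for a, b, c, while the screen positions of the deformed triangle are invented for illustration:

```python
def barycentric(p, a, b, c):
    # Barycentric weights of point p inside triangle abc (screen space).
    d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / d
    w2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / d
    return w1, w2, 1.0 - w1 - w2

def sample_uv(p, verts, uvs):
    # Interpolate texture coordinates for pixel p from the three vertices.
    w = barycentric(p, *verts)
    return (sum(wi * uv[0] for wi, uv in zip(w, uvs)),
            sum(wi * uv[1] for wi, uv in zip(w, uvs)))

uvs = [(0.2, 0.8), (0.4, 0.2), (0.8, 0.4)]      # texture coords of a, b, c
verts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical screen coords
u, v = sample_uv((10.0 / 3, 10.0 / 3), verts, uvs)  # the triangle's centroid
```

When the vertices move but keep their texture coordinates, each interior pixel still samples the same texture location, so the image stretches or compresses together with the triangle.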
• As shown in FIG. 5A, if the motion direction of at least part of the pixel points in the feature area points to the left of the feature area, the feature image as a whole may be distorted to the left; as shown in FIG. 5B, if the motion direction points to the right of the feature area, the feature image as a whole may be distorted to the right.
• The vertices of the drawn graphics near the central area of the feature area of the still picture can simulate the simple harmonic motion of a spring, so that the picture is regularly stretched, producing a jitter effect similar to an elastic water ball.
• The motion direction of at least part of the pixel points in the feature area may follow the direction corresponding to the specified interaction event (such as the shaking direction of a shaking event, or the direction pointing to the position of a screen click event), so the distorted pictures mapped from the feature area twist back and forth along that direction, producing a dithering effect and eventually coming to rest.
  • the direction of the shaking of the mobile phone can be judged, and the feature area in the static picture can move along the shaking direction.
  • the feature area can be rotated around the center to simulate the dynamic effect of the violent shaking.
• For a screen click event, the center of the feature area can be shaken along the direction between the center position and the click position.
• When the finger slides, the center of the feature area can follow the direction of the finger movement, producing the effect of being dragged; a micro-jitter algorithm then makes the region produce the slight jitter generated when a water ball is dragged, enhancing its physical authenticity.
• For example, when the device is shaken left and right, or the user slides back and forth in the feature area, at least some of the pixel points in the feature area oscillate along the horizontal axis; each time the vertices of the drawn graphics in the feature area move to a new position, a distorted picture is generated, and playing the distorted pictures frame by frame produces the dynamic effect of left-and-right vibration of the image in the feature area, as shown in FIGS. 5A and 5B.
  • the invention performs texture mapping on at least some pixel points in the feature region based on the motion direction, and generates a dynamic effect including a one-frame or multi-frame warped picture change.
• Because the dynamic effect is generated only for the feature region, the volume of the dynamic effect is reduced, which lowers the bandwidth occupied during transmission and makes transmission convenient; the time consumed in generating the dynamic effect is also reduced, so dynamic effects can be generated quickly and conveniently for network pictures or pictures in the system album, realizing real-time interaction between dynamic effects and user operations.
  • the method may further include the following steps:
  • Step 104 Generate a dynamic picture by using the static picture and the one or more frame distortion image.
• One frame of the still picture may be saved together with the one or more frames of distorted pictures containing the feature area, generating a dynamic picture such as a GIF.
  • the method may further include the following steps:
  • Step 105 Generate dynamic information based on the feature area.
• The dynamic information may be configuration information for mapping the feature region of the still picture into one-frame or multi-frame warped pictures, expressed for example in XML (Extensible Markup Language), JSON (JavaScript Object Notation), and the like.
  • An example of configuration information designed with json can be as follows:
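The patent text here does not reproduce the actual configuration; purely as a hedged illustration (all field names below are invented for this sketch), such JSON dynamic information might resemble:

```json
{
  "featureArea": { "shape": "ellipse", "center": [160, 120], "radii": [60, 40] },
  "featurePoint": [160, 120],
  "referencePoints": [[140, 120], [180, 120], [160, 120]],
  "motionMode": "dampedVibration"
}
```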
  • step 105 may include the following sub-steps:
  • Sub-step S41 generating dynamic information using the feature area, the feature point, and the one or more reference points.
  • dynamic information may be generated by the feature area, the feature point, and one or more reference points to support the generation of the dynamic effect based on the reference point.
  • step 105 may include the following sub-steps:
  • Sub-step S42 generating dynamic information using the motion direction of the feature area and at least part of the pixel points in the feature area.
• Dynamic information may be generated from the feature area and the motion direction of at least part of the pixel points in the feature area, to support generating the dynamic effect based on the motion direction.
  • Step 106 Write the dynamic information and the script object into the static picture to generate a dynamic interaction file.
• The dynamic information and a script object (such as a JS script) may be written into the static picture; the resulting file may be transmitted over a network, sent to other users, or stored.
• After reading the script object, the static picture can be mapped according to the dynamic information by the script object, generating a dynamic effect that changes frame by frame.
  • FIG. 6 a flow chart of a method for generating a dynamic effect based on a static picture according to an embodiment of the present invention is shown.
  • Step 601 Select a feature area in the static picture.
• A static picture is defined relative to a dynamic picture, that is, a picture that does not have a dynamic effect.
• The static picture may be in formats such as JPG, JPEG, PNG, or BMP, which is not limited by the embodiment of the present invention.
  • a certain area may be selected as a feature area in a static picture, and the feature area may be a polygon, a circle, an ellipse or the like, and a dynamic effect is generated for the image data in the feature area.
• For example, an elliptical selection box as shown in FIG. 3 may be provided; the user may change the shape of the elliptical selection box and position it on the still picture, and the selected position may then be determined as the feature area.
  • the feature area may have feature points for generating a dynamic effect.
• The feature area may include a convex area, and the feature point may include the center of gravity.
• Geometrically, a convex area means that the figure bulges outward with no recess.
• Algebraically, a convex region can be defined as: for any two points a and b in the set, t*a + (1-t)*b still belongs to the set, where 0 ≤ t ≤ 1; in other words, the straight line segment connecting a and b lies entirely within the set.
• The geometric center of gravity is also known as the geometric center; for a homogeneous object (uniform density), the center of mass coincides with the centroid, for example the intersection of the three medians of a triangle.
  • Step 602 determining one or more reference points in the feature area
  • one or more reference points may be determined according to the specified interaction event when the specified interaction event is monitored.
• The interaction event can be an event caused by an interaction operation of the user.
• The feature area can exhibit a dithering effect similar to a physical water ball (approximately a balloon filled with water), whose direction and manner of shaking change according to different interaction operations of the user, such as shaking the mobile phone.
  • the specified interoperation event may include a shaking event
  • one or more reference points may be selected in the feature area of the still picture according to the shaking direction of the shaking event.
  • the user can interact by shaking.
• Through an acceleration sensor (such as a three-axis acceleration sensor) provided via the operating system (such as Android), the accelerations of the device in the horizontal, vertical, and spatially perpendicular directions are respectively obtained; the sum of the squares of the accelerations in each direction is calculated, and its square root is taken as the integrated acceleration of the device's movement.
• If the integrated acceleration is greater than the set acceleration threshold, it can be determined that a shaking event is monitored, that is, the user interacts through a shaking operation.
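The integrated-acceleration check above can be sketched as follows (the threshold value is an assumption chosen for illustration, not a value from the patent):

```python
import math

ACCEL_THRESHOLD = 15.0  # illustrative threshold, not specified in the text

def is_shaking(ax, ay, az):
    # Sum the squares of the accelerations along the horizontal, vertical
    # and spatially perpendicular axes, take the square root as the
    # integrated acceleration, and compare it with the threshold.
    integrated = math.sqrt(ax * ax + ay * ay + az * az)
    return integrated > ACCEL_THRESHOLD

print(is_shaking(12.0, 9.0, 4.0))   # vigorous motion -> True
print(is_shaking(0.0, 9.8, 0.0))    # device at rest (gravity only) -> False
```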
• Along a direction that may be the same as or opposite to the shaking direction, one or more continuously distributed reference points are selected in the feature area of the still picture.
  • the specified interactivity event may include a screen click event, and one or more reference points are selected in the feature area of the still picture in a direction pointing to the occurrence of the screen click event.
• The user can interact by clicking on the screen (such as within the feature area).
• In the direction from the center point/center of gravity of the feature area toward the position where the screen click event occurs (or the same as, or opposite to, that direction), one or more continuously distributed reference points are selected in the feature area of the still picture.
  • the determination manner of the above reference point is only an example.
  • the determination manner of the other reference points may be set according to the actual situation, for example, the position of the reference point is directly specified, which is not limited by the embodiment of the present invention.
  • those skilled in the art may also adopt other reference point determination manners according to actual needs, which is not limited by the embodiment of the present invention.
  • Step 603 Map pixel points of the static picture to one or more frame distortion pictures according to the feature point and the one or more reference points;
• The static picture may be mapped using the reference points as mapping targets for the feature point, generating distorted pictures.
  • the pixel points in the feature area in the still picture may be mapped along the direction in which the feature point points to the reference point, causing distortion of the static picture.
• The farther the reference point is from the feature point, the larger the amplitude of the distortion; the closer the reference point is to the feature point, the smaller the amplitude. When the reference point coincides with the feature point, the mapped picture may exhibit no distortion.
  • step 603 may include the following sub-steps:
  • Sub-step S51 generating a distorted picture
  • the initial state of the warped picture may be blank.
  • Sub-step S52 mapping pixel points on the first connection line in the feature area to the second connection line;
• The first connection may be the line connecting the feature point to an edge point, and the second connection may be the line connecting the current reference point to the same edge point, where an edge point is a coordinate point on the edge of the feature area.
  • mapping may be performed according to the reference point.
• The mapped pixel points remain located in the feature area; that is, pixel points in the feature area are mapped within the feature area instead of being mapped outside it.
• For example, in FIG. 4A, C0 is the feature point (such as the center of gravity), C1 is a reference point, E is an edge point, C0E is the first connection, and C1E is the second connection; the pixel point P0 on the first connection C0E can be mapped onto the second connection C1E to obtain the mapped point P1.
  • the sub-step S52 may include the following sub-steps:
  • Sub-step S521 calculating a relative position of the pixel on the first connection line in the feature area on the first connection line;
  • Sub-step S522 copying the pixel points to the second connection line according to the relative position.
  • the relative positions of the pixel points can be expressed in a proportional relationship.
• For example, the ratio R of the line segment C0P0 to the line segment C0E can be used as the relative position of the pixel point P0 on the first connection C0E.
  • the point P1 on the line segment C1E is obtained such that the ratio of the line segment C1P1 to the second line C1E is R.
• Of course, other expressions of the relative position of the pixel point on the first connection may also be used, which is not limited by the embodiment of the present invention.
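Sub-steps S521 and S522 can be sketched as follows (the coordinates of C0, C1, E, and P0 are invented for illustration; the patent only fixes the ratio R):

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def map_point(p0, c0, c1, e):
    # Relative position R of P0 on the first connection C0-E ...
    r = distance(c0, p0) / distance(c0, e)
    # ... copied onto the second connection C1-E so that |C1 P1| / |C1 E| = R.
    return (c1[0] + r * (e[0] - c1[0]),
            c1[1] + r * (e[1] - c1[1]))

p1 = map_point(p0=(5.0, 0.0), c0=(0.0, 0.0), c1=(2.0, 0.0), e=(10.0, 0.0))
print(p1)  # (6.0, 0.0): P0 was halfway along C0-E, so P1 is halfway along C1-E
```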
  • Sub-step S53 copying the pixel points on the second connection line to the same position in the warped picture
• Once the position of a pixel point on the second connection is determined, it can be copied to the same position in the distorted picture, performing the distortion mapping of the image.
  • step 603 may further include the following sub-steps:
  • Sub-step S54 the pixel points outside the feature area are mapped to the same position in the warped picture.
• If a pixel point of the static picture lies outside the convex feature area, it can be copied directly to the corresponding position in the distorted picture without distortion.
  • the embodiment of the present invention may not map the pixel points outside the feature area, and only perform the mapping based on the pixel points in the feature area, which is not limited by the embodiment of the present invention.
  • step 603 may further include the following sub-steps:
  • Sub-step S55 performing pixel point superimposition processing on the pixel points whose positions overlap in the warped picture.
  • the embodiment of the present invention can perform pixel point superposition processing.
• For example, if the RGB color mode is applied, various colors can be obtained by varying the red (R), green (G), and blue (B) channels of the pixel points and superimposing them on each other.
• Alternatively, one pixel point may be selected as the pixel point of the position (such as a randomly selected one, or the pixel point last copied to the position); other selection methods may also be used, which is not limited by the embodiment of the present invention.
  • step 603 may further include the following sub-steps:
  • Sub-step S56 performing pixel point interpolation processing on the blank position in the warped picture.
• When the static picture is mapped to generate distortion, in regions of the distorted picture where mapped pixel points are sparse, some pixel points may not be assigned (i.e., no pixel is mapped to the position, so the pixel there keeps its original state, such as white), producing blank positions.
  • the embodiment of the present invention can perform pixel point interpolation processing to complete the distorted picture.
• For a blank pixel point Px, the assigned pixel point Py closest to it (such as the pixel above, below, to the left, or to the right) is selected, and the value of Py is assigned to Px.
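A minimal sketch of this interpolation (the grid values and the neighbour-checking order are illustrative assumptions):

```python
def fill_blanks(img, blank=None):
    # For each unassigned position Px, copy the value of the nearest
    # assigned neighbour Py (checked above, below, left, then right).
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] is not blank:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and img[ny][nx] is not blank:
                    out[y][x] = img[ny][nx]
                    break
    return out

warped = [[1, 1, 1],
          [4, None, 2],
          [3, 3, 3]]
filled = fill_blanks(warped)
print(filled[1][1])  # 1: the upper neighbour is checked first
```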
• As shown in FIG. 5A, if the reference point is on the left side of the feature point (C0 as shown in FIG. 4A), the feature image as a whole can be twisted to the left; as shown in FIG. 5B, if the reference point is on the right side of the feature point, the feature image as a whole can be twisted to the right.
• The reference points may start from the position of the feature point, be distributed on both sides of the feature point along the direction corresponding to the specified interaction event (such as the shaking direction of a shaking event, or the direction pointing to where a screen click event occurred), and finally coincide with the feature point; the distorted pictures mapped from the feature area thus twist back and forth along that direction, producing a dithering effect and finally coming to rest.
• For example, the reference point C1 oscillates along the X-axis direction through the center of gravity C0; each time C1 moves to a new position, a distorted picture is generated, and playing the distorted pictures frame by frame produces the dynamic effect of left-and-right vibration of the images in the feature areas shown in FIGS. 5A and 5B.
  • Step 604 Generate a dynamic effect based on the one or more frames of distorted pictures.
• The dynamic effect may be generated at the current time or generated later when triggered, which is not limited in the embodiment of the present invention.
  • step 604 can include the following sub-steps:
  • Sub-step S61 generating a dynamic picture by using the static picture and the one or more frames of distorted pictures.
• One frame of the still picture may be saved together with the one or more frames of distorted pictures containing the feature area, generating a dynamic picture such as a GIF.
  • step 604 can include the following sub-steps:
  • Sub-step S71 generating dynamic information based on the feature area
• The dynamic information may be configuration information for mapping the feature region of the still picture into one-frame or multi-frame warped pictures, expressed for example in XML (Extensible Markup Language), JSON (JavaScript Object Notation), and the like.
  • the sub-step S71 may include the following sub-steps:
  • Sub-step S711 generating dynamic information using the feature area, the feature point, and the one or more reference points.
  • dynamic information may be generated by the feature area, the feature point, and one or more reference points to support the generation of the dynamic effect based on the reference point.
  • Sub-step S712 the dynamic information and the script object are written into the static picture to generate a dynamic interaction file.
• The dynamic information and a script object (such as a JS script) may be written into the static picture; the resulting file may be transmitted over a network, sent to other users, or stored.
• After reading the script object, the static picture can be mapped according to the dynamic information by the script object, generating a dynamic effect that changes frame by frame.
• In the embodiment of the present invention, one or more reference points are determined to map the pixel points of the static picture into one or more frames of distorted pictures and generate a dynamic effect; the calculation is simple and does not depend on third-party development tools.
  • FIG. 7 a flow chart of a method for generating a picture dynamic effect based on an interaction operation according to an embodiment of the present invention is shown.
  • Step 701 Select a feature area in the static picture.
• A static picture is defined relative to a dynamic picture, that is, a picture that does not have a dynamic effect.
• The static picture may be in formats such as JPG, JPEG, PNG, or BMP, which is not limited by the embodiment of the present invention.
  • a certain area may be selected as a feature area in a static picture, and the feature area may be a polygon, a circle, an ellipse or the like, and a dynamic effect is generated for the image data in the feature area.
• For example, an elliptical selection box as shown in FIG. 3 may be provided; the user may change the shape of the elliptical selection box and position it on the still picture, and the selected position may then be determined as the feature area.
• Step 702 When listening to the specified interaction operation event, determining a motion direction of at least a part of the pixel points in the feature area according to the specified interaction operation event;
  • the interoperation event may be an event caused by an interaction operation by a user.
  • the position of the vertices of the drawn graphic can be moved, and the movement of the position of the vertices can depend on the user's interaction.
  • the vertices of the drawn graphic can move in the direction of shaking, and the shape will change accordingly.
  • the vertices of the drawn graphic may move toward the touch point of the user, and the shape thereof also changes.
• The feature area can exhibit a dithering effect similar to a physical water ball (approximately a balloon filled with water), whose direction and manner of shaking change according to different interaction operations of the user, such as shaking the mobile phone.
• In one case, the specified interaction operation event includes a shaking event.
  • step 702 may include the following substeps:
  • Sub-step S81 setting the shaking direction of the shaking event to the moving direction of at least part of the pixel points in the feature area.
  • the user can perform an interactive operation by shaking.
• Through an acceleration sensor (such as a three-axis acceleration sensor) provided via the operating system (such as Android), the accelerations of the device in the horizontal, vertical, and spatially perpendicular directions are respectively obtained; the sum of the squares of the accelerations in each direction is calculated, and its square root is taken as the integrated acceleration of the device's movement.
• If the integrated acceleration is greater than the set acceleration threshold, it can be determined that a shaking event is monitored, that is, the user interacts through a shaking operation.
• A direction that is the same as or opposite to the shaking direction may be selected as the motion direction of at least part of the pixel points in the feature area of the still picture.
  • direction of motion may include acceleration.
  • the specified interaction event may include a screen click event
  • step 702 may include the following sub-steps:
  • Sub-step S82 setting a position pointing to the occurrence of the screen click event as a moving direction of at least a part of the pixel points in the feature area.
  • the user can perform an interactive operation by clicking a screen (such as a feature area).
• In the direction from the center point/center of gravity of the feature area toward the position where the screen click event occurs (or the same as, or opposite to, that direction), the motion direction of at least part of the pixel points in the feature area is determined.
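The click-to-direction step can be sketched as follows (this chooses the center-to-click orientation; the opposite orientation is equally valid per the text, and the coordinates are invented for illustration):

```python
import math

def motion_direction(center, click):
    # Unit vector from the feature-area center/center of gravity toward
    # the position where the screen click event occurred.
    dx, dy = click[0] - center[0], click[1] - center[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

d = motion_direction(center=(100.0, 100.0), click=(130.0, 140.0))
print(d)  # (0.6, 0.8)
```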
  • the manner of determining the direction of motion is only an example.
  • the manner of determining other directions of motion may be set according to an actual situation, which is not limited by the embodiment of the present invention.
  • those skilled in the art may also adopt other manners of determining the direction of motion according to actual needs, which is not limited by the embodiment of the present invention.
  • Step 703 Perform texture mapping on at least part of the pixel points in the feature area according to the preset mode in the moving direction, to generate a dynamic effect including a one-frame or multi-frame warped picture change.
• The motion direction (including the acceleration) can be used as a reference for the amplitude of the distortion, and the static picture is mapped to produce distorted pictures.
  • Pixels in the feature area in the still picture can be mapped along the direction of motion, causing distortion of the still picture.
• The greater the magnitude of the motion direction (e.g., the greater the acceleration, or the farther the clicked screen position is from the center of the feature area), the greater the amplitude of the distortion; conversely, the smaller the magnitude of the motion direction, the smaller the distortion.
  • step 703 may include the following sub-steps:
  • Sub-step S91 dividing the feature area into one or more drawing patterns
  • a graphical manner may be applied, that is, a dynamic effect may be generated based on a grid.
• The feature area may be divided into one or more drawn graphics, each of which may be a triangle or a mesh of another shape (i.e., a drawn graphic); triangles are used because graphic drawing interfaces such as OpenGL (Open Graphics Library) typically take the triangle as the basic drawing primitive.
• Each drawn graphic may have multiple vertices; for example, a triangle may be represented by its three vertices, and besides its corresponding two-dimensional coordinates, each vertex may also carry the texture coordinates of the static picture.
• The central region of the feature area may be divided into one or more drawn graphics, to simulate a dithering effect similar to a physical water ball (a balloon filled with water).
  • Sub-step S92 in the moving direction, moving the vertices of each drawing graphic at one or more time points according to the preset mode;
• According to the preset mode, the vertices of each drawn graphic may be moved back and forth along the moving direction, both in the same direction as and opposite to the moving direction.
  • the preset mode may include a simple harmonic motion mode and/or a damped vibration mode
  • the sub-step S92 may include the following sub-steps:
  • Sub-step S921 in the moving direction, moving the vertices of each of the drawn figures at one or more time points according to the simple harmonic motion mode and/or the damped vibration mode.
• Simple harmonic motion (SHM) can refer to motion in which the force on the object (such as a vertex of each drawn graphic) is proportional to its displacement and always points toward the equilibrium position.
• Damped vibration can refer to vibration under the action of resistance; when the resistance is negligible, the motion reduces to simple harmonic motion. Affected by the resistance, the amplitude gradually decreases and energy is gradually lost during vibration, until the vibration stops.
  • the sub-step S921 may include the following sub-steps:
  • Sub-step S9211 determining an acceleration of a vertex of each of the drawn graphics
• If the specified interaction event is a shaking event, the initial acceleration at the time of shaking can be extracted from the shaking event and used as the acceleration of the vertices of each drawn graphic: the greater the amplitude of the shaking, the greater the acceleration of the vertices.
• Otherwise, a preset acceleration may be used as the initial acceleration of the vertices of each drawn graphic.
  • Sub-step S9212 calculating, according to the acceleration and/or the preset damping coefficient, a moving distance of moving a vertex of each drawing figure along the moving direction in one or more time points;
• The force applied to the vertices of each drawn graphic can be simulated according to the acceleration; the force is directed toward the equilibrium position, so a spring model can be constructed to simulate simple harmonic motion of the vertices along the moving direction.
• A damping force derived from the preset damping coefficient can further simulate the resistance on the vertices, so that the vertex of each drawn graphic undergoes damped vibration along the moving direction.
• The moving distance at one or more time points can then be calculated by kinematic formulas.
  • Sub-step S9213 the target coordinates of the vertices of each of the drawn figures are calculated from the original coordinates and the moving distance.
  • The vertices of each drawn graphic have original coordinates, that is, their original two-dimensional coordinates in the static picture; adding the moving distance along the moving direction yields the position of each vertex after the movement, i.e., the target coordinates.
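  • As a hedged illustration of sub-steps S9211-S9213 (the spring constant, damping coefficient, time step, frame count, and initial velocity below are invented, not taken from the embodiment), the spring-plus-damping model could be sketched as:

```python
def damped_vertex_positions(x0, y0, dir_x, dir_y, v0=0.0,
                            k=40.0, c=4.0, dt=1.0 / 60.0, steps=120):
    """Per-frame target coordinates of one vertex undergoing damped
    vibration along the unit moving direction (dir_x, dir_y).

    The restoring force is proportional to the displacement and points to
    the equilibrium position; the damping force opposes the velocity.
    Integrated with semi-implicit Euler (illustrative parameter values)."""
    s, v = 0.0, v0              # displacement along the moving direction, velocity
    frames = []
    for _ in range(steps):
        a = -k * s - c * v      # spring + damping acceleration (unit mass assumed)
        v += a * dt
        s += v * dt
        # target coordinates = original coordinates + moving distance
        frames.append((x0 + s * dir_x, y0 + s * dir_y))
    return frames

frames = damped_vertex_positions(100.0, 50.0, 1.0, 0.0, v0=30.0)
```

Playing the returned positions frame by frame yields an oscillation whose amplitude decays until the vertex rests at its original coordinates.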
  • Sub-step S93 for each drawing graphic, using a graphic drawing interface to perform texture mapping on the pixel points in the drawing graphic according to the texture coordinates of each vertex, to generate a dynamic effect including one or more frames of changed pictures.
  • the graphics rendering interface may adopt OpenGL, which may provide texture mapping, that is, a process of mapping texture pixels in the texture space to pixels in the screen space.
  • For example, the drawn graphic is a triangle containing one or more pixels, whose vertices have texture coordinates in the texture space: the texture coordinates of vertex a are (0.2, 0.8), those of vertex b are (0.4, 0.2), and those of vertex c are (0.8, 0.4). Moving the vertices deforms the drawn graphic; after OpenGL maps the texture into the resulting object space and renders it, the drawn graphic exhibits effects such as stretching and compression, and the feature area appears to move.
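  • OpenGL performs this texture mapping in hardware; as a plain-Python sketch of the same idea (the texture contents and vertex positions used in testing are invented), each pixel inside the deformed triangle fetches its texel by interpolating the vertices' texture coordinates with barycentric weights:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to triangle abc."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w0, w1, 1.0 - w0 - w1

def texture_map_triangle(texture, screen_verts, tex_coords, width, height):
    """Rasterize one drawn triangle: screen_verts are the (possibly moved)
    vertex positions, tex_coords their texture coordinates in [0, 1]."""
    th, tw = len(texture), len(texture[0])
    out = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            w0, w1, w2 = barycentric((x + 0.5, y + 0.5), *screen_verts)
            if min(w0, w1, w2) < 0:        # pixel lies outside the triangle
                continue
            u = w0 * tex_coords[0][0] + w1 * tex_coords[1][0] + w2 * tex_coords[2][0]
            v = w0 * tex_coords[0][1] + w1 * tex_coords[1][1] + w2 * tex_coords[2][1]
            tx = min(int(u * tw), tw - 1)  # texture space -> texel index
            ty = min(int(v * th), th - 1)
            out[y][x] = texture[ty][tx]
    return out
```

Moving `screen_verts` while keeping `tex_coords` fixed is what stretches and compresses the pictured feature area.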
  • As shown in FIG. 5A, if the motion direction of at least part of the pixel points in the feature area points to the left side of the feature area, the whole of the feature image may be distorted to the left side; as shown in FIG. 5B, if the motion direction points to the right side of the feature area, the whole of the feature image may be distorted to the right side.
  • the vertices of the drawn graphics near the central area of the feature area of the still picture can simulate the simple harmonic motion of the spring, so that the picture is regularly pulled up, producing a jitter effect similar to the elastic water ball.
  • The motion direction of at least part of the pixel points in the feature area may be along a direction corresponding to the specified interaction event (such as the shaking direction of a shaking event, or the direction pointing to the position of a screen click event).
  • The distorted pictures mapped from the feature area may be twisted back and forth along the direction corresponding to the specified interaction event, creating a dithering effect and eventually coming to rest.
  • the direction of the shaking of the mobile phone can be judged, and the feature area in the static picture can move along the shaking direction.
  • the feature area can be rotated around the center to simulate the dynamic effect of the violent shaking.
  • the center of the feature area can be shaken along the direction connecting the center position and the click position.
  • The center of the feature area can follow the direction of the finger movement, producing the effect of being dragged; a micro-jitter algorithm makes the feature area produce the slight jitter generated when a water ball is dragged, enhancing its physical authenticity.
  • For example, the device is shaken left and right, or the user slides back and forth in the feature area, so that at least some of the pixel points in the feature area oscillate along the horizontal axis; each time the vertices of each drawn graphic in the feature area move to a position, a distorted picture is generated, and playing the distorted pictures frame by frame can produce the dynamic effect in which the image in the feature area, as shown in FIGS. 5A and 5B, vibrates left and right.
  • The embodiment of the present invention determines the motion direction of at least part of the pixel points in the feature area of the static picture based on the specified interaction event, and performs texture mapping on those pixel points according to a preset mode to generate a dynamic effect comprising one or more frames of distorted pictures. On the one hand, generating the dynamic effect only for the feature area reduces the volume of the dynamic effect, reducing bandwidth occupation during transmission and facilitating transmission. On the other hand, because texture mapping is efficient, the time consumed in generating dynamic effects is reduced, so dynamic effects can be generated quickly and conveniently for pictures in network pictures or system albums, realizing real-time interaction between the dynamic effects and the user's interaction behavior.
  • Referring to FIG. 8, a flow chart of the steps of a method for generating a customizable dynamic picture according to an embodiment of the present invention is shown. Specifically, the method may include the following steps:
  • Step 801 Read dynamic information from a dynamic interaction file.
  • the dynamic interaction file may include dynamic information, a script object, and a static picture.
  • A static picture is relative to a dynamic picture, that is, a picture that does not have a dynamic effect.
  • The static picture may be in a format such as JPG, JPEG, PNG, or BMP, which is not limited by the embodiment of the present invention.
  • a certain area in the still picture is used as a feature area
  • the feature area may be a polygon, a circle, an ellipse or the like, and a dynamic effect may be generated for the image data in the feature area.
  • the dynamic information may be configuration information for mapping a feature region of a still picture to a one-frame or multi-frame warped image, such as XML (Extensible Markup Language), json (Javascript Object Notation), and the like.
  • the feature area in the still picture may be an elliptical area as shown in FIG.
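  • The embodiment does not fix a schema for the dynamic information; as a purely hypothetical sketch (every field name below is invented for illustration), a json configuration describing an elliptical feature area might look like:

```python
import json

# Hypothetical dynamic information for an elliptical feature area.
# All field names are illustrative assumptions, not part of the embodiment.
dynamic_info = json.dumps({
    "feature_area": {"shape": "ellipse", "center": [120, 80],
                     "radius_x": 60, "radius_y": 40},
    "feature_point": [120, 80],                   # e.g. the center of gravity
    "reference_points": [[100, 80], [140, 80]],   # drive the distortion frames
})

parsed = json.loads(dynamic_info)   # the script object would read this back
```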
  • Step 802 Map, by the script object, at least part of pixel points in the static picture to one or more frame distortion pictures according to the dynamic information, to drive the static picture to change frame by frame.
  • After reading the dynamic information, the static picture can be mapped according to the dynamic information by a script object to generate a dynamic effect that changes frame by frame.
  • The script object maps at least part of the pixel points in the static picture into one or more frames of distorted pictures according to the dynamic information to drive the static picture to change frame by frame, and the dynamic information can be customized.
  • The dynamic effect can therefore be specified by the user, which enriches the forms of the dynamic effect. Since the size of the static picture itself changes little and only very small information (a few KB of dynamic information and script objects) is added, the volume is greatly reduced while the dynamic effect is ensured.
  • the dynamic information may include a feature area, a feature point, and one or more reference points.
  • step 802 may include the following sub-steps:
  • Sub-step S11 the pixel points of the static picture are mapped into one or more frames of distorted pictures by the script object according to the feature point and the one or more reference points.
  • the static picture may be mapped with the reference point as the reference of the feature point to generate a distorted picture.
  • For example, the pixel points in the feature area of the still picture may be mapped along the direction in which the feature point points to the reference point, causing distortion of the static picture.
  • The farther the reference point is from the feature point, the larger the amplitude of the distortion; the closer it is, the smaller the amplitude of the distortion.
  • When the reference point coincides with the feature point, the distorted picture may exhibit no distortion.
  • the feature area may include a convex area
  • the feature point may include a center of gravity point
  • Geometrically, a convex area can mean that the figure bulges outward and has no recess.
  • Algebraically, a convex region can be defined as: for any two points a, b in the set, t*a+(1-t)*b still belongs to the set, where 0 ≤ t ≤ 1; this expression means that the straight line segment connecting the two points a and b still lies within the set.
  • The geometric center of gravity is also known as the geometric center; when the object is homogeneous (constant density), the center of mass coincides with the centroid, for example, the intersection of the three medians of a triangle.
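  • The convex-combination definition and the triangle centroid above can be sketched directly (the elliptical region and the sample points are invented for illustration):

```python
def convex_combination(a, b, t):
    """Return the point t*a + (1-t)*b on the segment ab, with 0 <= t <= 1."""
    return tuple(t * ai + (1 - t) * bi for ai, bi in zip(a, b))

def in_ellipse(p, cx, cy, rx, ry):
    """Membership test for an elliptical feature area (a convex region)."""
    return ((p[0] - cx) / rx) ** 2 + ((p[1] - cy) / ry) ** 2 <= 1.0

def triangle_centroid(a, b, c):
    """Center of gravity of a triangle: the intersection of its three medians."""
    return ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)
```

Because the ellipse is convex, every point `convex_combination(p, q, t)` between two interior points p, q stays inside it, which is exactly the algebraic definition.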
  • sub-step S11 may comprise the following sub-steps:
  • Sub-step S111 generating a distorted picture by the script object
  • the initial state of the warped picture may be blank.
  • Sub-step S112 mapping pixel points on the first connection line in the feature area to the second connection line;
  • The first connection line may be a line connecting the feature point and an edge point, and the second connection line may be a line connecting the current reference point and the edge point, where the edge point lies on the edge of the feature area.
  • In a specific implementation, the mapping may be performed according to the reference point.
  • The mapped pixel points remain located in the feature area; that is, pixel points in the feature area are mapped within the feature area rather than outside it.
  • As shown in FIG. 4A, C0 can be a feature point (such as a center of gravity point), C1 can be a reference point, E can be an edge point, C0E can be a first connection line, and C1E can be a second connection line.
  • The pixel point P0 on the first connection line C0E can be mapped onto the second connection line C1E to obtain the mapped point P1.
  • the sub-step S112 may include the following sub-steps:
  • Sub-step S1121 calculating the relative position, on the first connection line, of a pixel point located on the first connection line in the feature area;
  • Sub-step S1122 copying the pixel point to the second connection line according to the relative position.
  • the relative positions of the pixel points can be expressed in a proportional relationship.
  • For example, the ratio R of the line segment C0P0 to the line segment C0E can be used as the relative position of the pixel point P0 on the first connection line C0E.
  • The point P1 on the line segment C1E is then obtained such that the ratio of the line segment C1P1 to the second connection line C1E is also R.
  • Of course, the relative position may be expressed in other ways, such as the relative position between the line segment containing the pixel point and the first connection line, which is not limited in this embodiment of the present invention.
  • Sub-step S113 copying the pixel points on the second connection line to the same position in the warped picture
  • Once the position of a pixel point on the second connection line is determined, it can be copied to the same position in the distorted picture to perform the distortion mapping of the image.
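  • The ratio-preserving mapping from the first connection line C0E to the second connection line C1E can be sketched as follows (a minimal illustration; the coordinates used in testing are invented):

```python
import math

def map_point_to_second_line(c0, c1, e, p0):
    """Map a pixel point P0 on the first connection line C0E onto the second
    connection line C1E, preserving the relative position R = |C0P0| / |C0E|."""
    r = math.dist(c0, p0) / math.dist(c0, e)   # relative position on C0E
    return (c1[0] + r * (e[0] - c1[0]),        # P1 chosen so |C1P1| / |C1E| = R
            c1[1] + r * (e[1] - c1[1]))
```

A point halfway along C0E lands halfway along C1E, and the edge point E maps to itself, so the feature-area boundary stays fixed while the interior shifts toward the reference point.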
  • sub-step S11 may comprise the following sub-steps:
  • Sub-step S114 the pixel points outside the feature area are mapped to the same position in the warped picture.
  • If a pixel point of the static picture lies outside the convex feature area, it can be copied directly to the corresponding position in the distorted picture without distortion.
  • the embodiment of the present invention may not map the pixel points outside the feature area, and only perform the mapping based on the pixel points in the feature area, which is not limited by the embodiment of the present invention.
  • sub-step S11 may comprise the following sub-steps:
  • Sub-step S115 performing pixel point superimposition processing on the pixel points whose positions overlap in the warped picture.
  • the embodiment of the present invention can perform pixel point superposition processing.
  • For example, if the RGB color mode is applied, various colors can be obtained by varying the red (R), green (G), and blue (B) color channels of the pixel points and superimposing them on each other.
  • Of course, one pixel point may instead be selected as the pixel point of the position (such as a randomly selected pixel point, or the pixel point last copied to the position), or the pixel point of the position may be selected by other methods, which is not limited in this embodiment of the present invention.
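  • One possible superposition scheme (averaging the channels; the embodiment leaves the exact rule open, so this is only an assumption) can be sketched as:

```python
def superimpose(pixels):
    """Combine the RGB pixel points mapped to one position by averaging the
    red (R), green (G) and blue (B) channels (one possible superposition)."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))
```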
  • sub-step S11 may comprise the following sub-steps:
  • Sub-step S116 performing pixel point interpolation processing on the blank position in the warped picture.
  • When the static picture is mapped to generate distortion, in regions of the distorted picture where pixel points are sparse, some pixels may not be assigned (that is, no pixel is mapped to the position, and the pixel at that position remains in its original state, such as white), which produces blank positions.
  • the embodiment of the present invention can perform pixel point interpolation processing to complete the distorted picture.
  • For a blank pixel point Px, the assigned pixel point Py closest to it (such as the pixel point above, below, to the left, or to the right) can be selected, and the value of the pixel point Py is assigned to the pixel point Px.
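  • A simple nearest-neighbour fill of this kind could be sketched as below (a hedged illustration: the repeated-pass propagation strategy is an assumption, not specified by the embodiment):

```python
def fill_blanks(img):
    """Fill blank positions (None) in a warped picture: each blank pixel
    takes the value of an adjacent assigned pixel (up, down, left, right);
    repeated passes let the fill propagate across larger gaps."""
    h, w = len(img), len(img[0])
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if img[y][x] is not None:
                    continue
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx] is not None:
                        img[y][x] = img[ny][nx]  # copy a neighbour's value
                        changed = True
                        break
    return img
```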
  • In a specific implementation, a distorted picture can be mapped according to the position of the reference point (C1 shown in FIG. 4A); as the reference point moves to different positions, the distorted picture also changes, and the distorted pictures can be played continuously to form a dynamic effect.
  • As shown in FIG. 5A, if the reference point is on the left side of the feature point (C0 as shown in FIG. 4A), the whole of the feature image can be twisted to the left side; as shown in FIG. 5B, if the reference point is on the right side of the feature point, the whole of the feature image can be distorted to the right side.
  • The reference point may start from the position of the feature point, be distributed on both sides of the feature point, and finally coincide with the feature point; the distorted pictures mapped from the feature area are twisted back and forth along the direction corresponding to the specified interaction event, generating a dithering effect and eventually coming to rest.
  • For example, the reference point C1 oscillates along the X-axis direction passing through the center-of-gravity point C0; each time the reference point C1 moves to a position, a distorted picture is generated, and playing the distorted pictures frame by frame can produce the dynamic effect in which the images in the feature areas shown in FIGS. 5A and 5B vibrate left and right.
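  • A track of reference-point positions of this kind could be sketched as follows (the amplitude, cycle count, decay factor, and frame rate are invented for illustration):

```python
import math

def reference_point_track(c0, amplitude=20.0, cycles=3,
                          frames_per_cycle=20, decay=0.8):
    """Reference-point positions oscillating along the X axis through the
    center-of-gravity point C0, with the amplitude shrinking each cycle so
    that C1 finally comes to rest on C0. One distorted picture would be
    mapped per position and the pictures played frame by frame."""
    track = []
    for i in range(cycles * frames_per_cycle):
        a = amplitude * decay ** (i / frames_per_cycle)   # decaying envelope
        phase = 2 * math.pi * i / frames_per_cycle
        track.append((c0[0] + a * math.sin(phase), c0[1]))
    track.append(c0)          # finally coincides with the feature point
    return track
```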
  • The embodiment of the present invention maps pixel points of a static picture into one or more frames of distorted pictures based on one or more reference points to generate a dynamic effect; the computation is simple, does not depend on third-party development packages, library files, or tools, generates and renders quickly, consumes few resources, and is easy to port across platforms.
  • the dynamic information may include a feature area and a moving direction of at least a part of the pixel points in the feature area.
  • step 802 may include the following sub-steps:
  • Sub-step S21 the script object performs texture mapping on at least part of the pixel points in the feature area according to the preset mode in the moving direction to generate a dynamic effect including one or more frames of changed pictures.
  • the static picture may be mapped with the motion direction (including acceleration) as the amplitude reference of the distortion to generate a distorted picture.
  • Pixels in the feature area in the still picture can be mapped along the direction of motion, causing distortion of the still picture.
  • The greater the magnitude of the motion direction (e.g., the greater the magnitude of the acceleration, or the farther the clicked screen position is from the center of the feature area), the greater the amplitude of the distortion; the smaller the magnitude of the motion direction, the smaller the amplitude of the distortion.
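  • A minimal sketch of this proportional mapping (the gain and the cap are invented assumptions; the embodiment only states that a larger magnitude yields a larger distortion):

```python
def distortion_amplitude(magnitude, gain=0.5, max_amplitude=30.0):
    """Map the magnitude of the motion direction (shake acceleration, or the
    distance from the click position to the feature-area center) to a
    distortion amplitude, capped to keep the warp visually plausible."""
    return min(gain * abs(magnitude), max_amplitude)
```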
  • sub-step S21 may comprise the following sub-steps:
  • Sub-step S211 dividing the feature area into one or more drawing graphics
  • a graphical manner may be applied, that is, a dynamic effect may be generated based on a grid.
  • The feature area may be divided into one or more drawing graphics, which may be triangles or other-shaped meshes (i.e., drawing graphics); triangles are used because graphics drawing interfaces such as OpenGL (Open Graphics Library) natively render triangles.
  • Each drawing graphic may have multiple vertices; each drawing graphic (such as a triangle) may be represented by its vertices (such as three vertices), and each vertex may have, in addition to its corresponding two-dimensional coordinates, texture coordinates in the static picture.
  • one or more rendering graphics may be divided into a central region of the feature region to simulate a dithering effect similar to a physical water ball (a balloon filled with water).
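  • One way to divide an elliptical feature area into triangles is a fan around its center, sketched below (the segment count and ellipse parameters are illustrative assumptions):

```python
import math

def triangulate_ellipse(cx, cy, rx, ry, segments=12):
    """Divide an elliptical feature area into a fan of triangles around its
    center; each triangle is one 'drawing graphic' whose vertices would also
    carry texture coordinates into the static picture."""
    ring = [(cx + rx * math.cos(2 * math.pi * i / segments),
             cy + ry * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]
    triangles = []
    for i in range(segments):
        a, b = ring[i], ring[(i + 1) % segments]
        triangles.append(((cx, cy), a, b))   # center vertex shared by all fans
    return triangles
```

Vertices near the shared center can then be moved according to the preset mode to produce the water-ball-like jitter, while the ring vertices stay fixed on the feature-area boundary.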
  • Sub-step S212 in the moving direction, moving the vertices of each of the drawn images at one or more time points according to the preset mode;
  • In a specific implementation, the vertices of each drawn graphic may be moved back and forth, in the same direction as and opposite to the moving direction, according to the preset mode.
  • the preset mode includes a simple harmonic motion mode and/or a damped vibration mode.
  • the sub-step S212 may include the following sub-steps:
  • Sub-step S2121 in the moving direction, moving the vertices of each of the drawn images at one or more time points according to the simple harmonic motion mode and/or the damped vibration mode.
  • Simple harmonic motion (SHM) can refer to the motion of an object (such as the vertices of each drawn graphic) in which the force on the object is proportional to its displacement and always points toward the equilibrium position.
  • Damped vibration can refer to vibration under the action of resistance; when the resistance is negligible, it reduces to simple harmonic motion. During damped vibration the resistance gradually reduces the amplitude and dissipates the energy until the vibration stops.
  • the sub-step S2121 may include the following sub-steps:
  • Sub-step S21211 determining an acceleration of a vertex of each of the drawn images
  • The previously determined acceleration or a preset acceleration may be used as the initial acceleration of the vertices of each of the drawn graphics.
  • Sub-step S21212 calculating a moving distance of moving the vertices of each of the drawn images along the moving direction in one or more time points according to the acceleration and/or the preset damping coefficient;
  • The force applied to the vertices of each drawn graphic can be simulated according to the acceleration, with the force directed toward the equilibrium position; a spring model is constructed to simulate the simple harmonic motion of the vertices along the moving direction.
  • A damping force can further be used to simulate resistance, so that the vertices of each drawn graphic undergo damped vibration along the moving direction.
  • the moving distance can be calculated by one or more time points by a kinematic formula.
  • Sub-step S21213 the target coordinates of the vertices of each of the drawn images are calculated from the original coordinates and the moving distance.
  • The vertices of each drawn graphic have original coordinates, that is, their original two-dimensional coordinates in the static picture; adding the moving distance along the moving direction yields the position of each vertex after the movement, i.e., the target coordinates.
  • the graphics rendering interface may adopt OpenGL, which may provide texture mapping, that is, a process of mapping texture pixels in the texture space to pixels in the screen space.
  • For example, the drawn graphic is a triangle containing one or more pixels, whose vertices have texture coordinates in the texture space: the texture coordinates of vertex a are (0.2, 0.8), those of vertex b are (0.4, 0.2), and those of vertex c are (0.8, 0.4). Moving the vertices deforms the drawn graphic; after OpenGL maps the texture into the resulting object space and renders it, the drawn graphic exhibits effects such as stretching and compression, and the feature area appears to move.
  • As shown in FIG. 5A, if the motion direction of at least part of the pixel points in the feature area points to the left side of the feature area, the whole of the feature image may be distorted to the left side; as shown in FIG. 5B, if the motion direction points to the right side of the feature area, the whole of the feature image may be distorted to the right side.
  • the vertices of the drawn graphics near the central area of the feature area of the still picture can simulate the simple harmonic motion of the spring, so that the picture is regularly pulled up, producing a jitter effect similar to the elastic water ball.
  • the moving direction of at least part of the pixel points in the feature area may be on both sides of the feature area (such as the left side and the right side, the upper side and the lower side), and the feature area mapping is performed according to the simple harmonic motion mode and/or the damped vibration mode.
  • the distorted picture can be twisted back and forth, producing a dithering effect and eventually resting.
  • For example, when the moving direction of at least part of the pixel points in the feature area slides back and forth left and right, those pixel points oscillate along the horizontal axis; each time the vertices of each drawn graphic in the feature area move to a position, a distorted picture is generated, and playing the distorted pictures frame by frame can produce the dynamic effect in which the image in the feature area shown in FIGS. 5A and 5B vibrates left and right.
  • The embodiment of the present invention performs texture mapping on at least part of the pixel points in the feature region based on the motion direction, generating a dynamic effect comprising one or more frames of distorted pictures.
  • On the one hand, the dynamic effect is generated only for the feature region, which reduces the volume of the dynamic effect and the bandwidth occupied during transmission, facilitating transmission.
  • On the other hand, the time consumed in generating dynamic effects is reduced, so dynamic effects can be generated quickly and conveniently for pictures in network pictures or system albums.
  • Referring to FIG. 9, a structural block diagram of a static-picture-based dynamic interaction device according to an embodiment of the present invention is shown, which may specifically include the following modules:
  • the selecting module 901 is adapted to select a feature area in the static picture
  • the determining module 902 is adapted to determine the mapping reference object according to the specified interaction operation event when the specified interaction operation event is monitored;
  • the mapping module 903 is adapted to map at least part of the pixel points in the feature area into one or more frame distortion pictures according to the mapping reference object, to drive the static picture to change frame by frame.
  • the specified interaction operation event may include a shaking event
  • the mapping reference object may include one or more reference points
  • the determining module 902 can also be adapted to:
  • the specified interaction operation event may include a screen click event
  • the mapping reference object may include one or more reference points
  • the determining module 902 can also be adapted to:
  • the feature area may have a feature point; the mapping module 903 may also be adapted to:
  • Pixel points of the still picture are mapped into one or more frames of distorted pictures according to the feature points and the one or more reference points.
  • the feature area may include a convex area
  • the feature point may include a center of gravity point
  • mapping module 903 is further adapted to:
  • the first line is a line connecting the feature point and the edge point
  • the second line is a line connecting the current reference point and the edge point
  • the edge point is on the edge of the feature area.
  • mapping module 903 is further adapted to:
  • the pixel points are copied to the second connection line according to the relative position.
  • mapping module 903 is further adapted to:
  • Pixels outside the feature area are mapped to the same location in the warped picture.
  • mapping module 903 is further adapted to:
  • Pixel point superimposition processing is performed on pixels that overlap in position in the warped picture.
  • mapping module 903 is further adapted to:
  • the specified interaction operation event may include a shaking event
  • the mapping reference object may include a moving direction of at least part of the pixel points in the feature region
  • the shaking direction of the shaking event is set to be the moving direction of at least a part of the pixel points in the feature area.
  • the specified interaction operation event may include a screen click event
  • the mapping reference object may include a moving direction of at least a part of the pixel points in the feature area
  • the direction pointing to the occurrence of the screen click event is set to the direction of motion of at least a portion of the pixels in the feature area.
  • mapping module 903 is further adapted to:
  • At least part of the pixel points in the feature area are texture mapped according to a preset mode, and a dynamic effect including a one-frame or multi-frame warped picture change is generated.
  • mapping module 903 is further adapted to:
  • each drawing graphic has a plurality of vertices, each vertex having texture coordinates;
  • the graph drawing interface is used to texture map the pixels in the drawing graph according to the texture coordinates of each vertex, and generate a dynamic effect including one or more frames of distorted picture changes.
  • the preset mode may include a simple harmonic motion mode and/or a damped vibration mode; the mapping module 903 may also be adapted to:
  • the vertices of each rendered image are moved at one or more points in time in accordance with a simple harmonic motion pattern and/or a damped vibration pattern.
  • the mapping module 903 is further adapted to:
  • the target coordinates of the vertices of each of the drawn images are calculated from the original coordinates and the moving distance.
  • the device may further comprise the following modules:
  • the first generating module is adapted to generate a dynamic picture by using the static picture and the one or more frames of distorted pictures.
  • the device may further comprise the following modules:
  • a second generating module configured to generate dynamic information based on the feature area
  • the writing module is adapted to write the dynamic information and the script object into the static picture to generate a dynamic interaction file.
  • the second generating module may further be adapted to:
  • Dynamic information is generated using the feature area, the feature point, and the one or more reference points.
  • the second generating module may further be adapted to:
  • Dynamic information is generated using the feature area and a direction of motion of at least a portion of the pixel points in the feature area.
  • Referring to FIG. 10, a structural block diagram of an apparatus for generating a dynamic effect based on a static picture according to an embodiment of the present invention is shown. Specifically, the following modules may be included:
  • the selecting module 1001 is adapted to select a feature area in the static picture; the feature area has a feature point;
  • a determining module 1002 adapted to determine one or more reference points in the feature area
  • the mapping module 1003 is adapted to map pixel points of the static picture into one or more frame distortion pictures according to the feature point and the one or more reference points;
  • the generating module 1004 is adapted to generate a dynamic effect based on the one or more frames of distorted pictures.
  • the feature area may include a convex area
  • the feature point may include a center of gravity point
  • the mapping module 1003 is further adapted to:
  • the first line is a line connecting the feature point and the edge point
  • the second line is a line connecting the current reference point and the edge point
  • the edge point is on the edge of the feature area.
  • the mapping module 1003 is further adapted to:
  • the pixel points are copied to the second connection line according to the relative position.
  • the mapping module 1003 is further adapted to:
  • Pixels outside the feature area are mapped to the same location in the warped picture.
  • the mapping module 1003 is further adapted to:
  • Pixel point superimposition processing is performed on pixels that overlap in position in the warped picture.
  • the mapping module 1003 is further adapted to:
  • the generating module 1004 is further adapted to:
  • the dynamic information and script objects are written into the static picture to generate a dynamic interaction file.
  • the generating module 1004 is further adapted to:
  • Dynamic information is generated using the feature area, the feature point, and the one or more reference points.
  • Referring to FIG. 11, a structural block diagram of an apparatus for generating a picture dynamic effect based on an interaction operation according to an embodiment of the present invention is shown. Specifically, the following modules may be included:
  • the selecting module 1101 is adapted to select a feature area in the static picture
  • the determining module 1102 is adapted to, when the specified interaction operation event is monitored, determine a direction of movement of at least a portion of the pixel points in the feature area according to the specified interaction operation event;
  • the mapping module 1103 is configured to perform texture mapping on at least part of the pixel points in the feature area according to the preset mode in the moving direction to generate a dynamic effect including a one-frame or multi-frame warped picture change.
  • the specified interaction event may include a shaking event
  • the determining module 1102 may further be configured to:
  • the shaking direction of the shaking event is set to be the moving direction of at least a part of the pixel points in the feature area.
  • the specified interaction event may include a screen click event
  • the determining module 1102 may further be configured to:
  • a position directed to the occurrence of the screen click event is set to a direction of motion of at least a portion of the pixels in the feature area.
  • mapping module 1103 is further adapted to:
  • each drawing graphic has a plurality of vertices, each vertex having texture coordinates;
  • the graphic drawing interface is used to texture map the pixel points in the drawing graphic according to the texture coordinates of each vertex, and generate a dynamic effect including one or more frame changing pictures.
  • the preset mode includes a simple harmonic motion mode and/or a damped vibration mode; the mapping module 1103 may also be adapted to:
  • the vertices of each of the rendered graphics are moved at one or more points in time in accordance with a simple harmonic motion mode and/or a damped vibration mode.
  • the mapping module 1103 is further adapted to:
  • the target coordinates of the vertices of each of the drawn figures are calculated from the original coordinates and the moving distance.
  • Referring to FIG. 12, a structural block diagram of an apparatus for generating a customizable dynamic picture according to an embodiment of the present invention is shown. Specifically, the following modules may be included:
  • the reading module 1201 is adapted to read dynamic information from a dynamic interaction file;
  • the dynamic interaction file includes a script object and a static picture;
  • the mapping module 1202 is adapted to map, by the script object, at least part of the pixels in the static picture into one or more frame distortion pictures according to the dynamic information, to drive the static picture to change frame by frame.
  • the dynamic information may include a feature area, a feature point, and one or more reference points.
  • the mapping module 1202 may further be configured to:
  • Pixel points of the still picture are mapped into one or more frames of distorted pictures by the script object according to the feature points and the one or more reference points.
  • the feature area may include a convex area
  • the feature point may include a center of gravity point
  • mapping module 1202 is further adapted to:
  • the first line is a line connecting the feature point and the edge point
  • the second line is a line connecting the current reference point and the edge point
  • the edge point is on the edge of the feature area.
  • mapping module 1202 is further adapted to:
  • the pixel points are copied to the second connection line according to the relative position.
  • mapping module 1202 is further adapted to:
  • Pixels outside the feature area are mapped to the same location in the warped picture.
  • mapping module 1202 is further adapted to:
  • Pixel point superimposition processing is performed on pixels that overlap in position in the warped picture.
  • the mapping module 1202 is further adapted to: perform pixel-point interpolation processing on blank positions in the warped picture;
  • the dynamic information may include a feature area and a movement direction of at least part of the pixel points in the feature area; the mapping module 1202 is further adapted to: divide the feature area into one or more drawing graphics;
  • each drawing graphic has a plurality of vertices, each vertex having a texture coordinate;
  • the graph drawing interface is used to texture map the pixels in the drawing graph according to the texture coordinates of each vertex, and generate a dynamic effect including one or more frames of distorted picture changes.
  • the preset mode includes a simple harmonic motion mode and/or a damped vibration mode; the mapping module 1202 may further be adapted to:
  • the vertices of each rendered image are moved at one or more points in time in accordance with a simple harmonic motion pattern and/or a damped vibration pattern.
  • mapping module 1202 is further adapted to:
  • the target coordinates of the vertices of each of the drawn images are calculated from the original coordinates and the moving distance.
  • since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
  • the various component embodiments of the present invention may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components of the static picture based dynamic interaction device in accordance with embodiments of the present invention.
  • the invention can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • Figure 13 illustrates a computing device, such as an application server, that can implement dynamic interaction based on static pictures in accordance with the present invention.
  • the server conventionally includes a processor 1310 and a computer program product or computer readable medium in the form of a memory 1320.
  • the memory 1320 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 1320 has a storage space 1330 for program code 1331 for performing any of the method steps described above.
  • the storage space 1330 for program code may include respective program codes 1331 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • Such computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 14.
  • the storage unit may have storage segments, storage space, and the like arranged similarly to the memory 1320 in the server of FIG. 13.
  • the program code can be compressed, for example, in an appropriate form.
  • the storage unit includes computer readable code 1331', i.e., code that can be read by a processor such as the processor 1310; when executed by the server, the code causes the server to perform the steps of the methods described above.
  • reference to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention;
  • appearances of the phrase "in one embodiment" in various places are not necessarily all referring to the same embodiment.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

本发明公开了一种基于静态图片的动态交互方法和装置,所述方法包括:在静态图片中选取特征区域;当监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。本发明实施例无需借助专门的应用生成动态效果,降低了技术门槛,提高了操作的简便性,此外,通过对用户的交互操作进行反馈,实现了静态图片的动态交互,丰富了动态效果的形式。

Description

一种基于静态图片的动态交互方法和装置 技术领域
本发明涉及图像处理技术领域,尤其涉及一种基于静态图片的动态交互方法、一种基于静态图片的动态交互装置、一种基于静态图片生成动态效果的方法、一种基于静态图片生成动态效果的装置、一种基于交互操作产生图片动态效果的方法、一种基于交互操作产生图片动态效果的装置、一种生成可定制动态图的方法,以及,一种生成可定制动态图的装置。
背景技术
随着计算机科技的发展,尤其是移动设备的广泛普及,基于计算机的应用广泛进入人们生活的各个方面。
互联网上的图片浏览占据了用户访问量的很大一部分,随着移动互联网的发展,用户在手机上浏览图片正日益成为趋势。但是传统的互联网图片大部分都是静态图片,缺乏动态效果。
常用的动态图片一般采用GIF格式,然而GIF图是由事先已经存在的多张图片,合成之后,根据指定好的每帧间隔时间进行播放的。图片体积大小随着播放帧数几何倍数增加外,播放形式单一。
并且GIF图需要专门的应用生成,例如,采用openGL(Open Graphics Library)接口等开发包或者工具来通过渲染生成,这种方式一方面依赖第三方开发包、库文件或者工具,另一方面对系统配置和资源有特殊要求,因此,消耗资源大、渲染生成速度慢,且不利于跨平台移植。并且一般的用户很难编辑,技术门槛高,操作困难。另一方面,因为移动设备的性能问题,生成GIF图通常十分耗时,而且GIF图的体积比较大,传输时对网络带宽要求较高。而像素渲染产生的动态效果,效率比较低,渲染时间较长,容易产生滞后。
发明内容
鉴于上述问题,提出了本发明以便提供一种克服上述问题或者至少部分地解决或减缓上述问题的一种基于静态图片的动态交互方法、一种基于静态图片的动态交互装置、一种基于静态图片生成动态效果的方法、一种基于静态图片生成动态效果的装置、一种基于交互操作产生图片动态效果的方法、一种基于交互操作产生图片动态效果的装置、一种生成可定制动态图的方法,以及,一种生成可定制动态图的装置。
根据本发明的一个方面,提供了一种基于静态图片的动态交互方法,包括:
在静态图片中选取特征区域;
当监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;
根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
根据本发明的另一方面,提供了一种基于静态图片生成动态效果的方法,包括:
在静态图片中选取特征区域;所述特征区域中具有特征点;
在所述特征区域中确定一个或多个参考点;
根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中;
基于所述一帧或多帧扭曲图片生成动态效果。
根据本发明的另一方面,提供了一种基于交互操作产生图片动态效果的方法,包括:
在静态图片中选取特征区域;
当监听到指定的交互操作事件时,根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向;
在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
根据本发明的另一方面,提供了一种生成可定制动态图的方法,包括:
从动态交互文件中读取动态信息;所述动态交互文件包括脚本对象和静态图片;
由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
根据本发明的另一方面,提供了一种基于静态图片的动态交互装置,包括:
选取模块,适于在静态图片中选取特征区域;
确定模块,适于在监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;
映射模块,适于根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
根据本发明的另一方面,提供了一种基于静态图片生成动态效果的装置,包括:
选取模块,适于在静态图片中选取特征区域;所述特征区域中具有特征点;
确定模块,适于在所述特征区域中确定一个或多个参考点;
映射模块,适于根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中;
生成模块,适于基于所述一帧或多帧扭曲图片生成动态效果。
根据本发明的另一方面,提供了一种基于交互操作产生图片动态效果的装置,包括:
选取模块,适于在静态图片中选取特征区域;
确定模块,适于在监听到指定的交互操作事件时,根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向;
映射模块,适于在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
根据本发明的另一方面,提供了一种生成可定制动态图的装置,包括:
读取模块,适于从动态交互文件中读取动态信息;所述动态交互文件包括脚本对象和静态图片;
映射模块,适于由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
根据本发明的又一个方面,提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在计算设备上运行时,导致所述计算设备执行上述的基于静态图片的动态交互方法,或者,导致所述计算设备执行上述的基于静态图片生成动态效果的方法,或者,导致所述计算设备执行上述的基于交互操作产生图片动态效果的方法,或者,导致所述计算设备执行上述的生成可定制动态图的方法。
根据本发明的再一个方面,提供了一种计算机可读介质,其中存储了上述的计算机程序。
本发明的有益效果为:
本发明实施例由监听到指定的交互操作事件时确定映射参考对象,以对静态图片的特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动静态图片逐帧变化,无需借助专门的应用生成动态效果,降低了技术门槛,提高了操作的简便性,此外,通过对用户的交互操作进行反馈,实现了静态图片的动态交互,丰富了动态效果的形式。
本发明实施例基于一个或多个参考点将静态图片的像素点映射到一帧或多帧扭曲图片中,生成动态效果,计算简单、无需依赖第三方开发包、库文件或者工具,渲染生成速度快,对资源消耗少,容易跨平台。
本发明基于运动方向将特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果,一方面,对特征区域生成动态效果,减少了动态效果的体积,减少了传输时的带宽占用,方便传输,另一方面,由于纹理映射效率很高,减少了生成动态效果的耗时,对于网络图片或者系统相册里的图片等均可以很快地产生动态效果,快速、方便地生成动态效果,实现了动态效果与用户交互行为的实时互动。
上述说明仅是本发明技术方案的概述,为了能够更清楚了解本发明的技术手段,而可依照说明书的内容予以实施,并且为了让本发明的上述和其它目的、特征和优点能够更明显易懂,以下特举本发明的具体实施方式。
附图说明
通过阅读下文优选实施方式的详细描述,各种其他的优点和益处对于本领域普通技术人员将变得清楚明了。附图仅用于示出优选实施方式的目的,而并不认为是对本发明的限制。而且在整个附图中,用相同的参考符号表示相同的部件。在附图中:
图1示意性示出了根据本发明一个实施例的一种基于静态图片的动态交互方法实施例的步骤流程图;
图2示意性示出了根据本发明一个实施例的一种静态图片的示例图;
图3示意性示出了根据本发明一个实施例的一种在静态图片中选取特征区域的示例图;
图4A和图4B示意性示出了根据本发明一个实施例的一种像素点的映射示例图;
图5A和图5B示意性示出了根据本发明一个实施例的一种扭曲图像的示例图;以及
图6示意性示出了根据本发明一个实施例的一种基于静态图片生成动态效果的方法实施例的步骤流程图;
图7示意性示出了根据本发明一个实施例的一种基于交互操作产生图片动态效果的方法实施例的步骤流程图;
图8示意性示出了根据本发明一个实施例的一种生成可定制动态图的方法实施例的步骤流程图;
图9示意性示出了根据本发明一个实施例的一种基于静态图片的动态交互装置实施例的结构框图;
图10示意性示出了根据本发明一个实施例的一种基于静态图片生成动态效果的装置实施例的结构框图;
图11示意性示出了根据本发明一个实施例的一种基于交互操作产生图片动态效果的装置实施例的结构框图;
图12示意性示出了根据本发明一个实施例的一种基于静态图片的动态交互装置实施例的结构框图;
图13示意性地示出了用于执行根据本发明的方法的计算设备的框图;以及
图14示意性地示出了用于保持或者携带实现根据本发明的方法的程序代码的存储单元。
具体实施例
下面结合附图和具体的实施方式对本发明作进一步的描述。
参照图1,示出了根据本发明一个实施例的一种基于静态图片的动态交互方法实施例的步骤流程图,具体可以包括如下步骤:
步骤101,在静态图片中选取特征区域;
需要说明的是,本发明实施例可以应用于移动设备中,例如,手机、PDA(Personal Digital Assistant,个人数字助理)、膝上型计算机、掌上电脑等等,也可以应用于固定设备中,例如,个人电脑(Personal Computer,PC)、笔记本电脑等等,本发明实施例对此不加以限制。
这些移动设备或固定设备一般可以支持包括Android(安卓)、IOS、WindowsPhone或者windows等的操作系统,通常可以存储静态图片。
静态图片,可以是相对于动态图片而言的,即不具有动态效果的图片。
该静态图片可以包括JPG、JPEG、PNG、BMP等格式,本发明实施例对此不加以限制。
在本发明实施例中,可以在静态图片中选取某一个区域作为特征区域,该特征区域可以为多边形、圆形、椭圆形等形状,针对该特征区域中的图像数据进行动态效果的生成。
例如,对于如图2所示的静态图片,可以提供一个如图3所示的椭圆形选择框,用户可以改变该椭圆形的选择框的形状,并选择其对于静态图片的位置,该位置可以确定为特征区域。
步骤102,当监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;
在具体实现中,该交互操作事件可以为由用户进行交互操作所引起的事件。
映射参考对象可以为针对特征区域中的像素点进行映射时,作为映射位置参考的对象。
通过该交互操作事件的触发,可以使得特征区域进行类似于物理水球的抖动效果(近似装满水的气球),并且其抖动方向和方式是根据用户不同的交互操作而变化的,如手机摇动的方向、屏幕点击的位置等。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括摇晃事件,所述映射参考对象可以包括一个或多个参考点;
则在本发明实施例中,步骤102可以包括如下子步骤:
子步骤S11,按照摇晃事件的摇晃方向,在静态图片的特征区域中选取一个或多个参考点。
在具体实现中,可以在监听到指定的交互操作事件时,根据指定的交互操作事件确定一个或多个参考点。
在本发明实施例中,用户可以通过摇晃进行交互操作。
具体而言,可以从操作系统(如Android)的传感器事件接口,监听加速度传感器(如三轴加速度传感器)事件。
收到加速度传感器变化事件后,分别获取设备在水平、垂直以及空间垂直三个方向的加速度,计算各方向加速度的平方和,并获取其平方根,作为设备移动的综合加速度。
若综合加速度大于设定的加速度阈值,则可以判断监听到摇晃事件,认定用户的摇晃操作进行交互。
在摇晃方向上,可以与摇晃方向相同,也可以与摇晃方向相反,在静态图片的特征区域中选取一个或多个连续分布的参考点。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括屏幕点击事件,所述映射参考对象可以包括一个或多个参考点;
则在本发明实施例中,步骤102可以包括如下子步骤:
子步骤S12,按照指向发生屏幕点击事件的方向,在静态图片的特征区域中选取一个或多个参考点。
在本发明实施例中,用户可以通过点击屏幕(如特征区域)进行交互操作。
若监听到屏幕点击事件,则可以按照指向发生屏幕点击事件的方向,如特征区域的中心点/重心点指向发生屏幕点击事件的方向,可以与该方向相同,也可以与该方向相反,在静态图片的特征区域中选取一个或多个连续分布的参考点。
当然,上述参考点的确定方式只是作为示例,在实施本发明实施例时,可以根据实际情况设置其他参考点的确定方式,例如,直接指定参考点的位置,本发明实施例对此不加以限制。另外,除了上述参考点的确定方式外,本领域技术人员还可以根据实际需要采用其它参考点的确定方式,本发明实施例对此也不加以限制。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括摇晃事件,所述映射参考对象可以包括所述特征区域中的至少部分像素点的运动方向;
则在本发明实施例中,步骤102可以包括如下子步骤:
子步骤S13,设置摇晃事件的摇晃方向为所述特征区域中的至少部分像素点的运动方向。
为了产生动态效果,可以移动绘制图形的顶点的位置,顶点的位置的移动可以取决于用户的交互操作。
例如,当用户手摇晃手机时,则绘制图形的顶点可以会往摇晃的方向移动,其形状也会随之改变。
又例如,当用户手触摸屏幕的某个位置,则绘制图形的顶点可以会往用户的触摸点移动,其形状也会随之改变。
通过该交互操作事件的触发,可以使得特征区域进行类似于物理水球的抖动效果(近似装满水的气球),并且其抖动方向和方式是根据用户不同的交互操作而变化的,如手机摇动的方向、屏幕点击的位置等。
在本发明实施例中,用户可以通过摇晃进行交互操作。
在摇晃方向上,可以与摇晃方向相同,也可以与摇晃方向相反,作为静态图片的特征区域中的至少部分像素点的运动方向。
需要说明的是,运动方向可以包括加速度。
在本发明的一种可选实施例中,所述指定的操作事件可以包括屏幕点击事件,所述映射参考对象可以包括所述特征区域中的至少部分像素点的运动方向;
则在本发明实施例中,步骤102可以包括如下子步骤:
子步骤S14,设置指向发生屏幕点击事件的方向为所述特征区域中的至少部分像素点的运动方向。
在本发明实施例中,用户可以通过点击屏幕(如特征区域)进行交互操作。
若监听到屏幕点击事件,则可以按照指向发生屏幕点击事件的方向,如特征区域的中心点/重心点指向发生屏幕点击事件的方向,可以与该方向相同,也可以与该方向相反,作为静态图片的特征区域中的至少部分像素点的运动方向。
当然,上述运动方向的确定方式只是作为示例,在实施本发明实施例时,可以根据实际情况设置其他运动方向的确定方式,本发明实施例对此不加以限制。另外,除了上述运动方向的确定方式外,本领域技术人员还可以根据实际需要采用其它运动方向的确定方式,本发明实施例对此也不加以限制。
步骤103,根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
在具体实现中,可以映射参考对象作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
当映射参考对象的幅度越大,扭曲的幅度可以越大,当映射参考对象的幅度越小,扭曲的幅度可以越小。
本发明实施例由监听到指定的交互操作事件时确定映射参考对象,以对静态图片的特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动静态图片逐帧变化,无需借助专门的应用生成动态效果,降低了技术门槛,提高了操作的简便性,此外,通过对用户的交互操作进行反馈,实现了静态图片的动态交互,丰富了动态效果的形式。
在本发明的一种可选实施例中,步骤103可以包括如下子步骤:
子步骤S21,根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中。
在本发明实施例中,所述特征区域中可以具有特征点,用以进行动态效果的生成。
在本发明实施例中,可以以特征点为基准,以参考点作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
静态图片中的特征区域中的像素点可以沿特征点指向参考点的方向映射,造成静态图片的扭曲。当参考点偏离特征点越大,扭曲的幅度可以越大,当参考点偏离特征点越小,扭曲的幅度可以越小,特别地,当特征点与参考点重合时,扭曲图片中可以不产生扭曲。
在本发明实施例的一种可选示例中,所述特征区域可以包括凸区域,所述特征点可以包括重心点。
凸区域,从几何上看可以是指图形是往外凸的,没有凹进去的地方。
代数上可以这样定义凸区域:集合中任取两个点a、b,有t*a+(1-t)*b仍属于这个集合,其中0<t<1,这个表达式的意思可以是连接两个点a b的直线段还在集合中。
几何上的重心,又称为几何中心,当物体为均质(密度为定值),质心等同于形心,如,三角形三条中线的交点。
在本发明的一种可选实施例中,子步骤S21可以包括如下子步骤:
子步骤S211,生成扭曲图片;
在本发明实施例中,扭曲图片的初始状态可以是空白的。
子步骤S212,将在所述特征区域中第一连线上的像素点映射到第二连线上;
其中,所述第一连线可以为所述特征点与边缘点之间的连线,所述第二连线可以为当前参考点与边缘点的连线,所述边缘点可以为所述特征区域边缘上的坐标点。
本发明实施例中,对于在特征区域中的像素点,可以按照参考点进行映射。
需要说明的是,若特征区域为凸区域,参考点为特征区域内的像素点,则映射后的像素点可以位于特征区域内,实现特征区域内的像素点在特征区域内进行映射,而不会映射到特征区域外。
例如,如图4A所示,C0为特征点(如重心点),C1可以为参考点,E可以为边缘点,则C0E可以为第一连线,C1E可以为第二连线,本发明实施例中,可以将第一连线C0E上的像素点P0,映射到第二连线C1E上,获得映射到的点P1。
在本发明的一种可选实施例中,子步骤S212可以包括如下子步骤:
子步骤S2121,计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
子步骤S2122,按照所述相对位置,将所述像素点拷贝到第二连线上。
在实际应用中,可以以比例关系表达像素点的相对位置。
例如,在一个示例中,如图4A所示,线段C0P0与线段C0E的比值,作为比例R,可以作为像素点P0在第一连线C0E上的相对位置。根据比例R,求得线段C1E上的点P1,使得线段C1P1与第二连线C1E的比值为R。
当然,在本发明实施例还可以采用其他包含像素点的线段与第一连线之间比例表达相对位置,本发明实施例对此不加以限制。
子步骤S213,将所述第二连线上的像素点拷贝到在所述扭曲图片中的相同位置;
若确定了第二连线上像素点的位置,则可以将其拷贝到扭曲图片中的相同位置,进行图像的扭曲映射。
在本发明的一种可选实施例中,子步骤S21还可以包括如下子步骤:
子步骤S214,在所述特征区域外的像素点映射到在所述扭曲图片中的相同位置。
本发明实施例中,如果静态图片上像素点在凸特征区域外,则可以直接拷贝到扭曲图片上对应的相同位置,不产生扭曲。
当然,本发明实施例也可以不对特征区域外的像素点进行映射,仅基于特征区域中的像素点进行映射,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,子步骤S21还可以包括如下子步骤:
子步骤S215,对扭曲图片中位置重叠的像素点进行像素点叠加处理。
在本发明实施例中,由于静态图片进行映射产生扭曲,在扭曲图片中像素点较为集中的区域,可能会产生像素点的位置重叠的情形。
针对像素点的位置重叠的情形,本发明实施例可以进行像素点叠加处理。
例如,若应用RGB色彩模式,则可以通过对像素点的红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的。
当然,为了进一步减少计算量,也可以从叠加的像素点中选取一个像素点(如随机选取像素点、选取最后拷贝到该位置的像素点)作为该位置的像素点,还可以采用其他方式选取该位置的像素点,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,子步骤S21还可以包括如下子步骤:
子步骤S216,对扭曲图片中的空白位置进行像素点插值处理。
在本发明实施例中,由于静态图片进行映射产生扭曲,在扭曲图片中像素点较为稀疏的区域,可能会由某些像素点没有被赋值(即没有像素点映射到该位置,该位置的像素点为原始状态,如白色),产生空白位置的情形。
针对扭曲图片中的空白位置的情形,本发明实施例可以进行像素点插值处理,以补全扭曲图片。
例如,对于没有被赋值的像素点Px,选取距离它最近的已经赋值的像素点Py(如上方的像素点、下方的像素点、左侧的像素点、右侧的像素点等),将像素点Py的值赋给像素点Px。
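上述就近取已赋值像素进行插值的处理,可以用如下一维简化代码示意(假设仅向左侧查找最近的已赋值像素,blank表示未赋值状态,均为说明用途的假设,并非限定实现):

```c
/* 示意:对一行像素中未被赋值的空白位置,用左侧最近的已赋值像素值填充。
   row 为一行像素值,width 为行宽,blank 为表示"未赋值"的哨兵值。 */
void fill_blanks(int *row, int width, int blank) {
    for (int i = 1; i < width; i++) {
        if (row[i] == blank && row[i - 1] != blank) {
            row[i] = row[i - 1];  /* 取左侧最近的已赋值像素的值 */
        }
    }
}
```

实际实现中也可以向上、向下或向右查找最近的已赋值像素,原理相同。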
在实际应用中,当参考点(如图4A所示的C1)移动到某一个位置的时候,可以根据参考点(如图4A所示的C1)的位置映射出一张扭曲图片,随着参考点(如图4A所示的C1)移动到不同的位置,扭曲图片也发生变化,扭曲图片可以连续播放,从而形成动态效果。
如图5A所示,若参考点(如图4A所示的C1)在特征点(如图4A所示的C0)的左侧,则特征图像中整体可以往左侧扭曲;如图5B所示,若参考点在特征点(如图4A所示的C0)的右侧,则特征图像中整体可以往右侧扭曲。
进一步地,若参考点的位置根据指定的交互操作事件确定,则参考点可以从特征点的位置出发,沿指定的交互操作事件对应的方向(如摇晃事件的摇晃方向、指向发生屏幕点击事件的方向)在特征点两侧分布,最终与特征点重合,则特征区域映射出的扭曲图片可以沿指定的交互操作事件对应的方向来回扭曲,产生抖动效果,并最终静止。
例如,如图4A所示,参考点C1沿着经过重心点C0的X轴方向上做震荡移动,参考点C1每移动到一个位置就会生成一个扭曲图片,扭曲图片逐帧播放,可以产生如图5A和图5B所示的特征区域内的图像表现出左右震动的动态效果。
本发明实施例基于一个或多个参考点将静态图片的像素点映射到一帧或多帧扭曲图片中,生成动态效果,计算简单、无需依赖第三方开发包、库文件或者工具,渲染生成速度快,对资源消耗少,容易跨平台。
在本发明的一种可选实施例中,步骤103可以包括如下子步骤:
子步骤S31,在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在本发明实施例中,可以以运动方向(包括加速度)作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
静态图片中的特征区域中的像素点可以沿运动方向映射,造成静态图片的扭曲。
当运动方向的幅度越大(如加速度的幅度越大、点击屏幕的位置离特征区域的中心越远),扭曲的幅度可以越大,当运动方向的幅度越小(如加速度的幅度越小、点击屏幕的位置离特征区域的中心越近),扭曲的幅度可以越小。
在本发明的一种可选实施例中,子步骤S31可以包括如下子步骤:
子步骤S311,将所述特征区域划分一个或多个绘制图形;
在本发明实施例中,可以应用图形学的方式,即基于网格生成动态效果。
具体而言,可以将特征区域划分为一个或多个绘制图形,可选为三角形或其他形状的网格(即绘制图形),采用三角形是因为图形绘制接口,如OpenGL(Open Graphics Library),对于三角形的图形渲染有高效率的加速算法。
每个绘制图形中可以具有多个顶点,每个绘制图形(如三角形)可以由顶点(如三个顶点)表示,除了对应的二维坐标外,每个顶点可以具有静态图片的纹理坐标。
在一个实施例中,可以对特征区域的中心区域划分一个或多个绘制图形,以模拟类似于物理水球的抖动效果(近似装满水的气球)。
子步骤S312,在所述运动方向上,按照预设模式在一个或多个时间点移动每个绘制图像的顶点;
在本发明实施例中,可以按照预设模式,沿该运动方向的相同方向、相反方向移动每个绘制图形的顶点。
在本发明的一种可选实施例中,所述预设模式可以包括简谐运动模式和/或阻尼振动模式;则在本发明实施例中,子步骤S312可以包括如下子步骤:
子步骤S3121,在所述运动方向上,按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图像的顶点。
简谐运动,或简谐振动、谐振、SHM(Simple Harmonic Motion),可以指某物体(如每个绘制图形的顶点)进行简谐运动时,物体(如每个绘制图形的顶点)所受的力跟位移成正比,并且力总是指向平衡位置。
阻尼振动可以指在阻力作用下的震动,当阻力大小可以忽略时,可以说是简谐运动。振动过程中受到阻力的作用,振幅逐渐减小,能量逐渐损失,直至振动停止。
在本发明实施例的一种可选示例中,子步骤S3121可以包括如下子步骤:
子步骤S31211,确定每个绘制图像的顶点的加速度;
若交互操作事件为摇晃事件,则可以从该摇晃事件中提取摇晃时初始的加速度,作为每个绘制图形的顶点的加速度,摇晃的幅度越大,每个绘制图形的顶点的加速度也越大。
若交互操作事件为屏幕点击事件,则可以提取预设的加速度作为每个绘制图形的顶点初始的加速度。
子步骤S31212,按照所述加速度和/或预设的阻尼系数,计算在一个或多个时间点内沿所述运动方向移动每个绘制图像的顶点的移动距离;
对于简谐运动,可以按照该加速度模拟每个绘制图形的顶点所受的力,该力指向平衡位置,构建弹簧模型,模拟每个绘制图形的顶点沿运动方向进行简谐运动。
对于阻尼振动,可以按照该阻尼系数模拟每个绘制图形的顶点所受的阻力,模拟每个绘制图形的顶点沿运动方向进行阻尼振动。
基于加速度、阻尼系数、运动方向可以通过运动学公式计算出在一个或多个时间点内移动距离。
子步骤S31213,由所述原始坐标和所述移动距离计算每个绘制图像的顶点的目标坐标。
在本发明实施例中,每个绘制图形的顶点可以具有原始坐标,即在静态图片中的原始的二维坐标,沿运动方向,添加上该移动距离,则可以获得移动后的每个绘制图形的顶点的位置(即目标坐标)。
子步骤S313,针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在具体实现中,图形绘制接口可以采用OpenGL,其可以提供纹理映射(Texture Mapping),即是将纹理空间中的纹理像素映射到屏幕空间中的像素的过程。
通常,使用纹理映射的步骤可以如下:
第一步:定义纹理对象
const int TexNumber = 4;
GLuint m_Texture[TexNumber];//定义纹理对象数组
第二步:生成纹理对象数组
glGenTextures(TexNumber, m_Texture);
第三步:通过使用glBindTexture选择纹理对象,来完成该纹理对象的定义。
glBindTexture(GL_TEXTURE_2D, m_Texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, m_Texmap1.GetWidth(), m_Texmap1.GetHeight(), 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_Texmap1.GetDibBitsPtr());
第四步:在绘制景物之前通过glBindTexture,为该景物加载相应的纹理。
glBindTexture(GL_TEXTURE_2D, m_Texture[0]);
第五步:在程序结束之前调用glDeleteTextures删除纹理对象。
glDeleteTextures(TexNumber, m_Texture);
在一个示例中,如图4B所示,绘制图形为三角形,其包括一个或多个像素点,其中,顶点在纹理空间中具有纹理坐标,顶点a的纹理坐标为(0.2,0.8),顶点b的纹理坐标为(0.4,0.2),顶点c的纹理坐标为(0.8,0.4),将该绘制图形的顶点进行移动,使得绘制图形发生变形,进行OpenGL纹理映射到对象空间,渲染出来后,绘制图形产生了拉伸、压缩等效果,特征区域就会呈现出移动的现象。
如图5A所示,若特征区域中的至少部分像素点的运动方向往特征区域的左侧,则特征图像中整体可以往左侧扭曲;如图5B所示,若特征区域中的至少部分像素点的运动方向往特征区域的右侧,则特征图像中整体可以往右侧扭曲。
通过OpenGL的纹理映射,可以将静态图片的特征区域中心区域附近的绘制图形的顶点模拟弹簧的简谐运动,使得图片被规律的进行拉升,产生类似于弹力水球的抖动效果。
进一步地,若运动方向根据指定的交互操作事件确定,则特征区域中的至少部分像素点的运动方向可以沿指定的交互操作事件对应的方向(如摇晃事件的摇晃方向、指向发生屏幕点击事件的方向)在特征区域的两侧(如左侧和右侧、上方和下方),则按照简谐运动模式和/或阻尼振动模式,特征区域映射出的扭曲图片可以沿指定的交互操作事件对应的方向来回扭曲,产生抖动效果,并最终静止。
其中,根据传感器可以判断手机摇动的方向,静态图片中特征区域可以会沿着摇动方向运动,当设备左右上下剧烈摇动时,特征区域可以绕着中心旋转以模拟猛烈摇动的动态效果。
通过判断手指点击屏幕的位置,特征区域中心可以沿着中心位置和点击位置的方向进行抖动,当手指按住抖动区域内,并在屏幕上来回滑动时,特征区域中心可以跟随手指运动的方向,产生被拖拽的效果,并通过微抖动算法,使得抖动区域产生水球被拖拽时产生的微微抖动的效果,增强其物理真实性。
例如,设备左右摇晃或者用户在特征区域中左右来回滑动,使得特征区域中的至少部分像素点沿水平轴方向上做震荡移动,特征区域中每个绘制图形的顶点在每一个时间点移动到一个位置就会生成一个扭曲图片,扭曲图片逐帧播放,可以产生如图5A和图5B所示的特征区域内的图像表现出左右震动的动态效果。
本发明基于运动方向将特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果,一方面,对特征区域生成动态效果,减少了动态效果的体积,减少了传输时的带宽占用,方便传输,另一方面,由于纹理映射效率很高,减少了生成动态效果的耗时,对于网络图片或者系统相册里的图片等均可以很快地产生动态效果,快速、方便地生成动态效果,实现了动态效果可以和用户的交互行为的实时互动。
在本发明的一种可选实施例中,所述方法还可以包括如下步骤:
步骤104,采用所述静态图片和所述一帧或多帧扭曲图像生成动态图片。
在本发明实施例中,可以保存一帧静态图片,以及,一帧或多帧包括该特征区域的扭曲图片,生成动态图片,例如GIF。
相对于传统的GIF,由于减少了扭曲图片以外的图像数据的存储,可以减少动态图片的体积大小。
在本发明的一种可选实施例中,所述方法还可以包括如下步骤:
步骤105,基于所述特征区域生成动态信息;
动态信息可以为将静态图片的特征区域映射到一帧或多帧扭曲图像的配置信息,例如XML(Extensible Markup Language,可扩展的标识语言)、json(Javascript Object Notation,数据交换语言)等。
以json设计的配置信息的示例可以如下:
(该json配置信息示例以附表形式给出:Figure PCTCN2015095933-appb-000001至PCTCN2015095933-appb-000003)
在本发明的一种可选实施例中,步骤105可以包括如下子步骤:
子步骤S41,使用所述特征区域、所述特征点和所述一个或多个参考点生成动态信息。
在本发明实施例中,可以将特征区域、特征点和一个或多个参考点生成动态信息,以支持基于参考点的动态效果的生成。
在本发明的一种可选实施例中,步骤105可以包括如下子步骤:
子步骤S42,使用所述特征区域和所述特征区域中的至少部分像素点的运动方向生成动态信息。
在本发明实施例中,可以使用特征区域和所述特征区域中的至少部分像素点的运动方向生成动态信息,以支持基于运动方向的动态效果的生成。
步骤106,将所述动态信息和脚本对象写入所述静态图片中,以生成动态交互文件。
在本发明实施例中,可以将动态信息、脚本对象(如JS脚本)写入静态图片中,可以传输到网络或者给其他用户、也可以进行存储。
在读取该脚本对象后,可以使用脚本对象按照该动态信息对该静态图片进行映射,以产生逐帧变化的动态效果。
对于方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明实施例并不受所描述的动作顺序的限制,因为依据本发明实施例,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本发明实施例所必须的。
参照图6,示出了根据本发明一个实施例的一种基于静态图片生成动态效果的方法实施例的步骤流程图,具体可以包括如下步骤:
步骤601,在静态图片中选取特征区域;
静态图片,可以是相对于动态图片而言的,即不具有动态效果的图片。
该静态图片可以包括JPG、JPEG、PNG、BMP等格式,本发明实施例对此不加以限制。
在本发明实施例中,可以在静态图片中选取某一个区域作为特征区域,该特征区域可以为多边形、圆形、椭圆型等形状,针对该特征区域中的图像数据进行动态效果的生成。
例如,对于如图2所示的静态图片,可以提供一个如图3所示的椭圆形选择框,用户可以改变该椭圆形的选择框的形状,并选择其对于静态图片的位置,该位置可以确定为特征区域。
其中,所述特征区域中可以具有特征点,用以进行动态效果的生成。
在本发明实施例的一种可选示例中,所述特征区域可以包括凸区域,所述特征点可以包括重心点。
凸区域,从几何上看可以是指图形是往外凸的,没有凹进去的地方。
代数上可以这样定义凸区域:集合中任取两个点a、b,有t*a+(1-t)*b仍属于这个集合,其中0<t<1,这个表达式的意思可以是连接两个点a b的直线段还在集合中。
几何上的重心,又称为几何中心,当物体为均质(密度为定值),质心等同于形心,如,三角形三条中线的交点。
步骤602,在所述特征区域中确定一个或多个参考点;
在具体实现中,可以在监听到指定的交互操作事件时,根据指定的交互操作事件确定一个或多个参考点。
该交互操作事件可以为由用户进行交互操作所引起的事件。
通过该交互操作事件的触发,可以使得特征区域进行类似于物理水球的抖动效果(近似装满水的气球),并且其抖动方向和方式是根据用户不同的交互操作而变化的,如手机摇动的方向、屏幕点击的位置等。
在一种情形中,该指定的交互操作事件可以包括摇晃事件,则可以按照摇晃事件的摇晃方向,在静态图片的特征区域中选取一个或多个参考点。
在此种情形中,用户可以通过摇晃进行交互操作。
具体而言,可以从操作系统(如Android)的传感器事件接口,监听加速度传感器(如三轴加速度传感器)事件。
收到加速度传感器变化事件后,分别获取设备在水平、垂直以及空间垂直三个方向的加速度,计算各方向加速度的平方和,并获取其平方根,作为设备移动的综合加速度。
若综合加速度大于设定的加速度阈值,则可以判断监听到摇晃事件,认定用户的摇晃操作进行交互。
在摇晃方向上,可以与摇晃方向相同,也可以与摇晃方向相反,在静态图片的特征区域中选取一个或多个连续分布的参考点。
在另一种情形中,该指定的交互操作事件可以包括屏幕点击事件,则按照指向发生屏幕点击事件的方向,在静态图片的特征区域中选取一个或多个参考点。
在此种情形中,用户可以通过点击屏幕(如特征区域)进行交互操作。
若监听到屏幕点击事件,则可以按照指向发生屏幕点击事件的方向,如特征区域的中心点/重心点指向发生屏幕点击事件的方向,可以与该方向相同,也可以与该方向相反,在静态图片的特征区域中选取一个或多个连续分布的参考点。
当然,上述参考点的确定方式只是作为示例,在实施本发明实施例时,可以根据实际情况设置其他参考点的确定方式,例如,直接指定参考点的位置,本发明实施例对此不加以限制。另外,除了上述参考点的确定方式外,本领域技术人员还可以根据实际需要采用其它参考点的确定方式,本发明实施例对此也不加以限制。
步骤603,根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中;
在本发明实施例中,可以以特征点为基准,以参考点作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
静态图片中的特征区域中的像素点可以沿特征点指向参考点的方向映射,造成静态图片的扭曲。当参考点偏离特征点越大,扭曲的幅度可以越大,当参考点偏离特征点越小,扭曲的幅度可以越小,特别地,当特征点与参考点重合时,扭曲图片中可以不产生扭曲。
在本发明的一种可选实施例中,步骤603可以包括如下子步骤:
子步骤S51,生成扭曲图片;
在本发明实施例中,扭曲图片的初始状态可以是空白的。
子步骤S52,将在所述特征区域中第一连线上的像素点映射到第二连线上;
其中,所述第一连线可以为所述特征点与边缘点之间的连线,所述第二连线可以为当前参考点与边缘点的连线,所述边缘点可以为所述特征区域边缘上的坐标点。
本发明实施例中,对于在特征区域中的像素点,可以按照参考点进行映射。
需要说明的是,若特征区域为凸区域,参考点为特征区域内的像素点,则映射后的像素点可以位于特征区域内,实现特征区域内的像素点在特征区域内进行映射,而不会映射到特征区域外。
例如,如图4A所示,C0为特征点(如重心点),C1可以为参考点,E可以为边缘点,则C0E可以为第一连线,C1E可以为第二连线,本发明实施例中,可以将第一连线C0E上的像素点P0,映射到第二连线C1E上,获得映射到的点P1。
在本发明实施例的一种可选示例中,子步骤S52可以包括如下子步骤:
子步骤S521,计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
子步骤S522,按照所述相对位置,将所述像素点拷贝到第二连线上。
在实际应用中,可以以比例关系表达像素点的相对位置。
例如,在一个示例中,如图4A所示,线段C0P0与线段C0E的比值,作为比例R,可以作为像素点P0在第一连线C0E上的相对位置。根据比例R,求得线段C1E上的点P1,使得线段C1P1与第二连线C1E的比值为R。
当然,在本发明实施例还可以采用其他包含像素点的线段与第一连线之间比例表达相对位置,本发明实施例对此不加以限制。
子步骤S53,将所述第二连线上的像素点拷贝到在所述扭曲图片中的相同位置;
若确定了第二连线上像素点的位置,则可以将其拷贝到扭曲图片中的相同位置,进行图像的扭曲映射。
在本发明的一种可选实施例中,步骤603还可以包括如下子步骤:
子步骤S54,在所述特征区域外的像素点映射到在所述扭曲图片中的相同位置。
本发明实施例中,如果静态图片上像素点在凸特征区域外,则可以直接拷贝到扭曲图片上对应的相同位置,不产生扭曲。
当然,本发明实施例也可以不对特征区域外的像素点进行映射,仅基于特征区域中的像素点进行映射,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,步骤603还可以包括如下子步骤:
子步骤S55,对扭曲图片中位置重叠的像素点进行像素点叠加处理。
在本发明实施例中,由于静态图片进行映射产生扭曲,在扭曲图片中像素点较为集中的区域,可能会产生像素点的位置重叠的情形。
针对像素点的位置重叠的情形,本发明实施例可以进行像素点叠加处理。
例如,若应用RGB色彩模式,则可以通过对像素点的红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的。
当然,为了进一步减少计算量,也可以从叠加的像素点中选取一个像素点(如随机选取像素点、选取最后拷贝到该位置的像素点)作为该位置的像素点,还可以采用其他方式选取该位置的像素点,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,步骤603还可以包括如下子步骤:
子步骤S56,对扭曲图片中的空白位置进行像素点插值处理。
在本发明实施例中,由于静态图片进行映射产生扭曲,在扭曲图片中像素点较为稀疏的区域,可能会由某些像素点没有被赋值(即没有像素点映射到该位置,该位置的像素点为原始状态,如白色),产生空白位置的情形。
针对扭曲图片中的空白位置的情形,本发明实施例可以进行像素点插值处理,以补全扭曲图片。
例如,对于没有被赋值的像素点Px,选取距离它最近的已经赋值的像素点Py(如上方的像素点、下方的像素点、左侧的像素点、右侧的像素点等),将像素点Py的值赋给像素点Px。
在实际应用中,当参考点(如图4A所示的C1)移动到某一个位置的时候,可以根据参考点(如图4A所示的C1)的位置映射出一张扭曲图片,随着参考点(如图4A所示的C1)移动到不同的位置,扭曲图片也发生变化,扭曲图片可以连续播放,从而形成动态效果。
如图5A所示,若参考点(如图4A所示的C1)在特征点(如图4A所示的C0)的左侧,则特征图像中整体可以往左侧扭曲;如图5B所示,若参考点在特征点(如图4A所示的C0)的右侧,则特征图像中整体可以往右侧扭曲。
进一步地,若参考点的位置根据指定的交互操作事件确定,则参考点可以从特征点的位置出发,沿指定的交互操作事件对应的方向(如摇晃事件的摇晃方向、指向发生屏幕点击事件的方向)在特征点两侧分布,最终与特征点重合,则特征区域映射出的扭曲图片可以沿指定的交互操作事件对应的方向来回扭曲,产生抖动效果,并最终静止。
例如,如图4A所示,参考点C1沿着经过重心点C0的X轴方向上做震荡移动,参考点C1每移动到一个位置就会生成一个扭曲图片,扭曲图片逐帧播放,可以产生如图5A和图5B所示的特征区域内的图像表现出左右震动的动态效果。
步骤604,基于所述一帧或多帧扭曲图片生成动态效果。
需要说明的是,该动态效果可以在当前进行生成,也可以在后进行驱动,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,步骤604可以包括如下子步骤:
子步骤S61,采用所述静态图片和所述一帧或多帧扭曲图片生成动态图片。
在本发明实施例中,可以保存一帧静态图片,以及,一帧或多帧包括该特征区域的扭曲图片,生成动态图片,例如GIF。
相对于传统的GIF,由于减少了扭曲图片以外的图像数据的存储,可以减少动态图片的体积大小。
在本发明的一种可选实施例中,步骤604可以包括如下子步骤:
子步骤S71,基于所述特征区域生成动态信息;
动态信息可以为将静态图片的特征区域映射到一帧或多帧扭曲图像的配置信息,例如XML(Extensible Markup Language,可扩展的标识语言)、json(Javascript Object Notation,数据交换语言)等。
在本发明实施例的一种可选示例中,子步骤S71可以包括如下子步骤:
子步骤S711,使用所述特征区域、所述特征点和所述一个或多个参考点生成动态信息。
在本发明实施例中,可以将特征区域、特征点和一个或多个参考点生成动态信息,以支持基于参考点的动态效果的生成。
子步骤S712,将所述动态信息和脚本对象写入所述静态图片中,以生成动态交互文件。
在本发明实施例中,可以将动态信息、脚本对象(如JS脚本)写入静态图片中,可以传输到网络或者给其他用户、也可以进行存储。
在读取该脚本对象后,可以使用脚本对象按照该动态信息对该静态图片进行映射,以产生逐帧变化的动态效果。
本发明实施例对静态图片的特征区域中,确定一个或多个参考点,以将静态图片的像素点映射到一帧或多帧扭曲图片中,生成动态效果,计算简单、无需依赖第三方开发包、库文件或者工具,渲染生成速度快,对资源消耗少,容易跨平台。
参照图7,示出了根据本发明一个实施例的一种基于交互操作产生图片动态效果的方法实施例的步骤流程图,具体可以包括如下步骤:
步骤701,在静态图片中选取特征区域;
静态图片,可以是相对于动态图片而言的,即不具有动态效果的图片。
该静态图片可以包括JPG、JPEG、PNG、BMP等格式,本发明实施例对此不加以限制。
在本发明实施例中,可以在静态图片中选取某一个区域作为特征区域,该特征区域可以为多边形、圆形、椭圆型等形状,针对该特征区域中的图像数据进行动态效果的生成。
例如,对于如图2所示的静态图片,可以提供一个如图3所示的椭圆形选择框,用户可以改变该椭圆形的选择框的形状,并选择其对于静态图片的位置,该位置可以确定为特征区域。
步骤702,当监听到指定的交互操作事件时,根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向;
在具体实现中,该交互操作事件可以为由用户进行交互操作所引起的事件。
为了产生动态效果,可以移动绘制图形的顶点的位置,顶点的位置的移动可以取决于用户的交互操作。
例如,当用户手摇晃手机时,则绘制图形的顶点可以会往摇晃的方向移动,其形状也会随之改变。
又例如,当用户手触摸屏幕的某个位置,则绘制图形的顶点可以会往用户的触摸点移动,其形状也会随之改变。
通过该交互操作事件的触发,可以使得特征区域进行类似于物理水球的抖动效果(近似装满水的气球),并且其抖动方向和方式是根据用户不同的交互操作而变化的,如手机摇动的方向、屏幕点击的位置等。
在本发明的一种可选实施例中,所述指定的交互操作事件包括摇晃事件,步骤702可以包括如下子步骤:
子步骤S81,设置摇晃事件的摇晃方向为所述特征区域中的至少部分像素点的运动方向。
在本发明实施例中,用户可以通过摇晃进行交互操作。
具体而言,可以从操作系统(如Android)的传感器事件接口,监听加速度传感器(如三轴加速度传感器)事件。
收到加速度传感器变化事件后,分别获取设备在水平、垂直以及空间垂直三个方向的加速度,计算各方向加速度的平方和,并获取其平方根,作为设备移动的综合加速度。
若综合加速度大于设定的加速度阈值,则可以判断监听到摇晃事件,认定用户的摇晃操作进行交互。
在摇晃方向上,可以与摇晃方向相同,也可以与摇晃方向相反,作为静态图片的特征区域中的至少部分像素点的运动方向。
需要说明的是,运动方向可以包括加速度。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括屏幕点击事件,步骤702可以包括如下子步骤:
子步骤S82,设置指向发生屏幕点击事件的位置为所述特征区域中的至少部分像素点的运动方向。
在本发明实施例中,用户可以通过点击屏幕(如特征区域)进行交互操作。
若监听到屏幕点击事件,则可以按照指向发生屏幕点击事件的方向,如特征区域的中心点/重心点指向发生屏幕点击事件的方向,可以与该方向相同,也可以与该方向相反,作为静态图片的特征区域中的至少部分像素点的运动方向。
当然,上述运动方向的确定方式只是作为示例,在实施本发明实施例时,可以根据实际情况设置其他运动方向的确定方式,本发明实施例对此不加以限制。另外,除了上述运动方向的确定方式外,本领域技术人员还可以根据实际需要采用其它运动方向的确定方式,本发明实施例对此也不加以限制。
步骤703,在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在本发明实施例中,可以以运动方向(包括加速度)作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
静态图片中的特征区域中的像素点可以沿运动方向映射,造成静态图片的扭曲。
当运动方向的幅度越大(如加速度的幅度越大、点击屏幕的位置离特征区域的中心越远),扭曲的幅度可以越大,当运动方向的幅度越小(如加速度的幅度越小、点击屏幕的位置离特征区域的中心越近),扭曲的幅度可以越小。
在本发明的一种可选实施例中,步骤703可以包括如下子步骤:
子步骤S91,将所述特征区域划分一个或多个绘制图形;
在本发明实施例中,可以应用图形学的方式,即基于网格生成动态效果。
具体而言,可以将特征区域划分为一个或多个绘制图形,可选为三角形或其他形状的网格(即绘制图形),采用三角形是因为图形绘制接口,如OpenGL(Open Graphics Library),对于三角形的图形渲染有高效率的加速算法。
每个绘制图形中可以具有多个顶点,每个绘制图形(如三角形)可以由顶点(如三个顶点)表示,除了对应的二维坐标外,每个顶点可以具有静态图片的纹理坐标。
在一个实施例中,可以对特征区域的中心区域划分一个或多个绘制图形,以模拟类似于物理水球的抖动效果(近似装满水的气球)。
子步骤S92,在所述运动方向上,按照预设模式在一个或多个时间点移动每个绘制图形的顶点;
在本发明实施例中,可以按照预设模式,沿该运动方向的相同方向、相反方向移动每个绘制图形的顶点。
在本发明的一种可选实施例中,所述预设模式可以包括简谐运动模式和/或阻尼振动模式,则子步骤S92可以包括如下子步骤:
子步骤S921,在所述运动方向上,按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点。
简谐运动,或简谐振动、谐振、SHM(Simple Harmonic Motion),可以指某物体(如每个绘制图形的顶点)进行简谐运动时,物体(如每个绘制图形的顶点)所受的力跟位移成正比,并且力总是指向平衡位置。
阻尼振动可以指在阻力作用下的震动,当阻力大小可以忽略时,可以说是简谐运动。振动过程中受到阻力的作用,振幅逐渐减小,能量逐渐损失,直至振动停止。
在本发明实施例的一种可选示例中,子步骤S921可以包括如下子步骤:
子步骤S9211,确定每个绘制图形的顶点的加速度;
若交互操作事件为摇晃事件,则可以从该摇晃事件中提取摇晃时初始的加速度,作为每个绘制图形的顶点的加速度,摇晃的幅度越大,每个绘制图形的顶点的加速度也越大。
若交互操作事件为屏幕点击事件,则可以提取预设的加速度作为每个绘制图形的顶点初始的加速度。
子步骤S9212,按照所述加速度和/或预设的阻尼系数,计算在一个或多个时间点内沿所述运动方向移动每个绘制图形的顶点的移动距离;
对于简谐运动,可以按照该加速度模拟每个绘制图形的顶点所受的力,该力指向平衡位置,构建弹簧模型,模拟每个绘制图形的顶点沿运动方向进行简谐运动。
对于阻尼振动,可以按照该阻尼系数模拟每个绘制图形的顶点所受的阻力,模拟每个绘制图形的顶点沿运动方向进行阻尼振动。
基于加速度、阻尼系数、运动方向可以通过运动学公式计算出在一个或多个时间点内移动距离。
子步骤S9213,由所述原始坐标和所述移动距离计算每个绘制图形的顶点的目标坐标。
在本发明实施例中,每个绘制图形的顶点可以具有原始坐标,即在静态图片中的原始的二维坐标,沿运动方向,添加上该移动距离,则可以获得移动后的每个绘制图形的顶点的位置(即目标坐标)。
子步骤S93,针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧变化图片的动态效果。
在具体实现中,图形绘制接口可以采用OpenGL,其可以提供纹理映射(Texture Mapping),即是将纹理空间中的纹理像素映射到屏幕空间中的像素的过程。
在一个示例中,如图4B所示,绘制图形为三角形,其包括一个或多个像素点,其中,顶点在纹理空间中具有纹理坐标,顶点a的纹理坐标为(0.2,0.8),顶点b的纹理坐标为(0.4,0.2),顶点c的纹理坐标为(0.8,0.4),将该绘制图形的顶点进行移动,使得绘制图形发生变形,进行OpenGL纹理映射到对象空间,渲染出来后,绘制图形产生了拉伸、压缩等效果,特征区域就会呈现出移动的现象。
如图5A所示,若特征区域中的至少部分像素点的运动方向往特征区域的左侧,则特征图像中整体可以往左侧扭曲;如图5B所示,若特征区域中的至少部分像素点的运动方向往特征区域的右侧,则特征图像中整体可以往右侧扭曲。
通过OpenGL的纹理映射,可以将静态图片的特征区域中心区域附近的绘制图形的顶点模拟弹簧的简谐运动,使得图片被规律的进行拉升,产生类似于弹力水球的抖动效果。
进一步地,若运动方向根据指定的交互操作事件确定,则特征区域中的至少部分像素点的运动方向可以沿指定的交互操作事件对应的方向(如摇晃事件的摇晃方向、指向发生屏幕点击事件的方向)在特征区域的两侧(如左侧和右侧、上方和下方),则按照简谐运动模式和/或阻尼振动模式,特征区域映射出的扭曲图片可以沿指定的交互操作事件对应的方向来回扭曲,产生抖动效果,并最终静止。
其中,根据传感器可以判断手机摇动的方向,静态图片中特征区域可以会沿着摇动方向运动,当设备左右上下剧烈摇动时,特征区域可以绕着中心旋转以模拟猛烈摇动的动态效果。
通过判断手指点击屏幕的位置,特征区域中心可以沿着中心位置和点击位置的方向进行抖动,当手指按住抖动区域内,并在屏幕上来回滑动时,特征区域中心可以跟随手指运动的方向,产生被拖拽的效果,并通过微抖动算法,使得抖动区域产生水球被拖拽时产生的微微抖动的效果,增强其物理真实性。
例如,设备左右摇晃或者用户在特征区域中左右来回滑动,使得特征区域中的至少部分像素点沿水平轴方向上做震荡移动,特征区域中每个绘制图形的顶点在每一个时间点移动到一个位置就会生成一个扭曲图片,扭曲图片逐帧播放,可以产生如图5A和图5B所示的特征区域内的图像表现出左右震动的动态效果。
本发明实施例基于监听到指定的交互操作事件,确定静态图片的特征区域中的至少部分像素点的运动方向,按照预设模式对特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果,一方面,对特征区域生成动态效果,减少了动态效果的体积,减少了传输时的带宽占用,方便传输,另一方面,由于纹理映射效率很高,减少了生成动态效果的耗时,对于网络图片或者系统相册里的图片等均可以很快的产生动态效果,快速、方便地生成动态效果,实现了动态效果可以和用户的交互行为的实时互动。
参照图8,示出了根据本发明一个实施例的一种生成可定制动态图的方法实施例的步骤流程图,具体可以包括如下步骤:
步骤801,从动态交互文件中读取动态信息;
其中,所述动态交互文件可以包括动态信息、脚本对象和静态图片;
静态图片,可以是相对于动态图片而言的,即不具有动态效果的图片。
该静态图片可以包括JPG、JPEG、PNG、BMP等格式,本发明实施例对此不加以限制。
在本发明实施例中,静态图片中某一个区域作为特征区域,该特征区域可以为多边形、圆形、椭圆型等形状,可以针对该特征区域中的图像数据进行动态效果的生成。
动态信息可以为将静态图片的特征区域映射到一帧或多帧扭曲图像的配置信息,例如XML(Extensible Markup Language,可扩展的标识语言)、json(Javascript Object Notation,数据交换语言)等。
例如,对于如图2所示的静态图片,该静态图片中的特征区域可以为如图3所示的椭圆形区域。
步骤802,由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
在读取该动态信息后,可以使用脚本对象按照该动态信息对该静态图片进行映射,以产生逐帧变化的动态效果。
本发明实施例对动态交互文件中的动态信息、脚本对象和静态图片,由所述脚本对象根据动态信息对静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动静态图片逐帧变化,通过动态信息实现了可定制的动态图,可以由用户指定动态效果,丰富了动态效果的形式,由于静态图片本身大小变化很小,只增加了如动态信息、脚本对象等体积很小(几K大小)的信息,在保证动态效果的同时,大大减少了体积大小。
在本发明的一种可选实施例中,所述动态信息可以包括特征区域、特征点、一个或多个参考点;则在本发明实施例中,步骤802可以包括如下子步骤:
子步骤S11,由所述脚本对象根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中。
在本发明实施例中,可以以特征点为基准,以参考点作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
静态图片中的特征区域中的像素点可以沿特征点指向参考点的方向映射,造成静态图片的扭曲。当参考点偏离特征点越大,扭曲的幅度可以越大,当参考点偏离特征点越小,扭曲的幅度可以越小,特别地,当特征点与参考点重合时,扭曲图片中可以不产生扭曲。
在本发明实施例的一种可选示例中,所述特征区域可以包括凸区域,所述特征点可以包括重心点。
凸区域,从几何上看可以是指图形是往外凸的,没有凹进去的地方。
代数上可以这样定义凸区域:集合中任取两个点a、b,有t*a+(1-t)*b仍属于这个集合,其中0<t<1,这个表达式的意思可以是连接两个点a b的直线段还在集合中。
几何上的重心,又称为几何中心,当物体为均质(密度为定值),质心等同于形心,如,三角形三条中线的交点。
在本发明的一种可选实施例中,子步骤S11可以包括如下子步骤:
子步骤S111,由所述脚本对象生成扭曲图片;
在本发明实施例中,扭曲图片的初始状态可以是空白的。
子步骤S112,将在所述特征区域中第一连线上的像素点映射到第二连线上;
其中,所述第一连线可以为所述特征点与边缘点之间的连线,所述第二连线可以为当前参考点与边缘点的连线,所述边缘点可以为所述特征区域边缘上的坐标点。
本发明实施例中,对于在特征区域中的像素点,可以按照参考点进行映射。
需要说明的是,若特征区域为凸区域,参考点为特征区域内的像素点,则映射后的像素点可以位于特征区域内,实现特征区域内的像素点在特征区域内进行映射,而不会映射到特征区域外。
例如,如图4A所示,C0为特征点(如重心点),C1可以为参考点,E可以为边缘点,则C0E可以为第一连线,C1E可以为第二连线,本发明实施例中,可以将第一连线C0E上的像素点P0,映射到第二连线C1E上,获得映射到的点P1。
在本发明的一种可选实施例中,子步骤S112可以包括如下子步骤:
子步骤S1121,计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
子步骤S1122,按照所述相对位置,将所述像素点拷贝到第二连线上。
在实际应用中,可以以比例关系表达像素点的相对位置。
例如,在一个示例中,如图4A所示,线段C0P0与线段C0E的比值,作为比例R,可以作为像素点P0在第一连线C0E上的相对位置。根据比例R,求得线段C1E上的点P1,使得线段C1P1与第二连线C1E的比值为R。
当然,在本发明实施例还可以采用其他包含像素点的线段与第一连线之间比例表达相对位置,本发明实施例对此不加以限制。
子步骤S113,将所述第二连线上的像素点拷贝到在所述扭曲图片中的相同位置;
若确定了第二连线上像素点的位置,则可以将其拷贝到扭曲图片中的相同位置,进行图像的扭曲映射。
在本发明的一种可选实施例中,子步骤S11可以包括如下子步骤:
子步骤S114,在所述特征区域外的像素点映射到在所述扭曲图片中的相同位置。
本发明实施例中,如果静态图片上像素点在凸特征区域外,则可以直接拷贝到扭曲图片上对应的相同位置,不产生扭曲。
当然,本发明实施例也可以不对特征区域外的像素点进行映射,仅基于特征区域中的像素点进行映射,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,子步骤S11可以包括如下子步骤:
子步骤S115,对扭曲图片中位置重叠的像素点进行像素点叠加处理。
在本发明实施例中,由于静态图片进行映射产生扭曲,在扭曲图片中像素点较为集中的区域,可能会产生像素点的位置重叠的情形。
针对像素点的位置重叠的情形,本发明实施例可以进行像素点叠加处理。
例如,若应用RGB色彩模式,则可以通过对像素点的红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的。
当然,为了进一步减少计算量,也可以从叠加的像素点中选取一个像素点(如随机选取像素点、选取最后拷贝到该位置的像素点)作为该位置的像素点,还可以采用其他方式选取该位置的像素点,本发明实施例对此不加以限制。
在本发明的一种可选实施例中,子步骤S11可以包括如下子步骤:
子步骤S116,对扭曲图片中的空白位置进行像素点插值处理。
在本发明实施例中,由于静态图片进行映射产生扭曲,在扭曲图片中像素点较为稀疏的区域,可能会由某些像素点没有被赋值(即没有像素点映射到该位置,该位置的像素点为原始状态,如白色),产生空白位置的情形。
针对扭曲图片中的空白位置的情形,本发明实施例可以进行像素点插值处理,以补全扭曲图片。
例如,对于没有被赋值的像素点Px,选取距离它最近的已经赋值的像素点Py(如上方的像素点、下方的像素点、左侧的像素点、右侧的像素点等),将像素点Py的值赋给像素点Px。
在实际应用中,当参考点(如图4A所示的C1)移动到某一个位置的时候,可以根据参考点(如图4A所示的C1)的位置映射出一张扭曲图片,随着参考点(如图4A所示的C1)移动到不同的位置,扭曲图片也发生变化,扭曲图片可以连续播放,从而形成动态效果。
如图5A所示,若参考点(如图4A所示的C1)在特征点(如图4A所示的C0)的左侧,则特征图像中整体可以往左侧扭曲;如图5B所示,若参考点在特征点(如图4A所示的C0)的右侧,则特征图像中整体可以往右侧扭曲。
进一步地,参考点可以从特征点的位置出发,在特征点两侧分布,最终与特征点重合,则特征区域映射出的扭曲图片可以沿指定的交互操作事件对应的方向来回扭曲,产生抖动效果,并最终静止。
例如,如图4A所示,参考点C1沿着经过重心点C0的X轴方向上做震荡移动,参考点C1每移动到一个位置就会生成一个扭曲图片,扭曲图片逐帧播放,可以产生如图5A和图5B所示的特征区域内的图像表现出左右震动的动态效果。
本发明实施例基于一个或多个参考点将静态图片的像素点映射到一帧或多帧扭曲图片中,生成动态效果,计算简单、无需依赖第三方开发包、库文件或者工具,渲染生成速度快,对资源消耗少,容易跨平台。
在本发明的一种可选实施例中,所述动态信息可以包括特征区域、所述特征区域中的至少部分像素点的运动方向;则在本发明实施例中,步骤802可以包括如下子步骤:
子步骤S21,由所述脚本对象在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧变化图片的动态效果。
在本发明实施例中,可以以运动方向(包括加速度)作为扭曲的幅度参考,对静态图片进行映射,以生成扭曲图片。
静态图片中的特征区域中的像素点可以沿运动方向映射,造成静态图片的扭曲。
当运动方向的幅度越大(如加速度的幅度越大、点击屏幕的位置离特征区域的中心越远),扭曲的幅度可以越大,当运动方向的幅度越小(如加速度的幅度越小、点击屏幕的位置离特征区域的中心越近),扭曲的幅度可以越小。
在本发明的一种可选实施例中,子步骤S21可以包括如下子步骤:
子步骤S211,将所述特征区域划分一个或多个绘制图形;
在本发明实施例中,可以应用图形学的方式,即基于网格生成动态效果。
具体而言,可以将特征区域划分为一个或多个绘制图形,可选为三角形或其他形状的网格(即绘制图形),采用三角形是因为图形绘制接口,如OpenGL(Open Graphics Library),对于三角形的图形渲染有高效率的加速算法。
每个绘制图形中可以具有多个顶点,每个绘制图形(如三角形)可以由顶点(如三个顶点)表示,除了对应的二维坐标外,每个顶点可以具有静态图片的纹理坐标。
在一个实施例中,可以对特征区域的中心区域划分一个或多个绘制图形,以模拟类似于物理水球的抖动效果(近似装满水的气球)。
子步骤S212，在所述运动方向上，按照预设模式在一个或多个时间点移动每个绘制图形的顶点；
在本发明实施例中,可以按照预设模式,沿该运动方向的相同方向、相反方向移动每个绘制图形的顶点。
在本发明的一种可选实施例中,所述预设模式包括简谐运动模式和/或阻尼振动模式;则在本发明实施例中,子步骤S212可以包括如下子步骤:
子步骤S2121，在所述运动方向上，按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点。
简谐运动,或简谐振动、谐振、SHM(Simple Harmonic Motion),可以指某物体(如每个绘制图形的顶点)进行简谐运动时,物体(如每个绘制图形的顶点)所受的力跟位移成正比,并且力总是指向平衡位置。
阻尼振动可以指在阻力作用下的震动,当阻力大小可以忽略时,可以说是简谐运动。振动过程中受到阻力的作用,振幅逐渐减小,能量逐渐损失,直至振动停止。
在本发明实施例的一种可选示例中,子步骤S2121可以包括如下子步骤:
子步骤S21211，确定每个绘制图形的顶点的加速度；
在本发明实施例中，可以将在先设定的加速度或者预设的加速度，作为每个绘制图形的顶点的初始加速度。
子步骤S21212，按照所述加速度和/或预设的阻尼系数，计算在一个或多个时间点内沿所述运动方向移动每个绘制图形的顶点的移动距离；
对于简谐运动,可以按照该加速度模拟每个绘制图形的顶点所受的力,该力指向平衡位置,构建弹簧模型,模拟每个绘制图形的顶点沿运动方向进行简谐运动。
对于阻尼振动，可以按照该阻尼系数模拟每个绘制图形的顶点所受的阻力，模拟每个绘制图形的顶点沿运动方向进行阻尼振动。
基于加速度、阻尼系数、运动方向可以通过运动学公式计算出在一个或多个时间点内移动距离。
子步骤S21213，由所述原始坐标和所述移动距离计算每个绘制图形的顶点的目标坐标。
在本发明实施例中,每个绘制图形的顶点可以具有原始坐标,即在静态图片中的原始的二维坐标,沿运动方向,添加上该移动距离,则可以获得移动后的每个绘制图形的顶点的位置(即目标坐标)。
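"原始坐标 + 沿运动方向的移动距离 = 目标坐标"的计算，可以用如下假设性的Python草图示意。这里以"简谐运动叠加指数衰减"作为阻尼振动的位移模型，参数 amplitude/damping/omega 均为本文假设：

```python
import math

def vertex_target(orig, direction, amplitude, damping, omega, t):
    """按阻尼振动模式计算顶点在时刻t的目标坐标。
    orig 为顶点原始坐标, direction 为单位运动方向向量。
    位移 d = A * e^(-c*t) * sin(w*t): 振幅逐渐减小, 最终静止。"""
    d = amplitude * math.exp(-damping * t) * math.sin(omega * t)
    return (orig[0] + direction[0] * d, orig[1] + direction[1] * d)
```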
子步骤S213,针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在具体实现中,图形绘制接口可以采用OpenGL,其可以提供纹理映射(Texture Mapping),即是将纹理空间中的纹理像素映射到屏幕空间中的像素的过程。
在一个示例中，如图4B所示，绘制图形为三角形，其包括一个或多个像素点，其中，顶点在纹理空间中具有纹理坐标，顶点a的纹理坐标为(0.2,0.8)，顶点b的纹理坐标为(0.4,0.2)，顶点c的纹理坐标为(0.8,0.4)。将该绘制图形的顶点进行移动，使得绘制图形发生变形，进行OpenGL纹理映射到对象空间，渲染出来后，绘制图形产生了拉伸、压缩等效果，特征区域就会呈现出移动的现象。
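纹理映射时，三角形内部各像素点的纹理坐标由三个顶点的纹理坐标插值得到。下面用假设性的Python草图在软件层面演示这一插值（重心坐标插值，与OpenGL硬件光栅化的属性插值原理一致；顶点a/b/c的纹理坐标取自上文示例）：

```python
def tex_coord_at(p, tri, uv):
    """用重心坐标插值, 求三角形内一点p对应的纹理坐标(u, v)。
    tri 为三个顶点的二维坐标, uv 为对应的三个纹理坐标。"""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1 - w1 - w2  # 三个重心坐标权重之和为1
    u = w1 * uv[0][0] + w2 * uv[1][0] + w3 * uv[2][0]
    v = w1 * uv[0][1] + w2 * uv[1][1] + w3 * uv[2][1]
    return (u, v)
```

顶点移动后三角形变形，但各像素点仍按原纹理坐标采样，因此渲染结果表现为图像的拉伸或压缩。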
如图5A所示，若特征区域中的至少部分像素点的运动方向往特征区域的左侧，则特征图像中整体可以往左侧扭曲；如图5B所示，若特征区域中的至少部分像素点的运动方向往特征区域的右侧，则特征图像中整体可以往右侧扭曲。
通过OpenGL的纹理映射，可以使静态图片的特征区域中心区域附近的绘制图形的顶点模拟弹簧的简谐运动，使得图片被有规律地拉伸，产生类似于弹力水球的抖动效果。
进一步地,特征区域中的至少部分像素点的运动方向可以在特征区域的两侧(如左侧和右侧、上方和下方),则按照简谐运动模式和/或阻尼振动模式,特征区域映射出的扭曲图片可以来回扭曲,产生抖动效果,并最终静止。
例如,特征区域中的至少部分像素点的运动方向为左右来回滑动,使得特征区域中的至少部分像素点沿水平轴方向上做震荡移动,特征区域中每个绘制图形的顶点在每一个时间点移动到一个位置就会生成一个扭曲图片,扭曲图片逐帧播放,可以产生如图5A和图5B所示的特征区域内的图像表现出左右震动的动态效果。
本发明基于运动方向将特征区域中的至少部分像素点进行纹理映射，产生包含一帧或多帧扭曲图片变化的动态效果，一方面，对特征区域生成动态效果，减少了动态效果的体积，减少了传输时的带宽占用，方便传输，另一方面，由于纹理映射效率很高，减少了生成动态效果的耗时，对于网络图片或者系统相册里的图片等均可以很快地产生动态效果，快速、方便地生成动态效果。
参照图9,示出了根据本发明一个实施例的一种基于静态图片的动态交互装置实施例的结构框图,具体可以包括如下模块:
选取模块901,适于在静态图片中选取特征区域;
确定模块902,适于在监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;
映射模块903,适于根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括摇晃事件,所述映射参考对象可以包括一个或多个参考点;
所述确定模块902还可以适于:
按照摇晃事件的摇晃方向,在静态图片的特征区域中选取一个或多个参考点。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括屏幕点击事件,所述映射参考对象可以包括一个或多个参考点;
所述确定模块902还可以适于:
按照指向发生屏幕点击事件的方向，在静态图片的特征区域中选取一个或多个参考点。
在本发明的一种可选实施例中,所述特征区域可以具有特征点;所述映射模块903还可以适于:
根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中。
在本发明实施例的一种可选示例中,所述特征区域可以包括凸区域,所述特征点可以包括重心点。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
生成扭曲图片;
将在所述特征区域中第一连线上的像素点映射到第二连线上;
将所述第二连线上的像素点拷贝到所述扭曲图片中的相同位置；
其中,所述第一连线为所述特征点与边缘点之间的连线,所述第二连线为当前参考点与边缘点的连线,所述边缘点为所述特征区域边缘上的坐标点。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
按照所述相对位置,将所述像素点拷贝到第二连线上。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
将所述特征区域外的像素点映射到所述扭曲图片中的相同位置。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
对扭曲图片中位置重叠的像素点进行像素点叠加处理。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
对扭曲图片中的空白位置进行像素点插值处理。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括摇晃事件,所述映射参考对象可以包括所述特征区域中的至少部分像素点的运动方向;所述确定模块902还可以适于:
设置摇晃事件的摇晃方向为所述特征区域中的至少部分像素点的运动方向。
在本发明的一种可选实施例中,所述指定的操作事件可以包括屏幕点击事件,所述映射参考对象可以包括所述特征区域中的至少部分像素点的运动方向;所述确定模块902还可以适于:
设置指向发生屏幕点击事件的方向为所述特征区域中的至少部分像素点的运动方向。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在本发明的一种可选实施例中,所述映射模块903还可以适于:
将所述特征区域划分为一个或多个绘制图形；每个绘制图形中具有多个顶点，每个顶点具有纹理坐标；
在所述运动方向上，按照预设模式在一个或多个时间点移动每个绘制图形的顶点；
针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在本发明的一种可选实施例中,所述预设模式可以包括简谐运动模式和/或阻尼振动模式;所述映射模块903还可以适于:
在所述运动方向上，按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点。
在本发明的一种可选实施例中，所述映射模块903还可以适于：
确定每个绘制图形的顶点的加速度；每个绘制图形的顶点具有原始坐标；
按照所述加速度和/或预设的阻尼系数，计算在一个或多个时间点内沿所述运动方向移动每个绘制图形的顶点的移动距离；
由所述原始坐标和所述移动距离计算每个绘制图形的顶点的目标坐标。
在本发明的一种可选实施例中,所述装置还可以包括如下模块:
第一生成模块，适于采用所述静态图片和所述一帧或多帧扭曲图片生成动态图片。
在本发明的一种可选实施例中,所述装置还可以包括如下模块:
第二生成模块,适于基于所述特征区域生成动态信息;
写入模块,适于将所述动态信息和脚本对象写入所述静态图片中,以生成动态交互文件。
在本发明的一种可选实施例中,所述第二生成模块还可以适于:
使用所述特征区域、所述特征点和所述一个或多个参考点生成动态信息。
在本发明的一种可选实施例中,所述第二生成模块还可以适于:
使用所述特征区域和所述特征区域中的至少部分像素点的运动方向生成动态信息。
参照图10,示出了根据本发明一个实施例的一种基于静态图片生成动态效果的装置实施例的结构框图,具体可以包括如下模块:
选取模块1001,适于在静态图片中选取特征区域;所述特征区域中具有特征点;
确定模块1002,适于在所述特征区域中确定一个或多个参考点;
映射模块1003,适于根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中;
生成模块1004,适于基于所述一帧或多帧扭曲图片生成动态效果。
在本发明实施例的一种可选示例中,所述特征区域可以包括凸区域,所述特征点可以包括重心点。
在本发明的一种可选实施例中，所述映射模块1003还可以适于：
生成扭曲图片;
将在所述特征区域中第一连线上的像素点映射到第二连线上;
将所述第二连线上的像素点拷贝到所述扭曲图片中的相同位置；
其中,所述第一连线为所述特征点与边缘点之间的连线,所述第二连线为当前参考点与边缘点的连线,所述边缘点为所述特征区域边缘上的坐标点。
在本发明的一种可选实施例中，所述映射模块1003还可以适于：
计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
按照所述相对位置,将所述像素点拷贝到第二连线上。
在本发明的一种可选实施例中，所述映射模块1003还可以适于：
将所述特征区域外的像素点映射到所述扭曲图片中的相同位置。
在本发明的一种可选实施例中，所述映射模块1003还可以适于：
对扭曲图片中位置重叠的像素点进行像素点叠加处理。
在本发明的一种可选实施例中，所述映射模块1003还可以适于：
对扭曲图片中的空白位置进行像素点插值处理。
在本发明的一种可选实施例中，所述生成模块1004还可以适于：
采用所述静态图片和所述一帧或多帧扭曲图片生成动态图片。
在本发明的一种可选实施例中，所述生成模块1004还可以适于：
基于所述特征区域生成动态信息;
将所述动态信息和脚本对象写入所述静态图片中,以生成动态交互文件。
在本发明的一种可选实施例中，所述生成模块1004还可以适于：
使用所述特征区域、所述特征点和所述一个或多个参考点生成动态信息。
参照图11,示出了根据本发明一个实施例的一种基于交互操作产生图片动态效果的装置实施例的结构框图,具体可以包括如下模块:
选取模块1101,适于在静态图片中选取特征区域;
确定模块1102，适于在监听到指定的交互操作事件时，根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向；
映射模块1103,适于在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括摇晃事件,所述确定模块1102还可以适于:
设置摇晃事件的摇晃方向为所述特征区域中的至少部分像素点的运动方向。
在本发明的一种可选实施例中,所述指定的交互操作事件可以包括屏幕点击事件,所述确定模块1102还可以适于:
设置指向发生屏幕点击事件的方向为所述特征区域中的至少部分像素点的运动方向。
在本发明的一种可选实施例中,所述映射模块1103还可以适于:
将所述特征区域划分为一个或多个绘制图形；每个绘制图形中具有多个顶点，每个顶点具有纹理坐标；
在所述运动方向上,按照预设模式在一个或多个时间点移动每个绘制图形的顶点;
针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧变化图片的动态效果。
在本发明的一种可选实施例中,所述预设模式包括简谐运动模式和/或阻尼振动模式;所述映射模块1103还可以适于:
在所述运动方向上,按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点。
在本发明实施例的一种可选示例中,所述映射模块1103还可以适于:
确定每个绘制图形的顶点的加速度;每个绘制图形的顶点具有原始坐标;
按照所述加速度和/或预设的阻尼系数,计算在一个或多个时间点内沿所述运动方向移动每个绘制图形的顶点的移动距离;
由所述原始坐标和所述移动距离计算每个绘制图形的顶点的目标坐标。
参照图12,示出了根据本发明一个实施例的一种生成可定制动态图的装置实施例的结构框图,具体可以包括如下模块:
读取模块1201,适于从动态交互文件中读取动态信息;所述动态交互文件包括脚本对象和静态图片;
映射模块1202,适于由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
在本发明的一种可选实施例中,所述动态信息可以包括特征区域、特征点、一个或多个参考点;所述映射模块1202还可以适于:
由所述脚本对象根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中。
在本发明实施例的一种可选示例中,所述特征区域可以包括凸区域,所述特征点可以包括重心点。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
由所述脚本对象生成扭曲图片;
将在所述特征区域中第一连线上的像素点映射到第二连线上;
将所述第二连线上的像素点拷贝到所述扭曲图片中的相同位置；
其中,所述第一连线为所述特征点与边缘点之间的连线,所述第二连线为当前参考点与边缘点的连线,所述边缘点为所述特征区域边缘上的坐标点。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
按照所述相对位置,将所述像素点拷贝到第二连线上。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
将所述特征区域外的像素点映射到所述扭曲图片中的相同位置。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
对扭曲图片中位置重叠的像素点进行像素点叠加处理。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
对扭曲图片中的空白位置进行像素点插值处理。
在本发明的一种可选实施例中,所述动态信息包括特征区域、所述特征区域中的至少部分像素点的运动方向;所述映射模块1202还可以适于:
由所述脚本对象在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧变化图片的动态效果。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
将所述特征区域划分为一个或多个绘制图形；每个绘制图形中具有多个顶点，每个顶点具有纹理坐标；
在所述运动方向上，按照预设模式在一个或多个时间点移动每个绘制图形的顶点；
针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
在本发明的一种可选实施例中,所述预设模式包括简谐运动模式和/或阻尼振动模式;所述映射模块1202还可以适于:
在所述运动方向上，按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点。
在本发明的一种可选实施例中,所述映射模块1202还可以适于:
确定每个绘制图形的顶点的加速度；每个绘制图形的顶点具有原始坐标；
按照所述加速度和/或预设的阻尼系数，计算在一个或多个时间点内沿所述运动方向移动每个绘制图形的顶点的移动距离；
由所述原始坐标和所述移动距离计算每个绘制图形的顶点的目标坐标。
对于装置实施例而言,由于其与方法实施例基本相似,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本发明的各个部件实施例可以以硬件实现,或者以在一个或者多个处理器上运行的软件模块实现,或者以它们的组合实现。本领域的技术人员应当理解,可以在实践中使用微处理器或者数字信号处理器(DSP)来实现根据本发明实施例的基于静态图片的动态交互设备中的一些或者全部部件的一些或者全部功能。本发明还可以实现为用于执行这里所描述的方法的一部分或者全部的设备或者装置程序(例如,计算机程序和计算机程序产品)。这样的实现本发明的程序可以存储在计算机可读介质上,或者可以具有一个或者多个信号的形式。这样的信号可以从因特网网站上下载得到,或者在载体信号上提供,或者以任何其他形式提供。
例如,图13示出了可以实现根据本发明的基于静态图片的动态交互的计算设备,例如应用服务器。该服务器传统上包括处理器1310和以存储器1320形式的计算机程序产品或者计算机可读介质。存储器1320可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。存储器1320具有用于执行上述方法中的任何方法步骤的程序代码1331的存储空间1330。例如,用于程序代码的存储空间1330可以包括分别用于实现上面的方法中的各种步骤的各个程序代码1331。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。这些计算机程序产品包括诸如硬盘,紧致盘(CD)、存储卡或者软盘之类的程序代码载体。这样的计算机程序产品通常为如参考图14所述的便携式或者固定存储单元。该存储单元可以具有与图13的服务器中的存储器1320类似布置的存储段、存储空间等。程序代码可以例如以适当形式进行压缩。通常,存储单元包括计算机可读代码1331’,即可以由例如诸如1310之类的处理器读取的代码,这些代码当由服务器运行时,导致该服务器执行上面所描述的方法中的各个步骤。
本文中所称的“一个实施例”、“实施例”或者“一个或者多个实施例”意味着，结合实施例描述的特定特征、结构或者特性包括在本发明的至少一个实施例中。此外，请注意，这里“在一个实施例中”的词语例子不一定全指同一个实施例。
在此处所提供的说明书中,说明了大量具体细节。然而,能够理解,本发明的实施例可以在没有这些具体细节的情况下被实践。在一些实例中,并未详细示出公知的方法、结构和技术,以便不模糊对本说明书的理解。
应该注意的是上述实施例对本发明进行说明而不是对本发明进行限制,并且本领域技术人员在不脱离所附权利要求的范围的情况下可设计出替换实施例。在权利要求中,不应将位于括号之间的任何参考符号构造成对权利要求的限制。单词“包含”不排除存在未列在权利要求中的元件或步骤。位于元件之前的单词“一”或“一个”不排除存在多个这样的元件。本发明可以借助于包括有若干不同元件的硬件以及借助于适当编程的计算机来实现。在列举了若干装置的单元权利要求中,这些装置中的若干个可以是通过同一个硬件项来具体体现。单词第一、第二、以及第三等的使用不表示任何顺序。可将这些单词解释为名称。
此外,还应当注意,本说明书中使用的语言主要是为了可读性和教导的目的而选择的,而不是为了解释或者限定本发明的主题而选择的。因此,在不偏离所附权利要求书的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。对于本发明的范围,对本发明所做的公开是说明性的,而非限制性的,本发明的范围由所附权利要求书限定。

Claims (38)

  1. 一种基于静态图片的动态交互方法,包括步骤:
    在静态图片中选取特征区域;
    当监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;
    根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
  2. 如权利要求1所述的方法,其特征在于,所述指定的交互操作事件包括摇晃事件,所述映射参考对象包括一个或多个参考点;
    所述根据指定的交互操作事件确定映射参考对象的步骤包括:
    按照摇晃事件的摇晃方向,在静态图片的特征区域中选取一个或多个参考点。
  3. 如权利要求1所述的方法,其特征在于,所述指定的交互操作事件包括屏幕点击事件,所述映射参考对象包括一个或多个参考点;
    所述根据指定的交互操作事件确定映射参考对象的步骤包括:
    按照指向发生屏幕点击事件的方向,在静态图片的特征区域中选取一个或多个参考点。
  4. 如权利要求2或3所述的方法,其特征在于,所述特征区域具有特征点;
    所述根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化的步骤包括:
    根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中。
  5. 如权利要求4所述的方法,其特征在于,所述特征区域包括凸区域,所述特征点包括重心点。
  6. 如权利要求5所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤包括:
    生成扭曲图片;
    将在所述特征区域中第一连线上的像素点映射到第二连线上;
    将所述第二连线上的像素点拷贝到在所述扭曲图片中的相同位置;
    其中,所述第一连线为所述特征点与边缘点之间的连线,所述第二连线为当前参考点与边缘点的连线,所述边缘点为所述特征区域边缘上的坐标点。
  7. 如权利要求6所述的方法,其特征在于,所述将在所述特征区域中第一连线上的像素点映射到第二连线上的步骤包括:
    计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
    按照所述相对位置,将所述像素点拷贝到第二连线上。
  8. 如权利要求5所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    将所述特征区域外的像素点映射到所述扭曲图片中的相同位置。
  9. 如权利要求5或6或7或8所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    对扭曲图片中位置重叠的像素点进行像素点叠加处理。
  10. 一种基于静态图片的动态交互装置,包括:
    选取模块,适于在静态图片中选取特征区域;
    确定模块,适于在监听到指定的交互操作事件时,根据指定的交互操作事件确定映射参考对象;
    映射模块,适于根据所述映射参考对象对所述特征区域中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
  11. 一种基于静态图片生成动态效果的方法,包括步骤:
    在静态图片中选取特征区域;所述特征区域中具有特征点;
    在所述特征区域中确定一个或多个参考点;
    根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中;
    基于所述一帧或多帧扭曲图片生成动态效果。
  12. 如权利要求11所述的方法,其特征在于,所述特征区域包括凸区域,所述特征点包括重心点。
  13. 如权利要求12所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤包括:
    生成扭曲图片;
    将在所述特征区域中第一连线上的像素点映射到第二连线上;
    将所述第二连线上的像素点拷贝到在所述扭曲图片中的相同位置;
    其中,所述第一连线为所述特征点与边缘点之间的连线,所述第二连线为当前参考点与边缘点的连线,所述边缘点为所述特征区域边缘上的坐标点。
  14. 如权利要求13所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    将所述特征区域外的像素点映射到所述扭曲图片中的相同位置。
  15. 如权利要求13或14所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    对扭曲图片中位置重叠的像素点进行像素点叠加处理。
  16. 如权利要求13或14所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    对扭曲图片中的空白位置进行像素点插值处理。
  17. 如权利要求11所述的方法,其特征在于,所述基于所述一帧或多帧扭曲图片生成动态效果的步骤包括:
    采用所述静态图片和所述一帧或多帧扭曲图片生成动态图片。
  18. 如权利要求11所述的方法,其特征在于,所述基于所述一帧或多帧扭曲图片生成动态效果的步骤包括:
    基于所述特征区域生成动态信息;
    将所述动态信息和脚本对象写入所述静态图片中,以生成动态交互文件。
  19. 一种基于静态图片生成动态效果的装置,包括:
    选取模块,适于在静态图片中选取特征区域;所述特征区域中具有特征点;
    确定模块,适于在所述特征区域中确定一个或多个参考点;
    映射模块,适于根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中;
    生成模块,适于基于所述一帧或多帧扭曲图片生成动态效果。
  20. 一种基于交互操作产生图片动态效果的方法,包括步骤:
    在静态图片中选取特征区域;
    当监听到指定的交互操作事件时,根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向;
    在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
  21. 如权利要求20所述的方法,其特征在于,所述指定的交互操作事件包括摇晃事件,所述根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向的步骤包括:
    设置摇晃事件的摇晃方向为所述特征区域中的至少部分像素点的运动方向。
  22. 如权利要求20所述的方法,其特征在于,所述指定的交互操作事件包括屏幕点击事件,所述根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向的步骤包括:
    设置指向发生屏幕点击事件的方向为所述特征区域中的至少部分像素点的运动方向。
  23. 如权利要求20或21或22所述的方法,其特征在于,所述在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果的步骤包括:
    将所述特征区域划分为一个或多个绘制图形；每个绘制图形中具有多个顶点，每个顶点具有纹理坐标；
    在所述运动方向上,按照预设模式在一个或多个时间点移动每个绘制图形的顶点;
    针对每个绘制图形,使用图形绘制接口按照每个顶点的纹理坐标对绘制图形中的像素点进行纹理映射,产生包含一帧或多帧变化图片的动态效果。
  24. 如权利要求23所述的方法,其特征在于,所述预设模式包括简谐运动模式和/或阻尼振动模式;
    所述在所述运动方向上,按照预设模式在一个或多个时间点移动每个绘制图形的顶点的步骤包括:
    在所述运动方向上,按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点。
  25. 如权利要求24所述方法,其特征在于,所述在所述运动方向上,按照简谐运动模式和/或阻尼振动模式在一个或多个时间点移动每个绘制图形的顶点的步骤包括:
    确定每个绘制图形的顶点的加速度;每个绘制图形的顶点具有原始坐标;
    按照所述加速度和/或预设的阻尼系数,计算在一个或多个时间点内沿所述运动方向移动每个绘制图形的顶点的移动距离;
    由所述原始坐标和所述移动距离计算每个绘制图形的顶点的目标坐标。
  26. 一种基于交互操作产生图片动态效果的装置,包括:
    选取模块,适于在静态图片中选取特征区域;
    确定模块,适于在监听到指定的交互操作事件时,根据指定的操作事件确定所述特征区域中的至少部分像素点的运动方向;
    映射模块,适于在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧扭曲图片变化的动态效果。
  27. 一种生成可定制动态图的方法,包括步骤:
    从动态交互文件中读取动态信息;所述动态交互文件包括脚本对象和静态图片;
    由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
  28. 如权利要求27所述的方法,其特征在于,所述动态信息包括特征区域、特征点、一个或多个参考点;
    所述由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化的步骤包括:
    由所述脚本对象根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中。
  29. 如权利要求28所述的方法,其特征在于,所述特征区域包括凸区域,所述特征点包括重心点。
  30. 如权利要求29所述的方法,其特征在于,所述由所述脚本对象根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤包括:
    由所述脚本对象生成扭曲图片;
    将在所述特征区域中第一连线上的像素点映射到第二连线上;
    将所述第二连线上的像素点拷贝到在所述扭曲图片中的相同位置;
    其中,所述第一连线为所述特征点与边缘点之间的连线,所述第二连线为当前参考点与边缘点的连线,所述边缘点为所述特征区域边缘上的坐标点。
  31. 如权利要求30所述的方法，其特征在于，所述将在所述特征区域中第一连线上的像素点映射到第二连线上的步骤包括：
    计算在所述特征区域中第一连线上的像素点,在第一连线上的相对位置;
    按照所述相对位置,将所述像素点拷贝到第二连线上。
  32. 如权利要求30所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    将所述特征区域外的像素点映射到所述扭曲图片中的相同位置。
  33. 如权利要求30或31或32所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    对扭曲图片中位置重叠的像素点进行像素点叠加处理。
  34. 如权利要求30或31或32所述的方法,其特征在于,所述根据所述特征点和所述一个或多个参考点将所述静态图片的像素点映射到一帧或多帧扭曲图片中的步骤还包括:
    对扭曲图片中的空白位置进行像素点插值处理。
  35. 如权利要求27所述的方法,其特征在于,所述动态信息包括特征区域、所述特征区域中的至少部分像素点的运动方向;
    所述由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化的步骤包括:
    由所述脚本对象在所述运动方向上,按照预设模式对所述特征区域中的至少部分像素点进行纹理映射,产生包含一帧或多帧变化图片的动态效果。
  36. 一种生成可定制动态图的装置,包括:
    读取模块,适于从动态交互文件中读取动态信息;所述动态交互文件包括脚本对象和静态图片;
    映射模块,适于由所述脚本对象根据所述动态信息对所述静态图片中的至少部分像素点映射到一帧或多帧扭曲图片中,以驱动所述静态图片逐帧变化。
  37. 一种计算机程序,包括计算机可读代码,当所述计算机可读代码在计算设备上运行时,导致所述计算设备执行根据权利要求1-9中的任一个所述的基于静态图片的动态交互方法,或者,导致所述计算设备执行根据权利要求11-18中的任一个所述的基于静态图片生成动态效果的方法,或者,导致所述计算设备执行根据权利要求20-25中的任一个所述的基于交互操作产生图片动态效果的方法,或者,导致所述计算设备执行根据权利要求27-35中的任一个所述的生成可定制动态图的方法。
  38. 一种计算机可读介质,其中存储了如权利要求37所述的计算机程序。
PCT/CN2015/095933 2014-12-31 2015-11-30 一种基于静态图片的动态交互方法和装置 WO2016107356A1 (zh)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
CN201410854768.5 2014-12-31
CN201410854766.6 2014-12-31
CN201410854768.5A CN104574473B (zh) 2014-12-31 2014-12-31 一种基于静态图片生成动态效果的方法和装置
CN201410855538.0A CN104571887B (zh) 2014-12-31 2014-12-31 一种基于静态图片的动态交互方法和装置
CN201410854767.0 2014-12-31
CN201410855538.0 2014-12-31
CN201410854766.6A CN104574483A (zh) 2014-12-31 2014-12-31 一种生成可定制动态图的方法和装置
CN201410854767.0A CN104574484B (zh) 2014-12-31 2014-12-31 一种基于交互操作产生图片动态效果的方法和装置

Publications (1)

Publication Number Publication Date
WO2016107356A1 true WO2016107356A1 (zh) 2016-07-07

Family

ID=56284192

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/095933 WO2016107356A1 (zh) 2014-12-31 2015-11-30 一种基于静态图片的动态交互方法和装置

Country Status (1)

Country Link
WO (1) WO2016107356A1 (zh)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102411791A (zh) * 2010-09-19 2012-04-11 三星电子(中国)研发中心 一种静止图像动态化的方法和设备
CN102855648A (zh) * 2012-08-09 2013-01-02 北京小米科技有限责任公司 一种图像处理方法及装置
CN103473799A (zh) * 2013-09-02 2013-12-25 腾讯科技(深圳)有限公司 一种图片的动态处理方法及装置、终端设备
CN104574484A (zh) * 2014-12-31 2015-04-29 北京奇虎科技有限公司 一种基于交互操作产生图片动态效果的方法和装置
CN104574483A (zh) * 2014-12-31 2015-04-29 北京奇虎科技有限公司 一种生成可定制动态图的方法和装置
CN104571887A (zh) * 2014-12-31 2015-04-29 北京奇虎科技有限公司 一种基于静态图片的动态交互方法和装置
CN104574473A (zh) * 2014-12-31 2015-04-29 北京奇虎科技有限公司 一种基于静态图片生成动态效果的方法和装置


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10373034B2 (en) 2016-05-10 2019-08-06 Tencent Technology (Shenzhen) Company Limited Method and apparatus for generating two-dimensional barcode picture having dynamic effect
US10706343B2 (en) 2016-05-10 2020-07-07 Tencent Technology (Shenzhen) Company Limited Method and apparatus for generating two-dimensional barcode picture having dynamic effect

Similar Documents

Publication Publication Date Title
US8610714B2 (en) Systems, methods, and computer-readable media for manipulating graphical objects
US8154544B1 (en) User specified contact deformations for computer graphics
CN104571887B (zh) 一种基于静态图片的动态交互方法和装置
US8538737B2 (en) Curve editing with physical simulation of mass points and spring forces
CN104574484B (zh) 一种基于交互操作产生图片动态效果的方法和装置
US8803880B2 (en) Image-based lighting simulation for objects
WO2014158928A2 (en) Mapping augmented reality experience to various environments
KR20140030098A (ko) 애니메이션화 된 페이지 넘기기
CN109584377A (zh) 一种用于呈现增强现实内容的方法与设备
US9508108B1 (en) Hardware-accelerated graphics for user interface elements in web applications
CN104574473B (zh) 一种基于静态图片生成动态效果的方法和装置
CN104574483A (zh) 一种生成可定制动态图的方法和装置
WO2016107356A1 (zh) 一种基于静态图片的动态交互方法和装置
EP4083794A1 (en) Rendering of persistent particle trails for dynamic displays
KR101630257B1 (ko) 3d 이미지 제공 시스템 및 그 제공방법
US11012531B2 (en) Systems and methods for culling requests for hierarchical level of detail content over a communications network
JP2019121237A (ja) プログラム、画像処理方法、及び画像処理装置
Zamri et al. Research on atmospheric clouds: a review of cloud animation methods in computer graphics
EP4083793B1 (en) Rendering of persistent particle trails for dynamic displays
US11657562B2 (en) Utilizing hemispherical clamping for importance sampling of image-based light to render a virtual environment
US10964081B2 (en) Deformation mesh control for a computer animated artwork
WO2022186235A1 (ja) コンテンツ動画再生プログラム、コンテンツ動画再生装置、コンテンツ動画再生方法、コンテンツ動画データ生成プログラム、及びコンテンツ動画データ生成装置
KR101824178B1 (ko) 3차원 렌더링 장치에서 시점을 기반으로 투명도를 조절하는 방법 및 장치
Cao et al. Computer Simulation of Water Flow Animation Based on Two‐Dimensional Navier‐Stokes Equations
US20240020891A1 (en) Vector Object Jitter Application

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15875035

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15875035

Country of ref document: EP

Kind code of ref document: A1