CN109801351B - Dynamic image generation method and processing device

Dynamic image generation method and processing device

Info

Publication number
CN109801351B
CN109801351B (application CN201711128596.3A)
Authority
CN
China
Prior art keywords
dimensional
acquiring
observation point
dimensional scene
dimensional images
Prior art date
Legal status
Active
Application number
CN201711128596.3A
Other languages
Chinese (zh)
Other versions
CN109801351A (en)
Inventor
马春阳
王彬
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201711128596.3A priority Critical patent/CN109801351B/en
Priority to PCT/CN2018/114540 priority patent/WO2019096057A1/en
Publication of CN109801351A publication Critical patent/CN109801351A/en
Application granted granted Critical
Publication of CN109801351B publication Critical patent/CN109801351B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/80 - 2D [Two Dimensional] animation, e.g. using sprites
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The application provides a dynamic image generation method and a processing device. The method comprises the following steps: constructing a three-dimensional scene according to the positional relationship among a plurality of display elements; acquiring a plurality of two-dimensional images of the three-dimensional scene formed at an observation point by adjusting parameters of the observation point; and generating a dynamic image from the plurality of two-dimensional images. This scheme solves the technical problem in the prior art that a dynamic image can only be produced by creating and adjusting pictures frame by frame, which makes dynamic image generation inefficient, and achieves the technical effect of generating dynamic images simply and efficiently.

Description

Dynamic image generation method and processing device
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a dynamic image generation method and processing device.
Background
As the performance of users' mobile devices improves, the demand for dynamic views keeps growing, for example dynamic advertising creatives, dynamic merchandise introductions, dynamic task images, and so forth.
However, in the conventional moving image generation method, pictures are generally produced frame by frame and then assembled into a moving image. When images are needed in bulk, this approach involves an enormous amount of work, cannot be reused, and has to be redone for each new group of pictures, so the implementation workload is extremely large and the efficiency is very low.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a dynamic image generation method and a processing device, so as to achieve the technical effect of generating dynamic images simply and efficiently.
A dynamic image generation method, comprising:
constructing a three-dimensional scene according to the position relation among the plurality of display elements;
acquiring a plurality of two-dimensional images of the three-dimensional scene formed at the observation points by adjusting parameters of the observation points;
and generating a dynamic image according to the plurality of two-dimensional images.
A dynamic image generation method, comprising:
acquiring the position relation among the imported multiple display elements;
constructing a three-dimensional scene according to the position relation among the plurality of display elements;
acquiring a plurality of groups of parameter data set for observation points;
mapping to obtain a two-dimensional image corresponding to each group of parameter data;
and generating dynamic images in batches according to the two-dimensional images corresponding to each group of parameter data.
A processing device comprising a processor and a memory for storing processor-executable instructions that when executed by the processor implement:
constructing a three-dimensional scene according to the position relation among the plurality of display elements;
acquiring a plurality of two-dimensional images of the three-dimensional scene formed at the observation points by adjusting parameters of the observation points;
and generating a dynamic image according to the plurality of two-dimensional images.
A processing device comprising a processor and a memory for storing processor-executable instructions that when executed by the processor implement:
acquiring the position relation among the imported multiple display elements;
constructing a three-dimensional scene according to the position relation among the plurality of display elements;
acquiring a plurality of groups of parameter data set for observation points;
mapping to obtain a two-dimensional image corresponding to each group of parameter data;
and generating dynamic images in batches according to the two-dimensional images corresponding to each group of parameter data.
A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the above method.
In the embodiments of the application, the animation effect of object movement is simulated by changing the parameters of the observation point, so that a dynamic image can be generated from a given set of three-dimensional scene elements. In this way, the technical problem in the prior art that dynamic images can only be formed by creating and adjusting pictures frame by frame, which makes dynamic image generation inefficient, is solved, and the technical effect of generating dynamic images simply and efficiently is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this application, and are not intended to limit the application. In the drawings:
FIG. 1 is a method flow diagram of a dynamic image generation method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional scene element according to an embodiment of the application;
FIG. 3 is a schematic diagram of a simulated camera movement direction according to one embodiment of the present application;
FIG. 4 is another schematic diagram of simulating a direction of movement of a camera according to one embodiment of the present application;
FIG. 5 is yet another schematic diagram of simulating camera movement directions according to one embodiment of the present application;
FIG. 6 is a schematic three-dimensional projection according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a coordinate system according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an imaging principle according to an embodiment of the present application;
FIG. 9 is a schematic view of a camera movement according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a two-dimensional image obtained by a perspective mode of a camera according to an embodiment of the present application;
FIG. 11 is another schematic diagram of a two-dimensional image obtained by a perspective mode of a camera according to an embodiment of the present application;
FIG. 12 is yet another schematic diagram of a two-dimensional image obtained by a perspective mode of a camera according to an embodiment of the present application;
FIG. 13 is a further schematic diagram of a two-dimensional image obtained by a perspective mode of a camera according to an embodiment of the present application;
FIG. 14 is a method flow diagram of an animation generation method according to an embodiment of the application;
FIG. 15 is a schematic flow chart diagram of another method of animation generation according to an embodiment of the application;
fig. 16 is an architecture diagram of a user terminal according to an embodiment of the present application;
fig. 17 is a block diagram of the structure of an animation generation device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the following embodiments and the accompanying drawings. The exemplary embodiments and descriptions thereof herein are provided to explain the present application and should not be taken as limiting the present application.
At present, to generate an animation, each frame image generally has to be produced individually, and the frames are then assembled into the animation. As a result, the animation generation process cannot be reused as a template, animations cannot be generated in batches, the implementation is complex, and the efficiency is low. For this reason, it is considered that, after the elements used to generate an animation are acquired (for example, a picture with a pattern and a picture with characters), the movement of a camera can be simulated on the basis of these elements, so that an animation effect of object movement is achieved by adjusting camera parameters while the elements themselves remain still; an animation can thus be generated simply. Moreover, this manner can be reused: given another group of three-dimensional scene elements, an animation with a similar effect but representing different objects can be obtained in the same way.
FIG. 1 is a flow chart of one embodiment of the animation generation method described herein. Although the present application provides the method operation steps or apparatus structures shown in the following embodiments or figures, more or fewer operation steps or module units may be included in the method or apparatus based on conventional or non-inventive effort. For steps or structures that have no logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to the execution order or module structure described in the embodiments and shown in the drawings of the present application. When the described method or module structure is applied in an actual device or end product, it may be executed sequentially or in parallel according to the embodiments or the drawings (for example, in a parallel-processor or multi-threaded environment, or even in a distributed processing environment).
As shown in fig. 1, the animation generation method may include the steps of:
step 101: constructing a three-dimensional scene according to the position relation among the plurality of display elements;
the display elements may be text pictures, image pictures, and the like. For example, as shown in fig. 2, the display elements are four image pictures and one text picture.
In the case where only display elements are acquired, a three-dimensional scene cannot be formed; that is, a three-dimensional space interface cannot be formed. To form the three-dimensional space interface, the positional relationship between the elements is also required, for example: relative distance, relative size, and relative azimuth angle. Once the relative distance, relative size, and relative azimuth angle between the elements are obtained, the three-dimensional scene can be formed. That is, after the relative distances, relative sizes, and relative azimuth angles between the four image pictures and the one text picture are acquired, a three-dimensional scene as shown in fig. 2 can be formed.
For example, fig. 3 and fig. 4 schematically illustrate three-dimensional scenes. Fig. 3 and fig. 4 contain the same three-dimensional scene elements, namely material 1, material 2, and material 3, but because the relative positions differ, the materials in fig. 3 and fig. 4 form different three-dimensional scenes.
It should be noted, however, that the positional relationships listed above are only illustrative; other positional relationships may also be used, and the present application is not limited thereto.
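As an illustrative, non-normative sketch of this step, the following Python snippet shows one way such a scene description could be represented: each display element carries only its relative distance, relative size, and relative azimuth angle, from which a world position is derived. All class and field names are hypothetical and not part of the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class DisplayElement:
    """A picture placed in the three-dimensional scene (illustrative only)."""
    name: str
    relative_distance: float     # distance of the picture plane from the scene origin
    relative_size: float         # scale factor applied to the picture plane
    relative_azimuth_deg: float  # azimuth angle around the vertical axis, in degrees

    def world_position(self):
        """Derive an (x, y, z) world position from distance and azimuth."""
        a = math.radians(self.relative_azimuth_deg)
        return (self.relative_distance * math.sin(a),
                0.0,
                self.relative_distance * math.cos(a))

def build_scene(elements):
    """Arrange all display elements in one shared world coordinate system."""
    return {e.name: (e.world_position(), e.relative_size) for e in elements}

scene = build_scene([
    DisplayElement("material 1", 4.0, 1.0, -30.0),
    DisplayElement("material 2", 6.0, 1.5, 0.0),
    DisplayElement("material 3", 5.0, 0.8, 25.0),
])
```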
Step 102: acquiring a plurality of two-dimensional images of the three-dimensional scene formed at the observation points by adjusting parameters of the observation points;
when generating the plurality of two-dimensional images, one or more of the following parameters of the observation point (e.g., a camera) can be adjusted to obtain two-dimensional images from a plurality of different perspectives: the focal length of the camera, the optical center position of the camera, the distance of the camera from the three-dimensional scene elements, and the viewing angle of the camera.
Taking adjustment of the camera distance as an example:
1) As shown in fig. 4, the simulated camera is moved at a preset speed from left to right in a direction parallel to the three-dimensional scene, and a plurality of two-dimensional images are obtained;
for example, a two-dimensional image is generated every 2 cm of movement; assuming a total movement of 10 cm, 6 two-dimensional images can be obtained (including the starting position), which are two-dimensional images of the same three-dimensional scene from different viewing angles of the camera.
2) As shown in fig. 5, the simulated observation point is moved at a preset speed from back to front in a direction perpendicular to the three-dimensional scene, and a plurality of two-dimensional images are obtained;
3) As shown in fig. 3, the observation point is simulated to move at a preset speed in a direction parallel to the three-dimensional scene from top to bottom, and a plurality of two-dimensional images are acquired.
It should be noted, however, that the above only forms different two-dimensional images by adjusting the distance between the observation point and the three-dimensional scene. In implementation, different images of the same three-dimensional scene can also be obtained by adjusting the focal length of the observation point or the optical center position of the observation point, so that a sense of motion is created without moving the three-dimensional scene itself, as illustrated in the sketch below.
The observation point may be, for example, any device or apparatus capable of imaging, such as a simulated (virtual) camera or a physical camera; the specific form can be selected according to actual needs and situations, and the present application is not limited thereto.
This can be understood by analogy: when a person rides in a moving vehicle, the telegraph poles, houses, mountains, and so on outside the window appear to change, yet those objects are not moving; what the person sees appears to move because the vehicle the person is in moves.
Fig. 6 is a schematic diagram of the imaging principle of a three-dimensional space: the three-dimensional space forms a two-dimensional image in a projection space through an observation point model, and if the position of the observation point, its distance from the projection space, and so on are adjusted, the corresponding two-dimensional image changes accordingly.
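A minimal sketch of this idea, assuming a simulated camera described only by its position: the scene stays fixed while the camera is stepped along one axis, and one frame is recorded per step. The render function here is only a placeholder for the projection process described later in this description; all function names and values are assumptions for illustration.

```python
import numpy as np

def render(scene, camera_position, focal_length=35.0):
    """Placeholder renderer: project the fixed scene through a camera at the
    given position and return a 2D image (H x W x 3 array). A real pipeline
    would use the projection model described later in this description."""
    return np.zeros((240, 320, 3), dtype=np.uint8)

def sweep_camera(scene, start, stop, step=2.0, axis=0):
    """Move the simulated camera at a fixed step along one axis while the
    three-dimensional scene stays still, recording one frame per position."""
    frames = []
    position = np.array(start, dtype=float)
    while position[axis] <= stop:
        frames.append(render(scene, position.copy()))
        position[axis] += step   # e.g. one frame for every 2 cm of movement
    return frames

# Example: slide the camera from left (x = -5) to right (x = +5) in 2 cm steps,
# which yields 6 frames, matching the 2 cm / 10 cm example above.
frames = sweep_camera(scene={}, start=(-5.0, 0.0, -10.0), stop=5.0, step=2.0, axis=0)
```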
When generating the two-dimensional images, the observation point may use a perspective model, so that the resulting two-dimensional image is formed by the overlapping of the plurality of elements. The so-called perspective model refers to the process of mapping a three-dimensional scene into a two-dimensional image through a camera parameter model, that is, a process of simulating real photographing.
Step 103: and generating a dynamic image according to the plurality of two-dimensional images.
Each obtained two-dimensional image is the image formed by the three-dimensional scene under a particular observation point position and set of parameters. After the two-dimensional images are acquired, they can be assembled into a two-dimensional image sequence, and the sequence can then be encoded as a dynamic image.
When forming the two-dimensional image sequence, the images may be arranged in the order in which they were formed, in the reverse order, or repeated periodically. For example, the sequence may correspond to the observation point moving from back to front in a direction perpendicular to the three-dimensional scene, or from front to back; it may also correspond to the observation point first moving from back to front and then from front to back, or first from front to back and then from back to front.
The orderings listed above are only illustrative; in practical implementation other orderings may be used, flexibly adjusted and selected as needed, and the present application does not limit this.
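As one hedged example of this encoding step, the sketch below arranges the frames in forward, reverse, or forward-then-reverse order and writes them out as an animated GIF with Pillow; the ordering names are illustrative, and any other dynamic-image codec could be substituted.

```python
import numpy as np
from PIL import Image

def order_frames(frames, mode="forward"):
    """Arrange the two-dimensional images into a sequence in the requested order."""
    if mode == "forward":
        return list(frames)
    if mode == "reverse":
        return list(frames)[::-1]
    if mode == "forward_reverse":          # e.g. back-to-front, then front-to-back
        return list(frames) + list(frames)[::-1]
    raise ValueError(f"unknown ordering mode: {mode}")

def encode_dynamic_image(frames, path, frame_duration_ms=100, mode="forward"):
    """Encode the ordered two-dimensional image sequence as an animated GIF."""
    images = [Image.fromarray(np.asarray(f)) for f in order_frames(frames, mode)]
    images[0].save(path, save_all=True, append_images=images[1:],
                   duration=frame_duration_ms, loop=0)

# Example usage with the frames produced by the camera sweep sketched above:
# encode_dynamic_image(frames, "dynamic_image.gif", mode="forward_reverse")
```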
In implementation, a plurality of dynamic image samples can be generated in batches through one-click, batch operations. The observation point change parameters set at this time can also be stored and applied to other three-dimensional scene elements to obtain other dynamic images with a similar change pattern.
In one embodiment, the method for generating a dynamic image may further include:
s1: acquiring the position relation among the imported multiple display elements;
that is, the display elements (i.e., pictures) may be manually introduced, or the pictures may be directly called and the positional relationship between the pictures may be set, which are arranged according to the positional relationship, to form a three-dimensional scene, which is stored in advance in a computer.
For example, the position relationship between the pictures, for example, the coordinates (x-axis, y-axis, z-axis coordinates) in the three-dimensional world coordinate system of each picture, may be obtained based on pre-configuration or real-time calculation, wherein the pictures are arranged in the same coordinate system.
S2: constructing a three-dimensional scene according to the position relation among the plurality of display elements;
and according to the three-dimensional world coordinate information of each picture, placing the pictures in the same three-dimensional coordinate system to form a three-dimensional scene. When the method is implemented, the coordinate of the XYZ axis of the picture is taken as a parameter to be input into the function, and a three-dimensional scene is formed, wherein the three-dimensional scene can have a perspective effect.
S3: acquiring a plurality of groups of parameter data set for observation points;
the parameter data for the observation point may include, but is not limited to: position, focal length, optical center position, etc. By adjusting the parameter data of the observation point, different imaging effects of the same target object can be obtained. For example, when photographing the same object, the larger the focal length, the larger the object appears in the image, and the smaller the focal length, the smaller it appears.
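To make "several groups of parameter data" concrete, the following illustrative structure (field names assumed, not taken from the patent) groups the position, focal length, and optical-center settings from which one two-dimensional image would be rendered:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObservationParams:
    """One group of parameter data set for the observation point (names assumed)."""
    position: Tuple[float, float, float]  # observation point position in world coordinates
    focal_length: float                   # a larger focal length magnifies the imaged object
    optical_center: Tuple[float, float]   # principal point (u0, v0) in pixels

# Several groups of parameter data; each group will yield one two-dimensional image.
parameter_groups: List[ObservationParams] = [
    ObservationParams((0.0, 0.0, -10.0), 35.0, (320.0, 240.0)),
    ObservationParams((0.0, 0.0, -8.0), 35.0, (320.0, 240.0)),
    ObservationParams((0.0, 0.0, -6.0), 50.0, (320.0, 240.0)),
]
```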
S4: mapping to obtain a two-dimensional image corresponding to each group of parameter data;
by adjusting the parameter data, different two-dimensional images (i.e., imaging results) of the same imaging object (i.e., the three-dimensional scene established from the above pictures and the positional relationship between them) can be obtained through three-dimensional modeling.
When the two-dimensional image corresponding to each group of parameter data is obtained by mapping, it can be obtained through a projection model. That is, the three-dimensional scene is projected by an imaging lens onto the two-dimensional image plane of a camera, and this projection can be represented by an imaging transformation, i.e., a projection model. The projection model mainly involves the following coordinate systems: the image coordinate system, the camera coordinate system, and the world coordinate system.
The image captured by the camera, in the form of a standard television signal, can be converted into a digital image by a high-speed image acquisition system and input into a computer. Each image can be an M × N array, and the value of each element (i.e., pixel) in the M rows and N columns is the brightness (i.e., grayscale) of that image point.
As shown in fig. 7, (u, v) denotes coordinates in the image coordinate system in units of pixels. Since (u, v) only represents the column and row indices of a pixel in the array, it does not express the position of the pixel in the image in physical units. Therefore, an image coordinate system expressed in physical units (e.g., millimeters), i.e., the XOY coordinate system shown in fig. 7, can be established.
In the XOY coordinate system, the origin O is usually defined as the intersection of the camera's optical axis with the image plane, which is generally located at the center of the image, although sometimes the origin O deviates from the center. The transformation between the two image coordinate systems (physical units and pixels) can be characterized by the following matrix:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dX & 0 & u_0 \\ 0 & 1/dY & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$
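As a small numeric illustration of this pixel conversion (all values assumed for illustration):

```python
import numpy as np

# Conversion from physical image coordinates (X, Y) to pixel coordinates (u, v).
# dX, dY are the physical size of one pixel; (u0, v0) is the principal point.
dX, dY = 0.01, 0.01        # assumed pixel size, e.g. in millimetres
u0, v0 = 320.0, 240.0      # assumed principal point (image centre in pixels)

K_pix = np.array([[1.0 / dX, 0.0,      u0],
                  [0.0,      1.0 / dY, v0],
                  [0.0,      0.0,      1.0]])

X, Y = 1.5, -0.8           # a point on the image plane, in millimetres
u, v, _ = K_pix @ np.array([X, Y, 1.0])
print(u, v)                # the same point expressed in pixel coordinates
```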
the imaging geometry of the camera can be characterized as in fig. 8.
In fig. 8, point O is the camera optical center; the x-axis and y-axis are parallel to the X-axis and Y-axis of the image, and the z-axis is the camera optical axis, perpendicular to the image plane. The intersection of the optical axis with the image plane is the origin O1 of the image coordinate system; the rectangular coordinate system formed by the point O and the x, y, and z axes is called the camera coordinate system, and OO1 is the focal length of the camera.
The world coordinate system is arbitrarily selected, and the transformation from the camera coordinate system to the world coordinate system is a 3D-to-3D transformation process which can be characterized by a rotation matrix R and a translation vector t. That is, the following relationship exists:
$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
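An illustrative numeric sketch of such a rigid 3D-to-3D transform follows (written here in the world-to-camera direction; the rotation R and translation t are arbitrary example values, not taken from the patent):

```python
import numpy as np

def world_to_camera(points_w, R, t):
    """Apply the rigid transform x_c = R @ x_w + t to an (N, 3) array of
    world-coordinate points, returning their camera-coordinate positions."""
    return points_w @ R.T + t

# Example: a rotation of 30 degrees about the y axis and a translation of 5
# units along z (values chosen arbitrarily for illustration).
theta = np.radians(30.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.0, 0.0, 5.0])

print(world_to_camera(np.array([[1.0, 2.0, 3.0]]), R, t))
```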
based on the three coordinate systems, the linear model and the non-linear model of the camera are modeled as follows:
1) Pinhole imaging model (i.e., linear model):
the pinhole imaging model may also be referred to as the linear camera model. The imaging position of any point P in space can be approximated by the pinhole imaging model: the projection position p of the point P in the image is the intersection of the line OP, connecting the optical center O with the point P, and the image plane. This relationship is also referred to as central projection (i.e., perspective projection).
The proportional relationship can be expressed as:
$$X = \frac{f\,x}{z}, \qquad Y = \frac{f\,y}{z}$$
where (X, Y) are the image coordinates of point p, (x, y, z) are the coordinates of the spatial point P in the camera coordinate system, and f is the distance from the xy plane to the image plane (i.e., the focal length of the camera). The above proportional relationship can be represented in matrix form as follows:
$$s\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
where s is a scale factor and P is the perspective projection matrix we are most concerned with.
Combining the above, the coordinate transformation from a point P in the world coordinate system to its projection p in the image coordinate system is obtained as follows:
$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 & 0 \\ 0 & a_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 C_w = M C_w$$
where ax = f/dX is the scale factor on the u-axis (the normalized focal length on the u-axis), and ay = f/dY is the scale factor on the v-axis (the normalized focal length on the v-axis). M is the projection matrix. M1 is determined by the four parameters ax, ay, u0, and v0, which depend only on the camera itself and are therefore called the camera intrinsic parameters. M2 is determined by the pose of the camera relative to the world coordinate system and is called the camera extrinsic parameters. Determining the intrinsic and extrinsic parameters of a given camera is referred to as camera calibration.
It can be seen from the above equation that once the intrinsic and extrinsic parameters of the camera are known, the projection matrix M is known; then, for any spatial point P whose coordinates Cw = (Xw, Yw, Zw) in the world coordinate system are known, the projection position of that point in the image can be determined. The reverse, however, does not hold, mainly because the camera loses depth information during imaging.
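The following sketch ties the linear (pinhole) model together: intrinsic parameters M1, extrinsic parameters M2, and the projection of a world point to pixel coordinates. All numeric values (ax, ay, u0, v0, R, t, and the test point) are assumed for illustration only.

```python
import numpy as np

def project(point_w, ax, ay, u0, v0, R, t):
    """Project a 3D world point to pixel coordinates under the pinhole model."""
    # M1: intrinsic parameters (ax = f/dX, ay = f/dY, principal point (u0, v0)).
    M1 = np.array([[ax,  0.0, u0,  0.0],
                   [0.0, ay,  v0,  0.0],
                   [0.0, 0.0, 1.0, 0.0]])
    # M2: extrinsic parameters (pose of the camera relative to the world frame).
    M2 = np.eye(4)
    M2[:3, :3] = R
    M2[:3, 3] = t
    M = M1 @ M2                          # full projection matrix M = M1 M2
    s_uv = M @ np.append(point_w, 1.0)   # s * [u, v, 1]
    return s_uv[:2] / s_uv[2]            # divide out the scale factor s

print(project(np.array([0.5, -0.2, 4.0]),
              ax=800.0, ay=800.0, u0=320.0, v0=240.0,
              R=np.eye(3), t=np.zeros(3)))
```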
2) Non-linear model
Considering that a real lens does not perform perfect perspective imaging but exhibits varying degrees of distortion, the image of a spatial point is not at the position (X, Y) described by the linear model, but is shifted, under the influence of lens distortion, to the actual image plane coordinates (X′, Y′):
$$X' = X + \delta_x(X, Y), \qquad Y' = Y + \delta_y(X, Y)$$
where δx and δy represent the non-linear distortion values, which depend on the position of the image point in the image. In theory a lens exhibits both radial and tangential distortion; however, since the tangential distortion is small, only radial distortion is corrected, modeled as an even-power polynomial of the radial distance from the center of the image:
$$\delta_x = (X' - u_0)(k_1 r^2 + k_2 r^4 + \cdots), \qquad \delta_y = (Y' - v_0)(k_1 r^2 + k_2 r^4 + \cdots)$$
where (u0, v0) is the principal point location, and:
$$r^2 = (X' - u_0)^2 + (Y' - v_0)^2$$
It follows that the relative distortion in the X and Y directions (δx/X, δy/Y) is proportional to the square of the radial distance, i.e., the distortion is larger at the edges of the image. For machine vision applications that do not require high precision, first-order radial distortion is sufficient to describe the non-linear distortion, and accordingly the above equation can be written as:
$$\delta_x = k_1 (X' - u_0)\, r^2, \qquad \delta_y = k_1 (Y' - v_0)\, r^2$$
In this formulation, the intrinsic parameters of the non-linear camera model include the linear model parameters (ax, ay, u0, v0) plus the non-linear distortion parameters (k1, k2).
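A small illustrative sketch of first-order radial distortion, consistent with the description above; note that it applies the shift in the forward direction using the ideal coordinates, whereas the text writes the radius in terms of the distorted coordinates (X′, Y′), a difference of higher order only. The value of k1 is assumed.

```python
def apply_radial_distortion(X, Y, u0, v0, k1):
    """Shift ideal image coordinates (X, Y) by first-order radial distortion.

    The shift grows with the squared radial distance from the principal point
    (u0, v0), so it is strongest near the edges of the image.
    """
    r2 = (X - u0) ** 2 + (Y - v0) ** 2
    return X + (X - u0) * k1 * r2, Y + (Y - v0) * k1 * r2

print(apply_radial_distortion(600.0, 100.0, u0=320.0, v0=240.0, k1=1e-8))
```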
Through this projection model, the two-dimensional image corresponding to each group of parameter data can be obtained by mapping.
S5: and generating dynamic images in batches according to the two-dimensional images corresponding to each group of parameter data.
The obtained two-dimensional images are then played continuously, frame by frame, to form a dynamic image.
In the above example, the two-dimensional images formed under the multiple groups of parameter data are obtained from the multiple groups of parameter data set for the observation point. Because the different two-dimensional images correspond to different observation positions of the same elements, they correspond to projection images obtained in a 3D scene, and when played they visually exhibit depth parallax; that is, the resulting animation has a depth effect.
The above-mentioned dynamic image generation method is described below with reference to a specific scene, however, it should be noted that this specific embodiment is only for better explaining the present application, and is not to be construed as a limitation to the present application.
In this example, the three-dimensional scene elements shown in fig. 2 and the positional relationship between the elements are obtained, so that the three-dimensional scene shown in fig. 2 is obtained. With a camera as the observation point, the camera can be moved after its wide angle is set, as shown in fig. 9, so that the relative position between the camera and the three-dimensional scene changes and different two-dimensional images are obtained. For example, by adjusting the position of the camera, the different two-dimensional images shown in fig. 10 to 13 can be obtained. Comparing the two-dimensional images shown in fig. 10 to 13, it can be seen that although the three-dimensional scene does not change, the relative relationship of the elements in the formed images changes; by animating these pictures, a dynamically changing image can be formed.
The animation generation method described above can be applied to, for example: product promotion animations on shopping platforms, animated advertisements broadcast on television, dynamic images shown on display screens in shopping malls, dynamic images in animation videos, and so on. The animation generation method can be applied wherever such animation creatives are needed.
Specifically, as shown in fig. 14 and 15, the method includes:
s1: according to three-dimensional scene elements set by a user and the position relation among the elements (such as the distance among the elements) constructing three-dimensional scenes corresponding to the three-dimensional scene elements;
s2: performing perspective projection reconstruction on the three-dimensional scene through a camera set according to preset parameters to obtain a two-dimensional image of a view angle corresponding to the parameters;
s3: changing information such as visual angle and focal length of a camera, performing perspective projection reconstruction on the three-dimensional scene according to the camera corresponding to the changed parameters, and obtaining a two-dimensional image again (the step can be executed for multiple times);
s4: and carrying out dynamic image coding on the obtained two-dimensional image sequence according to information such as a frame rate.
From the user's perspective, the following steps may be included:
S1: the user imports the custom picture elements from which the animation is to be generated;
S2: the user sets the parameter information of each picture in the three-dimensional scene, as well as information such as the camera focal length;
S3: the user can generate animations rapidly through one-click, batch actions; in implementation, a plurality of animations can be generated at one time and the desired one selected from them.
It should be noted that, according to the embodiments of the present application, the steps illustrated in the flowcharts of the figures may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be executed in an order different from that described herein.
The method embodiment provided by the application can be applied to a mobile terminal, a computer terminal or similar equipment with processing capability. Taking the example of the method executed on the mobile terminal, fig. 16 is a hardware structure block diagram of the mobile terminal of an animation generation method according to the embodiment of the present application. As shown in fig. 16, the mobile terminal 10 may include one or more (only one shown) processors 102 (the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 16 is merely illustrative and is not intended to limit the structure of the electronic device. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 16, or have a different configuration than shown in FIG. 16.
The memory 104 may be configured to store software programs and modules of application software, such as program instructions/modules corresponding to the dynamic image generation method in the embodiments of the present application; the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implements the dynamic image generation method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the mobile terminal 10. In one example, the transmission module 106 includes a network interface controller (NIC) that can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission module 106 may be a radio frequency (RF) module, which is used to communicate with the internet wirelessly.
Referring to fig. 17, in a software implementation, the animation generating apparatus may be applied in a client, or may be applied in a server, and may include: a building module 1501, an obtaining module 1502, and a generating module 1503. Wherein:
a building module 1501, configured to build a three-dimensional scene according to a positional relationship between a plurality of display elements;
an obtaining module 1502, configured to obtain a plurality of two-dimensional images of the three-dimensional scene formed at an observation point by adjusting a parameter of the observation point;
a generating module 1503, configured to generate a dynamic image according to the plurality of two-dimensional images.
In one embodiment, the positional relationship may include, but is not limited to, at least one of: relative distance, relative magnitude, relative azimuth.
In one embodiment, the parameters of the observation points may include, but are not limited to, at least one of: focal length, optical center position, distance from each display element, and viewing angle.
In one embodiment, the processor obtains a plurality of two-dimensional images of the three-dimensional scene formed at the observation point by adjusting parameters of the observation point, which may include but is not limited to at least one of:
simulating the observation point to move at a preset speed in a direction parallel to the three-dimensional scene from left to right, and acquiring a plurality of two-dimensional images formed at the observation point;
simulating the observation point to move at a preset speed according to a direction perpendicular to the three-dimensional scene from back to front, and acquiring a plurality of two-dimensional images formed at the observation point;
and simulating the observation point to move at a preset speed in a direction parallel to the three-dimensional scene from top to bottom, and acquiring a plurality of two-dimensional images formed at the observation point.
In the software embodiment, the animation generation apparatus may be applied to a client, may also be applied to a server, and may include: the device comprises a first acquisition module, a construction module, a second acquisition module, a mapping module and a generation module. Wherein:
the first acquisition module is used for acquiring the position relation among the imported display elements;
the construction module is used for constructing a three-dimensional scene according to the position relation among the plurality of display elements;
the second acquisition module is used for acquiring a plurality of groups of parameter data set for the observation points;
the mapping module is used for mapping to obtain a two-dimensional image corresponding to each group of parameter data;
and the generation module is used for generating dynamic images in batches according to the two-dimensional images corresponding to each group of parameter data.
In the embodiments of the application, the animation effect of object movement is simulated through changes to the camera parameters, so that a dynamic image can be generated from a given set of three-dimensional scene elements. In this way, the technical problem in the prior art that dynamic images can only be formed by creating and adjusting pictures frame by frame, which makes dynamic image generation inefficient, is solved, and the technical effect of generating dynamic images simply and efficiently is achieved.
Although the present application provides method steps as described in an embodiment or flowchart, additional or fewer steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of sequences, and does not represent a unique order of performance. When an actual apparatus or client product executes, it may execute sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or methods shown in the figures.
The apparatuses or modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. The functionality of the modules may be implemented in the same one or more software and/or hardware implementations of the present application. Of course, a module that implements a certain function may also be implemented by a plurality of sub-modules or a combination of sub-units.
The methods, apparatuses, or modules described herein may be implemented by computer-readable program code in a controller in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, or embedded microcontrollers; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, besides implementing the controller purely as computer-readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component; or even the means for performing the functions may be regarded both as software modules implementing the method and as structures within the hardware component.
Some of the modules in the apparatus described herein may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus the necessary hardware. Based on such understanding, the technical solution of the present application, in essence or in the part that contributes over the prior art, may be embodied in the form of a software product, or may be embodied in the course of data migration. The computer software product may be stored in a storage medium such as ROM/RAM, magnetic disk, or optical disc, and includes instructions for causing a computer device (which may be a personal computer, mobile terminal, server, or network device, etc.) to perform the methods described in the various embodiments, or in portions of the embodiments, of the present application.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. All or portions of the present application are operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, mobile communication terminals, multiprocessor systems, microprocessor-based systems, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
While the present application has been described by way of embodiments, those of ordinary skill in the art will appreciate that there are numerous variations and modifications of the present application that do not depart from its spirit, and it is intended that the appended claims encompass such variations and modifications.

Claims (11)

1. A moving image generation method, comprising:
constructing a three-dimensional scene according to the position relation among a plurality of display elements, wherein the display elements are pictures, and the position relation is: relative distance, relative size, and relative azimuth between the elements;
acquiring a plurality of groups of parameter data set for observation points;
acquiring a plurality of two-dimensional images of the three-dimensional scene formed at the observation points by adjusting parameters of the observation points;
and generating a dynamic image according to the plurality of two-dimensional images.
2. The method of claim 1, wherein the parameters of the observation points comprise at least one of: focal length, optical center position, distance from each display element, and viewing angle.
3. The method of claim 1, wherein acquiring the plurality of two-dimensional images of the three-dimensional scene formed at the observation point by adjusting a parameter of the observation point comprises at least one of:
simulating the observation point to move at a preset speed in a direction parallel to the three-dimensional scene from left to right, and acquiring a plurality of two-dimensional images formed at the observation point;
simulating the observation point to move at a preset speed according to a direction perpendicular to the three-dimensional scene from back to front, and acquiring a plurality of two-dimensional images formed at the observation point;
and simulating the observation point to move at a preset speed in a direction parallel to the three-dimensional scene from top to bottom, and acquiring a plurality of two-dimensional images formed at the observation point.
4. The method of any one of claims 1 to 3, wherein generating a dynamic image from the plurality of two-dimensional images comprises:
and performing dynamic image coding on the plurality of two-dimensional images to generate dynamic images.
5. The method of any of claims 1-3, wherein the observation points comprise at least one of: physical camera, virtual camera.
6. A moving image generation method, comprising:
acquiring a position relation among a plurality of imported display elements, wherein the display elements are pictures, and the position relation is: relative distance, relative size, and relative azimuth between the elements;
constructing a three-dimensional scene according to the position relation among the plurality of display elements;
acquiring a plurality of groups of parameter data set for observation points;
mapping to obtain a two-dimensional image corresponding to each group of parameter data;
and generating dynamic images in batches according to the two-dimensional images corresponding to each group of parameter data.
7. A processing device comprising a processor and a memory for storing processor-executable instructions that when executed by the processor implement:
constructing a three-dimensional scene according to the position relation among a plurality of display elements, wherein the display elements are pictures, and the position relation is: relative distance, relative size, and relative azimuth between the elements;
acquiring a plurality of groups of parameter data set for observation points;
acquiring a plurality of two-dimensional images of the three-dimensional scene formed at the observation points by adjusting parameters of the observation points;
and generating a dynamic image according to the plurality of two-dimensional images.
8. The apparatus of claim 7, wherein the parameters of the observation points comprise at least one of: focal length, optical center position, distance from each display element, and viewing angle.
9. The apparatus of claim 7, wherein the plurality of two-dimensional images of the three-dimensional scene formed at the observation point are obtained by adjusting a parameter of the observation point, comprising at least one of:
simulating the observation point to move at a preset speed in a direction parallel to the three-dimensional scene from left to right, and acquiring a plurality of two-dimensional images formed at the observation point;
simulating the observation point to move at a preset speed according to a direction perpendicular to the three-dimensional scene from back to front, and acquiring a plurality of two-dimensional images formed at the observation point;
and simulating the observation point to move at a preset speed in a direction parallel to the three-dimensional scene from top to bottom, and acquiring a plurality of two-dimensional images formed at the observation point.
10. A processing device comprising a processor and a memory for storing processor-executable instructions, the instructions when executed by the processor implementing:
acquiring a position relation among a plurality of imported display elements, wherein the display elements are pictures, and the position relation is: relative distance, relative size, and relative azimuth between the elements;
constructing a three-dimensional scene according to the position relation among the plurality of display elements;
acquiring a plurality of groups of parameter data set for observation points;
mapping to obtain a two-dimensional image corresponding to each group of parameter data;
and generating dynamic images in batches according to the two-dimensional images corresponding to each group of parameter data.
11. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 5.
CN201711128596.3A 2017-11-15 2017-11-15 Dynamic image generation method and processing device Active CN109801351B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711128596.3A CN109801351B (en) 2017-11-15 2017-11-15 Dynamic image generation method and processing device
PCT/CN2018/114540 WO2019096057A1 (en) 2017-11-15 2018-11-08 Dynamic image generation method, and processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711128596.3A CN109801351B (en) 2017-11-15 2017-11-15 Dynamic image generation method and processing device

Publications (2)

Publication Number Publication Date
CN109801351A CN109801351A (en) 2019-05-24
CN109801351B true CN109801351B (en) 2023-04-14

Family

ID=66539359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711128596.3A Active CN109801351B (en) 2017-11-15 2017-11-15 Dynamic image generation method and processing device

Country Status (2)

Country Link
CN (1) CN109801351B (en)
WO (1) WO2019096057A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112579225A (en) * 2019-09-30 2021-03-30 北京国双科技有限公司 Processing method and device for delayed element display
CN112348938A (en) * 2020-10-30 2021-02-09 杭州安恒信息技术股份有限公司 Method, device and computer equipment for optimizing three-dimensional object

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850352A (en) * 1995-03-31 1998-12-15 The Regents Of The University Of California Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images
JP2003337947A (en) * 2002-05-21 2003-11-28 Iwane Kenkyusho:Kk Method and device for image display, and storage medium recorded with image display method
JP2010157035A (en) * 2008-12-26 2010-07-15 Ritsumeikan System, method, and program for displaying composite image

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6278460B1 (en) * 1998-12-15 2001-08-21 Point Cloud, Inc. Creating a three-dimensional model from two-dimensional images
JP4285422B2 (en) * 2005-03-04 2009-06-24 日本電信電話株式会社 Moving image generation system, moving image generation apparatus, moving image generation method, program, and recording medium
TW201130285A (en) * 2010-02-26 2011-09-01 Hon Hai Prec Ind Co Ltd System and method for controlling 3D images
JP5375897B2 (en) * 2011-08-25 2013-12-25 カシオ計算機株式会社 Image generation method, image generation apparatus, and program
US8988446B2 (en) * 2011-10-07 2015-03-24 Zynga Inc. 2D animation from a 3D mesh
EP2779102A1 (en) * 2013-03-12 2014-09-17 E.sigma Systems GmbH Method of generating an animated video sequence
CN103514621B (en) * 2013-07-17 2016-04-27 宝鸡翼迅网络科技有限公司 The authentic dynamic 3D reproducting method of case, event scenarios and reconfiguration system
CN103679791A (en) * 2013-12-19 2014-03-26 广东威创视讯科技股份有限公司 Split screen updating method and system for three-dimensional scene
CN103714565A (en) * 2013-12-31 2014-04-09 广州市久邦数码科技有限公司 Method and system for generating dynamic image with audio
CN105551084B (en) * 2016-01-28 2018-06-08 北京航空航天大学 A kind of outdoor three-dimensional scenic combination construction method of image content-based parsing
CN107134000B (en) * 2017-05-23 2020-10-23 张照亮 Reality-fused three-dimensional dynamic image generation method and system

Also Published As

Publication number Publication date
CN109801351A (en) 2019-05-24
WO2019096057A1 (en) 2019-05-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant