CN109801351A - Dynamic image generation method and processing equipment - Google Patents
Dynamic image generation method and processing equipment
- Publication number
- CN109801351A (application CN201711128596.3A)
- Authority
- CN
- China
- Prior art keywords
- three-dimensional
- observation point
- image
- dynamic image
- three-dimensional scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
This application provides a dynamic image generation method and processing equipment, wherein the method comprises: constructing a three-dimensional scene according to the positional relationships among multiple display elements; obtaining, by adjusting parameters of an observation point, multiple two-dimensional images that the three-dimensional scene forms at the observation point; and generating a dynamic image according to the multiple two-dimensional images. The above scheme solves the technical problem that existing approaches, which require setting up and adjusting pictures frame by frame before a dynamic image can be formed, make dynamic image generation too inefficient, and achieves the technical effect of generating dynamic images simply and efficiently.
Description
Technical field
This application involves technical field of data processing, in particular to a kind of dynamic image generation method and processing equipment.
Background technique
With the improvement of users' mobile device performance, people's demand for dynamic views keeps growing, for example, dynamic advertising images, dynamic buyer's guides, dynamic task images, and so on.
However, the existing way of generating dynamic images is typically to make the pictures frame by frame and then assemble them into a dynamic image. When there is batch demand, the amount of work in this approach is huge, and the process is not reusable: for another set of picture requirements, everything must be remade, so the implementation workload is especially large and the efficiency is very low.
In view of the above problems, no effective solution has yet been proposed.
Summary of the invention
The embodiments of the present application provide a dynamic image generation method and processing equipment, so as to achieve the technical effect of generating dynamic images simply and efficiently.
A dynamic image generation method, comprising:
constructing a three-dimensional scene according to the positional relationships among multiple display elements;
obtaining, by adjusting parameters of an observation point, multiple two-dimensional images that the three-dimensional scene forms at the observation point;
generating a dynamic image according to the multiple two-dimensional images.
A dynamic image generation method, comprising:
obtaining the positional relationships among multiple imported display elements;
constructing a three-dimensional scene according to the positional relationships among the multiple display elements;
obtaining multiple groups of parameter data set for an observation point;
mapping each group of parameter data to its corresponding two-dimensional image;
batch-generating dynamic images according to the two-dimensional images corresponding to each group of parameter data.
A processing device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
constructing a three-dimensional scene according to the positional relationships among multiple display elements;
obtaining, by adjusting parameters of an observation point, multiple two-dimensional images that the three-dimensional scene forms at the observation point;
generating a dynamic image according to the multiple two-dimensional images.
A processing device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
obtaining the positional relationships among multiple imported display elements;
constructing a three-dimensional scene according to the positional relationships among the multiple display elements;
obtaining multiple groups of parameter data set for an observation point;
mapping each group of parameter data to its corresponding two-dimensional image;
batch-generating dynamic images according to the two-dimensional images corresponding to each group of parameter data.
A computer-readable storage medium, on which computer instructions are stored, wherein the instructions, when executed, implement the steps of the preceding method.
In the embodiments of the present application, the animation effect of a moving object is simulated by changing the parameters of the observation point, so that a dynamic image can be generated from a given group of three-dimensional scene elements. In this way, the technical problem that existing approaches must set up and adjust pictures frame by frame before a dynamic image can be formed, making generation too inefficient, is solved, and the technical effect of generating dynamic images simply and efficiently is achieved.
Detailed description of the invention
The drawings described herein are provided for a further understanding of the present application and constitute a part of it; they do not constitute a limitation of the application. In the drawings:
Fig. 1 is a flowchart of the dynamic image generation method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of three-dimensional scene elements according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a simulated camera's moving direction according to an embodiment of the present application;
Fig. 4 is another schematic diagram of a simulated camera's moving direction according to an embodiment of the present application;
Fig. 5 is yet another schematic diagram of a simulated camera's moving direction according to an embodiment of the present application;
Fig. 6 is a schematic diagram of three-dimensional projection according to an embodiment of the present application;
Fig. 7 is a schematic diagram of the coordinate systems according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the imaging principle according to an embodiment of the present application;
Fig. 9 is a schematic diagram of camera movement according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a two-dimensional image obtained in camera perspective mode according to an embodiment of the present application;
Fig. 11 is another schematic diagram of a two-dimensional image obtained in camera perspective mode according to an embodiment of the present application;
Fig. 12 is yet another schematic diagram of a two-dimensional image obtained in camera perspective mode according to an embodiment of the present application;
Fig. 13 is still another schematic diagram of a two-dimensional image obtained in camera perspective mode according to an embodiment of the present application;
Fig. 14 is a flowchart of the animation generation method according to an embodiment of the present application;
Fig. 15 is another flowchart of the animation generation method according to an embodiment of the present application;
Fig. 16 is an architecture diagram of the user terminal according to an embodiment of the present application;
Fig. 17 is a structural block diagram of the animation generation apparatus according to an embodiment of the present application.
Specific embodiment
To make the purposes, technical schemes, and advantages of the present application clearer, the application is described in further detail below with reference to the embodiments and the accompanying drawings. The exemplary embodiments of the application and their explanations are used here to explain the application, not to limit it.
At present, in order to generate an animation, it is generally necessary to produce images one by one and then form the animation from those images. This inevitably makes the animation template non-reusable: animations cannot be batch-generated, the implementation is complex, and the efficiency is low. For this reason, it was considered that, after obtaining the elements for generating an animation (for example, patterned pictures or pictures with text), the movement of a camera can be simulated on the basis of these elements. Thus, while the elements themselves remain still, the animation effect of a moving object can be achieved by adjusting the camera's parameters, so an animation can be generated simply. Moreover, the approach is reusable: given another group of three-dimensional scene elements, animations with similar effects that portray different objects for the user can be obtained in the same way.
Fig. 1 is a flowchart of one embodiment of the animation generation method described herein. Although the present application provides the method operation steps or apparatus structures shown in the following embodiments or drawings, the method or apparatus may, on the basis of routine work without creative effort, include more or fewer operation steps or modular units. For steps or structures with no logically necessary causal relationship, the execution order of the steps or the modular structure of the apparatus is not limited to the order or structure described in the embodiments and shown in the drawings. When the method or modular structure is applied in an actual apparatus or terminal product, it can be executed sequentially or in parallel according to the embodiments or the connections of the method or modular structure shown in the drawings (for example, in a parallel-processor or multi-threaded environment, or even a distributed processing environment).
As shown in Figure 1, animation producing method may include steps of:
Step 101: constructing a three-dimensional scene according to the positional relationships among multiple display elements;
The above display elements may be text pictures or image graphics, among others. For example, as shown in Fig. 2, the display elements are four image graphics and one text picture.
A three-dimensional scene cannot be formed from the display elements alone; that is, a three-dimensional spatial interface cannot be formed. To form a three-dimensional spatial interface, the positional relationships among the elements are also needed, for example: relative distance, relative size, and relative orientation. Once the relative distance, relative size, and relative orientation between the elements are obtained, the three-dimensional scene can be formed. That is, as shown in Fig. 2, after obtaining the relative distances, relative sizes, and relative orientations among the four image graphics and the text picture, the three-dimensional scene shown in Fig. 2 can be formed.
For example, Fig. 3 and Fig. 4 are schematic diagrams of three-dimensional scenes built from the same scene elements: material 1, material 2, and material 3. Because the relative positions of the materials differ between Fig. 3 and Fig. 4, different three-dimensional scenes are formed.
It should be noted, however, that the positional relationships listed above are only illustrative; other positional relationships are also possible in implementation, and the present application does not limit this.
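As a minimal sketch of this step, a "scene" can be represented as pictures sharing one world coordinate system, each carrying a position and size. The element names, coordinates, and the back-to-front draw ordering below are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    position: tuple  # (x, y, z) world coordinates; larger z = farther away
    size: tuple      # (width, height) of the picture

def build_scene(elements):
    """A 'scene' here is just the elements sharing one world coordinate
    system, ordered back-to-front so nearer pictures overdraw farther ones."""
    return sorted(elements, key=lambda e: -e.position[2])

scene = build_scene([
    Element("text_banner", (0.0, 1.0, 5.0), (4.0, 1.0)),
    Element("product_1", (-2.0, 0.0, 3.0), (1.5, 1.5)),
    Element("product_2", (2.0, 0.0, 2.0), (1.5, 1.5)),
])
print([e.name for e in scene])  # ['text_banner', 'product_1', 'product_2']
```

Relative distance, size, and orientation are all captured once every element lives in the same coordinate system, which is the precondition the text states for forming the scene.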
Step 102: obtaining, by adjusting the parameters of the observation point, multiple two-dimensional images that the three-dimensional scene forms at the observation point;
When generating the multiple two-dimensional images, one or more of the following parameters of the observation point (for example, a camera) can be adjusted to obtain two-dimensional images from multiple different viewing angles: the camera's focal length, the camera's optical center position, the camera's distance from the three-dimensional scene elements, and the camera's viewing angle.
The following takes adjusting the camera's distance as an example:
1) As shown in Fig. 4, the simulated camera can be moved at a predetermined rate from left to right, parallel to the three-dimensional scene, to obtain multiple two-dimensional images.
For example, if a two-dimensional image is generated every 2 cm of movement and the camera moves 10 cm in total, 6 two-dimensional images are obtained. These six two-dimensional images are the images the same three-dimensional scene forms when the camera is at different viewing angles.
2) As shown in Fig. 5, the simulated observation point can be moved at a preset speed from back to front, perpendicular to the three-dimensional scene, to obtain multiple two-dimensional images.
3) As shown in Fig. 3, the simulated observation point can be moved at a predetermined rate from top to bottom, parallel to the three-dimensional scene, to obtain multiple two-dimensional images.
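The stepped sweep in the example above can be sketched as follows; the function simply enumerates the capture positions (one frame per position), and the 2 cm step and 10 cm total are the example's own numbers:

```python
def sweep_positions(start_cm, total_cm, step_cm):
    """Capture positions for a straight camera sweep, including the start.

    A 10 cm sweep sampled every 2 cm yields 6 positions, matching the
    "6 two-dimensional images" in the worked example.
    """
    n_steps = int(total_cm // step_cm)
    return [start_cm + i * step_cm for i in range(n_steps + 1)]

positions = sweep_positions(0.0, 10.0, 2.0)
print(positions)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```

The same enumeration applies to the perpendicular (back-to-front) and vertical (top-to-bottom) movement modes; only the axis along which the position is interpreted changes.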
It should be noted, however, that the above only adjusts the distance between the observation-point model and the three-dimensional scene to form different two-dimensional images. In implementation, different images of the same three-dimensional scene can also be obtained by adjusting, for example, the focal length of the observation point or the optical center position of the observation point, so that a sense of motion is formed while the three-dimensional scene itself stays still.
The above observation point may be a device capable of imaging, such as a simulated camera or a physical camera; the specific form in which it exists can be chosen according to actual needs and circumstances, and the present application does not limit this.
This can be understood by analogy with driving: although the telegraph poles, houses, and mountains outside the window do not move, as the vehicle carrying the observer moves, the telegraph poles, houses, and mountains appear to the observer to be in a moving state.
As shown in Fig. 6, which is a schematic diagram of the imaging principle of three-dimensional space, the three-dimensional space is projected through the observation-point model to form a two-dimensional image. If the position of the observation point, its distance from the projection space, and so on are adjusted, the resulting two-dimensional image also differs and changes accordingly.
When generating the two-dimensional images, the observation point can use a perspective model, so that the obtained two-dimensional image is one in which multiple elements overlap. The so-called perspective model refers to the process of mapping the three-dimensional scene to a two-dimensional image through a camera parameter model, i.e., simulating the real process of taking a photograph.
Step 103: generating a dynamic image according to the multiple two-dimensional images.
Each obtained two-dimensional image is the image the above three-dimensional scene forms at a certain observation-point position and with certain parameters. After these two-dimensional images are obtained, they can be assembled into a two-dimensional image sequence, and the sequence can then be encoded as a moving picture to form the dynamic image.
When forming the two-dimensional image sequence, the frames can be arranged in the order in which they were formed, in the reverse of that order, or in a periodically repeating manner. For example, the sequence can follow the observation point moving from back to front perpendicular to the three-dimensional scene, or from front to back perpendicular to the three-dimensional scene, or moving at a preset speed from back to front and then from front to back, or from front to back and then from back to front.
The generation modes of the two-dimensional image sequence listed above are only illustrative; other orderings can also be used in actual implementation and can be flexibly adjusted and selected, and the present application does not limit this.
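The three ordering options above can be sketched as follows before the frames are handed to a video or GIF encoder; `order_frames` and its mode names are illustrative, not from the original:

```python
def order_frames(frames, mode="forward"):
    """Arrange captured frames for encoding: as formed, reversed, or
    'ping-pong' (forward then back, a periodically repeatable loop)."""
    if mode == "forward":
        return list(frames)
    if mode == "reverse":
        return list(reversed(frames))
    if mode == "pingpong":
        # forward then backward, without repeating the two endpoints
        return list(frames) + list(reversed(frames))[1:-1]
    raise ValueError(f"unknown mode: {mode}")

frames = ["f0", "f1", "f2", "f3"]
print(order_frames(frames, "pingpong"))  # ['f0', 'f1', 'f2', 'f3', 'f2', 'f1']
```

The ping-pong ordering loops seamlessly when the encoded animation repeats, which matches the "periodically repeating" option in the text.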
In implementation, multiple dynamic image samples can be batch-generated through one-click batch operations. Moreover, the observation-point variation parameters set in this way can be saved and applied to other three-dimensional scene elements, so as to obtain other dynamic images with similar variation patterns.
In one embodiment, the dynamic image generation method may also comprise:
S1: obtaining the positional relationships among multiple imported display elements;
That is, the display elements (i.e., pictures) may be imported manually, or may have been pre-stored in the computer and retrieved directly; the positional relationships between the pictures are then set, and the pictures are arranged according to those relationships to form the three-dimensional scene.
For example, the positional relationships between the pictures can be obtained from the pre-configured or real-time-computed coordinates (x-, y-, z-axis coordinates) of each picture in a three-dimensional world coordinate system, with all the pictures placed in the same coordinate system.
S2: constructing a three-dimensional scene according to the positional relationships among the multiple display elements;
Placing each picture in the same three-dimensional coordinate system according to its three-dimensional world coordinate information forms a three-dimensional scene. In implementation, the X-, Y-, and Z-axis coordinates of the pictures can be input into a function as parameters to form the three-dimensional scene, and the three-dimensional scene may also have transparency effects.
S3: obtaining multiple groups of parameter data set for the observation point;
The parameter data of the observation point may include, but is not limited to: position, focal length, aperture position, and so on. By adjusting the observation point's parameter data, different imaging effects of the same object can be obtained. For example, when shooting the same thing, a longer focal length makes it appear larger, and a shorter focal length makes it appear smaller.
S4: mapping each group of parameter data to its corresponding two-dimensional image;
Under different parameter data, different two-dimensional images (i.e., imaging results) of the same imaging object can be obtained through three-dimensional modeling (namely, the three-dimensional scene established from the above pictures and the positional relationships between them).
When mapping each group of parameter data to its corresponding two-dimensional image, the two-dimensional image can be obtained through a projection model. That is, the three-dimensional scene is projected through the imaging lens onto the camera's two-dimensional image plane, and this projection can be represented by an imaging transformation, namely the projection model. Specifically, the projection model mainly involves the following coordinate systems: the image coordinate system, the camera coordinate system, and the world coordinate system.
The image acquired by the camera can be converted, in the form of a standard TV signal, into a digital image by a high-speed image acquisition system and input to the computer. Each image is an M×N array; the value of each element (namely, pixel) of the M-row, N-column image is the brightness (namely, gray level) of the image point.
As shown in Fig. 7, (u, v) denotes image coordinates in units of pixels. Since (u, v) only indicates the column and row of the pixel in the array and does not represent the pixel's position in the image in physical units, an image coordinate system expressed in physical units (for example, millimeters) can be established, namely the XOY coordinate system shown in Fig. 7.
In the XOY coordinate system, the origin O is normally defined as the intersection of the camera's optical axis and the image plane, which is usually at the image center, although this origin O can sometimes be offset. The transformation from the physical image coordinate system to the pixel image coordinate system can be described by the following matrix:
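The matrix itself was lost when the page images were stripped from this text. Under the usual textbook convention, with dX and dY the physical width and height of one pixel and (u0, v0) the principal point in pixels (symbols consistent with ax = f/dX and ay = f/dY used later in this description), the relation is presumably:

```latex
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix}
1/dX & 0 & u_0 \\
0 & 1/dY & v_0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
```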
The imaging geometry of the camera can be depicted as in Fig. 8.
In Fig. 8, point O is the camera's optical center; the x-axis and y-axis are parallel to the image's X-axis and Y-axis, and the z-axis is the camera's optical axis, perpendicular to the image plane. The intersection of the optical axis and the image plane is the origin of the image coordinate system. The rectangular coordinate system formed by point O and the x-, y-, and z-axes is called the camera coordinate system, and OO1 is the camera's focal length.
The choice of world coordinate system is arbitrary. The transformation from the camera coordinate system to the world coordinate system is a 3D-to-3D conversion process, which can be described by a rotation matrix R and a translation vector t, with the following relationship:
Based on the above three coordinate systems, the camera's linear model and nonlinear model are described below:
1) Pinhole imaging model (i.e., the linear model):
The pinhole imaging model can also be called the linear camera model. The imaging position of any spatial point P in the image can be approximated by the pinhole imaging model; that is, the projection position p of point P in the image is the intersection of the line OP, connecting the optical center O and point P, with the image plane. This relationship is called central projection (i.e., perspective projection).
The proportional relationship can be expressed as follows, where (X, Y) are the image coordinates of point p, (x, y, z) are the coordinates of the spatial point P in the camera coordinate system, and f is the distance from the xy-plane to the image plane (f being the camera's focal length). This proportional relationship can be represented by the following matrix, where s is a scale factor and P is the perspective projection matrix we care about most:
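The equations referenced above were likewise lost with the page images; the standard pinhole relations matching the symbols just defined ((X, Y) image coordinates, (x, y, z) camera coordinates, f the focal length, s the scale factor) would read:

```latex
X = \frac{f\,x}{z}, \qquad Y = \frac{f\,y}{z}
```

and, in homogeneous matrix form with the scale factor s = z:

```latex
s \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
```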
Then, from the above, we can easily obtain the coordinate conversion relationship between the expression of point P in the world coordinate system and its corresponding point p in the image coordinate system, as shown below:
Here ax = f/dX is the scale factor on the u-axis (the focal length normalized on the u-axis), and ay = f/dY is the scale factor on the v-axis (the focal length normalized on the v-axis). M is the projection matrix. M1 is determined by the four parameters ax, ay, u0, v0, which relate only to the camera's internal structure and can therefore be called the camera's intrinsic parameters. M2 is determined by the camera's orientation relative to the world coordinate system and is called the camera's extrinsic parameters. Determining a camera's intrinsic and extrinsic parameters is called camera calibration.
From the above formula, it can be seen that once the camera's intrinsic and extrinsic parameters have been obtained (equivalent to obtaining the projection matrix M), then for any spatial point P whose coordinates Cw = (Xw, Yw, Zw) in the world coordinate system are known, the projected position of the point in the image can be determined. The reverse derivation, however, is not feasible, chiefly because the imaging depth is lost in the process of imaging.
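As a hedged numeric sketch of the linear model just described: a world point is moved into the camera frame by R and t and then projected by the focal length. The rotation, translation, focal length, and sample point below are made-up illustration values, not from the patent:

```python
import numpy as np

def project(point_world, R, t, f):
    """Pinhole projection: world point -> camera frame -> (X, Y) image plane."""
    x, y, z = R @ np.asarray(point_world, dtype=float) + t
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z, f * y / z)

R = np.eye(3)                   # camera axes aligned with world axes
t = np.array([0.0, 0.0, 4.0])   # scene sits 4 units in front of the camera
X, Y = project((1.0, 2.0, 0.0), R, t, f=2.0)
print(X, Y)  # 0.5 1.0
```

Note that depth z is divided out, which is exactly why the text says the reverse derivation (image point back to a unique world point) is not feasible.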
2) Nonlinear model
Considering that a real camera is not an ideal perspective imager but exhibits distortion to varying degrees, the imaging of a spatial point is not at the position (X, Y) described by the linear model; instead, under the influence of lens distortion, it is offset to the actual image plane coordinates (X', Y'):
Here δx and δy denote the nonlinear distortion values, which are related to the position of the image point in the image. In theory, a lens exhibits both radial and tangential distortion simultaneously; however, since tangential distortion varies little, the correction is dominated by radial distortion, which is expressed by an even-power polynomial model in the radial distance from the image center:
where (u0, v0) is the exact value of the principal point location, and:
r² = (X′ − u0)² + (Y′ − v0)²
It can be seen that the relative distortion values (δx/X, δy/Y) in the X and Y directions are proportional to the square of the radial radius; that is, the distortion is larger near the image edges. For non-precision machine vision, first-order radial distortion is sufficient to describe the nonlinear distortion, and accordingly the above formula can be expressed as:
In this way, the intrinsic parameters of the nonlinear camera model may include: the linear model parameters (ax, ay, u0, v0) plus the nonlinear distortion parameters (k1, k2).
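A small sketch of the first-order radial distortion just described: the offset is proportional to r², so it grows toward the image edges. The coefficient k1 and the sample points are illustrative values only:

```python
def distort(X, Y, k1, u0=0.0, v0=0.0):
    """First-order radial distortion about the principal point (u0, v0)."""
    r2 = (X - u0) ** 2 + (Y - v0) ** 2
    dx = (X - u0) * k1 * r2   # relative offset dx/X is proportional to r^2
    dy = (Y - v0) * k1 * r2
    return X + dx, Y + dy

center = distort(0.1, 0.0, k1=0.05)  # near the principal point
edge = distort(2.0, 0.0, k1=0.05)    # near the image edge
# The point near the center barely moves; the edge point shifts far more.
print(center[0] - 0.1, edge[0] - 2.0)
```

This mirrors the observation in the text that distortion is negligible at the image center and largest at the edges.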
Through the above projection model, the two-dimensional image corresponding to each group of parameter data can be obtained by mapping.
S5: batch-generating dynamic images according to the two-dimensional images corresponding to each group of parameter data.
Playing the multiple two-dimensional images obtained above continuously, frame by frame, forms a dynamic image.
In the above example, multiple groups of parameter data are set for the observation point, and the two-dimensional images formed under each group of parameter data are obtained. Because different two-dimensional images correspond to different observation positions of the same elements, these two-dimensional images correspond to projections obtained in a 3D scene; when played, the visual effect carries depth-of-field parallax, that is, the resulting animation has a sense of depth.
The above dynamic image generation method is illustrated below with reference to a concrete scenario. It should be noted, however, that this specific embodiment is only intended to better illustrate the application and does not constitute an undue limitation on it.
In this example, the three-dimensional scene elements shown in Fig. 2 and the positional relationships between the elements are obtained, yielding the three-dimensional scene shown in Fig. 2. As shown in Fig. 9, with a camera as the observation point, after the camera's wide angle is set, the camera is moved so that the relative position between the camera and the three-dimensional scene changes, producing different two-dimensional images. For example, by adjusting the camera's position, the different two-dimensional images shown in Figs. 10 to 13 can be obtained. Comparing the two-dimensional images shown in Figs. 10 to 13, it can be found that although the three-dimensional scene does not change, the relative relationships of the elements in the formed images do vary; generating an animation from these pictures can form a dynamically changing image.
The above animation generation method can be applied to, for example: product promotion animations on shopping platforms, animated advertisements delivered on TV, dynamic images shown on screens in shopping malls, and dynamic images in videos or on-screen animations. Wherever such animated creatives are needed, the above animation generation method can be applied.
Specifically, as shown in Fig. 14 and Fig. 15, the method may comprise:
S1: constructing the three-dimensional scene corresponding to the three-dimensional scene elements, according to the elements set by the user and the positional relationships between the elements (for example, the distances between elements);
S2: performing perspective projection reconstruction of the above three-dimensional scene through a camera configured with preset parameters, obtaining the two-dimensional image at the viewing angle corresponding to those parameters;
S3: transforming information such as the camera's viewing angle and focal length, and performing perspective projection reconstruction of the three-dimensional scene again with the camera corresponding to the transformed parameters, so as to obtain another two-dimensional image (this step can be executed repeatedly);
S4: performing moving picture encoding of the obtained two-dimensional image sequence according to information such as the frame rate.
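Steps S1 to S4 can be sketched as the loop below: one 2-D frame is rendered per camera-parameter group, and the collected sequence is what S4 would encode. Here `render` is a toy stand-in for the perspective projection reconstruction, and all names and parameter values are hypothetical:

```python
def render(scene, camera):
    """Toy stand-in for perspective projection: records which camera saw what."""
    return f"frame(scene={scene}, z={camera['z']}, f={camera['f']})"

def generate_animation(scene, camera_params):
    """One 2-D frame per parameter group (S2/S3); returns the sequence for S4."""
    return [render(scene, cam) for cam in camera_params]

# Camera moving closer to the scene, focal length fixed.
params = [{"z": z, "f": 2.0} for z in (10, 8, 6, 4)]
frames = generate_animation("demo-scene", params)
print(len(frames))  # 4 frames, ready for moving-picture encoding
```

Repeating S3 with a different parameter schedule (varying focal length instead of distance, say) reuses the same loop unchanged, which is the reusability the text claims over frame-by-frame drawing.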
At the user level, the method may include the following steps:
S1: the user imports, in a customized way, the picture elements that need animation generation;
S2: the user sets information such as the parameters of the pictures in the three-dimensional scene and the camera focal length;
S3: the user quickly generates animations through one-click batch operations; in implementation, multiple animations can be generated at once and selected from.
According to the embodiments of the present application, it should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described can be performed in an order different from the one herein.
The method embodiments provided herein can be executed in a mobile terminal, a computer terminal, or a similar device with processing capability. Taking running on a mobile terminal as an example, Fig. 16 is a hardware block diagram of a mobile terminal for the animation generation method of an embodiment of the present application. As shown in Fig. 16, the mobile terminal 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. A person of ordinary skill in the art can appreciate that the structure shown in Fig. 16 is only illustrative and does not limit the structure of the above electronic device. For example, the mobile terminal 10 may also include more or fewer components than shown in Fig. 16, or have a configuration different from that shown in Fig. 16.
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the dynamic image generation method in the embodiments of the present application. The processor 102 runs the software programs and modules stored in the memory 104, thereby performing various functional applications and data processing, i.e., implementing the above-described method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102; such remote memories may be connected to the mobile terminal 10 through a network. Examples of the above network include but are not limited to the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission module 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the mobile terminal 10. In one example, the transmission module 106 includes a network interface controller (NIC), which can be connected with other network devices through a base station so as to communicate with the Internet. In another example, the transmission module 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
Referring to Figure 17, in a software implementation, the dynamic image generation apparatus may be applied in a client or in a server, and may include: a construction module 1501, an acquisition module 1502, and a generation module 1503. Specifically:
the construction module 1501 is configured to construct a three-dimensional scene according to the positional relationships among multiple display elements;
the acquisition module 1502 is configured to obtain, by adjusting the parameters of an observation point, multiple two-dimensional images of the three-dimensional scene formed at the observation point;
the generation module 1503 is configured to generate a dynamic image according to the multiple two-dimensional images.
In one embodiment, the positional relationship may include but is not limited to at least one of: relative distance, relative size, relative orientation.
In one embodiment, the parameters of the observation point may include but are not limited to at least one of: focal length, optical center position, distance to each display element, viewing angle.
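One hypothetical way to group these observation-point parameters into a single structure is sketched below; the field names and sample values are illustrative assumptions, not terminology from the patent.

```python
from dataclasses import dataclass

@dataclass
class ObservationPoint:
    focal_length: float    # focal length of the simulated camera
    optical_center: tuple  # (x, y, z) position of the optical center
    element_distances: dict  # distance from the point to each display element
    view_angle: float      # viewing angle, in degrees

op = ObservationPoint(focal_length=35.0,
                      optical_center=(0.0, 0.0, -10.0),
                      element_distances={"background": 18.0, "sprite": 12.0},
                      view_angle=60.0)
print(op)
```

Adjusting any of these fields between frames is what yields a different two-dimensional projection of the same scene.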
In one embodiment, the processor obtaining, by adjusting the parameters of the observation point, the multiple two-dimensional images of the three-dimensional scene formed at the observation point may include but is not limited to at least one of:
simulating the observation point moving at a predetermined rate in a left-to-right direction parallel to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point;
simulating the observation point moving at a preset speed in a back-to-front direction perpendicular to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point;
simulating the observation point moving at a predetermined rate in a top-to-bottom direction parallel to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point.
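The three movement modes above can be sketched as generators of observation-point position sequences. The axis conventions here are assumptions (the scene plane is taken as x-y, with the camera looking along +z), as are the function name and defaults.

```python
import numpy as np

def camera_path(mode, steps=30, rate=0.1, start=(0.0, 0.0, -10.0)):
    """Return a (steps, 3) array of observation-point positions for one
    of the three simulated movement modes (axis conventions assumed)."""
    sx, sy, sz = start
    t = np.arange(steps) * rate
    if mode == "left_to_right":   # parallel to the scene, along +x
        return np.stack([sx + t, np.full(steps, sy), np.full(steps, sz)], axis=1)
    if mode == "back_to_front":   # perpendicular to the scene, along +z
        return np.stack([np.full(steps, sx), np.full(steps, sy), sz + t], axis=1)
    if mode == "top_to_bottom":   # parallel to the scene, along -y
        return np.stack([np.full(steps, sx), sy - t, np.full(steps, sz)], axis=1)
    raise ValueError(f"unknown mode: {mode}")

path = camera_path("left_to_right", steps=5, rate=0.5)
print(path)
```

Rendering the scene once per row of the returned array gives the frame sequence that is later encoded into the dynamic image.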
In another software implementation, the dynamic image generation apparatus may be applied in a client or in a server, and may include: a first acquisition module, a construction module, a second acquisition module, a mapping module, and a generation module. Specifically:
the first acquisition module is configured to obtain the positional relationships among multiple imported display elements;
the construction module is configured to construct a three-dimensional scene according to the positional relationships among the multiple display elements;
the second acquisition module is configured to obtain multiple groups of parameter data set for the observation point;
the mapping module is configured to obtain, by mapping, the two-dimensional images corresponding to each group of parameter data;
the generation module is configured to batch-generate dynamic images according to the two-dimensional images corresponding to each group of parameter data.
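The batch pipeline of this second apparatus, where each group of observation-point parameters maps to its own frame sequence and hence its own dynamic image, can be sketched as follows. `render_frame` is a hypothetical stand-in for the real perspective-projection step, and the focal-length values are made up for illustration.

```python
def render_frame(params, step):
    # Hypothetical renderer: returns a tag identifying the focal length and frame index.
    return (params["focal"], step)

def generate_dynamic_image(params, n_frames=3):
    # Map one group of parameter data to one frame sequence (one dynamic image).
    return [render_frame(params, i) for i in range(n_frames)]

# Multiple groups of parameter data set for the observation point
param_groups = [{"focal": 35.0}, {"focal": 50.0}, {"focal": 85.0}]

# Batch generation: one dynamic image per parameter group, from which
# the user can then select a preferred result.
animations = [generate_dynamic_image(g) for g in param_groups]
print(len(animations), animations[0])
```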
In the embodiments of the present application, the moving animation effect of an object is simulated by changing the camera parameters, so that a dynamic image can be generated based on given three-dimensional scene elements. In this way, the technical problem of low dynamic image generation efficiency, caused by the need to set up and adjust pictures frame by frame, is solved, and the technical effect of simply and efficiently generating dynamic images is achieved.
Although the present application provides the method operation steps described in the embodiments or flowcharts, more or fewer operation steps may be included on the basis of conventional or non-inventive labor. The order of steps enumerated in the embodiments is only one of many possible execution orders and does not represent the only execution order. When an actual apparatus or client product executes, the steps may be executed according to the order shown in the embodiments or drawings, or in parallel (for example, in an environment of parallel processors or multithreaded processing).
The apparatus or modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. For convenience of description, the above apparatus is divided into various modules by function, each described separately. When implementing the present application, the functions of the modules may be realized in one or more pieces of software and/or hardware. Of course, a module that realizes a certain function may also be implemented by a combination of multiple sub-modules or sub-units.
The methods, apparatuses, or modules described herein may be implemented by means of computer-readable program code, and the controller may be implemented in any appropriate manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include but are not limited to the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. It is also known in the art that, in addition to implementing the controller purely by computer-readable program code, the method steps may be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller may be regarded as a hardware component, and the means included therein for realizing various functions may also be regarded as structures within the hardware component. Or even, the means for realizing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
Some of the modules of the apparatus described herein may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, classes, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
From the above description of the embodiments, those skilled in the art can clearly understand that the present application can be realized by means of software plus the necessary hardware. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, or may be embodied in the course of data migration. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a mobile terminal, a server, a network device, or the like) to execute the methods described in the embodiments of the present application or in certain parts thereof.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. All or part of the present application may be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, mobile communication terminals, tablet devices, multiprocessor systems, microprocessor-based systems, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and the like.
Although the present application has been described through embodiments, those of ordinary skill in the art will appreciate that many variations and changes of the present application are possible without departing from its spirit, and it is intended that the appended claims cover such variations and changes without departing from the spirit of the present application.
Claims (12)
1. A dynamic image generation method, characterized by comprising:
constructing a three-dimensional scene according to positional relationships among multiple display elements;
obtaining, by adjusting parameters of an observation point, multiple two-dimensional images of the three-dimensional scene formed at the observation point;
generating a dynamic image according to the multiple two-dimensional images.
2. The method according to claim 1, characterized in that the parameters of the observation point comprise at least one of: focal length, optical center position, distance to each display element, viewing angle.
3. The method according to claim 1, characterized in that obtaining, by adjusting the parameters of the observation point, the multiple two-dimensional images of the three-dimensional scene formed at the observation point comprises at least one of:
simulating the observation point moving at a predetermined rate in a left-to-right direction parallel to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point;
simulating the observation point moving at a preset speed in a back-to-front direction perpendicular to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point;
simulating the observation point moving at a predetermined rate in a top-to-bottom direction parallel to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point.
4. The method according to claim 1, characterized in that the positional relationship comprises at least one of: relative distance, relative size, relative orientation.
5. The method according to any one of claims 1 to 4, characterized in that generating the dynamic image according to the multiple two-dimensional images comprises:
performing moving picture encoding on the multiple two-dimensional images to generate the dynamic image.
6. The method according to any one of claims 1 to 4, characterized in that the observation point comprises at least one of: a physical camera, a virtual camera.
7. A dynamic image generation method, characterized by comprising:
obtaining positional relationships among multiple imported display elements;
constructing a three-dimensional scene according to the positional relationships among the multiple display elements;
obtaining multiple groups of parameter data set for an observation point;
obtaining, by mapping, a two-dimensional image corresponding to each group of parameter data;
batch-generating dynamic images according to the two-dimensional image corresponding to each group of parameter data.
8. A processing device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
constructing a three-dimensional scene according to positional relationships among multiple display elements;
obtaining, by adjusting parameters of an observation point, multiple two-dimensional images of the three-dimensional scene formed at the observation point;
generating a dynamic image according to the multiple two-dimensional images.
9. The device according to claim 8, characterized in that the parameters of the observation point comprise at least one of: focal length, optical center position, distance to each display element, viewing angle.
10. The device according to claim 8, characterized in that obtaining, by adjusting the parameters of the observation point, the multiple two-dimensional images of the three-dimensional scene formed at the observation point comprises at least one of:
simulating the observation point moving at a predetermined rate in a left-to-right direction parallel to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point;
simulating the observation point moving at a preset speed in a back-to-front direction perpendicular to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point;
simulating the observation point moving at a predetermined rate in a top-to-bottom direction parallel to the three-dimensional scene, and obtaining the multiple two-dimensional images formed at the observation point.
11. A processing device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
obtaining positional relationships among multiple imported display elements;
constructing a three-dimensional scene according to the positional relationships among the multiple display elements;
obtaining multiple groups of parameter data set for an observation point;
obtaining, by mapping, a two-dimensional image corresponding to each group of parameter data;
batch-generating dynamic images according to the two-dimensional image corresponding to each group of parameter data.
12. A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed, implement the steps of the method according to any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711128596.3A CN109801351B (en) | 2017-11-15 | 2017-11-15 | Dynamic image generation method and processing device |
PCT/CN2018/114540 WO2019096057A1 (en) | 2017-11-15 | 2018-11-08 | Dynamic image generation method, and processing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109801351A true CN109801351A (en) | 2019-05-24 |
CN109801351B CN109801351B (en) | 2023-04-14 |
Family
ID=66539359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711128596.3A Active CN109801351B (en) | 2017-11-15 | 2017-11-15 | Dynamic image generation method and processing device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109801351B (en) |
WO (1) | WO2019096057A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112579225A (en) * | 2019-09-30 | 2021-03-30 | 北京国双科技有限公司 | Processing method and device for delayed element display |
CN113450434A (en) * | 2020-03-27 | 2021-09-28 | 北京沃东天骏信息技术有限公司 | Method and device for generating dynamic image |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348938A (en) * | 2020-10-30 | 2021-02-09 | 杭州安恒信息技术股份有限公司 | Method, device and computer equipment for optimizing three-dimensional object |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6278460B1 (en) * | 1998-12-15 | 2001-08-21 | Point Cloud, Inc. | Creating a three-dimensional model from two-dimensional images |
JP2006244306A (en) * | 2005-03-04 | 2006-09-14 | Nippon Telegr & Teleph Corp <Ntt> | Animation generation system, animation generation device, animation generation method, program, and storage medium |
TW201130285A (en) * | 2010-02-26 | 2011-09-01 | Hon Hai Prec Ind Co Ltd | System and method for controlling 3D images |
US20130050527A1 (en) * | 2011-08-25 | 2013-02-28 | Casio Computer Co., Ltd. | Image creation method, image creation apparatus and recording medium |
US20130088491A1 (en) * | 2011-10-07 | 2013-04-11 | Zynga Inc. | 2d animation from a 3d mesh |
CN103514621A (en) * | 2013-07-17 | 2014-01-15 | 宝鸡翼迅网络科技有限公司 | Case and event scene all-true dynamic 3D representation method and reconstruction system |
EP2779102A1 (en) * | 2013-03-12 | 2014-09-17 | E.sigma Systems GmbH | Method of generating an animated video sequence |
CN105551084A (en) * | 2016-01-28 | 2016-05-04 | 北京航空航天大学 | Outdoor three-dimensional scene combined construction method based on image content parsing |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5850352A (en) * | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
JP2003337947A (en) * | 2002-05-21 | 2003-11-28 | Iwane Kenkyusho:Kk | Method and device for image display, and storage medium recorded with image display method |
JP5258549B2 (en) * | 2008-12-26 | 2013-08-07 | 学校法人立命館 | Composite image display system, composite image display method, and composite image display program |
CN103679791A (en) * | 2013-12-19 | 2014-03-26 | 广东威创视讯科技股份有限公司 | Split screen updating method and system for three-dimensional scene |
CN103714565A (en) * | 2013-12-31 | 2014-04-09 | 广州市久邦数码科技有限公司 | Method and system for generating dynamic image with audio |
CN107134000B (en) * | 2017-05-23 | 2020-10-23 | 张照亮 | Reality-fused three-dimensional dynamic image generation method and system |
Also Published As
Publication number | Publication date |
---|---|
WO2019096057A1 (en) | 2019-05-23 |
CN109801351B (en) | 2023-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109584295A (en) | The method, apparatus and system of automatic marking are carried out to target object in image | |
CN110336987A (en) | A kind of projector distortion correction method, device and projector | |
CN109242961A (en) | A kind of face modeling method, apparatus, electronic equipment and computer-readable medium | |
CN108769462B (en) | Free visual angle scene roaming method and device | |
CN108594999B (en) | Control method and device for panoramic image display system | |
US9001115B2 (en) | System and method for three-dimensional visualization of geographical data | |
JP2019512902A (en) | Image display method, method of generating a forming sled, and head mounted display device | |
CN110648274B (en) | Method and device for generating fisheye image | |
CN110874818A (en) | Image processing and virtual space construction method, device, system and storage medium | |
CN109801351A (en) | Dynamic image generation method and processing equipment | |
CN103634588A (en) | Image composition method and electronic apparatus | |
CN109661816A (en) | The method and display device of panoramic picture are generated and shown based on rendering engine | |
CN104751506B (en) | A kind of Cluster Rendering method and apparatus for realizing three-dimensional graphics images | |
CN105979248A (en) | Image processing system with hybrid depth estimation and method of operation thereof | |
Heindl et al. | Blendtorch: A real-time, adaptive domain randomization library | |
CN102111562A (en) | Projection conversion method for three-dimensional model and device adopting same | |
JP5252703B2 (en) | 3D image display device, 3D image display method, and 3D image display program | |
Słomiński et al. | Intelligent object shape and position identification for needs of dynamic luminance shaping in object floodlighting and projection mapping | |
Soile et al. | Accurate 3D textured models of vessels for the improvement of the educational tools of a museum | |
CN102110298A (en) | Method and device for projecting three-dimensional model in virtual studio system | |
JP6882266B2 (en) | Devices and methods for generating data representing pixel beams | |
Belhi et al. | An integrated framework for the interaction and 3D visualization of cultural heritage | |
Wang et al. | Roaming of oblique photography model in unity3D | |
CN108475421A (en) | Method and apparatus for generating the data for indicating pixel light beam | |
CN113001985A (en) | 3D model, device, electronic equipment and storage medium based on oblique photography construction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||