CN115019019B - Method for realizing 3D special effect editor

Method for realizing 3D special effect editor

Info

Publication number
CN115019019B
CN115019019B (application CN202210622500.3A)
Authority
CN
China
Prior art keywords
rendering
camera
scene
operations
objects
Prior art date
Legal status
Active
Application number
CN202210622500.3A
Other languages
Chinese (zh)
Other versions
CN115019019A
Inventor
骆伟
刘歆宁
Current Assignee
Dalian Neusoft University of Information
Original Assignee
Dalian Neusoft University of Information
Priority date
Filing date
Publication date
Application filed by Dalian Neusoft University of Information filed Critical Dalian Neusoft University of Information
Priority to CN202210622500.3A
Publication of CN115019019A
Application granted
Publication of CN115019019B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for realizing a 3D special effect editor, comprising the steps of defining metadata, constructing a scene, performing scene interaction and executing rendering. Defining metadata covers defining resource files, constructing rendering objects and configuring scene settings: the resource files store the meshes, materials and textures of rendering objects; the rendering objects represent everything that can be rendered in the scene; and the scene settings specify the rendering targets and the rendering order, i.e. the order in which the cameras are rendered. Scene interaction means dragging the mouse to drive the camera in the scene through viewing-angle transformations. Executing rendering means constructing renderable entity objects from the metadata, rendering them, and managing the operations generated during rendering: each operation is encapsulated and stored in a queue, and the queued operations are executed before each rendering pass. With this method a 3D special effect editor can be built, and operations of the same type can be merged during rendering, which improves execution efficiency.

Description

Method for realizing 3D special effect editor
Technical Field
The invention relates to the field of video editing, in particular to a method for realizing a 3D special effect editor.
Background
At present, short-video applications such as TikTok (Douyin) are popular worldwide. These applications contain very rich special-effect material, including 2D and 3D effects, and 3D effects are especially favored by users because of their vivid results. Most producers of 3D special-effect material adopt a UGC (user-generated content) model, which requires a professional 3D special-effect editor so that users can produce all kinds of effect content. The scene part of the 3D special-effect editor is mainly used for editing 3D scenes, as shown in fig. 2. A scene mainly comprises the ground, a camera, objects and lighting (parallel light, point light sources, environment maps (envMap) and the like).
The preview picture and the scene are closely related: if the scene is regarded as a film studio, the preview picture is the image captured by a camera inside the scene and shown on a display, i.e. the footage shot by that camera, which is the effect the user actually sees on the screen. Fig. 2 shows the scene camera looking at the studio, while the preview is what the camera model placed in the scene captures. The two renderings are in fact driven by the same set of metadata, and the technology involved here is how to define that metadata and how to render the scene and the preview, and handle scene interaction, at the same time.
Currently, to encourage UGC (user-generated content), the major short-video vendors have all released their own 3D special-effect editing tools, but how they define metadata, implement scene interaction, and render previews and scenes inside the editor is not disclosed. Moreover, existing 3D special-effect editing tools use different metadata to drive scene rendering and preview rendering, their rendering efficiency is low, operations generated by the user are not handled during rendering, and the scene and the preview cannot be rendered at the same time.
Disclosure of Invention
The invention provides a method for realizing a 3D special effect editor in order to overcome the above technical problems.
A method of implementing a 3D special effects editor, comprising,
Step one, defining metadata, wherein the metadata comprises resource files, rendering objects and scene settings; the resource files are used for storing information of the rendering objects, the information comprising mesh information, material information and texture information; the rendering objects represent everything that can be rendered in a scene and further comprise cameras and light sources, a camera determining the renderable rendering objects according to the range of its viewfinder and a light source simulating real-world light; the scene settings comprise rendering targets and a rendering order, the rendering targets comprising real-time targets and capture targets, and the rendering order referring to the order in which the cameras are rendered;
Step two, constructing a scene, which is a 3D visual space containing the rendering objects,
Step three, performing scene interaction, namely dragging the mouse to drive the camera in the scene through viewing-angle transformations and performing translation, scaling and rotation operations on rendering objects in the scene;
And step four, executing rendering, which specifically comprises: rendering according to the scene information, rendering object information and resource file information; constructing a rendering management object and instantiating it into a scene rendering instance and a preview rendering instance; the scene rendering instance performing scene rendering according to the renderable objects, the camera, the light sources, the ground, the camera sight line, the view plane and the operations generated in scene interaction; the preview rendering instance performing preview rendering according to the rendering objects inside the camera viewfinder of the scene rendering instance and the operations generated in scene interaction; and managing and executing the operations generated during rendering, the operations including but not limited to deleting and moving rendering objects, the management meaning that the operations generated by the user during the current rendering pass are first encapsulated and stored in a queue, and the encapsulated operations in the queue are then executed before each rendering pass.
Preferably, the preview rendering instance further renders each camera's picture layer by layer when there is more than one camera in the scene rendering instance: a map is generated for each camera's picture, the pixels of every two adjacent maps are then blended according to the rendering order of the cameras to generate a new map, and the new map is blended with the next layer's map, wherein blending refers to the way the colors of one map are superimposed on another.
Preferably, executing the operations encapsulated in the queue further includes merging operations when their types are consistent.
Preferably, said driving the camera in the scene by dragging the mouse to change the perspective comprises,
Step 4a, taking the point the user is observing on the display as the observation point, dragging the mouse, obtaining the distance the mouse moves on the display screen, and normalizing the moving distance by the length and width of the display screen;
Step 4b, constructing a world coordinate system according to the observation point and a camera coordinate system according to the camera itself, the camera coordinate system comprising a Right axis, an Up axis and a Forward axis; converting the moving distance into rotation angles deltaX and deltaY, storing the rotation angle deltaX as a quaternion q1 and the rotation angle deltaY as a quaternion q2,
Step 4c, calculating the vector v1 of the camera relative to the observation point according to formula (1),
v1 = Translation_camera - focusPoint  (1)
where Translation_camera represents the position coordinates of the camera in the world coordinate system and focusPoint represents the coordinates of the camera's observation point,
Step 4d, rotating the vector v1 according to formula (2) to obtain v3, and updating the position coordinates of the camera in the world coordinate system according to formula (3),
v3 = q2·q1·v1·q1*·q2*  (2)
Translation_camera = v3 + focusPoint  (3)
wherein q1* is the conjugate quaternion of q1 and q2* is the conjugate quaternion of q2,
Step 4e, rotating the Right axis of the camera coordinate system according to formula (4) to obtain v2, calculating the Up axis of the camera coordinate system according to formula (5), computing the transformation matrix of the camera by calling the lookAt function of OpenGL, calculating the quaternion qr from the transformation matrix and converting it into a rotation matrix R, and calculating the camera position v' after transformation according to formula (6), v being the position before transformation; the information of the rendering objects is obtained according to v' and displayed on the display screen,
v2 = q2·right·q2*  (4)
up = v3 × v2  (5)
v' = Translation_camera·R·v  (6)
Where v is the position before the camera is changed and v' is the position after the camera is changed.
The invention provides a method for realizing a 3D special effect editor that gives concrete schemes for metadata definition and for scene and preview rendering, and additionally provides a scheme for merging operations of the same type during scene and preview rendering, which optimizes the execution efficiency of the operations generated while rendering is executed. For scene interaction, it solves the technical problem of how to rotate the viewing angle around the observation point through mouse movement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it will be obvious that the drawings in the following description are some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a scene representation of a 3D effect editor of the present invention;
FIG. 3 is a schematic diagram of the PERSPECTIVE camera type of the present invention;
fig. 4 is a block diagram of the world coordinate system and the camera coordinate system of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
FIG. 1 is a flowchart of the method of the present invention, as shown in FIG. 1, the method of the present embodiment may include:
A method for implementing a 3D effect editor includes defining metadata, constructing a scene, performing scene interactions, and performing rendering. The metadata definition is the data foundation of rendering in the 3D special effect editor, executing rendering is its core function, and scene interaction is the key to human-machine operation.
Metadata refers to the underlying data definition for scene and preview rendering and contains all rendering information; the upper-layer rendering operations are driven by this lower-layer metadata.
The rendering of scenes and previews in the 3D editor is driven by the same set of underlying data, which must contain all the information needed by the rendering process, so defining the metadata is a precondition for the subsequent rendering. Defining metadata consists of defining resource files, constructing rendering objects and configuring scene settings.
The resource files store the information of rendering objects, which comprises mesh information (Mesh), material information (Material) and texture information (Texture): the mesh describes the shape of a rendering object, the material describes its optical characteristics, and the texture describes surface patterns such as marble grain or wood grain. The specific definition format is illustrated below:
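A minimal JSON sketch of a resource file entry, assuming the field names described next (id, fileType, path, name, hash); the concrete values and the exact layout are illustrative, not the patent's verbatim example:

    {
      "files": [
        {
          "id": "mesh_001",
          "fileType": "MESH",
          "path": "resources/mesh/cube.mesh",
          "name": "cube",
          "hash": "9e107d9d372bb6826bd81d3542a419d6"
        }
      ]
    }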
Since the definition format of the three resource file types is the same, only MESH is taken as an example above. Here files is the list of resource files, and each item in the list represents one resource file. The id is the unique identifier of the resource, through which rendering objects can index it; fileType is the resource file type, one of the 3 types MESH, MATERIAL and TEXTURE; path is the relative path of the resource file; name is the name of the resource file displayed in the 3D editor; hash is the MD5 of the file. Before a resource file is added, the files in the metadata are checked for an entry with the same hash value; if one exists, the existing resource file is reused instead of being added, which prevents large numbers of duplicate files from being copied repeatedly.
The rendering objects represent everything that can be rendered in the scene; their type is NODE, and their relative positions are expressed through a tree relationship. The specific definition is illustrated below:
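A minimal JSON sketch of the rendering-object tree, assuming the field names explained next; the ids and transform values are illustrative and are chosen to match the worked example in the following paragraph:

    {
      "scenes": [
        {
          "objects": [
            {
              "id": "object1",
              "type": "NODE",
              "meshId": "mesh_001",
              "materialId": "material_001",
              "transform": {
                "translation": [-100, -200, 0],
                "scale": [1, 1, 1],
                "rotation": [0, 0, 0, 1]
              },
              "leaf": [
                {
                  "id": "object2",
                  "type": "NODE",
                  "meshId": "mesh_002",
                  "materialId": "material_002",
                  "transform": {
                    "translation": [100, 200, 300],
                    "scale": [1, 1, 1],
                    "rotation": [0, 0, 0, 1]
                  },
                  "leaf": []
                }
              ]
            }
          ]
        }
      ]
    }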
Here scenes denotes the scene list, and multiple scenes can be defined. objects is the list of rendering objects; the elements of the list represent the rendering objects in a tree structure, and leaf holds the child nodes of the current node. meshId and materialId are the ids of the associated Mesh and Material resource files; type is the rendering object type, here set to NODE; transform holds the position transformation information of the object, including translation, scale and rotation. In the example above, object1 has a child node object2 under leaf, and object2's transform is relative to its parent node. To solve the position of object2 relative to the world coordinate system, all parent nodes must be traversed and their transforms accumulated; in the example the translation of object2 relative to the world coordinate system is [-100+100, -200+200, 0+300] = [0, 0, 300].
The rendering objects may also include cameras and light sources. Although these are defined in the metadata, they are visible only in the scene and should not appear in the preview picture: a camera is added in order to produce the preview picture, and a light source exists to make object rendering more realistic.
A camera selects, through the range of its viewfinder, the NODE-type rendering objects to be rendered into a rendering target so as to produce the preview picture. Cameras come in PERSPECTIVE and ORTHOGRAPHIC types: PERSPECTIVE simulates a perspective effect and provides depth, and is used for displaying 3D objects; ORTHOGRAPHIC uses a parallel viewing frustum, so the size of an object does not change with distance, and it is used for displaying on-screen images and text. At least one camera is required in the scene. Taking a PERSPECTIVE camera as an example, an illustrative definition follows:
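A minimal JSON sketch of a PERSPECTIVE camera definition, assuming the transform/settings layout described next; the parameter values are illustrative, and w and h are not stored explicitly here since aspect is their ratio:

    {
      "id": "camera_001",
      "type": "CAMERA",
      "cameraType": "PERSPECTIVE",
      "transform": {
        "translation": [0, 0, 500],
        "rotation": [0, 0, 0, 1]
      },
      "settings": {
        "aspect": 0.5625,
        "near": 0.1,
        "far": 1000.0,
        "fovy": 45.0
      }
    }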
Here transform holds the position transformation information of the camera and settings holds the PERSPECTIVE-related settings; the specific parameters are shown in fig. 3, where aspect is the width-to-height ratio of the camera frustum (w being the frustum width and h its height); far is the distance of the far plane from the camera; near is the distance of the near plane from the camera; and fovy is the vertical opening angle of the field of view, i.e. of the viewing frustum.
A light source simulates real-world light and makes object rendering more vivid; light sources are divided into point lights, directional lights, ambient light and environment maps, which simulate different real lighting effects. Taking directional light as an example, an illustrative definition follows:
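A minimal JSON sketch of a directional light definition, assuming the info/rotation layout described next; the quaternion value is illustrative:

    {
      "id": "light_001",
      "type": "LIGHT",
      "info": {
        "lightType": "DIRECTIONAL",
        "color": [1.0, 1.0, 1.0]
      },
      "rotation": [0.0, 0.0, 0.3826834, 0.9238795]
    }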
Here info holds the properties of the light source, color is the color of the light source (in the example above [1.0, 1.0, 1.0] denotes white light), lightType is the light source type, and DIRECTIONAL denotes directional light. Since directional light is parallel light from infinity, only its angle needs to be set: rotation represents the angle between the light source and the plane, expressed as a quaternion.
The scene settings specify the rendering targets and the rendering order. Rendering targets include real-time targets and capture targets: content assigned to a real-time target is visible only during real-time preview and recording, whereas content assigned to a capture target is visible both in real time and in the finally recorded video. In this way UI controls or prompt text can be assigned to the real-time target so that the user sees them while editing but they do not appear in the recorded video. Rendering targets are set per camera; when there are multiple cameras, each camera can be given a rendering target that determines whether the picture it sees is visible only in real time or both in real time and in the video.
The rendering order refers to the order in which the cameras are rendered. When a scene contains multiple cameras, a rendering order must be set, because the picture seen by each camera has to be rendered to the screen. The picture of each camera can be regarded as one layer, and the camera pictures are composited layer by layer according to the configured blending mode, which includes but is not limited to superimposing the colors of the maps to generate a new map.
A scene is then constructed; it is a 3D visualization space containing the rendering objects. Rendering the scene produces the image seen by a camera at the user's first-person viewpoint (i.e. the viewpoint of the operator), also called the scene camera. The scene presents what is being edited, so the light sources and cameras placed in the space are visible to the scene camera, but the scene camera itself is never visible.
The preview is the 3D space as captured by the cameras placed in the scene; it is the same 3D space presented in the scene, but the rendering results differ because the viewing angles differ. In addition, since the preview picture is the final special effect presented to the end user once the material has been produced, rendering objects placed in the scene for editing purposes, such as light sources, cameras and the ground, are invisible in the preview.
Rendering the scene and the preview means constructing renderable entity objects from the metadata. The metadata is defined in JSON format, and the actual rendering needs the scene information, resource information and rendering objects it contains. It is therefore necessary to parse the JSON data into in-memory objects with a JSON parser and to construct the entity objects available for rendering from the information provided by the metadata, where FileMeta denotes a resource object, SceneMeta denotes a scene object and ObjectMeta denotes a rendering object. In addition, a render management object RenderManager is defined, responsible for managing the rendering and scheduling of the scene.
First, RenderManager maintains two rendering instances (RenderInstance), representing the scene rendering instance and the preview rendering instance. The scene rendering instance renders the scene from the renderable rendering objects, cameras, light sources, ground, camera sight lines, view planes and the operations generated in scene interaction, where the ground, camera sight lines, view planes and similar rendering objects are visible only in the edited scene; its camera transformation is based on the camera at the user's first-person viewpoint (the scene camera). The preview rendering instance performs preview rendering from the rendering objects inside the camera viewfinder of the scene rendering instance and the operations generated in scene interaction; for the Camera and Light types, no rendering nodes are created.
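The structure can be pictured with a minimal C++ sketch. RenderInstance and RenderManager are named in the description, but the Kind enum, members and method names used here are illustrative assumptions rather than the patent's actual classes:

    class RenderInstance {
    public:
        enum class Kind { Scene, Preview };
        explicit RenderInstance(Kind kind) : kind_(kind) {}

        // Executes pending user operations, then draws one frame.
        void render() {
            // commands_.flush();   // CommandQueue is sketched further below
            if (kind_ == Kind::Scene) {
                // draw all renderable objects plus editor-only helpers:
                // ground, camera gizmos, camera sight lines, view planes
            } else {
                // draw only what the in-scene cameras see; Camera/Light
                // rendering nodes are not created for the preview
            }
        }
    private:
        Kind kind_;
        // CommandQueue commands_;  // one command queue per instance (see below)
    };

    class RenderManager {
    public:
        RenderManager()
            : scene_(RenderInstance::Kind::Scene),
              preview_(RenderInstance::Kind::Preview) {}

        // Both instances are driven by the same metadata every frame.
        void renderFrame() { scene_.render(); preview_.render(); }
    private:
        RenderInstance scene_;    // first-person (scene camera) view of the edited scene
        RenderInstance preview_;  // view through the cameras placed in the scene
    };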
The camera transformation of preview rendering is based on the pictures shot by the camera models placed in the scene. If multiple cameras are placed, a map is generated for the picture shot by each camera according to the rendering order defined in the scene settings; the pixels of every two adjacent maps are then blended level by level using the configured blend mode, which is one of the modes commonly used in Photoshop such as multiply, overlay or screen, and the resulting texture is in turn blended with the next layer's map. In this way 3D models, UI controls, text and so on can be placed under different cameras and rendered layer by layer.
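As an illustration of the per-pixel compositing, a small C++ sketch of the three named blend modes follows; the formulas are the commonly used multiply/screen/overlay definitions, and the alpha compositing step is an assumption since the patent only names the modes:

    struct RGBA { float r, g, b, a; };

    static float multiplyMode(float base, float top) { return base * top; }
    static float screenMode  (float base, float top) { return 1.0f - (1.0f - base) * (1.0f - top); }
    static float overlayMode (float base, float top) {
        return base < 0.5f ? 2.0f * base * top
                           : 1.0f - 2.0f * (1.0f - base) * (1.0f - top);
    }

    // Blend one pixel of the upper camera layer onto the lower one with the chosen
    // mode, then composite the result using the upper layer's alpha.
    RGBA blendPixel(RGBA lower, RGBA upper, float (*mode)(float, float)) {
        RGBA out;
        out.r = mode(lower.r, upper.r) * upper.a + lower.r * (1.0f - upper.a);
        out.g = mode(lower.g, upper.g) * upper.a + lower.g * (1.0f - upper.a);
        out.b = mode(lower.b, upper.b) * upper.a + lower.b * (1.0f - upper.a);
        out.a = upper.a + lower.a * (1.0f - upper.a);
        return out;
    }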
The two rendering modes are based on the same set of metadata, but their rendering types differ: one renders the edited scene and the other renders the preview picture, so the rendering nodes they create also differ. Although they share the same 3D space, the rendering results differ considerably because the camera perspectives differ. This realizes the scheme of rendering the scene and the preview picture simultaneously from the same set of metadata.
Meanwhile, rendering objects may change while rendering is being executed: user scene interaction generates operations such as adding or deleting resources and moving objects, which would cause anomalies if executed in the middle of a rendering pass. The operations generated during rendering, including but not limited to delete and move, therefore need to be managed: they are encapsulated and stored in a queue, and the encapsulated operations in the queue are executed before each rendering pass. Specifically, a CommandQueue is created in each RenderInstance to store the operations to be performed; no matter which thread submits an operation during rendering, the operation is packaged into a Command and put into the queue, and the Commands in the queue are fetched and executed in order on the rendering thread before each rendering pass. A Command contains 3 parameters:
object: the object on which the Command is executed
funcPtr: the function pointer executed for the command
args: the parameters passed when the command is executed
When executing the tasks in the CommandQueue, a Command is popped from the CommandQueue and executed as (object->*funcPtr)(args). This repeats until the CommandQueue is empty.
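A minimal C++ sketch of this queueing scheme follows. RenderCommand, CommandQueue, getId and mergeWith appear in the description, while the exact signatures, the flush name and the threading details are assumptions; the member-function pointer and its arguments are hidden behind a virtual execute() here:

    #include <deque>
    #include <memory>
    #include <mutex>
    #include <utility>

    // One queued operation: conceptually the (object, funcPtr, args) triple.
    class RenderCommand {
    public:
        virtual ~RenderCommand() = default;
        virtual int  getId() const { return -1; }                    // -1: not mergeable
        virtual bool mergeWith(const RenderCommand* /*newer*/) { return false; }
        virtual void execute() = 0;                                   // (object->*funcPtr)(args)
    };

    // Thread-safe queue: any thread may submit commands during rendering;
    // the render thread drains the queue before each frame.
    class CommandQueue {
    public:
        void push(std::unique_ptr<RenderCommand> cmd) {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push_back(std::move(cmd));
        }
        // Called on the render thread before rendering: pop and execute until empty.
        void flush() {
            for (;;) {
                std::unique_ptr<RenderCommand> cmd;
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    if (queue_.empty()) return;
                    cmd = std::move(queue_.front());
                    queue_.pop_front();
                }
                cmd->execute();
            }
        }
    private:
        std::deque<std::unique_ptr<RenderCommand>> queue_;
        std::mutex mutex_;
    };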
In some scenarios, such as moving an object, mouse movement may generate multiple move events between two renderings, which accumulates multiple commands in the CommandQueue. In practice these commands can be combined. To improve performance, a Command can therefore declare a getId function, and if two neighboring Commands have the same id, the mergeWith function is called. It is defined as follows:
bool mergeWith(const RenderCommand* command)
The parameter is the Command that has just entered the queue, and the return value, of type bool, indicates whether the Command was merged in the CommandQueue. If true is returned, the newly queued Command has been combined with the current Command inside mergeWith, and the CommandQueue then deletes the newly queued Command, reducing the number of Commands; if false is returned, the newly queued Command is left unchanged.
Mouse-move operations are Commands that can be combined: the movement offset of each event is simply accumulated. In this way dozens of move commands can be merged into a single Command, which improves the execution efficiency of the CommandQueue.
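A hypothetical MoveCommand illustrates the merging. The class name, the Vec3 type and the Node target are assumptions; only the getId/mergeWith contract comes from the description (RenderCommand is the base sketched above):

    class Node;                                       // the rendering node being moved (hypothetical)

    struct Vec3 { float x = 0, y = 0, z = 0; };

    class MoveCommand : public RenderCommand {
    public:
        MoveCommand(Node* target, Vec3 offset) : target_(target), offset_(offset) {}
        int getId() const override { return 1; }      // same id => candidate for merging
        bool mergeWith(const RenderCommand* newer) override {
            auto* m = static_cast<const MoveCommand*>(newer);
            if (m->target_ != target_) return false;  // only merge moves of the same node
            offset_.x += m->offset_.x;                // accumulate the offsets
            offset_.y += m->offset_.y;
            offset_.z += m->offset_.z;
            return true;                              // caller then drops the newer command
        }
        void execute() override { /* apply offset_ to target_'s transform */ }
    private:
        Node* target_;
        Vec3 offset_;
    };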
One important function of a 3D editor is interaction, including operations such as the viewing-angle transformation of the scene camera and the translation, scaling and rotation of rendering objects in the scene. Among these, observing an object from all directions through 360 degrees by rotating the viewing angle is the most complex; the problem to solve is how the scene camera can rotate the viewing angle around the observation point through mouse movement. The structure of the world coordinate system and the camera coordinate system is shown in figure 4.
Scene interaction is performed by dragging the mouse to drive the camera in the scene through viewing-angle transformations, and by performing translation, scaling and rotation operations on rendering objects in the scene. Specifically,
Step 4a, taking the point the user is looking at as the observation point (the observation point in fig. 4), dragging the mouse to obtain the distance the mouse moves on the display screen, and normalizing the moving distance by the length and width of the display screen;
Step 4b, constructing a world coordinate system according to the observation point (the X, Y, Z axes in fig. 4) and a camera coordinate system according to the camera itself (the Right, Up and Forward axes shown in fig. 4), then converting the moving distance into rotation angles deltaX and deltaY: deltaX represents the rotation of the camera around the Y axis of the world coordinate system, and deltaY represents the rotation of the camera around the Right axis of the camera coordinate system. Because these are Euler angles, and quaternions are more convenient for spatial rotation calculations, the Euler angles are converted into quaternions: the rotation angle deltaX is stored as the quaternion q1, and the rotation angle deltaY is stored as the quaternion q2.
The Euler-angle rotation about a given rotation axis is converted to a quaternion representation q using equation (1):
q = qw + qx·i + qy·j + qz·k  (1)
wherein the quaternion q consists of one real number and 3 imaginary units i, j, k satisfying
i^2 = j^2 = k^2 = ijk = -1  (2)
With kx, ky, kz denoting the rotation axis and θ the rotation angle, the components are
qw = cos(θ/2), qx = kx·sin(θ/2), qy = ky·sin(θ/2), qz = kz·sin(θ/2)  (3)
For deltaX, which represents the rotation of the view around the Y axis of the world coordinate system, the rotation axis is k = (0, 1, 0) and θ = -deltaX. Using the formulas above, the quaternion q1 for the rotation of the viewing angle around the world Y axis has qw = cos(-deltaX/2), qx = 0, qy = sin(-deltaX/2), qz = 0.
For deltaY, which represents the rotation of the viewing angle around the Right axis of the camera's own coordinate system, the rotation axis is k = (rightx, righty, rightz) and θ = -deltaY. Using the same method, the quaternion q2 for the rotation of the viewing angle around the Right axis has qw = cos(-deltaY/2), qx = rightx·sin(-deltaY/2), qy = righty·sin(-deltaY/2), qz = rightz·sin(-deltaY/2).
Step 4c, calculating the vector v1 of the camera relative to the observation point according to formula (4),
v1 = Translation_camera - focusPoint  (4)
where Translation_camera represents the position coordinates of the camera in the world coordinate system and focusPoint represents the coordinates of the camera's observation point,
Step 4d, rotating the vector v1 according to formula (5) to obtain v3, and updating the position coordinates of the camera in the world coordinate system according to formula (6),
v3 = q2·q1·v1·q1*·q2*  (5)
Translation_camera = v3 + focusPoint  (6)
wherein q1* is the conjugate quaternion of q1 and q2* is the conjugate quaternion of q2,
Step 4e, rotating the Right axis of the camera coordinate system according to formula (7) to obtain v2,
v2 = q2·right·q2*  (7)
and calculating the Up axis of the camera coordinate system according to formula (8),
up = v3 × v2  (8)
The transformation matrix of the camera is then computed by calling the lookAt function of OpenGL, which takes 3 parameters:
eyePosition: the position of the camera
center: the point being looked at
up: the up vector of the camera
Here eyePosition is the v3 obtained above, center is the origin of the world coordinate system, and up is the up vector calculated above. From these 3 parameters the matrix m of the camera can be determined; m transforms objects into the camera coordinate system, so to obtain the camera's transformation matrix in the original world coordinate system, m must be inverted, i.e. M = m^(-1).
The quaternion qr is then calculated from this transformation matrix and converted into a rotation matrix R. Specifically, when solving for qr, the trace of the matrix is first computed according to formula (9):
trace = 1 + r11 + r22 + r33  (9)
The solution then depends on the size of the trace and falls into two cases:
Case 1, trace > 0: the quaternion components are obtained directly from the trace and the off-diagonal elements of the matrix; otherwise case 2 is entered, which is further divided into three sub-cases:
if max(r11, r22, r33) = r11, the components are derived starting from r11;
if max(r11, r22, r33) = r22, the components are derived starting from r22;
if max(r11, r22, r33) = r33, the components are derived starting from r33.
A common explicit form of these case formulas is given in the sketch below. From this case analysis the quaternion representation qr = (qx, qy, qz, qw) of the rotation matrix is obtained. The position v' after the camera transformation is then calculated according to formula (10), v being the position before the transformation,
v' = Translation_camera·R·v  (10)
Wherein v is the position before the transformation of the camera, v' is the position after the transformation of the camera,
Through the above steps, the translation Translation_camera and the quaternion rotation qr of the scene camera can be obtained, and qr is converted into the rotation matrix R (the quaternion-to-matrix conversion is included in the sketch below).
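A C++ sketch of the two conversions used in this step follows: the trace-based extraction of qr from the rotation matrix (matching the case analysis above) and the conversion of qr back into a rotation matrix R. These are the standard formulas for a row-major 3x3 rotation matrix, with the text's 1-based r11...r33 mapped to r[0][0]...r[2][2]; this index convention is an assumption, not something the patent states:

    #include <cmath>

    struct Quat { float w, x, y, z; };

    Quat matrixToQuat(const float r[3][3]) {
        Quat q;
        float trace = 1.0f + r[0][0] + r[1][1] + r[2][2];      // formula (9)
        if (trace > 0.0f) {                                     // case 1: trace > 0
            float s = 2.0f * std::sqrt(trace);
            q.w = 0.25f * s;
            q.x = (r[2][1] - r[1][2]) / s;
            q.y = (r[0][2] - r[2][0]) / s;
            q.z = (r[1][0] - r[0][1]) / s;
        } else if (r[0][0] >= r[1][1] && r[0][0] >= r[2][2]) {  // case 2: r11 largest
            float s = 2.0f * std::sqrt(1.0f + r[0][0] - r[1][1] - r[2][2]);
            q.x = 0.25f * s;
            q.w = (r[2][1] - r[1][2]) / s;
            q.y = (r[0][1] + r[1][0]) / s;
            q.z = (r[0][2] + r[2][0]) / s;
        } else if (r[1][1] >= r[2][2]) {                        // case 2: r22 largest
            float s = 2.0f * std::sqrt(1.0f + r[1][1] - r[0][0] - r[2][2]);
            q.y = 0.25f * s;
            q.w = (r[0][2] - r[2][0]) / s;
            q.x = (r[0][1] + r[1][0]) / s;
            q.z = (r[1][2] + r[2][1]) / s;
        } else {                                                // case 2: r33 largest
            float s = 2.0f * std::sqrt(1.0f + r[2][2] - r[0][0] - r[1][1]);
            q.z = 0.25f * s;
            q.w = (r[1][0] - r[0][1]) / s;
            q.x = (r[0][2] + r[2][0]) / s;
            q.y = (r[1][2] + r[2][1]) / s;
        }
        return q;
    }

    // Standard quaternion-to-rotation-matrix conversion (unit quaternion assumed).
    void quatToMatrix(const Quat& q, float r[3][3]) {
        r[0][0] = 1 - 2*(q.y*q.y + q.z*q.z); r[0][1] = 2*(q.x*q.y - q.z*q.w); r[0][2] = 2*(q.x*q.z + q.y*q.w);
        r[1][0] = 2*(q.x*q.y + q.z*q.w);     r[1][1] = 1 - 2*(q.x*q.x + q.z*q.z); r[1][2] = 2*(q.y*q.z - q.x*q.w);
        r[2][0] = 2*(q.x*q.z - q.y*q.w);     r[2][1] = 2*(q.y*q.z + q.x*q.w);     r[2][2] = 1 - 2*(q.x*q.x + q.y*q.y);
    }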
Then the transformed position of the scene camera is v' = Translation_camera·R·v; the information of the rendered objects is obtained according to v' and displayed on the screen, so that the scene camera can be rotated around the observation point by dragging the mouse.
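Putting steps 4a-4e together, the following C++ sketch shows one possible orbit-camera update using glm-style math types. The OrbitCamera struct, its member names and the sensitivity scaling are assumptions, and glm::lookAt(position, focusPoint, up) is used in place of the lookAt call described above, which passes v3 and the world origin; the two coincide when the focus point is the origin:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    struct OrbitCamera {
        glm::vec3 position{0.0f, 0.0f, 500.0f};   // Translation_camera in world space
        glm::vec3 focusPoint{0.0f};               // observation point
        glm::vec3 right{1.0f, 0.0f, 0.0f};        // camera Right axis

        // dx, dy: mouse movement normalized by the screen width/height (step 4a).
        glm::mat4 orbit(float dx, float dy, float sensitivity = 3.14159265f) {
            float deltaX = dx * sensitivity;                         // yaw around world Y
            float deltaY = dy * sensitivity;                         // pitch around camera Right
            glm::quat q1 = glm::angleAxis(-deltaX, glm::vec3(0.0f, 1.0f, 0.0f));
            glm::quat q2 = glm::angleAxis(-deltaY, right);

            glm::vec3 v1 = position - focusPoint;                    // formula (4)
            glm::vec3 v3 = q2 * (q1 * v1);                           // formula (5): q2·q1·v1·q1*·q2*
            position = v3 + focusPoint;                              // formula (6)

            glm::vec3 v2 = q2 * right;                               // formula (7): rotated Right axis
            right = v2;
            glm::vec3 up = glm::cross(v3, v2);                       // formula (8)

            // View matrix; its inverse carries the camera's world-space transform.
            return glm::lookAt(position, focusPoint, glm::normalize(up));
        }
    };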
The invention provides a method for realizing a 3D special effect editor that gives concrete schemes for metadata definition and for scene and preview rendering, and additionally provides a scheme for merging operations of the same type during scene and preview rendering, which optimizes the execution efficiency of the operations generated while rendering is executed. For scene interaction, it solves the technical problem of how to rotate the viewing angle around the observation point through mouse movement.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (3)

1. A method of implementing a 3D special effects editor, comprising,
Step one, defining metadata, wherein the metadata comprises resource files, rendering objects and scene settings; the resource files are used for storing information of the rendering objects, the information comprising mesh information, material information and texture information; the rendering objects represent everything that can be rendered in a scene and further comprise cameras and light sources, a camera determining the renderable rendering objects according to the range of its viewfinder and a light source simulating real-world light; the scene settings comprise rendering targets and a rendering order, the rendering targets comprising real-time targets and capture targets, and the rendering order referring to the order in which the cameras are rendered;
Step two, constructing a scene, which is a 3D visual space containing the rendering objects,
Step three, performing scene interaction, namely dragging the mouse to drive the camera in the scene through viewing-angle transformations and performing translation, scaling and rotation operations on rendering objects in the scene;
and step four, executing rendering, which specifically comprises: rendering according to the scene information, rendering object information and resource file information; constructing a rendering management object and instantiating it into a scene rendering instance and a preview rendering instance; the scene rendering instance performing scene rendering according to the renderable objects, the camera, the light sources, the ground, the camera sight line, the view plane and the operations generated in scene interaction; the preview rendering instance performing preview rendering according to the rendering objects inside the camera viewfinder of the scene rendering instance and the operations generated in scene interaction; and managing and executing the operations generated during rendering, the operations including but not limited to deleting and moving rendering objects, the management meaning that the operations generated by the user during the current rendering pass are first encapsulated and stored in a queue, the encapsulated operations in the queue then being executed before each rendering pass, and operations encapsulated in the queue being merged and executed together when their types are consistent.
2. The method for implementing the 3D special effect editor of claim 1, wherein the preview rendering instance further renders each camera's picture layer by layer when there is more than one camera in the scene rendering instance: a map is generated for each camera's picture, the pixels of every two adjacent maps are then blended according to the rendering order of the cameras to generate a new map, and the new map is blended with the next layer's map, wherein blending refers to the way the colors of one map are superimposed on another.
3. The method of claim 1, wherein driving the camera in the scene by dragging the mouse to change the perspective comprises,
Step 4a, taking the point the user is observing on the display as the observation point, dragging the mouse, obtaining the distance the mouse moves on the display screen, and normalizing the moving distance by the length and width of the display screen;
Step 4b, constructing a world coordinate system according to the observation point and a camera coordinate system according to the camera itself, the camera coordinate system comprising a Right axis, an Up axis and a Forward axis; converting the moving distance into rotation angles deltaX and deltaY, storing the rotation angle deltaX as a quaternion q1 and the rotation angle deltaY as a quaternion q2,
Step 4c, calculating the vector v1 of the camera relative to the observation point according to formula (1),
v1 = Translation_camera - focusPoint  (1)
where Translation_camera represents the position coordinates of the camera in the world coordinate system and focusPoint represents the coordinates of the camera's observation point,
Step 4d, rotating the vector v1 according to formula (2) to obtain v3, and updating the position coordinates of the camera in the world coordinate system according to formula (3),
v3 = q2·q1·v1·q1*·q2*  (2)
Translation_camera = v3 + focusPoint  (3)
wherein q1* is the conjugate quaternion of q1 and q2* is the conjugate quaternion of q2,
Step 4e, rotating the Right axis of the camera coordinate system according to formula (4) to obtain v2, calculating the Up axis of the camera coordinate system according to formula (5), computing the transformation matrix of the camera by calling the lookAt function of OpenGL, calculating the quaternion qr from the transformation matrix and converting it into a rotation matrix R, and calculating the camera position v' after transformation according to formula (6), v being the position before transformation; the information of the rendering objects is obtained according to v' and displayed on the display screen,
v2 = q2·right·q2*  (4)
up = v3 × v2  (5)
v' = Translation_camera·R·v  (6)
Where v is the position before the camera is changed and v' is the position after the camera is changed.
CN202210622500.3A, filed 2022-06-01 (priority date 2022-06-01), Method for realizing 3D special effect editor, Active, granted as CN115019019B

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210622500.3A | 2022-06-01 | 2022-06-01 | Method for realizing 3D special effect editor (granted as CN115019019B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210622500.3A | 2022-06-01 | 2022-06-01 | Method for realizing 3D special effect editor (granted as CN115019019B)

Publications (2)

Publication Number | Publication Date
CN115019019A | 2022-09-06
CN115019019B | 2024-04-30

Family

ID=83073286

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210622500.3A | Method for realizing 3D special effect editor (Active, granted as CN115019019B) | 2022-06-01 | 2022-06-01

Country Status (1)

Country: CN (CN115019019B)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354872A (en) * 2015-11-04 2016-02-24 深圳墨麟科技股份有限公司 Rendering engine, implementation method and producing tools for 3D web game
CN106780686A (en) * 2015-11-20 2017-05-31 网易(杭州)网络有限公司 The merging rendering system and method, terminal of a kind of 3D models
WO2017092307A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Model rendering method and device
CN111701238A (en) * 2020-06-24 2020-09-25 腾讯科技(深圳)有限公司 Virtual picture volume display method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on key technologies of multi-resolution modeling of terrain models for large data sets; 张慧杰 (Zhang Huijie); 《博士电子期刊》 (Doctoral Electronic Journal); 2009-08-31; pp. 1-125 *



Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant