CN111640174B - Furniture growth animation cloud rendering method and system based on fixed viewing angle - Google Patents

Furniture growth animation cloud rendering method and system based on fixed viewing angle

Info

Publication number
CN111640174B
CN111640174B (application CN202010387691.0A)
Authority
CN
China
Prior art keywords
rendering
sequence
key frame
scene
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010387691.0A
Other languages
Chinese (zh)
Other versions
CN111640174A (en)
Inventor
郑哲浩
叶青
何迅
余星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202010387691.0A priority Critical patent/CN111640174B/en
Publication of CN111640174A publication Critical patent/CN111640174A/en
Application granted granted Critical
Publication of CN111640174B publication Critical patent/CN111640174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a furniture growth animation cloud rendering method and system based on a fixed viewing angle, mainly comprising scene data preprocessing, spatial position relation calculation, growth order planning, key frame image sequence generation, and animation video synthesis. An operator only needs to provide a rendering scene and a fixed viewing angle: a pre-calculation and planning module automatically calculates the filling order of the elements in the scene and constructs a frame sequence from an empty scene to the complete scene; the whole construction takes less than 20 seconds and requires no manual intervention. A rendering, generation and synthesis module then automatically produces each frame image and the synthesized video from the frame sequence, and the whole process takes only a few minutes. Multiple links that require manual intervention in the prior art are thus completed automatically, significantly reducing costs in several respects. The method is specifically optimized for the interior decoration field, which increases its practical value.

Description

Furniture growth animation cloud rendering method and system based on fixed viewing angle
Technical Field
The invention relates to the field of digital media, and in particular to a furniture growth animation cloud rendering method and system based on a fixed viewing angle.
Background
Growth animation, i.e., construction-sequencing animation, is a form of architectural animation: within a short time, elements are gradually filled into a spatially modeled rendering scene through special effects such as appearance, displacement and scaling, forming a continuous sequence of images. The result is a digital streaming-media work that artistically simplifies and presents the construction process of the real world.
In the prior art, producing a growth animation requires the operator to: complete scene modeling in three-dimensional animation software; plan the filling order of elements and edit the animated presentation of each element one by one, using built-in software functions or third-party plug-ins; generate a static frame image sequence with an offline renderer; and import the result into non-linear video editing software to finish the edit.
The prior art has the following disadvantages. Operators must master several types of software, mostly large commercial packages, so software and learning costs are high. Operators must be equipped with computers above mainstream consumer configurations, such as professional graphics workstations, to meet the runtime requirements of modeling, rendering and video generation, so hardware costs are high. Operators must edit and store on a single machine that satisfies these software and hardware requirements, so migration costs and the risk of depending on a single machine are high. Operators must invest considerable effort, from macroscopic order planning down to microscopic animation effects, so time and labor costs are high. How to overcome these limitations and reduce the cost of growth animation in all these respects is the problem to be solved.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a furniture growth animation cloud rendering method and system based on a fixed viewing angle.
This aim is achieved by the following technical scheme. The furniture growth animation cloud rendering method based on a fixed viewing angle mainly comprises the following steps:
1) Scene data preprocessing: obtain the depth, mesh and other meta information of the rendering objects under the current camera view, and encode and store the meta information as two-dimensional data plus the necessary auxiliary indexes, in a pre-designed data format;
2) Spatial position relation calculation: using the result data of step 1, derive and reconstruct the three-dimensional spatial position relation of the rendering objects from the two-dimensional meta information, providing the context for automatically planning the filling order;
3) Growth order planning: combining the category labels and other extension information of the rendering objects, automatically plan the filling order of each element in the current camera view on the basis of the spatial position relation;
4) Key frame image sequence generation: based on the filling order and other user input, split the rendering scene data into a key frame data sequence and generate the key frame image sequence with an offline renderer;
5) Animation video synthesis: synthesize the final animation video by generating interpolation frames to fill out the frame sequence.
The scene data preprocessing uses a pre-rendering method to extract the three-dimensional spatial information within the current camera view into two-dimensional meta information in several feature spaces. Through a ray tracing or rasterization rendering algorithm, the pre-rendering method stores the mapping between mesh indexes and rendering objects as a hash to serve as an auxiliary index for the calculation, and the attribute data of each rendering object contains a unique id.
Calculating the spatial position relation mainly comprises the following steps:
1) Map the mesh meta information to the depth information using a SIMD vectorization method and the hash index, thereby establishing a mapping from each rendering object to a depth value matrix;
2) Divide the space of the current camera view into several connected subspaces at a suitable granularity, cluster the rendering objects into the subspaces by depth distance, order the rendering objects within each subclass along a chosen spatial axis by depth distance, and finally merge the ordering results of the subspaces into the spatial position relation of the rendering objects of the current scene.
The growth order planning takes into account predefined category-priority filling rules, a user-specified desired filling order, and potential combination relationships between elements.
For generating the key frame image sequence and synthesizing the animation video, the scene data is split into a key frame data sequence based on the key frame sequence; an offline renderer generates the key frame image sequence; taking that image sequence as input, interpolation frames are generated by an image processing method; the result is finally exported in a streaming-media file format, and the finished product is delivered to the user via a public-cloud distributed object storage service.
The furniture growth animation cloud rendering system based on a fixed viewing angle mainly comprises a scene data preprocessing unit, a spatial position calculation unit, a growth order planning unit, a key frame image sequence generation unit and an animation video synthesis unit. The scene data preprocessing unit obtains the depth, mesh and other meta information of the rendering objects under the current camera view; the spatial position calculation unit derives the spatial position relation of the rendering objects from the result data of the scene data preprocessing unit; the growth order planning unit automatically plans the filling order of each element in the current camera view; the key frame image sequence generation unit splits the rendering scene data into a key frame data sequence and generates the key frame image sequence with an offline renderer; the animation video synthesis unit generates the final animation video by inserting interpolation frames to fill out the frame sequence.
The beneficial effects of the invention are as follows. An operator only needs to provide a rendering scene and a fixed viewing angle: the pre-calculation and planning module automatically calculates the filling order of the elements in the scene and constructs a frame sequence from an empty scene to the complete scene; the whole construction takes less than 20 seconds with no manual intervention, whereas the traditional method requires the operator to think through the content frame by frame and manually add and delete elements in the scene, taking anywhere from tens of minutes to hours. The rendering, generation and synthesis module automatically produces each frame image and the synthesized video from the frame sequence in only a few minutes, whereas the traditional method requires the operator to render all images frame by frame, often taking several hours. Multiple links that require manual intervention in the prior art are thus completed automatically, significantly reducing costs in several respects. The method is specifically optimized for the interior decoration field, which increases its practical value.
Drawings
Fig. 1 is a flow chart of a furniture growth animation cloud rendering method based on a fixed viewing angle.
Fig. 2 is a block diagram of a furniture growth animation cloud rendering system based on a fixed viewing angle.
FIG. 3 is a key frame screenshot from a growth animation video automatically synthesized by the present invention for a rendered scene at a fixed viewing angle.
Reference numerals: scene data preprocessing unit 101, spatial position calculation unit 102, growth order planning unit 103, key frame image sequence generation unit 104, animation video synthesis unit 105.
Detailed Description
The invention will be described in detail below with reference to the drawings.
As shown in the drawings, the furniture growth animation cloud rendering method based on a fixed viewing angle mainly comprises the following steps:
1) Scene data preprocessing: obtain the depth, mesh and other meta information of the rendering objects under the current camera view, and encode and store the meta information as two-dimensional data plus the necessary auxiliary indexes, in a pre-designed data format;
2) Spatial position relation calculation: using the result data of step 1, derive and reconstruct the three-dimensional spatial position relation of the rendering objects from the two-dimensional meta information, which greatly improves the calculation performance of the position relation and provides the context for automatically planning the filling order;
3) Growth order planning: combining the category labels and other extension information of the rendering objects, automatically plan the filling order of each element in the current camera view on the basis of the spatial position relation;
4) Key frame image sequence generation: based on the filling order and other user input, split the rendering scene data into a key frame data sequence and generate the key frame image sequence with an offline renderer;
5) Animation video synthesis: synthesize the final animation video by generating interpolation frames to fill out the frame sequence.
The scene data preprocessing uses a pre-rendering method to extract the three-dimensional spatial information within the current camera view into two-dimensional meta information in several feature spaces. Unlike traditional illumination rendering, the pre-rendering method extracts only meta information such as mesh indexes and depth values from the scene, using a ray tracing or rasterization rendering algorithm. In the ray tracing algorithm, a ray is cast from the camera position through each sampling position on the image plane into the scene; the closest intersection of the ray with the scene geometry yields the mesh index and depth of the intersected model. In the rasterization algorithm, the surface elements of the scene geometry are mapped one by one onto the camera image plane by coordinate transformation, and each pixel records the mesh index and depth of the surface element closest to the camera, producing the final rendered image. The extracted information is encoded as two-dimensional data in a pre-designed data format. At the same time, the mapping between mesh indexes and rendering objects is stored as a hash to serve as an auxiliary index for the calculation, and the attribute data of each rendering object contains a unique id, so the correspondence can be looked up efficiently.
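As an illustration of such a pre-rendering pass, the following sketch (hypothetical data, object ids and footprints; NumPy stands in for the renderer) keeps, per pixel, only the depth and mesh index of the nearest surface and builds the hash from mesh index to rendering object:

```python
import numpy as np

# Hypothetical pre-rendering pass: instead of shading, each pixel keeps the
# depth and mesh index of the nearest surface element (a z-buffer), plus a
# hash from mesh index to rendering-object id as the auxiliary index.
H, W = 4, 6
depth_buffer = np.full((H, W), np.inf)          # nearest depth per pixel
mesh_buffer = np.zeros((H, W), dtype=np.int32)  # 0 = background

# Toy stand-ins for rasterized geometry: (mesh index, object id,
# pixel footprint, constant depth).
objects = [
    (1, "sofa-42", np.s_[1:3, 1:4], 5.0),
    (2, "table-7", np.s_[0:2, 2:5], 3.0),  # closer: occludes the sofa
]
mesh_to_object = {}  # auxiliary hash: mesh index -> rendering-object id

for mesh_idx, obj_id, region, depth in objects:
    mesh_to_object[mesh_idx] = obj_id
    closer = depth < depth_buffer[region]
    depth_buffer[region] = np.where(closer, depth, depth_buffer[region])
    mesh_buffer[region] = np.where(closer, mesh_idx, mesh_buffer[region])

print(mesh_to_object[int(mesh_buffer[1, 3])])  # table-7 (front-most wins)
```

Where the footprints of two objects overlap, the smaller depth wins, so the buffers record exactly the front-most mesh index and depth that the described preprocessing encodes as two-dimensional data.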
The calculating of the spatial position relationship mainly comprises the following steps:
1) Mapping the mesh meta information to the depth information using a SIMD vectorization method and the hash index, thereby establishing a mapping from each rendering object to a depth value matrix. Specifically, the mesh bitmap and the depth bitmap are read and mapped to a mesh matrix and a depth matrix; the mesh matrix stores the gray value of each pixel (distinct gray values are assigned in the preprocessing step, and a hash index from each unique mesh id to its gray value is constructed), and the depth matrix stores the depth value of each pixel. For each unique id, the computation is, in pseudo-code:
Input: the mesh matrix M, the depth matrix N, and the gray index set;
Output: a depth matrix D_index for each mesh;
Algorithm:
    for index in {gray index set}: D_index = f(M, index) ∘ N
where the f function produces a logical matrix mapped to the boolean domain B = {0, 1}, defined element-wise as f(M, index)_ij = 1 if M_ij = index and 0 otherwise, and ∘ is the Hadamard (element-wise) product of two matrices. The algorithm is implemented with a scientific computing library that supports SIMD instruction sets.
Further, the depth distance of each mesh is characterized by its nearest and farthest depths, d(x) = min{D_index(i,j) : D_index(i,j) > 0} and d(y) = max over (i,j) of D_index(i,j), and the spatial depth of the current camera view is defined as S = |max(d(y)) − min(d(x))|, where the maximum and minimum are taken over all meshes.
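The masking step above maps directly onto vectorized array operations. A minimal NumPy sketch with toy matrices (the real implementation would rely on the SIMD-backed routines of a scientific computing library):

```python
import numpy as np

# Vectorized per-object depth extraction, following the pseudo-code above:
# (M == gray) is the boolean indicator matrix f, and * is the Hadamard
# product with the depth matrix N. Matrix values are toy stand-ins.
M = np.array([[1, 1, 2],
              [1, 2, 2],
              [0, 0, 2]])          # gray values per pixel (0 = background)
N = np.array([[4.0, 4.2, 2.0],
              [4.1, 2.1, 2.2],
              [0.0, 0.0, 2.3]])    # depth per pixel

def depth_matrix(M, N, gray):
    return (M == gray).astype(N.dtype) * N  # D_gray = f(M, gray) ∘ N

D1 = depth_matrix(M, N, 1)
d_near = D1[D1 > 0].min()   # d(x): nearest depth of this object
d_far = D1.max()            # d(y): farthest depth of this object
print(d_near, d_far)        # 4.0 4.2
```

The same one-liner is run once per gray index, yielding one depth matrix, and hence one (d(x), d(y)) pair, per rendering object.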
2) Dividing the space of the current camera view into several connected subspaces at a suitable granularity (computed by weighting features such as the number of rendering objects), clustering the rendering objects into the subspaces by depth distance, ordering the rendering objects within each subclass along a chosen spatial axis by depth distance, and finally merging the ordering results of the subspaces into the spatial position relation of the rendering objects of the current scene. The algorithm proceeds as follows:
1) Divide the spatial depth S into k subspaces according to the number of rendering objects N and a preset threshold T, k = ⌈N / T⌉;
2) Traverse the depth distances of all rendering objects; for each, compute the distance from the center point of the corresponding mesh depth range to the center point of each subspace, and assign the model to the subspace at the smallest distance;
3) Recalculate and update the subspace center points over all the points classified in step 2;
4) Repeat steps 2 and 3 until the subspace center points are stable or a specified number of iterations is reached;
5) Within each subspace, perform a multi-key sort using d(x), d(y) and |d(y) − d(x)| of the mesh corresponding to each rendering object, then concatenate the sorted results of all subspaces into the global order.
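A toy version of this flow, with hypothetical objects and depth values: a 1-D k-means-style loop over depth midpoints assigns objects to subspaces, then each subspace is sorted with the multi-key rule before concatenation:

```python
import numpy as np

# Toy subspace clustering and ordering: each object carries (d_near, d_far);
# a 1-D k-means over depth midpoints assigns objects to subspaces, then each
# subspace is sorted by (d(x), d(y), |d(y) - d(x)|). Values are illustrative.
objects = {
    "wall": (1.0, 1.5), "sofa": (4.0, 5.0),
    "lamp": (4.2, 4.4), "rug": (1.2, 1.3),
}
mid = {k: (a + b) / 2 for k, (a, b) in objects.items()}
centers = np.array([1.0, 4.0])                 # k = 2 initial subspace centers

for _ in range(10):                            # Lloyd-style iterations
    assign = {k: int(np.abs(centers - m).argmin()) for k, m in mid.items()}
    for c in range(len(centers)):              # update each center point
        members = [mid[k] for k, a in assign.items() if a == c]
        if members:
            centers[c] = np.mean(members)

order = []
for c in np.argsort(centers):                  # nearer subspaces first
    cluster = [k for k, a in assign.items() if a == c]
    cluster.sort(key=lambda k: (objects[k][0], objects[k][1],
                                abs(objects[k][1] - objects[k][0])))
    order += cluster
print(order)  # ['wall', 'rug', 'sofa', 'lamp']
```

The merged list is the global order: near-depth objects (wall, rug) precede far-depth ones (sofa, lamp), matching the described merge of per-subspace sorts.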
The growth order planning builds on the result of the spatial position relation calculation and plans the filling order of each element in the current camera view by combining the category information of the rendering objects with user input, including predefined category-priority filling rules, a user-specified desired filling order, and potential combination relationships between elements. The invention is specifically optimized for interior decoration scenes, for example by presetting priorities for categories such as hard decoration, soft decoration and furniture, so that the planned order matches the actual sequence in which a decoration project is carried out. Specifically, the invention uses a business database to automatically attach category labels to all elements in the rendering scene. One implementation of the business rule sequence is:
1) Blank house (remove all elements, keeping only the house-type modeling);
2) Wall and ceiling paint, and floor paving;
3) Installation of door pockets, doors and window frames (including model, custom and BIM doors);
4) Custom home assembly: cabinet bodies, table tops, crown and skirting lines, and the like;
5) Placement of sofas, curtains, soft ornaments, and the like;
6) Finally, the key frame sequence is generated from the planning result.
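A sketch of how such business-rule planning might look in code; the category names and the priority table below are illustrative assumptions for the sketch, not the patent's actual rule set:

```python
# Illustrative business-rule planner: elements are sorted by
# (category rank, depth order within the spatial position relation).
CATEGORY_RANK = {
    "hard_decoration": 0,   # wall/ceiling paint, floor paving
    "doors_windows": 1,     # door pockets, doors, window frames
    "custom_furniture": 2,  # cabinet bodies, table tops, trim lines
    "soft_decoration": 3,   # sofas, curtains, soft ornaments
}

def fill_order(elements):
    """elements: list of (name, category, depth_rank) tuples."""
    ranked = sorted(elements, key=lambda e: (CATEGORY_RANK[e[1]], e[2]))
    return [name for name, _, _ in ranked]

scene = [("sofa", "soft_decoration", 2), ("floor", "hard_decoration", 0),
         ("cabinet", "custom_furniture", 1), ("door", "doors_windows", 0)]
print(fill_order(scene))  # ['floor', 'door', 'cabinet', 'sofa']
```

A user-specified desired order or combination rule could be layered on top simply by adding further keys to the sort tuple.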
For generating the key frame image sequence and synthesizing the animation video, the scene data is split into a key frame data sequence based on the key frame sequence; an offline renderer then generates the key frame image sequence; taking that image sequence as input, interpolation frames are generated by an image processing method; the result is finally exported in a streaming-media file format, and the finished product is delivered to the user via a public-cloud distributed object storage service.
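A minimal sketch of the interpolation step, assuming simple linear cross-blending between consecutive keyframe images (the patent does not specify the interpolation method, and encoding to a streaming-media file is omitted):

```python
import numpy as np

# Linear cross-blend interpolation between consecutive keyframe images:
# an assumed, minimal stand-in for the image-processing interpolation step.
def interpolate(key_frames, steps):
    """Insert `steps` blended frames between each pair of keyframes."""
    frames = []
    for a, b in zip(key_frames, key_frames[1:]):
        # t = 0 reproduces the left keyframe, then `steps` blends follow
        for t in np.linspace(0.0, 1.0, steps + 1, endpoint=False):
            frames.append((1.0 - t) * a + t * b)
    frames.append(key_frames[-1])  # close with the final keyframe
    return frames

k0 = np.zeros((2, 2, 3))  # empty-scene keyframe (toy 2x2 RGB image)
k1 = np.ones((2, 2, 3))   # fully furnished keyframe
out = interpolate([k0, k1], steps=3)
print(len(out))           # 5: k0, three blends, k1
```

In a production pipeline each frame of `out` would be handed to a video encoder; a motion-compensated interpolator could replace the blend without changing the surrounding flow.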
The furniture growth animation cloud rendering system based on a fixed viewing angle mainly comprises a scene data preprocessing unit 101, a spatial position calculation unit 102, a growth order planning unit 103, a key frame image sequence generation unit 104 and an animation video synthesis unit 105. The scene data preprocessing unit 101 obtains the depth, mesh and other meta information of the rendering objects under the current camera view; the spatial position calculation unit 102 derives the spatial position relation of the rendering objects from the result data of the scene data preprocessing unit 101; the growth order planning unit 103 automatically plans the filling order of each element in the current camera view; the key frame image sequence generation unit 104 splits the rendering scene data into a key frame data sequence and generates the key frame image sequence with an offline renderer; the animation video synthesis unit 105 generates the final animation video by inserting interpolation frames to fill out the frame sequence. Each unit can be split off and deployed in a distributed computing environment, using a multi-node parallel architecture to eliminate resource bottlenecks and improve computing efficiency; closely related units may also be co-located to improve the performance of data exchange.
It should be understood that equivalents and modifications to the technical scheme and the inventive concept of the present invention should fall within the scope of the claims appended hereto.

Claims (5)

1. A furniture growth animation cloud rendering method based on a fixed viewing angle, characterized by comprising the following steps:
1) Scene data preprocessing: obtaining the depth and mesh meta information of the rendering objects under the current camera view, and encoding and storing the meta information as two-dimensional data and an auxiliary index in a pre-designed data format;
2) Spatial position relation calculation: using the result data of step 1, deriving and reconstructing the three-dimensional spatial position relation of the rendering objects from the two-dimensional meta information, providing the context for automatically planning the filling order;
calculating the spatial position relation comprises the following steps:
(1) mapping the mesh meta information to the depth information using a SIMD vectorization method and a hash index, thereby establishing a mapping from each rendering object to a depth value matrix;
(2) dividing the space of the current camera view into several connected subspaces at a suitable granularity, clustering the rendering objects into the subspaces by depth distance, ordering the rendering objects within each subclass along a chosen spatial axis by depth distance, and finally merging the ordering results of the subspaces into the spatial position relation of the rendering objects of the current scene;
3) Growth order planning: combining the category labels and extension information of the rendering objects, automatically planning the filling order of each element in the current camera view on the basis of the spatial position relation;
4) Key frame image sequence generation: based on the filling order and user input, splitting the rendering scene data into a key frame data sequence and generating the key frame image sequence with an offline renderer;
5) Animation video synthesis: synthesizing the final animation video by generating interpolation frames to fill out the frame sequence.
2. The furniture growth animation cloud rendering method based on a fixed viewing angle according to claim 1, characterized in that: the scene data preprocessing uses a pre-rendering method to extract the three-dimensional spatial information within the current camera view into two-dimensional meta information in several feature spaces; through a ray tracing or rasterization rendering algorithm, the pre-rendering method stores the mapping between mesh indexes and rendering objects as a hash to serve as an auxiliary index for the calculation, and the attribute data of each rendering object contains a unique id.
3. The furniture growth animation cloud rendering method based on a fixed viewing angle according to claim 1, characterized in that: the growth order planning takes into account predefined category-priority filling rules, a user-specified desired filling order, and potential combination relationships between elements.
4. The furniture growth animation cloud rendering method based on a fixed viewing angle according to claim 1, characterized in that: for generating the key frame image sequence and synthesizing the animation video, the scene data is split into a key frame data sequence based on the key frame sequence; an offline renderer generates the key frame image sequence; taking that image sequence as input, interpolation frames are generated by an image processing method; the result is finally exported in a streaming-media file format, and the finished product is delivered to the user via a public-cloud distributed object storage service.
5. A furniture growth animation cloud rendering system based on a fixed viewing angle, characterized in that: it comprises a scene data preprocessing unit (101), a spatial position calculation unit (102), a growth order planning unit (103), a key frame image sequence generation unit (104) and an animation video synthesis unit (105); the scene data preprocessing unit (101) obtains the depth and mesh meta information of the rendering objects under the current camera view; the spatial position calculation unit (102) derives the spatial position relation of the rendering objects from the result data of the scene data preprocessing unit (101); the growth order planning unit (103) automatically plans the filling order of each element in the current camera view; the key frame image sequence generation unit (104) splits the rendering scene data into a key frame data sequence and generates the key frame image sequence with an offline renderer; the animation video synthesis unit (105) generates the final animation video by generating interpolation frames to fill out the frame sequence;
calculating the spatial position relation comprises the following steps:
(1) mapping the mesh meta information to the depth information using a SIMD vectorization method and a hash index, thereby establishing a mapping from each rendering object to a depth value matrix;
(2) dividing the space of the current camera view into several connected subspaces at a suitable granularity, clustering the rendering objects into the subspaces by depth distance, ordering the rendering objects within each subclass along a chosen spatial axis by depth distance, and finally merging the ordering results of the subspaces into the spatial position relation of the rendering objects of the current scene.
CN202010387691.0A 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle Active CN111640174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010387691.0A CN111640174B (en) 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle

Publications (2)

Publication Number Publication Date
CN111640174A CN111640174A (en) 2020-09-08
CN111640174B (en) 2023-04-21

Family

ID=72330887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010387691.0A Active CN111640174B (en) 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle

Country Status (1)

Country Link
CN (1) CN111640174B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660528B (en) * 2021-05-24 2023-08-25 杭州群核信息技术有限公司 Video synthesis method and device, electronic equipment and storage medium
CN113421337A (en) * 2021-07-21 2021-09-21 北京臻观数智科技有限公司 Method for improving model rendering efficiency

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6636220B1 (en) * 2000-01-05 2003-10-21 Microsoft Corporation Video-based rendering
WO2007130018A1 (en) * 2006-04-27 2007-11-15 Pixar Image-based occlusion culling
CN102722861A (en) * 2011-05-06 2012-10-10 新奥特(北京)视频技术有限公司 CPU-based graphic rendering engine and realization method
CN103020340A (en) * 2012-11-29 2013-04-03 上海量维信息科技有限公司 Three-dimensional home design system based on web
CN103544731A (en) * 2013-09-30 2014-01-29 北京航空航天大学 Quick reflection drawing method on basis of multiple cameras
CN105303603A (en) * 2015-10-16 2016-02-03 深圳市天华数字电视有限公司 Three-dimensional production system used for demonstrating document and production method thereof
CN106056661A (en) * 2016-05-31 2016-10-26 钱进 Direct3D 11-based 3D graphics rendering engine
CN107680042A (en) * 2017-09-27 2018-02-09 杭州群核信息技术有限公司 Rendering intent, device, engine and storage medium
CN107862718A (en) * 2017-11-02 2018-03-30 深圳市自由视像科技有限公司 4D holographic video method for catching
CN107871338A (en) * 2016-09-27 2018-04-03 重庆完美空间科技有限公司 Real-time, interactive rendering intent based on scene decoration
CN109891462A (en) * 2016-08-10 2019-06-14 维亚科姆国际公司 The system and method for interactive 3D environment are generated for using virtual depth
CN110689626A (en) * 2019-09-25 2020-01-14 网易(杭州)网络有限公司 Game model rendering method and device


Also Published As

Publication number Publication date
CN111640174A (en) 2020-09-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant