CN111640174A - Furniture growth animation cloud rendering method and system based on a fixed viewing angle - Google Patents

Furniture growth animation cloud rendering method and system based on a fixed viewing angle

Info

Publication number
CN111640174A
Authority
CN
China
Prior art keywords
rendering
sequence
key frame
animation
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010387691.0A
Other languages
Chinese (zh)
Other versions
CN111640174B (en)
Inventor
郑哲浩
叶青
何迅
余星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN202010387691.0A priority Critical patent/CN111640174B/en
Publication of CN111640174A publication Critical patent/CN111640174A/en
Application granted granted Critical
Publication of CN111640174B publication Critical patent/CN111640174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a furniture growth animation cloud rendering method and system based on a fixed viewing angle. An operator only needs to provide a rendering scene and a fixed viewing angle; the pre-computation and planning module automatically calculates the filling order of the elements in the scene and constructs a frame sequence from an empty scene to the complete scene, the whole construction taking less than 20 seconds with no manual intervention. The rendering, generation and synthesis module automatically generates each frame image and synthesizes the video from the frame sequence, the whole process taking only a few minutes. Steps that require manual intervention in the prior art are completed automatically, significantly reducing costs in all respects. The method is specifically optimized for the interior decoration field, giving it high application value.

Description

Furniture growth animation cloud rendering method and system based on a fixed viewing angle
Technical Field
The invention relates to the field of digital media, and in particular to a furniture growth animation cloud rendering method and system based on a fixed viewing angle.
Background
A growth animation, also known as a construction-sequence animation, is a form of architectural animation: within a short time, each element is gradually filled into a modeled and rendered scene using effects such as appearance, displacement and scaling, forming a continuous sequence of images. It is a digital streaming-media work that presents an artistically simplified version of a real-world construction process.
In the prior art, producing a growth animation requires the operator to: complete scene modeling in three-dimensional animation software; plan the filling order of the elements and edit the animated presentation of each element one by one using the software's built-in functions or third-party plug-ins; generate a static frame image sequence with an offline renderer; and import the frames into nonlinear video editing software to complete the final cut.
The prior art has the following disadvantages: the operator must master several pieces of software, mostly large commercial packages, so software and learning costs are high; the operator needs a computer above mainstream consumer configuration, such as a professional graphics workstation, to meet the operating-environment requirements of modeling, rendering and video generation, so hardware cost is high; editing and storage are tied to a single machine that meets these software and hardware requirements, so the migration cost and risk cost of depending on a single-machine environment are high; and the operator must devote substantial effort to everything from macroscopic sequence planning to the microscopic implementation of specific animation effects, so time and labor costs are high. How to overcome these limitations and reduce the cost of growth animation on multiple fronts is a problem in urgent need of a solution.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a furniture growth animation cloud rendering method and system based on a fixed viewing angle.
The purpose of the invention is achieved by the following technical scheme. The furniture growth animation cloud rendering method based on a fixed viewing angle mainly comprises the following steps:
1) scene data preprocessing: acquiring meta-information such as the depth and mesh of each rendering object under the current camera viewing angle, and encoding and storing the meta-information as two-dimensional data plus the necessary auxiliary indexes according to a pre-designed data format;
2) spatial position relationship calculation: using the result data preprocessed in step 1, deducing and reconstructing the three-dimensional spatial position relationship of the rendering objects from the two-dimensional meta-information, providing context for automatically planning the filling order;
3) growth order planning: automatically planning the filling order of each element in the current camera view on the basis of the spatial position relationship, combined with the category label information and other extended information of the rendering objects;
4) key frame image sequence generation: splitting the rendering scene data into a key frame data sequence based on the filling order and other user input, and generating the key frame image sequence with an offline renderer;
5) animation video synthesis: filling out the frame sequence by frame interpolation to synthesize the final animation video.
The scene data preprocessing adopts a pre-rendering method in which three-dimensional spatial information within the current camera viewing angle is extracted into two-dimensional meta-information in several feature spaces; through a ray tracing or rasterization rendering algorithm, the pre-rendering method stores the mapping between mesh indexes and rendering objects as a hash, which serves as an auxiliary index for computation, and the attribute data of each rendering object contains a unique id.
The calculation of the spatial position relationship mainly comprises the following steps:
1) mapping the mesh meta-information to depth information using a SIMD vectorization method and a hash index, thereby establishing a mapping from each rendering object to a depth value matrix;
2) dividing the space of the current camera viewing angle into several connected subspaces at an appropriate granularity, clustering the rendering objects according to the depth-distance partition of the subspaces, sorting the rendering objects within each subspace by depth distance along a chosen spatial axis, and finally merging the sorting results of all subspaces into the spatial position relationship of the rendering objects in the current scene.
The growth order planning takes into account predefined category-priority filling rules, a desired filling order given by the user, and potential combination relationships between elements.
The key frame image sequence generation and animation video synthesis split the scene data into a key frame data sequence based on the key frame sequence and generate the key frame image sequence with an offline renderer; taking the key frame image sequence as input, interpolated frames are generated by image processing methods, the result is finally exported in a streaming media file format, and the finished product is delivered to the user via a public cloud distributed object storage service.
The furniture growth animation cloud rendering system based on a fixed viewing angle mainly comprises a scene data preprocessing unit, a spatial position calculation unit, a growth order planning unit, a key frame image sequence generation unit and an animation video synthesis unit. The scene data preprocessing unit acquires meta-information such as the depth and mesh of each rendering object under the current camera viewing angle; the spatial position calculation unit derives the spatial position relationship of the rendering objects from the result data of the scene data preprocessing unit; the growth order planning unit automatically plans the filling order of each element in the current camera view; the key frame image sequence generation unit splits the rendering scene data into a key frame data sequence and generates the key frame image sequence with an offline renderer; the animation video synthesis unit fills out the frame sequence by frame interpolation to generate the final animation video.
The invention has the following beneficial effects. An operator only needs to provide a rendering scene and a fixed viewing angle; the pre-computation and planning module automatically calculates the filling order of the elements in the scene and constructs a frame sequence from an empty scene to the complete scene, the whole construction taking less than 20 seconds with no manual intervention, whereas the traditional approach requires the operator to conceive the content frame by frame and manually add and delete scene elements, taking anywhere from tens of minutes to hours. The rendering, generation and synthesis module automatically generates each frame image and synthesizes the video from the frame sequence in only a few minutes, whereas the traditional approach requires the operator to render all images frame by frame, often taking several hours. Steps that require manual intervention in the prior art are completed automatically, significantly reducing costs in all respects. The method is specifically optimized for the interior decoration field, giving it high application value.
Drawings
FIG. 1 is a flow chart of the furniture growth animation cloud rendering method based on a fixed viewing angle.
FIG. 2 is a configuration diagram of the furniture growth animation cloud rendering system based on a fixed viewing angle.
FIG. 3 is a screenshot of key frames of a growth animation video automatically synthesized by the present invention for a rendered scene with a fixed viewing angle.
Description of reference numerals: scene data preprocessing unit 101, spatial position calculation unit 102, growth order planning unit 103, key frame image sequence generation unit 104, and animation video synthesis unit 105.
Detailed Description
The invention will be described in detail below with reference to the drawings.
As shown in the attached drawings, the furniture growth animation cloud rendering method based on a fixed viewing angle mainly comprises the following steps:
1) scene data preprocessing: acquiring meta-information such as the depth and mesh of each rendering object under the current camera viewing angle, and encoding and storing the meta-information as two-dimensional data plus the necessary auxiliary indexes according to a pre-designed data format;
2) spatial position relationship calculation: using the result data preprocessed in step 1, deducing and reconstructing the three-dimensional spatial position relationship of the rendering objects from the two-dimensional meta-information, which greatly improves the performance of the position relationship calculation and provides context for automatically planning the filling order;
3) growth order planning: automatically planning the filling order of each element in the current camera view on the basis of the spatial position relationship, combined with the category label information and other extended information of the rendering objects;
4) key frame image sequence generation: splitting the rendering scene data into a key frame data sequence based on the filling order and other user input, and generating the key frame image sequence with an offline renderer;
5) animation video synthesis: filling out the frame sequence by frame interpolation to synthesize the final animation video.
The scene data preprocessing uses a pre-rendering method: three-dimensional spatial information within the current camera viewing angle is extracted into two-dimensional meta-information in several feature spaces. Unlike traditional illumination rendering, which produces a final rendered image, the pre-rendering pass only extracts meta-information such as mesh indexes and depth values from the scene, through a ray tracing or rasterization rendering algorithm. (In the ray tracing algorithm, a ray is emitted from the camera position through a sampling position on the image plane; the intersection of the ray with the scene geometry is computed, and the mesh index and depth of the intersected model are recorded. In the rasterization algorithm, the geometric primitives of the scene are mapped one by one onto the camera's image plane by coordinate transformation, and each pixel records the mesh index and depth of the primitive closest to the camera.) The extracted information is encoded into two-dimensional data according to a pre-designed data format. To facilitate computation, the mapping between mesh indexes and rendering objects is stored as a hash and used as an auxiliary index; the attribute data of each rendering object contains a unique id, so the correspondence can be found efficiently.
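For illustration, the following minimal sketch (Python with NumPy; all names and values are hypothetical, not taken from the patent) shows the kind of two-dimensional data the pre-rendering pass produces: a mesh-index bitmap, a depth bitmap, and a hash from each rendering object's unique id to its gray value.

import numpy as np

# Hypothetical output of the pre-rendering pass for a W x H viewport:
# mesh_map[i, j]  holds the gray value of the mesh visible at pixel (i, j),
# depth_map[i, j] holds that pixel's depth along the camera axis.
H, W = 480, 640
mesh_map = np.zeros((H, W), dtype=np.uint16)      # 0 marks the background
depth_map = np.zeros((H, W), dtype=np.float32)

# Auxiliary hash index: unique rendering-object id -> gray value in mesh_map.
gray_index = {"sofa_01": 1, "table_02": 2, "cabinet_03": 3}

# A rasterizer or ray tracer would fill the two bitmaps; two hand-stamped
# rectangles stand in for that step here.
mesh_map[100:200, 50:150] = gray_index["sofa_01"]
depth_map[100:200, 50:150] = 3.2
mesh_map[150:300, 300:500] = gray_index["table_02"]
depth_map[150:300, 300:500] = 5.8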
The calculation of the spatial position relationship mainly comprises the following steps:
1) Map the mesh meta-information to depth information using a SIMD vectorization method and the hash index, thereby establishing a mapping from each rendering object to a depth value matrix. Specifically, read the mesh bitmap and the depth bitmap and map them to a mesh matrix and a depth matrix. The mesh matrix holds the gray value of each pixel; meshes are distinguished by the different gray values assigned in the preprocessing step, and a hash index from each mesh's unique id to its gray value is constructed. The depth matrix holds the depth value of each pixel. For each unique id, the pseudo code is as follows:
Input: M, the mesh matrix; N, the depth matrix; and the set of gray indexes.
Output: D_index, the depth matrix corresponding to each mesh.
Algorithm:
for index in (set of gray indexes):
    D_index = f(M, index) ∘ N
where the function f produces a logical matrix mapped to the Boolean domain B = {0, 1}, defined as f(M, index)_ij = 1 if M_ij = index and 0 otherwise, and ∘ denotes the Hadamard (element-wise) product of the two matrices. The algorithm is implemented with a scientific computing library that supports a SIMD instruction set.
Further, we define the depth distance of each mesh by its nearest and farthest visible depth values, d(x) = min{(D_index)_ij : (D_index)_ij > 0} and d(y) = max{(D_index)_ij : (D_index)_ij > 0}.
The spatial depth of the current camera view is defined as S = |max(d(y)) - min(d(x))|.
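A minimal NumPy sketch of this step, continuing the variables from the previous sketch (NumPy stands in for the SIMD-capable scientific computing library; function and variable names are illustrative):

import numpy as np

def per_mesh_depths(M, N, gray_index):
    # For each unique id, mask the depth matrix N by the pixels whose gray
    # value in the mesh matrix M equals that id's gray index.
    out = {}
    for uid, index in gray_index.items():
        mask = (M == index)          # f(M, index): logical matrix over B = {0, 1}
        D = np.where(mask, N, 0.0)   # Hadamard product f(M, index) ∘ N
        vals = D[mask]               # depth samples belonging to this mesh
        if vals.size:
            out[uid] = (float(vals.min()), float(vals.max()))  # (d(x), d(y))
    return out

depths = per_mesh_depths(mesh_map, depth_map, gray_index)
# Spatial depth of the current camera view: S = |max(d(y)) - min(d(x))|
S = abs(max(df for _, df in depths.values()) - min(dn for dn, _ in depths.values()))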
2) Divide the space of the current camera viewing angle into several connected subspaces at an appropriate granularity (obtained by a weighted calculation over the number of rendering objects and other features), partition and cluster the rendering objects by the depth distance of the subspaces, sort the rendering objects within each subspace by depth distance along a chosen spatial axis, and finally merge the sorting results of all subspaces into the spatial position relationship of the rendering objects in the current scene. The specific algorithm flow is as follows (a code sketch follows the list):
1) divide the spatial depth S into k subspaces according to the number N of rendering objects and a preset threshold T, k = ⌈N / T⌉;
2) traverse the depth distances of all rendering objects, compute the distance from the midpoint of each mesh's depth interval to the center point of each subspace, and place the model into the subspace with the smallest distance;
3) recalculate and update the center points of the subspaces for all the points assigned in step 2;
4) repeat steps 2 and 3 until the subspace center points are stable or a specified number of iterations is reached;
5) within each subspace, perform multi-key sorting using the d(x), d(y) and |d(y) - d(x)| of the mesh corresponding to each rendering object, and merge the sorting results of all subspaces to obtain the global sequence.
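The flow above is essentially a one-dimensional k-means over depth-interval midpoints followed by a per-cluster multi-key sort; a minimal sketch under that reading (the k = ⌈N / T⌉ split and all names are assumptions, not the patent's code) might look like this:

import math

def plan_spatial_order(depths, T=8, max_iter=20):
    # depths: {object_id: (d_near, d_far)}, e.g. from per_mesh_depths above.
    items = [(uid, dn, df) for uid, (dn, df) in depths.items()]
    k = max(1, math.ceil(len(items) / T))            # step 1: assumed ceil(N / T)
    lo = min(dn for _, dn, _ in items)
    hi = max(df for _, _, df in items)
    centers = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    for _ in range(max_iter):                        # steps 2 to 4
        buckets = [[] for _ in range(k)]
        for uid, dn, df in items:
            mid = (dn + df) / 2.0
            c = min(range(k), key=lambda i: abs(mid - centers[i]))
            buckets[c].append((uid, dn, df))
        new_centers = [sum((dn + df) / 2.0 for _, dn, df in b) / len(b) if b else centers[i]
                       for i, b in enumerate(buckets)]
        if new_centers == centers:                   # centers stable: stop
            break
        centers = new_centers
    order = []                                       # step 5: multi-key sort
    for _, b in sorted(zip(centers, buckets), key=lambda p: p[0]):
        order += [uid for uid, dn, df in
                  sorted(b, key=lambda t: (t[1], t[2], t[2] - t[1]))]
    return order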
Based on the result of the spatial position relationship calculation, the filling order of each element in the current camera view is planned by combining the category information of the rendering objects with user input. The planning takes into account predefined category-priority filling rules, the desired filling order given by the user, and potential combination relationships between elements. Indoor decoration scenes receive targeted optimization: for example, priorities for category relationships such as hard decoration, soft decoration and furniture are preset, so that the planning result better matches the order in which decoration and furnishing are carried out in reality. Specifically, the invention automatically adds category labels to all elements in the rendered scene using the business database. One implementation of a business rule sequence is given below (a code sketch of such rules follows the list):
1) empty room (remove all elements, keeping only the house-type model);
2) wall and ceiling coating, floor paving;
3) installation of door pockets, doors and window frames (including model doors, custom doors and BIM doors);
4) custom home assembly: cabinet bodies, table tops, crown and skirting lines, etc.;
5) sofas, curtains and fabrics, soft ornaments, etc.;
6) finally, the key frame sequence is generated based on the planning result.
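A sketch of such rules as code (the category names, ranks and function are hypothetical; a real system would load them from the business database mentioned above):

# Lower rank fills earlier, mirroring the business sequence above.
CATEGORY_RANK = {
    "shell": 0,          # empty room / house-type model
    "wall_floor": 1,     # wall and ceiling coating, floor paving
    "doors_windows": 2,  # door pockets, doors, window frames
    "custom_home": 3,    # cabinets, table tops, crown and skirting lines
    "soft_deco": 4,      # sofas, curtains, fabrics, soft ornaments
}

def plan_fill_order(spatial_order, categories, user_rank=None):
    # spatial_order: object ids sorted near-to-far by plan_spatial_order;
    # categories: {object_id: category}; user_rank: optional overrides
    # expressing the user's desired filling order.
    rank = dict(CATEGORY_RANK)
    if user_rank:
        rank.update(user_rank)
    pos = {uid: i for i, uid in enumerate(spatial_order)}
    # Primary key: category priority; secondary key: spatial position.
    return sorted(spatial_order,
                  key=lambda uid: (rank.get(categories.get(uid), 99), pos[uid]))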
Key frame image sequence generation and animation video synthesis: based on the key frame sequence, the scene data is split into a key frame data sequence, and the key frame image sequence is generated with an offline renderer. Taking the key frame image sequence as input, interpolated frames are generated by image processing methods; the result is finally exported in a streaming media file format, and the finished product is delivered to the user via a public cloud distributed object storage service.
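The patent does not fix a particular interpolation method; a simple cross-fade between consecutive key frame images, sketched below with NumPy, is one way to fill out the frame sequence (the step count is an arbitrary assumption):

import numpy as np

def interpolate_frames(key_frames, steps=8):
    # key_frames: list of H x W x 3 uint8 images, one per key frame.
    # Inserts `steps` cross-faded frames between each consecutive pair.
    out = []
    for a, b in zip(key_frames, key_frames[1:]):
        fa, fb = a.astype(np.float32), b.astype(np.float32)
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            out.append(((1.0 - t) * fa + t * fb).astype(np.uint8))
    out.append(key_frames[-1])
    return out

# The frame list can then be encoded into a streaming media format (for
# example with ffmpeg or imageio) and uploaded to cloud object storage.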
The furniture growth animation cloud rendering system based on a fixed viewing angle mainly comprises a scene data preprocessing unit 101, a spatial position calculation unit 102, a growth order planning unit 103, a key frame image sequence generation unit 104 and an animation video synthesis unit 105. The scene data preprocessing unit 101 acquires meta-information such as the depth and mesh of each rendering object under the current camera viewing angle; the spatial position calculation unit 102 derives the spatial position relationship of the rendering objects from the result data of the scene data preprocessing unit 101; the growth order planning unit 103 automatically plans the filling order of each element in the current camera view; the key frame image sequence generation unit 104 splits the rendering scene data into a key frame data sequence and generates the key frame image sequence with an offline renderer; the animation video synthesis unit 105 fills out the frame sequence by frame interpolation to generate the final animation video. The units can be split and deployed in a distributed computing environment, using a multi-node parallel architecture to eliminate resource bottlenecks and improve computing efficiency; closely related units can also be co-located to improve the performance of data exchange.
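As a final illustration, a hypothetical driver wiring the five units together could look as follows; prerender, offline_render and partial_scene are placeholders for the pre-rendering pass, the offline renderer and partial-scene construction, which the patent describes but does not specify as code:

def render_growth_animation(scene, camera, categories):
    M, N, gray_index = prerender(scene, camera)              # unit 101
    depths = per_mesh_depths(M, N, gray_index)               # unit 102
    spatial = plan_spatial_order(depths)                     # unit 102
    fill = plan_fill_order(spatial, categories)              # unit 103
    key_frames = [offline_render(partial_scene(scene, fill[:i + 1]), camera)
                  for i in range(len(fill))]                 # unit 104
    return interpolate_frames(key_frames)                    # unit 105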
It should be understood that equivalent substitutions and modifications of the technical solution and inventive concept of the present invention, made by those skilled in the art, shall fall within the protection scope of the appended claims.

Claims (6)

1. A furniture growth animation cloud rendering method based on a fixed viewing angle, characterized in that the method mainly comprises the following steps:
1) scene data preprocessing: acquiring meta-information such as the depth and mesh of each rendering object under the current camera viewing angle, and encoding and storing the meta-information as two-dimensional data plus the necessary auxiliary indexes according to a pre-designed data format;
2) spatial position relationship calculation: using the result data preprocessed in step 1, deducing and reconstructing the three-dimensional spatial position relationship of the rendering objects from the two-dimensional meta-information, providing context for automatically planning the filling order;
3) growth order planning: automatically planning the filling order of each element in the current camera view on the basis of the spatial position relationship, combined with the category label information and other extended information of the rendering objects;
4) key frame image sequence generation: splitting the rendering scene data into a key frame data sequence based on the filling order and other user input, and generating the key frame image sequence with an offline renderer;
5) animation video synthesis: filling out the frame sequence by frame interpolation to synthesize the final animation video.
2. The fixed-viewing-angle-based furniture growth animation cloud rendering method according to claim 1, characterized in that: the scene data preprocessing adopts a pre-rendering method in which three-dimensional spatial information within the current camera viewing angle is extracted into two-dimensional meta-information in several feature spaces; through a ray tracing or rasterization rendering algorithm, the pre-rendering method stores the mapping between mesh indexes and rendering objects as a hash, which serves as an auxiliary index for computation, and the attribute data of each rendering object contains a unique id.
3. The fixed-viewing-angle-based furniture growth animation cloud rendering method according to claim 1, characterized in that: the calculation of the spatial position relationship mainly comprises the following steps:
1) mapping the mesh meta-information to depth information using a SIMD vectorization method and a hash index, thereby establishing a mapping from each rendering object to a depth value matrix;
2) dividing the space of the current camera viewing angle into several connected subspaces at an appropriate granularity, clustering the rendering objects according to the depth-distance partition of the subspaces, sorting the rendering objects within each subspace by depth distance along a chosen spatial axis, and finally merging the sorting results of all subspaces into the spatial position relationship of the rendering objects in the current scene.
4. The fixed-viewing-angle-based furniture growth animation cloud rendering method according to claim 1, characterized in that: the growth order planning takes into account predefined category-priority filling rules, a desired filling order given by the user, and potential combination relationships between elements.
5. The fixed-viewing-angle-based furniture growth animation cloud rendering method according to claim 1, characterized in that: the key frame image sequence generation and animation video synthesis split the scene data into a key frame data sequence based on the key frame sequence and generate the key frame image sequence with an offline renderer; taking the key frame image sequence as input, interpolated frames are generated by image processing methods, the result is finally exported in a streaming media file format, and the finished product is delivered to the user via a public cloud distributed object storage service.
6. A furniture growth animation cloud rendering system based on a fixed viewing angle, characterized in that: the system mainly comprises a scene data preprocessing unit (101), a spatial position calculation unit (102), a growth order planning unit (103), a key frame image sequence generation unit (104) and an animation video synthesis unit (105), wherein the scene data preprocessing unit (101) acquires meta-information such as the depth and mesh of each rendering object under the current camera viewing angle; the spatial position calculation unit (102) derives the spatial position relationship of the rendering objects from the result data of the scene data preprocessing unit (101); the growth order planning unit (103) automatically plans the filling order of each element in the current camera view; the key frame image sequence generation unit (104) splits the rendering scene data into a key frame data sequence and generates the key frame image sequence with an offline renderer; the animation video synthesis unit (105) fills out the frame sequence by frame interpolation to generate the final animation video.
CN202010387691.0A 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle Active CN111640174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010387691.0A CN111640174B (en) 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010387691.0A CN111640174B (en) 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle

Publications (2)

Publication Number Publication Date
CN111640174A true CN111640174A (en) 2020-09-08
CN111640174B CN111640174B (en) 2023-04-21

Family

ID=72330887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010387691.0A Active CN111640174B (en) 2020-05-09 2020-05-09 Furniture growth animation cloud rendering method and system based on fixed viewing angle

Country Status (1)

Country Link
CN (1) CN111640174B (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6636220B1 (en) * 2000-01-05 2003-10-21 Microsoft Corporation Video-based rendering
WO2007130018A1 (en) * 2006-04-27 2007-11-15 Pixar Image-based occlusion culling
CN102722861A (en) * 2011-05-06 2012-10-10 新奥特(北京)视频技术有限公司 CPU-based graphic rendering engine and realization method
CN103020340A (en) * 2012-11-29 2013-04-03 上海量维信息科技有限公司 Three-dimensional home design system based on web
CN103544731A (en) * 2013-09-30 2014-01-29 北京航空航天大学 Quick reflection drawing method on basis of multiple cameras
CN105303603A (en) * 2015-10-16 2016-02-03 深圳市天华数字电视有限公司 Three-dimensional production system used for demonstrating document and production method thereof
CN106056661A (en) * 2016-05-31 2016-10-26 钱进 Direct3D 11-based 3D graphics rendering engine
CN107680042A (en) * 2017-09-27 2018-02-09 杭州群核信息技术有限公司 Rendering intent, device, engine and storage medium
CN107862718A (en) * 2017-11-02 2018-03-30 深圳市自由视像科技有限公司 4D holographic video method for catching
CN107871338A (en) * 2016-09-27 2018-04-03 重庆完美空间科技有限公司 Real-time, interactive rendering intent based on scene decoration
CN109891462A (en) * 2016-08-10 2019-06-14 维亚科姆国际公司 The system and method for interactive 3D environment are generated for using virtual depth
CN110689626A (en) * 2019-09-25 2020-01-14 网易(杭州)网络有限公司 Game model rendering method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660528A (en) * 2021-05-24 2021-11-16 杭州群核信息技术有限公司 Video synthesis method and device, electronic equipment and storage medium
CN113660528B (en) * 2021-05-24 2023-08-25 杭州群核信息技术有限公司 Video synthesis method and device, electronic equipment and storage medium
CN113421337A (en) * 2021-07-21 2021-09-21 北京臻观数智科技有限公司 Method for improving model rendering efficiency

Also Published As

Publication number Publication date
CN111640174B (en) 2023-04-21

Similar Documents

Publication Publication Date Title
Deussen et al. Realistic modeling and rendering of plant ecosystems
JPH1091809A (en) Operating method for function arithmetic processor control machine
KR100503789B1 (en) A rendering system, rendering method, and recording medium therefor
CN104318513A (en) Building three-dimensional image display platform and application system thereof
JPH10255081A (en) Image processing method and image processor
CN107633544B (en) Processing method and device for ambient light shielding
CN111640174B (en) Furniture growth animation cloud rendering method and system based on fixed viewing angle
WO2020081017A1 (en) A method based on unique metadata for making direct modifications to 2d, 3d digital image formats quickly and rendering the changes on ar/vr and mixed reality platforms in real-time
Würfel et al. Natural Phenomena as Metaphors for Visualization of Trend Data in Interactive Software Maps.
Buckley et al. The virtual forest: Advanced 3-D visualization techniques for forest management and research
Roy et al. 3-D object decomposition with extended octree model and its application in geometric simulation of NC machining
Vyatkin et al. Offsetting and blending with perturbation functions
Schäfer et al. Local Painting and Deformation of Meshes on the GPU
CN102646286A (en) Digital graph medium simulation method with three-dimensional space structure
Dong et al. A time-critical adaptive approach for visualizing natural scenes on different devices
Lee et al. Adaptive synthesis of distance fields
WO2019106629A1 (en) Object generation
CN111475969B (en) Large-scale crowd behavior simulation system
US11625900B2 (en) Broker for instancing
Merrell et al. Example-based curve synthesis
WO2021203076A1 (en) Method for understanding and synthesizing differentiable scenes from input images
Fellner et al. Modeling of and navigation in complex 3D documents
KR100450210B1 (en) System and method for compositing three dimension scan face model and recording medium having program for three dimension scan face model composition function
Kim et al. WYSIWYG Stereo Painting with Usability Enhancements
Lin et al. A feature-adaptive subdivision method for real-time 3D reconstruction of repeated topology surfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant