CN111354067B - Multi-model same-screen rendering method based on Unity3D engine - Google Patents
Multi-model same-screen rendering method based on Unity3D engine
- Publication number
- CN111354067B CN111354067B CN202010136853.3A CN202010136853A CN111354067B CN 111354067 B CN111354067 B CN 111354067B CN 202010136853 A CN202010136853 A CN 202010136853A CN 111354067 B CN111354067 B CN 111354067B
- Authority
- CN
- China
- Prior art keywords
- objects
- scene
- rendering
- model
- cells
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to the technical field of screen rendering, and in particular to a multi-model same-screen rendering method based on the Unity3D engine, comprising the following steps: A. 2D-to-3D reconstruction: based on deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photo, with a similarity to the real person of more than 90%; B. Occlusion culling: a "director" approach is used, i.e., the user observes objects within the rendered field of view from the director's perspective; C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; meanwhile, dynamic optimization according to the virtual camera's position reduces memory usage, increases the number of rendered objects, and achieves efficient operation. The invention effectively improves model rendering efficiency through this series of methods, and can use real-time rendering to meet the demands of a broadcast television system for the number and precision of virtual audience members.
Description
Technical Field
The invention relates to the technical field of screen rendering, in particular to a multi-model same-screen rendering method based on a Unity3D engine.
Background
Current multi-model same-screen rendering technology is mainly applied in two fields, film/television and games, but has the following problems: film-and-television-grade models are too elaborate, with long production cycles and excessive cost (taking Avatar as an example, rendering and production took more than a year), while game-grade models (Counter-Strike, for example) are rough, hurting the look and feel, and are limited in number.
In the prior art, multi-model same-screen rendering technology has not been fully applied to broadcast television systems, and the following defects and shortcomings remain: long production cycles, excessive production cost, poor model precision, and difficulty of personalization.
Disclosure of Invention
The invention aims to provide a multi-model same-screen rendering method based on the Unity3D engine that is efficient, high-quality, and low-cost, solving the problems described in the background.
To achieve the above purpose, the present invention provides the following technical solution: a multi-model same-screen rendering method based on the Unity3D engine, comprising the following steps:
A. 2D-to-3D reconstruction: based on deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photo, with a similarity to the real person of more than 90%;
B. Occlusion culling: a "director" approach is used, i.e., the user observes objects within the rendered field of view from the director's perspective;
C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; meanwhile, dynamic optimization according to the virtual camera's position reduces memory usage, increases the number of rendered objects, and achieves efficient operation.
Preferably, the specific flow of step A is as follows:
a frontal selfie (2D) of a person is matched with a three-dimensional head scan (3D) of that person; once the relationship between the photo's color distribution and three-dimensional depth has been learned, a high-precision 3D model of the person can be created within 30 seconds, solving the traditional model's problems of high production cost, long cycle, poor precision, and difficulty of personalization.
Preferably, the specific flow of step B is as follows:
an occlusion area is created in the scene space, the occlusion area being made up of cells. Each cell is a part of the whole scene's occlusion area; the cells split the whole scene into several parts. When the virtual camera can see a cell, the objects belonging to that cell are rendered, while other objects are not rendered and are removed from memory directly, thereby increasing the number of objects that can be rendered at the same time.
Occlusion culling applies a hierarchy of potentially visible sets, built with virtual cameras, to the entire scene; at run time, each camera uses this data to determine which objects are visible and which are not; with this information, Unity ensures that only visible objects are sent for rendering, reducing the number of draw calls and improving performance;
more specifically, the occlusion culling data consists of cells that form a binary tree; occlusion culling uses two trees, one of View Cells for static objects and one of Target Cells for moving objects; the View Cells map to an index list defining the visible static objects, making the culling result for static objects more accurate.
Compared with the prior art, the invention has the following beneficial effects:
model rendering efficiency is effectively increased through the combination of 2D-to-3D reconstruction, occlusion culling, and dynamic AutoLOD, so that the number of rendered models grows, the triangle count rises from 15 million to more than 64 million, and real-time rendering can be used, meeting the demands of a broadcast television system for the number and precision of virtual audience members.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of a practical application effect of the invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Referring to FIGS. 1-2, a multi-model same-screen rendering method based on the Unity3D engine includes the following steps:
A. 2D-to-3D reconstruction: based on deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photo, with a similarity to the real person of more than 90%;
B. Occlusion culling: a "director" approach is used, i.e., the user observes objects within the rendered field of view from the director's perspective;
C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; meanwhile, dynamic optimization according to the virtual camera's position reduces memory usage, increases the number of rendered objects, and achieves efficient operation.
The specific flow of step A is as follows:
a frontal selfie (2D) of a person is matched with a three-dimensional head scan (3D) of that person; once the relationship between the photo's color distribution and three-dimensional depth has been learned, a high-precision 3D model of the person can be created within 30 seconds, solving the traditional model's problems of high production cost, long cycle, poor precision, and difficulty of personalization.
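The color-to-depth mapping described above can be illustrated with a toy sketch: assume some trained predictor returns a per-pixel depth map for a photo, which is then back-projected through a pinhole camera model into a 3D point cloud from which a head mesh could be built. The `predict_depth` function here is a hypothetical stand-in stub, not the patent's actual network, and the focal length is an assumed value.

```python
import numpy as np

def predict_depth(photo: np.ndarray) -> np.ndarray:
    """Stand-in for the learned color-to-depth network (hypothetical).
    Here brighter pixels are assumed closer, purely for illustration."""
    gray = photo.mean(axis=2)            # H x W luminance
    return 2.0 - gray / gray.max()       # toy depth in [1, 2] metres

def backproject(depth: np.ndarray, focal: float = 500.0) -> np.ndarray:
    """Back-project a depth map into a 3D point cloud (pinhole model)."""
    h, w = depth.shape
    cy, cx = h / 2.0, w / 2.0
    v, u = np.mgrid[0:h, 0:w]            # pixel coordinates
    x = (u - cx) * depth / focal
    y = (v - cy) * depth / focal
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

photo = np.random.rand(64, 64, 3)        # stands in for the submitted selfie
points = backproject(predict_depth(photo))
print(points.shape)                      # (4096, 3)
```

A real pipeline would fit a deformable head mesh to such a point cloud; the sketch only shows the geometric back-projection step.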
The specific flow of step B is as follows:
an occlusion area is created in the scene space, the occlusion area being made up of cells. Each cell is a part of the whole scene's occlusion area; the cells split the whole scene into several parts. When the virtual camera can see a cell, the objects belonging to that cell are rendered, while other objects are not rendered and are removed from memory directly, thereby increasing the number of objects that can be rendered at the same time.
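The cell mechanism above can be sketched as follows. The grid partition, cell bounds, object names, and the radius-based visibility test are illustrative assumptions standing in for Unity's real frustum and occlusion tests, not its internal implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """One cell of the occlusion area: a box of scene space and its objects."""
    bounds: tuple                        # (xmin, xmax, zmin, zmax)
    objects: list = field(default_factory=list)

def visible_cells(cells, camera_pos, view_radius):
    """Toy visibility test: a cell is 'seen' when the camera is within
    view_radius of its centre (stands in for the real visibility query)."""
    seen = []
    for cell in cells:
        cx = (cell.bounds[0] + cell.bounds[1]) / 2
        cz = (cell.bounds[2] + cell.bounds[3]) / 2
        if (cx - camera_pos[0]) ** 2 + (cz - camera_pos[1]) ** 2 <= view_radius ** 2:
            seen.append(cell)
    return seen

def render_set(cells, camera_pos, view_radius):
    """Only objects in visible cells are rendered; the rest are culled."""
    out = []
    for cell in visible_cells(cells, camera_pos, view_radius):
        out.extend(cell.objects)
    return out

cells = [
    Cell((0, 10, 0, 10), ["audience_01", "audience_02"]),
    Cell((10, 20, 0, 10), ["audience_03"]),
    Cell((40, 50, 40, 50), ["audience_99"]),   # far from the camera: culled
]
print(render_set(cells, camera_pos=(5, 5), view_radius=15))
# ['audience_01', 'audience_02', 'audience_03']
```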
Occlusion culling applies a hierarchy of potentially visible sets, built with virtual cameras, to the entire scene; at run time, each camera uses this data to determine which objects are visible and which are not; with this information, Unity ensures that only visible objects are sent for rendering, reducing the number of draw calls and improving performance;
more specifically, the occlusion culling data consists of cells that form a binary tree; occlusion culling uses two trees, one of View Cells for static objects and one of Target Cells for moving objects; the View Cells map to an index list defining the visible static objects, making the culling result for static objects more accurate.
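The mapping from view cells to an index list of visible static objects can be illustrated with a minimal sketch; the cell IDs, object names, and index lists are invented for the example and simply show how a baked lookup replaces per-object visibility tests at run time.

```python
# Baked occlusion data: each view cell maps to the indices of the static
# objects visible from anywhere inside that cell.
static_objects = ["stage", "seat_row_A", "seat_row_B", "back_wall"]
view_cell_pvs = {
    "cell_0": [0, 1],        # from cell_0 only the stage and row A are visible
    "cell_1": [0, 1, 2],
    "cell_2": [3],
}

def visible_statics(camera_cell: str) -> list:
    """Look up the baked index list for the camera's current view cell."""
    return [static_objects[i] for i in view_cell_pvs[camera_cell]]

print(visible_statics("cell_1"))   # ['stage', 'seat_row_A', 'seat_row_B']
```

Moving objects would be handled analogously through the second tree of target cells, resolved against the camera's view cell each frame.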
Examples:
1. the 2D-to-3D reconstruction technology creates a high-precision 3D figure from the photo submitted by each user, so every model is generated from the user's actual appearance and is unique;
2. according to the needs of program recording, occlusion culling removes the 3D user models outside the camera's field of view, greatly reducing the resources those models consume and thereby supporting real-time interaction with hundreds of people;
3. dynamic AutoLOD loads and unloads objects in real time according to the stage scene, reducing the load the scene places on the CPU and graphics card, while dynamic optimization according to the virtual camera's position greatly reduces memory and GPU resource consumption;
4. the real-time rendered image is projected onto the large screen at the recording site through HDMI, and users are guided to perform the necessary real-time interaction according to the director team's requirements.
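The dynamic AutoLOD behaviour used in step 3 above can be sketched as a distance-driven choice of mesh detail per object; the distance thresholds and level names below are illustrative assumptions, not values from the patent.

```python
def pick_lod(distance: float) -> str:
    """Choose a level of detail from camera distance (thresholds assumed)."""
    if distance < 10.0:
        return "LOD0"        # full-detail mesh
    if distance < 30.0:
        return "LOD1"        # reduced mesh
    if distance < 60.0:
        return "LOD2"        # very low poly / billboard
    return "unloaded"        # too far: unload to free memory and GPU time

def update_scene(objects, camera_pos):
    """Assign each object the LOD matching its current camera distance."""
    return {
        name: pick_lod(abs(pos - camera_pos))
        for name, pos in objects.items()
    }

objects = {"host": 5.0, "audience_a": 25.0, "audience_b": 55.0, "prop": 90.0}
print(update_scene(objects, camera_pos=0.0))
# {'host': 'LOD0', 'audience_a': 'LOD1', 'audience_b': 'LOD2', 'prop': 'unloaded'}
```

Re-running `update_scene` as the camera moves gives the load/unload behaviour the text describes: objects fall to coarser levels and are finally unloaded as they recede.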
In the practical application "The 2019 CCTV Host Competition", the number of high-precision 3D character models used in each episode reached 400, the cumulative number of created models reached 1,000, and the number of rendered triangles rose from the traditional 15 million to more than 64 million, essentially meeting the requirements of broadcast television program recording.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (1)
1. A multi-model same-screen rendering method based on the Unity3D engine, characterized in that the method comprises the following steps:
A. 2D-to-3D reconstruction: based on deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photo, with a similarity to the real person of more than 90%;
a frontal selfie (2D) of a person is matched with a three-dimensional head scan (3D) of that person; once the relationship between the photo's color distribution and three-dimensional depth has been learned, a high-precision 3D model of the person can be created within 30 seconds, solving the traditional model's problems of high production cost, long cycle, poor precision, and difficulty of personalization;
B. Occlusion culling: a "director" approach is used, i.e., the user observes objects within the rendered field of view from the director's perspective;
the specific flow of step B is as follows:
an occlusion area is created in the scene space, the occlusion area being made up of cells; each cell is a part of the whole scene's occlusion area; the cells divide the whole scene into several parts; when the virtual camera can see a cell, the objects belonging to that cell are rendered, while the other objects are not rendered and are removed from memory directly, thereby increasing the number of objects rendered at the same time;
occlusion culling applies a hierarchy of potentially visible sets, built with virtual cameras, to the entire scene; at run time, each camera uses this data to determine which objects are visible and which are not; with this information, Unity ensures that only visible objects are sent for rendering, reducing the number of draw calls and further improving performance;
more specifically, the occlusion culling data consists of cells that form a binary tree; occlusion culling uses two trees, one of view cells for static objects and the other of target cells for moving objects; the view cells map to an index list defining the visible static objects, making the culling result for static objects more accurate;
C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; meanwhile, dynamic optimization according to the virtual camera's position reduces memory usage, increases the number of rendered objects, and achieves efficient operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010136853.3A CN111354067B (en) | 2020-03-02 | 2020-03-02 | Multi-model same-screen rendering method based on Unity3D engine |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010136853.3A CN111354067B (en) | 2020-03-02 | 2020-03-02 | Multi-model same-screen rendering method based on Unity3D engine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111354067A CN111354067A (en) | 2020-06-30 |
CN111354067B true CN111354067B (en) | 2023-08-22 |
Family
ID=71197225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010136853.3A Active CN111354067B (en) | 2020-03-02 | 2020-03-02 | Multi-model same-screen rendering method based on Unity3D engine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111354067B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114627221B (en) * | 2021-12-08 | 2023-11-10 | 北京蓝亚盒子科技有限公司 | Scene rendering method and device, operator and readable storage medium |
CN114972630A (en) * | 2022-04-19 | 2022-08-30 | 威睿科技(武汉)有限责任公司 | Method for presenting digital three-dimensional space on television set top box based on Unity3D |
CN117130573B (en) * | 2023-10-26 | 2024-02-20 | 北京世冠金洋科技发展有限公司 | Multi-screen control method, device, equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004145657A (en) * | 2002-10-24 | 2004-05-20 | Space Tag Inc | Virtual museum system |
US9396588B1 (en) * | 2015-06-30 | 2016-07-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Virtual reality virtual theater system |
WO2017164924A1 (en) * | 2016-03-21 | 2017-09-28 | Siemens Product Lifecycle Management Software Inc. | System for gpu based depth reprojection for accelerating depth buffer generation |
EP3226213A1 (en) * | 2016-03-29 | 2017-10-04 | Roland Judex | Method of computationally augmenting a video feed, data-processing apparatus, and computer program therefor |
CN109191613A (en) * | 2018-08-21 | 2019-01-11 | 国网江西省电力有限公司南昌供电分公司 | A kind of automatic machine room method for inspecting based on 3D technology |
CN110769261A (en) * | 2019-06-28 | 2020-02-07 | 叠境数字科技(上海)有限公司 | Compression coding method of three-dimensional dynamic sequence model |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102196300A (en) * | 2010-03-18 | 2011-09-21 | 国际商业机器公司 | Providing method and device as well as processing method and device for images of virtual world scene |
US20140198097A1 (en) * | 2013-01-16 | 2014-07-17 | Microsoft Corporation | Continuous and dynamic level of detail for efficient point cloud object rendering |
CN106683155B (en) * | 2015-11-04 | 2020-03-10 | 南京地心坐标信息科技有限公司 | Comprehensive dynamic scheduling method for three-dimensional model |
CN110442925B (en) * | 2019-07-16 | 2020-05-15 | 中南大学 | Three-dimensional visualization method and system based on real-time dynamic segmentation reconstruction |
CN110738719A (en) * | 2019-09-27 | 2020-01-31 | 杭州师范大学 | Web3D model rendering method based on visual range hierarchical optimization |
-
2020
- 2020-03-02 CN CN202010136853.3A patent/CN111354067B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004145657A (en) * | 2002-10-24 | 2004-05-20 | Space Tag Inc | Virtual museum system |
US9396588B1 (en) * | 2015-06-30 | 2016-07-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Virtual reality virtual theater system |
WO2017164924A1 (en) * | 2016-03-21 | 2017-09-28 | Siemens Product Lifecycle Management Software Inc. | System for gpu based depth reprojection for accelerating depth buffer generation |
EP3226213A1 (en) * | 2016-03-29 | 2017-10-04 | Roland Judex | Method of computationally augmenting a video feed, data-processing apparatus, and computer program therefor |
CN109191613A (en) * | 2018-08-21 | 2019-01-11 | 国网江西省电力有限公司南昌供电分公司 | A kind of automatic machine room method for inspecting based on 3D technology |
CN110769261A (en) * | 2019-06-28 | 2020-02-07 | 叠境数字科技(上海)有限公司 | Compression coding method of three-dimensional dynamic sequence model |
Non-Patent Citations (1)
Title |
---|
Yamasaki, T. et al. Mathematical error analysis of normal map compression based on unity. International Conference on Image Processing, 2006, vols. 1-5, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111354067A (en) | 2020-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111354067B (en) | Multi-model same-screen rendering method based on Unity3D engine | |
CN107493488B (en) | Method for intelligently implanting video content based on Faster R-CNN model | |
CN106713988A (en) | Beautifying method and system for virtual scene live | |
CN110827312B (en) | Learning method based on cooperative visual attention neural network | |
CN113269863B (en) | Video image-based foreground object shadow real-time generation method | |
CN111951368A (en) | Point cloud, voxel and multi-view fusion deep learning method | |
CN111583378B (en) | Virtual asset processing method and device, electronic equipment and storage medium | |
CN116310045A (en) | Three-dimensional face texture creation method, device and equipment | |
CN115100337A (en) | Whole body portrait video relighting method and device based on convolutional neural network | |
CN107871338B (en) | Real-time, interactive rendering method based on scene decoration | |
Yao et al. | Accurate silhouette extraction of multiple moving objects for free viewpoint sports video synthesis | |
Liu et al. | Stereo-based bokeh effects for photography | |
CN117557721A (en) | Method, system, equipment and medium for reconstructing detail three-dimensional face of single image | |
Wu et al. | A hybrid image retargeting approach via combining seam carving and grid warping | |
CN112257729A (en) | Image recognition method, device, equipment and storage medium | |
Liu et al. | Fog effect for photography using stereo vision | |
CN111652807A (en) | Eye adjustment method, eye live broadcast method, eye adjustment device, eye live broadcast device, electronic equipment and storage medium | |
CN112435322B (en) | Rendering method, device and equipment of 3D model and storage medium | |
Hanika et al. | Camera space volumetric shadows | |
Ceylan et al. | MatAtlas: Text-driven Consistent Geometry Texturing and Material Assignment | |
KR20220026907A (en) | Apparatus and Method for Producing 3D Contents | |
CN111243099A (en) | Method and device for processing image and method and device for displaying image in AR (augmented reality) device | |
US11501468B2 (en) | Method for compressing image data having depth information | |
Xu et al. | Summarization of 3D video by rate-distortion trade-off | |
Liu et al. | 3D Animation Graphic Enhancing Process Effect Simulation Analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |