CN111354067A - Multi-model same-screen rendering method based on Unity3D engine - Google Patents
- Publication number
- Publication number: CN111354067A (application CN202010136853.3A)
- Authority
- CN
- China
- Prior art keywords
- objects
- cells
- model
- scene
- rendering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to the technical field of screen rendering, and in particular to a multi-model same-screen rendering method based on the Unity3D engine, comprising the following steps: A. 2D-to-3D reconstruction: using deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photograph, with a similarity to the real person of over 90%; B. Occlusion culling: a "director" approach is used, i.e. the user observes the objects in the rendered view by following the director's camera; C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card, while dynamic optimization based on the virtual camera's position reduces memory usage and increases the number of objects that can be rendered, achieving efficient operation. Through this series of methods the efficiency of model rendering is effectively increased and real-time rendering becomes possible, meeting a broadcasting system's requirements for the number and fidelity of virtual audience members.
Description
Technical Field
The invention relates to the technical field of screen rendering, and in particular to a multi-model same-screen rendering method based on the Unity3D engine.
Background
Current multi-model same-screen rendering technology is mainly applied in two fields, film and games, but the following problems exist: film-grade models are too elaborate, with long production cycles and excessive cost (taking Avatar as an example, rendering and production took over a year); game-grade models (for example, in CS) are rough, unattractive, and insufficient in number.
In the prior art, multi-model same-screen rendering has not been fully applied to broadcasting systems, where the following defects and shortcomings remain: long production cycles, excessive production cost, poor model precision, and difficulty in personalization.
Disclosure of Invention
The aim of the invention is to provide a multi-model same-screen rendering method based on the Unity3D engine that is efficient, high-quality, and low-cost, solving the problems identified in the background art.
To achieve this aim, the invention provides the following technical scheme: a multi-model same-screen rendering method based on the Unity3D engine, comprising the following steps:
A. 2D-to-3D reconstruction: using deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photograph; the similarity to the real person can exceed 90%.
B. Occlusion culling: a "director" approach is used, i.e. the user observes the objects in the rendered view by following the director's camera.
C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; at the same time, dynamic optimization based on the virtual camera's position reduces memory usage and increases the number of objects that can be rendered, achieving efficient operation.
Preferably, the specific process of step A is as follows:
a frontal self-portrait photograph (2D) of a person is matched against three-dimensional head scans (3D); once the relationship between a photograph's color distribution and three-dimensional depth has been learned, a high-precision 3D model of the person can be built within 30 seconds, solving the traditional model's problems of high production cost, long cycle, poor precision, and difficulty of personalization.
Preferably, the specific process of step B is as follows:
an occlusion region is created in the scene space, composed of cells. Each cell forms part of the scene's overall occlusion region, and together the cells split the scene into many parts. When the virtual camera can see a cell, the objects belonging to that cell are rendered; objects in cells it cannot see are not rendered and are removed from memory directly, which increases the number of objects that can be rendered on screen at the same time.
Occlusion culling applies a hierarchy of potentially-visible object sets, built with virtual cameras, to the entire scene; at run time, each camera uses these data to determine which objects are visible and which are not; with this information, Unity ensures that only visible objects are sent for rendering, reducing the number of draw calls and further improving performance;
more specifically, the occlusion-culling data are composed of cells, which form a binary tree; occlusion culling uses two trees, one of View Cells for static objects and one of Target Cells for moving objects; view cells map to index lists defining the visible static objects, making the culling result for static objects more accurate.
Compared with the prior art, the invention has the following beneficial effects:
through the series of methods of 2D-to-3D reconstruction, occlusion culling, and dynamic AutoLOD, the efficiency of model rendering is effectively increased, raising the number of models that can be rendered: the number of rendered triangles rises from 15 million to over 64 million. Real-time rendering can be used, meeting a broadcasting system's requirements for the number and fidelity of virtual audience members.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram illustrating the practical effect of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-2, a multi-model same-screen rendering method based on the Unity3D engine includes the following steps:
A. 2D-to-3D reconstruction: using deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photograph; the similarity to the real person can exceed 90%.
B. Occlusion culling: a "director" approach is used, i.e. the user observes the objects in the rendered view by following the director's camera.
C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; at the same time, dynamic optimization based on the virtual camera's position reduces memory usage and increases the number of objects that can be rendered, achieving efficient operation.
The specific process of step A is as follows:
a frontal self-portrait photograph (2D) of a person is matched against three-dimensional head scans (3D); once the relationship between a photograph's color distribution and three-dimensional depth has been learned, a high-precision 3D model of the person can be built within 30 seconds, solving the traditional model's problems of high production cost, long cycle, poor precision, and difficulty of personalization.
The specific process of step B is as follows:
an occlusion region is created in the scene space, composed of cells. Each cell forms part of the scene's overall occlusion region, and together the cells split the scene into many parts. When the virtual camera can see a cell, the objects belonging to that cell are rendered; objects in cells it cannot see are not rendered and are removed from memory directly, which increases the number of objects that can be rendered on screen at the same time.
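The cell-splitting behaviour described above can be sketched in a few lines. This is an illustrative model only, not the patent's implementation or Unity's internal code; `Cell`, `visible_objects`, and the example scene names are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """One cell of the occlusion region; it owns the objects placed inside it."""
    name: str
    objects: list = field(default_factory=list)

def visible_objects(cells, cells_seen_by_camera):
    """Collect only the objects in cells the virtual camera can see.

    Objects in unseen cells are never returned, mimicking how they would be
    skipped by the renderer and evicted from memory."""
    rendered = []
    for cell in cells:
        if cell.name in cells_seen_by_camera:
            rendered.extend(cell.objects)
    return rendered

# The scene is split into three cells; the camera currently sees two of them.
scene = [
    Cell("stage", ["host", "podium"]),
    Cell("front_rows", ["audience_01", "audience_02"]),
    Cell("back_rows", ["audience_99"]),
]
print(visible_objects(scene, {"stage", "front_rows"}))
# → ['host', 'podium', 'audience_01', 'audience_02']  ("back_rows" is culled)
```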
Occlusion culling applies a hierarchy of potentially-visible object sets, built with virtual cameras, to the entire scene; at run time, each camera uses these data to determine which objects are visible and which are not; with this information, Unity ensures that only visible objects are sent for rendering, reducing the number of draw calls and further improving performance;
more specifically, the occlusion-culling data are composed of cells, which form a binary tree; occlusion culling uses two trees, one of View Cells for static objects and one of Target Cells for moving objects; view cells map to index lists defining the visible static objects, making the culling result for static objects more accurate.
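The view-cell lookup just described can be illustrated as follows. This sketch is an assumption about the data layout, not Unity's actual internal format: a binary tree splits space on one axis, each leaf is a view cell, and a precomputed table maps each view cell to the index list of static objects visible from it.

```python
class Node:
    """Binary-tree node: internal nodes split space at `split`; leaves carry a view-cell id."""
    def __init__(self, split=None, left=None, right=None, cell_id=None):
        self.split, self.left, self.right, self.cell_id = split, left, right, cell_id

def locate_view_cell(node, camera_x):
    """Descend the tree to the leaf (view cell) that contains the camera."""
    while node.cell_id is None:
        node = node.left if camera_x < node.split else node.right
    return node.cell_id

# Two view cells split at x = 10, with a precomputed index list per view cell.
tree = Node(split=10.0, left=Node(cell_id=0), right=Node(cell_id=1))
visibility = {0: [0, 1], 1: [2]}          # view cell -> visible static-object indices
static_objects = ["chair", "table", "screen"]

cell = locate_view_cell(tree, camera_x=12.5)
print([static_objects[i] for i in visibility[cell]])  # → ['screen']
```

Because the visible set per view cell is precomputed (baked), the per-frame cost is only the tree descent plus a table lookup, which is what makes the static-object culling both fast and exact.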
Example:
1. 2D-to-3D reconstruction is used to create a high-precision 3D avatar from the photograph submitted by each user, guaranteeing that every model is generated from the user's actual appearance and is unique;
2. According to the needs of program recording, occlusion culling is applied to the 3D user models inside and outside the field of view, greatly reducing the resources consumed by models outside the view and thereby supporting real-time interaction for hundreds of people;
3. Dynamic AutoLOD loads and unloads objects in real time according to the stage scene, reducing the load the scene places on the CPU and graphics card; dynamic optimization based on the virtual camera's position greatly reduces memory and GPU usage;
4. The real-time rendered image is projected via HDMI onto the large screen at the recording site, and users are guided through the necessary real-time interaction according to the director team's requirements.
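The distance-driven load/unload decision of item 3 can be sketched as a simple threshold ladder. The thresholds and the unload rule here are hypothetical, chosen only to show the shape of the logic, not values from the patent:

```python
def pick_lod(distance, thresholds=(5.0, 15.0, 40.0)):
    """Map camera distance to an LOD level: 0 is full detail, higher is coarser.

    Past the last threshold the function returns None, meaning the object is
    unloaded entirely to relieve the CPU, graphics card, and memory."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return None

# As the virtual camera moves, each object's LOD is re-evaluated per frame.
for d in (2.0, 12.0, 30.0, 80.0):
    print(d, "->", pick_lod(d))  # levels 0, 1, 2, then None (unloaded)
```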
In practice, during the 2019 CCTV presenter competition, up to 400 high-precision 3D character models were used per episode, over 1000 models in total; the number of rendered triangles increased from 15 million in the prior art to over 64 million, essentially meeting the program-recording requirements of a broadcasting system.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (3)
1. A multi-model same-screen rendering method based on the Unity3D engine, characterized in that it comprises the following steps:
A. 2D-to-3D reconstruction: using deep-learning algorithms such as neural networks, a user can create a 3D character model from a single photograph; the similarity to the real person can exceed 90%.
B. Occlusion culling: a "director" approach is used, i.e. the user observes the objects in the rendered view by following the director's camera.
C. Dynamic AutoLOD: objects are dynamically loaded or unloaded according to scene requirements, reducing the load the scene places on the CPU and graphics card; at the same time, dynamic optimization based on the virtual camera's position reduces memory usage and increases the number of objects that can be rendered, achieving efficient operation.
2. The multi-model same-screen rendering method based on the Unity3D engine of claim 1, characterized in that the specific process of step A is as follows:
a frontal self-portrait photograph (2D) of a person is matched against three-dimensional head scans (3D); once the relationship between a photograph's color distribution and three-dimensional depth has been learned, a high-precision 3D model of the person can be built within 30 seconds, solving the traditional model's problems of high production cost, long cycle, poor precision, and difficulty of personalization.
3. The multi-model same-screen rendering method based on the Unity3D engine of claim 1, characterized in that the specific process of step B is as follows:
an occlusion region is created in the scene space, composed of cells. Each cell forms part of the scene's overall occlusion region, and together the cells split the scene into many parts. When the virtual camera can see a cell, the objects belonging to that cell are rendered; objects in cells it cannot see are not rendered and are removed from memory directly, which increases the number of objects that can be rendered on screen at the same time.
Occlusion culling applies a hierarchy of potentially-visible object sets, built with virtual cameras, to the entire scene; at run time, each camera uses these data to determine which objects are visible and which are not; with this information, Unity ensures that only visible objects are sent for rendering, reducing the number of draw calls and further improving performance;
more specifically, the occlusion-culling data are composed of cells, which form a binary tree; occlusion culling uses two trees, one of View Cells for static objects and one of Target Cells for moving objects; view cells map to index lists defining the visible static objects, making the culling result for static objects more accurate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010136853.3A CN111354067B (en) | 2020-03-02 | 2020-03-02 | Multi-model same-screen rendering method based on Unity3D engine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111354067A true CN111354067A (en) | 2020-06-30 |
CN111354067B CN111354067B (en) | 2023-08-22 |
Family
ID=71197225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010136853.3A Active CN111354067B (en) | 2020-03-02 | 2020-03-02 | Multi-model same-screen rendering method based on Unity3D engine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111354067B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004145657A (en) * | 2002-10-24 | 2004-05-20 | Space Tag Inc | Virtual museum system |
US20110227938A1 (en) * | 2010-03-18 | 2011-09-22 | International Business Machines Corporation | Method and system for providing images of a virtual world scene and method and system for processing the same |
US20140198097A1 (en) * | 2013-01-16 | 2014-07-17 | Microsoft Corporation | Continuous and dynamic level of detail for efficient point cloud object rendering |
US9396588B1 (en) * | 2015-06-30 | 2016-07-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Virtual reality virtual theater system |
CN106683155A (en) * | 2015-11-04 | 2017-05-17 | 闫烁 | Three-dimensional model comprehensive dynamic scheduling method |
WO2017164924A1 (en) * | 2016-03-21 | 2017-09-28 | Siemens Product Lifecycle Management Software Inc. | System for gpu based depth reprojection for accelerating depth buffer generation |
EP3226213A1 (en) * | 2016-03-29 | 2017-10-04 | Roland Judex | Method of computationally augmenting a video feed, data-processing apparatus, and computer program therefor |
CN109191613A (en) * | 2018-08-21 | 2019-01-11 | 国网江西省电力有限公司南昌供电分公司 | A kind of automatic machine room method for inspecting based on 3D technology |
CN110442925A (en) * | 2019-07-16 | 2019-11-12 | 中南大学 | A kind of three-dimensional visualization method and system based on the reconstruct of real-time dynamic partition |
CN110738719A (en) * | 2019-09-27 | 2020-01-31 | 杭州师范大学 | Web3D model rendering method based on visual range hierarchical optimization |
CN110769261A (en) * | 2019-06-28 | 2020-02-07 | 叠境数字科技(上海)有限公司 | Compression coding method of three-dimensional dynamic sequence model |
-
2020
- 2020-03-02 CN CN202010136853.3A patent/CN111354067B/en active Active
Non-Patent Citations (2)
Title |
---|
YAMASAKI, T. et al.: "Mathematical error analysis of normal map compression based on Unity", International Conference on Image Processing, vol. 1 *
WANG Zheng; TONG Zhiqiang: "A method for realizing realistic imagery in virtual reality systems", Modern Film Technology, no. 08 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114627221A (en) * | 2021-12-08 | 2022-06-14 | 北京蓝亚盒子科技有限公司 | Scene rendering method and device, runner and readable storage medium |
CN114627221B (en) * | 2021-12-08 | 2023-11-10 | 北京蓝亚盒子科技有限公司 | Scene rendering method and device, operator and readable storage medium |
CN114972630A (en) * | 2022-04-19 | 2022-08-30 | 威睿科技(武汉)有限责任公司 | Method for presenting digital three-dimensional space on television set top box based on Unity3D |
CN117130573A (en) * | 2023-10-26 | 2023-11-28 | 北京世冠金洋科技发展有限公司 | Multi-screen control method, device, equipment and storage medium |
CN117130573B (en) * | 2023-10-26 | 2024-02-20 | 北京世冠金洋科技发展有限公司 | Multi-screen control method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111354067B (en) | 2023-08-22 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||