CN109685885A - A fast method for converting 3D images using a depth map - Google Patents

A fast method for converting 3D images using a depth map

Info

Publication number
CN109685885A
CN109685885A (application CN201710978092.4A)
Authority
CN
China
Prior art keywords
depth
rendering
dimensional
camera
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710978092.4A
Other languages
Chinese (zh)
Other versions
CN109685885B (en)
Inventor
Ma Kai (马凯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhizhun Electronic Science and Technology Co., Ltd. Shanghai
Original Assignee
Shanghai Zhizun Culture Media Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhizun Culture Media Development Co., Ltd.
Priority to CN201710978092.4A priority Critical patent/CN109685885B/en
Publication of CN109685885A publication Critical patent/CN109685885A/en
Application granted granted Critical
Publication of CN109685885B publication Critical patent/CN109685885B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The present invention relates to the field of three-dimensional stereoscopic rendering, and specifically to a fast method for converting 3D images using a depth map. A depth map (Z-Depth) is rendered by three-dimensional production software, and the 2D-to-3D conversion is carried out using the depth map to obtain left- and right-eye views, which together form a 3D stereoscopic film. Repeated rendering when producing 3D stereoscopic film sources in three-dimensional production software is thereby reduced, saving render time, lowering rendering cost, greatly shortening the production cycle, and improving working efficiency.

Description

A fast method for converting 3D images using a depth map
Technical field
The present invention relates to the field of three-dimensional stereoscopic rendering, and specifically to a fast method for converting 3D images using a depth map.
Background technique
Existing 2D-to-3D conversion techniques generally require repeated rendering; render times are long, rendering cost is high, the production cycle is substantially prolonged, and working efficiency is low.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art. A depth map (Z-Depth) is rendered by three-dimensional production software, and the 2D-to-3D conversion is carried out using the depth map to obtain left- and right-eye views, which together form a 3D stereoscopic film. This reduces the repeated rendering involved in producing 3D stereoscopic film sources in three-dimensional production software, saving render time, lowering rendering cost, greatly shortening the production cycle, and improving working efficiency.
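The core conversion described above, deriving left- and right-eye views from a single rendered frame plus its Z-Depth map, can be sketched as a minimal depth-image-based-rendering pass. The patent leaves the shifting rule to the stereo camera plug-in; the `interocular` and `convergence` parameters and the simple horizontal-shift rule below are illustrative assumptions, and no hole-filling is performed:

```python
import numpy as np

def depth_to_stereo(frame, zdepth, interocular=6.0, convergence=0.5):
    """frame: (H, W) image channel; zdepth: (H, W) depth in [0, 1],
    whiter (larger) values meaning closer to the picture plane.
    Returns illustrative left- and right-eye views."""
    h, w = frame.shape
    # Parallax in pixels: points at the convergence depth get zero shift,
    # nearer points shift outward, farther points shift inward.
    parallax = (interocular * (zdepth - convergence)).round().astype(int)
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        xl = np.clip(cols + parallax[y], 0, w - 1)  # shift for left eye
        xr = np.clip(cols - parallax[y], 0, w - 1)  # shift for right eye
        left[y, xl] = frame[y]    # unshifted holes stay zero (no filling)
        right[y, xr] = frame[y]
    return left, right
```

With a depth map that sits entirely on the convergence plane, both views reproduce the input frame unchanged, which is a quick sanity check for the shift rule.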
To achieve the above object, a fast method for converting 3D images using a depth map is designed, characterized in that the following processing steps are carried out in three-dimensional software:
(1) create a three-dimensional scene model;
(2) set up a light camera in the three-dimensional scene;
(3) in the three-dimensional scene, measure the distance from the light camera to the far end of the object with the tape-measure tool, obtaining the depth value for Z-Depth;
(4) in the three-dimensional scene, obtain the measured distance value Length, and set Z-Depth from this distance value;
(5) set the maximum value Z-Depth_max and the minimum value Z-Depth_min for the Z-Depth values, render to an image, and finally obtain the depth map;
(6) import the video sequence frames; with the scene as reference (Orient Scene), find a horizontal position so as to determine the camera's perspective view of the video;
(7) import the depth map Z-Depth;
(8) add a stereoscopic camera (Stereo Render) using the Z channel, and perform forward and backward tracking solves on the currently adjusted sequence frame to obtain an information map of the Z direction for the entire frame sequence; the whiter the depth value, the closer the corresponding feature point in the image is to the picture plane, i.e. g(x) = n/λ, where g(x) is the depth-of-field degree, n is a proportionality constant, and λ is the black-and-white depth-of-field value;
(9) through the offset processing of the stereoscopic camera (Stereo Render), manually adjust the stereo interocular distance (Interocular) and the exact position of the zero-parallax plane (Convergence); finally, apply a blur filter through the Z-channel filter (Z-Depth Filter) to obtain a smooth solved map carrying the Z channel, and perform the stereoscopic render to output the frame sequence in 3D format.
The three-dimensional software includes 3ds Max.
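Steps (3) to (5) amount to normalising measured camera-to-object distances into grey values between the configured Z-Depth_min and Z-Depth_max planes, with whiter meaning closer (consistent with g(x) = n/λ). A minimal sketch, assuming a linear mapping, which the patent does not specify:

```python
import numpy as np

def distance_to_zdepth(length, z_min, z_max):
    """Map a measured distance `length` (same units as the scene
    tape-measure) to a Z-Depth grey value in [0, 1]:
    1.0 at the near plane z_min, 0.0 at the far plane z_max."""
    length = np.clip(length, z_min, z_max)  # clamp outside the planes
    return (z_max - length) / (z_max - z_min)
```

Applying this per pixel to the measured scene distances yields the grey-scale depth map that step (5) renders out.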
The present invention has the advantage that repeated rendering is reduced, render time is greatly shortened, rendering cost is lowered, the production cycle is substantially reduced, and working efficiency is improved.
Detailed description of the invention
Fig. 1 shows the relationship between depth of field and feature-point distance in the present invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawing and an embodiment.
Embodiment 1
Referring to Fig. 1, a fast method for converting 3D images using a depth map is characterized in that the following processing steps are carried out in three-dimensional software:
(1) create a three-dimensional scene model;
(2) set up a light camera in the three-dimensional scene;
(3) in the three-dimensional scene, measure the distance from the light camera to the far end of the object with the tape-measure tool, obtaining the depth value for Z-Depth;
(4) in the three-dimensional scene, obtain the measured distance value Length, and set Z-Depth from this distance value;
(5) set the maximum value Z-Depth_max and the minimum value Z-Depth_min for the Z-Depth values, render to an image, and finally obtain the depth map;
(6) import the video sequence frames; with the scene as reference (Orient Scene), find a horizontal position so as to determine the camera's perspective view of the video;
(7) import the depth map Z-Depth;
(8) add a stereoscopic camera (Stereo Render) using the Z channel, and perform forward and backward tracking solves on the currently adjusted sequence frame to obtain an information map of the Z direction for the entire frame sequence; the whiter the depth value, the closer the corresponding feature point in the image is to the picture plane, i.e. g(x) = n/λ, where g(x) is the depth-of-field degree, n is a proportionality constant, and λ is the black-and-white depth-of-field value; see Fig. 1, where h is the depth of field;
(9) through the offset processing of the stereoscopic camera (Stereo Render), manually adjust the stereo interocular distance (Interocular) and the exact position of the zero-parallax plane (Convergence); finally, apply a blur filter through the Z-channel filter (Z-Depth Filter) to obtain a smooth solved map carrying the Z channel, and perform the stereoscopic render to output the frame sequence in 3D format.
Further, the three-dimensional software includes 3ds Max.
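The Z-channel blur of step (9) can be sketched as a small smoothing pass over the depth map before the final stereo render. The patent names a "Z-Depth Filter" without specifying its kernel; the separable box blur below is an illustrative stand-in:

```python
import numpy as np

def blur_zdepth(zdepth, radius=1):
    """Box-blur an (H, W) depth map; `radius` is the kernel half-width
    in pixels. Edges are extended so the output keeps the input shape."""
    k = 2 * radius + 1
    h, w = zdepth.shape
    padded = np.pad(zdepth, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):          # accumulate the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)         # normalise to the kernel area
```

Smoothing the Z channel this way suppresses hard depth steps that would otherwise produce visible tearing at object edges in the shifted views.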
Through the above processing steps, a stereoscopic image is obtained quickly. A stereoscopic rendering test was carried out, with the following results:
In the whole procedure, rendering a single frame with a single camera takes T1 = 60 minutes;
rendering a stereoscopic-camera frame pair takes T2 = T1 * 2 = 60 min * 2 = 120 minutes;
the whole shot sequence is 100 frames, so rendering the single-camera frame sequence takes T3 = T1 * 100 = 60 min * 100 = 6000 minutes;
rendering the stereoscopic-camera frame sequence takes T4 = T2 * 100 = 120 min * 100 = 12000 minutes;
the stereoscopic-sequence production pass using Z-Depth takes T5 = 60 minutes;
producing the stereoscopic sequence with the three-dimensional software plus Z-Depth therefore takes T6 = T5 + T3 = 6060 minutes.
From these figures it can be seen that producing the stereo render with the three-dimensional software plus Z-Depth is far faster than rendering stereo directly in the three-dimensional software: T7 = T4 - T6 = 12000 min - 6060 min = 5940 minutes, i.e. the Z-Depth approach saves approximately 49.5% of the time. Repeated rendering when producing 3D stereoscopic film sources in three-dimensional production software is reduced, saving render time, lowering rendering cost, greatly shortening the production cycle, and improving working efficiency.
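The worked numbers above can be checked directly:

```python
# Reproducing the patent's timing example (all times in minutes).
T1 = 60                  # single-camera, single-frame render
T2 = T1 * 2              # stereo pair per frame
frames = 100             # length of the shot sequence
T3 = T1 * frames         # mono sequence render
T4 = T2 * frames         # conventional stereo sequence render
T5 = 60                  # one-off Z-Depth stereo conversion pass
T6 = T5 + T3             # mono render plus Z-Depth conversion
T7 = T4 - T6             # time saved by the Z-Depth route
saving = T7 / T4 * 100   # percentage of time saved
```

This confirms T4 = 12000, T6 = 6060, T7 = 5940, and a saving of 49.5%, matching the figures quoted in the text.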
In general, the conventional stereoscopic render mode renders the left and right eyes separately. Suppose there are n sequence frames with total time Tn′, the left-eye single-frame time is T1′, and the right eye's render-time error relative to the left eye is k1 ∈ (0, 0.01);
then rendering a single stereo frame takes T1′ + (T1′ + k1·T1′), and rendering the entire sequence takes Tn′ = n·T1′·(2 + k1).
The render mode that converts to stereo with a depth map renders only a single-eye sequence, plus the extra time to import the depth map. Suppose there are n sequence frames with total time Tn″, the single-frame render time is Tcenter, and the extra depth-import time is Tdepth, where Tdepth ≈ T1′; let k2 be the render-time error between a single left- or right-eye frame and Tcenter;
then the total time for the entire sequence is Tn″ = n·T1′·(1 + k2) + Tdepth. If the time saved by this render mode is Tm, then Tm = Tn′ − Tn″ = n·T1′·(1 + k1 − k2) − T1′.
The proportion of time saved by the depth-map conversion mode relative to the conventional stereoscopic render mode is q = Tm / Tn′ × 100%.
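The two timing models can be written out as functions, assuming (as the text implies) that Tcenter ≈ T1′ and Tdepth ≈ T1′:

```python
def conventional_stereo_time(n, t1, k1):
    """Tn' = n * T1' * (2 + k1): every frame rendered for both eyes,
    the right eye costing a factor (1 + k1) of the left."""
    return n * t1 * (2 + k1)

def depth_map_time(n, t1, k2):
    """Tn'' = n * T1' * (1 + k2) + Tdepth: one eye per frame plus a
    one-off depth-map import, with Tdepth approximated by T1'."""
    return n * t1 * (1 + k2) + t1

def time_saved(n, t1, k1, k2):
    """Tm = Tn' - Tn'' = n * T1' * (1 + k1 - k2) - T1'."""
    return conventional_stereo_time(n, t1, k1) - depth_map_time(n, t1, k2)
```

With n = 100, T1′ = 60 and k1 = k2 = 0 these reduce to the worked example: Tn′ = 12000, Tn″ = 6060, Tm = 5940 minutes.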

Claims (2)

1. A fast method for converting 3D images using a depth map, characterized in that the following processing steps are carried out in three-dimensional software:
(1) create a three-dimensional scene model;
(2) set up a light camera in the three-dimensional scene;
(3) in the three-dimensional scene, measure the distance from the light camera to the far end of the object with the tape-measure tool, obtaining the depth value for Z-Depth;
(4) in the three-dimensional scene, obtain the measured distance value Length, and set Z-Depth from this distance value;
(5) set the maximum value Z-Depth_max and the minimum value Z-Depth_min for the Z-Depth values, render to an image, and finally obtain the depth map;
(6) import the video sequence frames; with the scene as reference (Orient Scene), find a horizontal position so as to determine the camera's perspective view of the video;
(7) import the depth map Z-Depth;
(8) add a stereoscopic camera (Stereo Render) using the Z channel, and perform forward and backward tracking solves on the currently adjusted sequence frame to obtain an information map of the Z direction for the entire frame sequence; the whiter the depth value, the closer the corresponding feature point in the image is to the picture plane, i.e. g(x) = n/λ, where g(x) is the depth-of-field degree, n is a proportionality constant, and λ is the black-and-white depth-of-field value;
(9) through the offset processing of the stereoscopic camera (Stereo Render), manually adjust the stereo interocular distance (Interocular) and the exact position of the zero-parallax plane (Convergence); finally, apply a blur filter through the Z-channel filter (Z-Depth Filter) to obtain a smooth solved map carrying the Z channel, and perform the stereoscopic render to output the frame sequence in 3D format.
2. A fast method for converting 3D images using a depth map according to claim 1, characterized in that the three-dimensional software includes 3ds Max.
CN201710978092.4A 2017-10-18 2017-10-18 Rapid method for converting 3D image by using depth map Active CN109685885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710978092.4A CN109685885B (en) 2017-10-18 2017-10-18 Rapid method for converting 3D image by using depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710978092.4A CN109685885B (en) 2017-10-18 2017-10-18 Rapid method for converting 3D image by using depth map

Publications (2)

Publication Number Publication Date
CN109685885A (en) 2019-04-26
CN109685885B CN109685885B (en) 2023-05-23

Family

ID=66183546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710978092.4A Active CN109685885B (en) 2017-10-18 2017-10-18 Rapid method for converting 3D image by using depth map

Country Status (1)

Country Link
CN (1) CN109685885B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11861775B2 (en) 2019-10-17 2024-01-02 Huawei Technologies Co., Ltd. Picture rendering method, apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285779B1 (en) * 1999-08-02 2001-09-04 Trident Microsystems Floating-point complementary depth buffer
CN101208723A (en) * 2005-02-23 2008-06-25 Craig Summers Automatic scene modeling for the 3D camera and 3D video
US20120007950A1 (en) * 2010-07-09 2012-01-12 Yang Jeonghyu Method and device for converting 3d images
CN102333229A (en) * 2010-06-21 2012-01-25 壹斯特股份有限公司 Method and apparatus for converting 2d image into 3d image
US20130010067A1 (en) * 2011-07-08 2013-01-10 Ashok Veeraraghavan Camera and Method for Focus Based Depth Reconstruction of Dynamic Scenes
US20130060540A1 (en) * 2010-02-12 2013-03-07 Eidgenossische Tehnische Hochschule Zurich Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bian Lingyan: "Research on Depth-Map-Based 2D-to-3D Video Conversion Algorithms", China Master's Theses Full-text Database (Electronic Journal) *


Also Published As

Publication number Publication date
CN109685885B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
KR100966592B1 (en) Method for calibrating a camera with homography of imaged parallelogram
EP2323416A2 (en) Stereoscopic editing for video production, post-production and display adaptation
CN104918034A (en) 3D image capturing device, capturing method and 3D image system
CN102523464A (en) Depth image estimating method of binocular stereo video
CN102724531B (en) A kind of two-dimensional video turns the method and system of 3 D video
TW201243763A (en) Method for 3D video content generation
CN108053373A (en) One kind is based on deep learning model fisheye image correcting method
CN101287143A (en) Method for converting flat video to tridimensional video based on real-time dialog between human and machine
CN102881018B (en) Method for generating depth maps of images
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
CN106056622B (en) A kind of multi-view depth video restored method based on Kinect cameras
CN106447718B (en) A kind of 2D turns 3D depth estimation method
CN103634588A (en) Image composition method and electronic apparatus
CN102547350A (en) Method for synthesizing virtual viewpoints based on gradient optical flow algorithm and three-dimensional display device
TWI608447B (en) Stereo image depth map generation device and method
CN111047636B (en) Obstacle avoidance system and obstacle avoidance method based on active infrared binocular vision
CN109685885A (en) A kind of fast method using depth map conversion 3D rendering
CN102647602B (en) System for converting 2D (two-dimensional) video into 3D (three-dimensional) video on basis of GPU (Graphics Processing Unit)
CN104599308A (en) Projection-based dynamic mapping method
CN106534814B (en) A kind of method and apparatus that dual camera picture quality is synchronous
CN105578172A (en) Naked-eye 3D video displaying method based on Unity 3D engine
CN103413337B (en) A kind of color fog generation method based on man-machine interactively
CN103714543A (en) Simple tree dynamic programming binocular and stereo matching method based on invariant moment spatial information
CN106910240B (en) Real-time shadow generation method and device
CN109816710A (en) A kind of binocular vision system high-precision and the parallax calculation method without smear

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220419

Address after: 201400 No. 299, Zhuangwu Road, Zhuangxing Town, Fengxian District, Shanghai

Applicant after: ZHIZHUN ELECTRONIC SCIENCE AND TECHNOLOGY Co.,Ltd. SHANGHAI

Address before: 201415 room 130, No. 410, Zhuangbei Road, Zhuangxing Town, Fengxian District, Shanghai

Applicant before: SHANGHAI ZHIZUN CULTURE MEDIA DEVELOPMENT CO.,LTD.

GR01 Patent grant
GR01 Patent grant