CN105913499A - Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system - Google Patents
Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system
- Publication number
- CN105913499A (application number CN201610224160.3A)
- Authority
- CN
- China
- Prior art keywords
- depth map
- environment
- synthesis
- conversion
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Landscapes
- Engineering & Computer Science (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a three-dimensional conversion synthesis method and a three-dimensional conversion synthesis system. The method comprises: a first step of acquiring environment data, character data and background data; a second step of processing the acquired environment data to obtain an environment depth map; a third step of processing the acquired character data to obtain a character depth map; a fourth step of processing the environment depth map according to the background data and erasing the superfluous background; and a fifth step of combining the environment depth map, with the superfluous background erased, with the character depth map to obtain a three-dimensional conversion map, then ending. The novel workflow provided by the invention improves production precision and efficiency. Environment scanning accurately restores the sizes and positional relationships of all objects in the environment, and the depth channel of the whole scene can be output directly from the three-dimensional software. In this way, both the efficiency with which staff produce scene depth and the accuracy of the depth are improved.
Description
Technical field
The present invention relates to a three-dimensional conversion synthesis method and system.
Background technology
Traditional production workflow:
1. Keying;
2. Depth map production:
2.1 Global depth production: traditionally, the global depth is produced through the artist's own interpretation and judgment of the picture, after which the depth of the environment is matched by adjusting the keyed color blocks and keyed shapes. This introduces a large amount of error, and an inexperienced producer has a very high probability of misjudging shots with complex motion.
2.2 Character detail depth production: traditional production of character facial detail likewise relies on the artist's own interpretation and judgment of the character's face shape in the picture, after which the facial depth is matched by adjusting the keyed color blocks and keyed shapes. This also introduces a large amount of error. In particular, the depth changes across the whole face are extremely complex while the character's head swings sideways or up and down; to date there is no good solution for adjusting this manually, and the result can only rely on the artist's experienced judgment. Because the same character appears a great many times throughout a film, different artists form different understandings and judgments of the character's facial structure during manual adjustment, so the facial depth produced by different artists is uneven and cannot be matched effectively.
Summary of the invention
The technical problem to be solved by the present invention is to provide a three-dimensional conversion synthesis method and system that avoid the mis-operations caused by inexperienced personnel and improve the overall precision and effect of the three-dimensional conversion.
The technical solution adopted by the present invention is a three-dimensional conversion synthesis method, which specifically includes the following steps:
Step 1: acquire environment data, character data and background data;
Step 2: process the acquired environment data to obtain an environment depth map;
Step 3: process the acquired character data to obtain a character depth map;
Step 4: process the environment depth map according to the background data and erase the superfluous background;
Step 5: combine the environment depth map, with the superfluous background erased, with the character depth map to obtain a three-dimensional conversion map; end.
The three-dimensional conversion map is a 3D image with depth obtained through synthesis; played back continuously, the frames form a 3D film.
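Purely as an illustration of how the five claimed steps chain together, the following Python sketch wires them as interchangeable callables; every name and signature is an assumption of this sketch, not something the patent defines.

```python
# Minimal sketch of the five-step flow as a pipeline of callables.
from typing import Any, Callable

def conversion_pipeline(
    acquire: Callable[[], tuple],             # step 1: environment, character, background data
    env_depth_fn: Callable[[Any], Any],       # step 2: environment data -> environment depth map
    char_depth_fn: Callable[[Any], Any],      # step 3: character data -> character depth map
    erase_fn: Callable[[Any, Any], Any],      # step 4: erase superfluous background
    synthesize_fn: Callable[[Any, Any], Any], # step 5: combine into the 3D conversion map
):
    env_data, char_data, bg_data = acquire()
    env_depth = env_depth_fn(env_data)
    char_depth = char_depth_fn(char_data)
    env_depth = erase_fn(env_depth, bg_data)
    return synthesize_fn(env_depth, char_depth)
```

The more concrete sketches later in this document (environment depth rendering, background erasing, synthesis) are examples of functions that could be plugged into the individual steps.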
The beneficial effect of the present invention is that the innovative production workflow improves production precision and efficiency. Environment scanning accurately restores the sizes and positional relationships of all objects in the environment, and the depth channel of the whole scene can be output directly from the three-dimensional software. In this way, both the efficiency with which staff produce scene depth and the accuracy of that depth are improved.
On the basis of the above technical solution, the present invention can be further improved as follows.
Further, in step 1, radar scanning is used to acquire the environment data and digital scanning is used to acquire the character data.
The benefit of this further scheme is that the environment data and character data can be acquired by means of general-purpose imaging-object tracking software available in the prior art (for example, PFtrack).
Further, the environment data include environment image data and distance information data that record the distance between the environment image and the acquisition lens.
The benefit of this further scheme is that the distance information data make it possible to set the depth of field of the environment and thereby generate the environment depth map.
Further, step 2 specifically includes the following steps:
Step 2.1: perform modelling processing on the acquired environment data to obtain an environment mathematical model whose ratio to the current environment is a preset ratio, the optimal preset ratio being 1:1;
Step 2.2: render the environment mathematical model to obtain the environment depth map.
Further, step 2.2 also includes adjusting the environment depth map according to the distance information data in the environment data, to obtain an adjusted environment depth map.
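As an illustration of steps 2.1-2.2 and the adjustment above, the following Python sketch rasterizes a point-based environment model into a depth map with a naive z-buffer and then rescales it against the recorded distance information. The pinhole projection, the point-based model and the min/max normalization are assumptions made for this sketch; the patent does not prescribe them.

```python
# Minimal sketch (not the patent's implementation): render an environment depth
# map from model points in camera space, then adjust it with measured distances.
import numpy as np

def render_environment_depth(points_cam, fx, fy, cx, cy, width, height):
    """points_cam: (N, 3) environment model points in camera coordinates (meters)."""
    depth = np.full((height, width), np.inf)
    z = points_cam[:, 2]
    valid = z > 0                                   # keep points in front of the lens
    u = (fx * points_cam[valid, 0] / z[valid] + cx).astype(int)
    v = (fy * points_cam[valid, 1] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, zv = u[inside], v[inside], z[valid][inside]
    np.minimum.at(depth, (v, u), zv)                # nearest surface wins (z-buffer)
    depth[np.isinf(depth)] = 0.0                    # 0 marks pixels with no model coverage
    return depth

def adjust_with_distance_data(depth, measured_near_m, measured_far_m):
    """Step 2.2 adjustment: clamp and normalize to the measured lens-to-object range."""
    d = np.clip(depth, measured_near_m, measured_far_m)
    return (d - measured_near_m) / (measured_far_m - measured_near_m)
```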
Further, the character depth map includes a head depth map and a body depth map.
Further, step 3 specifically includes the following steps:
Step 3.1: build a digitized character model from the acquired character data;
Step 3.2: decompose the character model to obtain a head model and a body model;
Step 3.3: render the head model to obtain the head depth map, and render the body model to obtain the body depth map.
The benefit of this further scheme is that the character model is acquired by scanning the character from all sides with a plurality of cameras arranged around the character; the head model and body model are generated from the combination of these omnidirectional character images.
Further, step 3.3 also includes adjusting the obtained head depth map with the distance information data, to obtain an adjusted head depth map.
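A possible realization of steps 3.2-3.3 is sketched below: the character model is split into head and body point sets and each set is rendered into its own depth map. The neck-height split, the upright model coordinate frame and the renderer interface are illustrative assumptions, not details given in the patent.

```python
# Minimal sketch: decompose a digitized character model and render head/body depth maps.
import numpy as np

def split_character_model(points_model, neck_height_m):
    """Step 3.2: points above the assumed neck height form the head, the rest the body.
    Assumes an upright model coordinate frame with the y axis pointing up."""
    is_head = points_model[:, 1] > neck_height_m
    return points_model[is_head], points_model[~is_head]

def render_character_depths(points_model, neck_height_m, render_depth):
    """Step 3.3: render_depth is any function mapping model points to a depth map,
    e.g. the z-buffer sketch above with its camera transform and intrinsics fixed."""
    head_points, body_points = split_character_model(points_model, neck_height_m)
    return render_depth(head_points), render_depth(body_points)
```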
Further, a step a is also included between step 3 and step 4:
Step a: perform edge preparation on the environment depth map and the character depth map.
The benefit of this further scheme is that edge preparation is performed on the environment depth map and the character depth map using an edge-preparation method that is general in the prior art.
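The patent leaves the edge-preparation method open; one common prior-art choice, shown as a hedged sketch below, is to feather hard depth discontinuities with a small Gaussian blur restricted to a band around the detected edges. The thresholds and filter sizes are illustrative only.

```python
# Minimal sketch of one possible "edge preparation" step on a depth map.
import numpy as np
from scipy.ndimage import gaussian_filter

def feather_depth_edges(depth, edge_threshold=0.05, band_sigma=3.0, blur_sigma=1.5):
    gy, gx = np.gradient(depth)
    edges = np.hypot(gx, gy) > edge_threshold                    # where depth jumps sharply
    band = gaussian_filter(edges.astype(float), band_sigma) > 1e-3  # widen edges into a band
    blurred = gaussian_filter(depth, blur_sigma)
    return np.where(band, blurred, depth)                        # soften only near the edges
```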
The technical solution adopted by the present invention is also a three-dimensional conversion synthesis system, including an acquisition module, a depth processing module, a background erasing module and a synthesis module;
the acquisition module is used to acquire environment data, character data and background data;
the depth processing module is used to process the acquired environment data to obtain an environment depth map, and to process the acquired character data to obtain a character depth map;
the background erasing module is used to process the environment depth map according to the background data and erase the superfluous background;
the synthesis module is used to combine the environment depth map, with the superfluous background erased, with the character depth map to obtain a three-dimensional conversion map.
Brief description of the drawings
Fig. 1 is a flow chart of the three-dimensional conversion synthesis method described in embodiment 1 of the present invention;
Fig. 2 is a structural diagram of the three-dimensional conversion synthesis system described in embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of environment depth rendering in the method of the concrete example of the present invention;
Fig. 4 is a schematic diagram of character head depth rendering in the method of the concrete example of the present invention;
Fig. 5 is a schematic diagram of character body detail depth rendering in the method of the concrete example of the present invention.
In the drawings, the parts represented by the reference numerals are as follows:
1, acquisition module; 2, depth processing module; 3, background erasing module; 4, synthesis module.
Detailed description of the invention
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples serve only to explain the present invention and are not intended to limit its scope.
As shown in Fig. 1, the three-dimensional conversion synthesis method described in embodiment 1 of the present invention specifically comprises the following steps:
Step 1: acquire environment data, character data and background data;
Step 2: process the acquired environment data to obtain an environment depth map;
Step 3: process the acquired character data to obtain a character depth map;
Step 4: process the environment depth map according to the background data and erase the superfluous background;
Step 5: combine the environment depth map, with the superfluous background erased, with the character depth map to obtain a three-dimensional conversion map; end.
The three-dimensional conversion synthesis method described in embodiment 2 of the present invention is based on embodiment 1; in step 1, radar scanning is used to acquire the environment data and digital scanning is used to acquire the character data.
The three-dimensional conversion synthesis method described in embodiment 3 of the present invention is based on embodiment 1 or 2; the environment data include environment image data and distance information data recording the distance between the environment image and the acquisition lens.
The three-dimensional conversion synthesis method described in embodiment 4 of the present invention is based on embodiment 3; step 2 specifically includes the following steps:
Step 2.1: perform modelling processing on the acquired environment data to obtain an environment mathematical model at a 1:1 ratio to the current environment;
Step 2.2: render the environment mathematical model to obtain the environment depth map.
The three-dimensional conversion synthesis method described in embodiment 5 of the present invention is based on embodiment 4; step 2.2 also includes adjusting the environment depth map according to the distance information data in the environment data, to obtain an adjusted environment depth map.
The three-dimensional conversion synthesis method described in embodiment 6 of the present invention is based on any one of embodiments 1-5; the character depth map includes a head depth map and a body depth map.
The three-dimensional conversion synthesis method described in embodiment 7 of the present invention is based on embodiment 6; step 3 specifically includes the following steps:
Step 3.1: build a digitized character model from the acquired character data;
Step 3.2: decompose the character model to obtain a head model and a body model;
Step 3.3: render the head model to obtain the head depth map, and render the body model to obtain the body depth map.
The three-dimensional conversion synthesis method described in embodiment 8 of the present invention is based on embodiment 7; step 3.3 also includes adjusting the obtained head depth map with the distance information data, to obtain an adjusted head depth map.
The three-dimensional conversion synthesis method described in embodiment 9 of the present invention is based on any one of embodiments 1-8; a step a is also included between step 3 and step 4:
Step a: perform edge preparation on the environment depth map and the character depth map.
A concrete example of the present invention comprises the following steps:
1. Keying:
Character scanning: the character scanning device is a purpose-built set of body-scanning hardware with an integrated software system;
Scene scanning.
2. Tracking:
2.1 Character head tracking: head tracking is realized with the PFtrack software;
2.2 Scene tracking: scene tracking is realized with the PFtrack software.
3. Depth map production:
3.1 Global depth rendering: as shown in Fig. 3, once the environment has been matched, the environment depth map can be rendered;
3.2 Character head depth rendering: as shown in Fig. 4, once the character's head has been matched, the head depth map can be rendered;
3.3 Character body detail depth production: as shown in Fig. 5;
3.4 Edge detail arrangement.
4. Background erasing:
Superfluous background appears after the conversion; it is erased in post-production using software (see the sketch after this list).
5. Three-dimensional synthesis.
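As a hedged illustration of production steps 4 and 5 above, the following sketch removes masked superfluous-background pixels from the environment depth map and then composites the result with the character depth map. The zero-means-empty convention and the mask input are assumptions of this sketch, not details stated in the patent.

```python
# Minimal sketch of background erasing and depth-map synthesis.
import numpy as np

def erase_superfluous_background(env_depth, background_mask):
    """background_mask: boolean (H, W), True where the superfluous background is removed."""
    out = env_depth.copy()
    out[background_mask] = 0.0            # 0 marks "no depth" in this sketch
    return out

def synthesize(env_depth, char_depth):
    """Character pixels (nonzero) take priority; elsewhere keep the environment depth."""
    return np.where(char_depth > 0, char_depth, env_depth)
```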
As shown in Fig. 2, the three-dimensional conversion synthesis system described in embodiment 1 of the present invention includes an acquisition module 1, a depth processing module 2, a background erasing module 3 and a synthesis module 4;
the acquisition module 1 is used to acquire environment data, character data and background data;
the depth processing module 2 is used to process the acquired environment data to obtain the environment depth map, and to process the acquired character data to obtain the character depth map;
the background erasing module 3 is used to process the environment depth map according to the background data and erase the superfluous background;
the synthesis module 4 is used to combine the environment depth map, with the superfluous background erased, with the character depth map to obtain the three-dimensional conversion map.
The environment scanning process in the embodiment of the present invention: at the film set, whether it is a real location or a studio environment, the surrounding environment within 300 meters is scanned by radar to collect distance data. The collected data are then processed into a model, generating a mathematical model at a 1:1 ratio to the shooting environment. In post-production the generated model can be matched to the filmed picture with high precision, and a dynamic depth map is generated from the model, thereby avoiding the mis-operations caused by inexperienced personnel and improving the overall precision and effect of the three-dimensional conversion.
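To make the "dynamic depth map generated from the model" concrete, the sketch below renders one environment depth frame per tracked camera pose. The pose format (a 3x4 world-to-camera matrix per frame) and the renderer interface are assumptions of this sketch; in practice the per-frame camera would come from tracking software such as PFtrack.

```python
# Minimal sketch: a dynamic depth channel from the matched 1:1 environment model.
import numpy as np

def dynamic_environment_depth(model_points_world, camera_poses, render_depth):
    """model_points_world: (N, 3); camera_poses: list of (3, 4) world-to-camera matrices;
    render_depth: maps camera-space points to a depth map (e.g. the z-buffer sketch above
    with its intrinsics fixed)."""
    frames = []
    for pose in camera_poses:
        R, t = pose[:, :3], pose[:, 3]
        points_cam = model_points_world @ R.T + t   # transform the model into camera space
        frames.append(render_depth(points_cam))
    return np.stack(frames)                         # (num_frames, H, W) depth channel
```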
The character scanning process in the embodiment of the present invention: the faces of the leading characters are digitally scanned and a digitized character model is generated from the collected data; in this way a head mathematical model that closely matches the real character can be obtained. In post-production the generated head model can be matched to the filmed character with high precision, and a dynamic depth map is generated from the model. This resolves the mismatches caused by inexperienced personnel and by differences in production method, and improves the overall precision and effect of the three-dimensional conversion.
The superfluous-background erasing process in the embodiment of the present invention: the scheme of the present invention greatly reduces the workload of background erasing. Before shooting begins, we become involved at the shooting-plan stage and analyze which shots will require background plates to be collected. For shots that need background erasing, our staff provide technical support on set and arrange for the crew to capture clean plates of the background in the picture; at the same time, we scan the shooting environment for use in post-production.
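One common way (assumed here, not specified by the patent) to turn the captured clean background plates into the mask consumed by the erasing step is a simple per-pixel difference against the plate:

```python
# Minimal sketch: derive a superfluous-background mask from a clean background plate.
import numpy as np

def background_mask_from_clean_plate(frame_rgb, clean_plate_rgb, threshold=12.0):
    """frame_rgb, clean_plate_rgb: (H, W, 3) uint8 images of the same framing."""
    diff = np.abs(frame_rgb.astype(np.float32) - clean_plate_rgb.astype(np.float32))
    return diff.mean(axis=2) < threshold   # True where the frame matches the clean plate
```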
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
1. A three-dimensional conversion synthesis method, characterized by specifically comprising the following steps:
Step 1: acquiring environment data, character data and background data;
Step 2: processing the acquired environment data to obtain an environment depth map;
Step 3: processing the acquired character data to obtain a character depth map;
Step 4: processing the environment depth map according to the background data and erasing the superfluous background;
Step 5: combining the environment depth map, with the superfluous background erased, with the character depth map to obtain a three-dimensional conversion map; end.
2. The three-dimensional conversion synthesis method according to claim 1, characterized in that in step 1 radar scanning is used to acquire the environment data and digital scanning is used to acquire the character data.
3. The three-dimensional conversion synthesis method according to claim 1, characterized in that the environment data include environment image data and distance information data recording the distance between the environment image and the acquisition lens.
4. The three-dimensional conversion synthesis method according to claim 3, characterized in that step 2 specifically comprises the following steps:
Step 2.1: performing modelling processing on the acquired environment data to obtain an environment mathematical model whose ratio to the current environment is a preset ratio;
Step 2.2: rendering the environment mathematical model to obtain the environment depth map.
5. The three-dimensional conversion synthesis method according to claim 4, characterized in that step 2.2 further comprises adjusting the environment depth map according to the distance information data in the environment data, to obtain an adjusted environment depth map.
6. The three-dimensional conversion synthesis method according to any one of claims 1-5, characterized in that the character depth map includes a head depth map and a body depth map.
7. The three-dimensional conversion synthesis method according to claim 6, characterized in that step 3 specifically comprises the following steps:
Step 3.1: building a digitized character model from the acquired character data;
Step 3.2: decomposing the character model to obtain a head model and a body model;
Step 3.3: rendering the head model to obtain the head depth map, and rendering the body model to obtain the body depth map.
8. The three-dimensional conversion synthesis method according to claim 7, characterized in that step 3.3 further comprises adjusting the obtained head depth map with the distance information data, to obtain an adjusted head depth map.
9. The three-dimensional conversion synthesis method according to claim 1, characterized in that a step a is further included between step 3 and step 4:
Step a: performing edge preparation on the environment depth map and the character depth map.
10. A three-dimensional conversion synthesis system, characterized by comprising an acquisition module, a depth processing module, a background erasing module and a synthesis module;
the acquisition module is used to acquire environment data, character data and background data;
the depth processing module is used to process the acquired environment data to obtain an environment depth map, and to process the acquired character data to obtain a character depth map;
the background erasing module is used to process the environment depth map according to the background data and erase the superfluous background;
the synthesis module is used to combine the environment depth map, with the superfluous background erased, with the character depth map to obtain the three-dimensional conversion map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610224160.3A CN105913499A (en) | 2016-04-12 | 2016-04-12 | Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610224160.3A CN105913499A (en) | 2016-04-12 | 2016-04-12 | Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105913499A true CN105913499A (en) | 2016-08-31 |
Family
ID=56745920
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610224160.3A Pending CN105913499A (en) | 2016-04-12 | 2016-04-12 | Three-dimensional conversion synthesis method and three-dimensional conversion synthesis system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105913499A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102685533A (en) * | 2006-06-23 | 2012-09-19 | 图象公司 | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
CN101536040A (en) * | 2006-11-17 | 2009-09-16 | 汤姆森许可贸易公司 | System and method for model fitting and registration of objects for 2D-to-3D conversion |
CN102106152A (en) * | 2008-07-24 | 2011-06-22 | 皇家飞利浦电子股份有限公司 | Versatile 3-D picture format |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107483845A (en) * | 2017-07-31 | 2017-12-15 | 广东欧珀移动通信有限公司 | Photographic method and its device |
CN107483845B (en) * | 2017-07-31 | 2019-09-06 | Oppo广东移动通信有限公司 | Photographic method and its device |
CN107680169A (en) * | 2017-09-28 | 2018-02-09 | 宝琳创展国际文化科技发展(北京)有限公司 | Apply and make general and VR stereopsis method in depth map addition curved surface |
CN107680169B (en) * | 2017-09-28 | 2021-05-11 | 宝琳创展科技(佛山)有限公司 | Method for manufacturing VR (virtual reality) stereoscopic image by adding curved surface to depth map |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20160831