CN107918948A - 4D video rendering method - Google Patents
4D video rendering method
- Publication number
- CN107918948A (application CN201711061009.3A / CN201711061009A; granted as CN107918948B)
- Authority
- CN
- China
- Prior art keywords
- triangular facet
- texture
- visible
- calibration
- camera
- Prior art date: 2017-11-02
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Generation (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a 4D video rendering method comprising camera selection, visible patch computation, visible patch texture extraction, fusion weight map computation, edge fusion, and composite rendering. The method performs efficient, high-quality texture-mapped rendering of the three-dimensional model of the scene subject obtained by 4D capture, using the multi-camera image data acquired during shooting, so that the three-dimensional model presents a lifelike 4D video effect.
Description
Technical field
The present invention relates to the field of multimedia technology, and in particular to a 4D video rendering method.
Background technology
A 4D video comprises four dimensions: three spatial dimensions and one temporal dimension. 4D holographic capture refers to the high-quality, lifelike 4D virtual reproduction of a static or dynamic real-world scene. Existing 3D films typically form separate left-eye and right-eye views at the viewer's viewing angle, creating a stereoscopic 3D viewing experience at that angle; in practice, however, they only recover 2.5-dimensional depth information and do not restore the true 3D appearance of the photographed object.
High-quality real-time 4D video has many potential applications in virtual reality (VR), augmented reality (AR) and mixed reality (MR). For example, it can be used for character creation in 3D games and 3D animation; in the garment industry, 4D video enables scenarios such as virtual online fitting and virtual fashion shows of the photographed subject; in the film and broadcast media field, it can be combined with AR technologies for AR broadcasting and visual effects; and in the sports field, it can be used for training and competition analysis.
Although the VR market at home and abroad already offers devices and products such as VR panoramic cameras, VR glasses and headsets, VR content worth watching remains scarce, which limits the adoption of related products and applications. Existing portable panoramic cameras can shoot 360-degree video, but the main viewing position and movement within a 360-degree video are determined entirely by the photographer and cannot be chosen by the user; moreover, the resulting 360-degree video is not true VR video. 3D human body models can currently be created with 3D software tools such as 3ds Max and Maya, but this CG modeling process is complicated, its fidelity is low and the resulting images are severely distorted, so it cannot meet the requirements of true VR video. In short, current methods can neither satisfy the demands of VR video nor create a lifelike photographed object in 3D form within a virtual or mixed world.
How to perform high-quality texture-mapped rendering of the reconstructed 3D mesh model of the photographed subject is therefore one of the key problems in obtaining high-fidelity 4D video. Existing 3D mesh rendering methods suffer from heavy computation, low processing efficiency and poor fidelity of the rendered object. The low processing efficiency is especially pronounced when handling image sequences composed of large numbers of images.
Summary of the invention
The object of the present invention is to provide a 4D video rendering method that solves the technical problems of existing texture-mapping rendering methods, namely heavy computation, low processing efficiency and poor fidelity of the rendered object.
The present invention provides a 4D video rendering method comprising the following steps:
Step S100: calibrate the camera parameters of N calibrated cameras using a multi-camera synchronous calibration method, then set the color correction matrix parameters of each calibrated camera and obtain the projection matrix P(i) of each calibrated camera; arrange the calibrated cameras around the photographed object and capture an image sequence of the photographed object;
Step S200: extract sequence feature information of the photographed object from each frame of the image sequence; using a multi-camera 3D reconstruction method based on the sequence feature information, obtain the 3D point cloud data of the photographed object in each frame; triangulate the 3D point cloud data to obtain the 3D mesh model of the photographed object in each frame;
Step S300: take the m calibrated cameras whose viewing directions form a sufficiently small angle with the virtual viewpoint as the adjacent cameras C = {c1, c2, ..., cm}; compute the triangular facets of the 3D mesh model that the adjacent cameras C can photograph, as visible triangular facets; traverse the visible triangular facet set S = {p1, p2, ..., pn} and obtain the texture list and color value list of each visible triangular facet p(i);
Step S400: after computing the texture synthesis weight values of the image sequences acquired by the adjacent cameras, perform texture synthesis for each visible triangular facet; traverse all visible triangular facets to obtain the texture synthesis result;
Step S400 comprises the following steps:
Step S410: according to the closeness between each adjacent camera and the virtual viewpoint, compute the synthesis weight value of each texture in the texture list, such that an adjacent camera closer to the virtual viewpoint receives a higher weight;
Step S420: traverse the visible triangular facet set S = {p1, p2, ..., pn}; take the weighted average of the textures in the texture list using the synthesis weight values as the texture synthesis result of the visible triangular facet p(i), and take the weighted average of the pixel color values in the aforementioned color value list as the color value of each vertex of the visible triangular facet p(i);
Step S430: traverse the visible triangular facet set S = {p1, p2, ..., pn} and judge whether each visible triangular facet has a complete texture on any adjacent camera; if a visible triangular facet has no complete texture belonging to any adjacent camera, perform texture synthesis from the weighted average color values of its three vertices; if it has a complete texture belonging to some adjacent camera, process it according to step S420;
Step S500: feather the edges of the texture synthesis result, then render and display the synthesized texture together with the 3D mesh model, obtaining the textured 3D rendered model corresponding to the virtual viewpoint.
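For orientation only, the following is a minimal, non-authoritative sketch of how steps S100-S500 could be orchestrated for one frame. Every name used here (render_frame, steps.select_adjacent_cameras and the other step implementations) is a hypothetical placeholder and is not part of the claimed method.

```python
def render_frame(frame, steps, virtual_view, m=3):
    """Orchestrate one frame of 4D video rendering (steps S100-S500 of the text).

    frame: per-frame inputs (calibrated images, projection matrices, reconstructed mesh).
    steps: object supplying the individual step implementations; all attribute
           names below are illustrative placeholders, not a fixed API.
    """
    adjacent  = steps.select_adjacent_cameras(frame, virtual_view, m)   # S100/S300: nearest m cameras
    visible   = steps.compute_visible_facets(frame.mesh, adjacent)      # S300: normal/angle visibility test
    tex, cols = steps.extract_facet_textures(visible, frame, adjacent)  # S300: texture and color value lists
    weights   = steps.synthesis_weights(adjacent, virtual_view)         # S410: closeness-based weights
    synth     = steps.synthesize_textures(visible, tex, cols, weights)  # S420/S430: weighted blending
    feathered = steps.feather_edges(visible, synth, adjacent)           # S500: edge feathering
    return steps.render(frame.mesh, feathered, virtual_view)            # S500: render and display
```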
Further, step S300 comprises the following steps:
Step S310: compute the normal vector. Let the three vertex coordinates of any triangular facet of the 3D mesh model be V1, V2 and V3; the normal vector N of the triangular facet is then
N = Norm((V2 - V1) × (V1 - V3))
where × denotes the vector cross product and Norm denotes vector normalization;
Step S320: compute the angle between the normal vector N and the viewing direction of the adjacent camera, and judge whether the angle exceeds a preset threshold; when the angle is greater than the preset threshold, the triangular facet is considered a visible triangular facet;
Step S330: repeat steps S310-S320 for all triangular facets of the 3D mesh model to obtain the visible triangular facet set S = {p1, p2, ..., pn}, where pi denotes the i-th triangular facet and n is the number of facets in the set.
Further, the visible triangular facet texture extraction comprises the following steps: S340: compute the projection of any visible triangular facet p(i) onto an adjacent camera ci; if the resulting projection is not covered by the projection of another visible triangular facet, then p(i) is visible to the adjacent camera ci; extract the image region covered by the projection of p(i) on ci as an entry of the texture list of pi on ci, and record the pixel color values at the projections of the three vertices of p(i) in the corresponding image as the color value list; traverse the visible triangular facet set S = {p1, p2, ..., pn} to obtain the texture list and color value list of each visible triangular facet p(i).
Further, step S430 comprises the following steps: judge how many of the vertices of the visible triangular facet have color values, i.e. are visible to some adjacent calibrated camera; if two or three vertices are visible, take the linear weighted average of the three vertex color values, weighted by vertex distance, as the texture pixel value of the visible triangular facet; if only one vertex is visible, use the weighted average color value of that vertex as the texture pixel value of the visible triangular facet.
Further, step S500 comprises the following steps:
Step S510: examine the texture synthesis result of each triangular facet belonging to the set S = {p1, p2, ..., pn}; if the synthesized textures of any two adjacent visible triangular facets come from images of different adjacent cameras, feather the texture synthesis results of these two visible triangular facets to obtain the final texture, and treat the two visible triangular facets as feathering triangular facets;
Step S520: compute the feathering weight value of each texture object of the feathering triangular facet, and take the weighted average, using the feathering weight values, of the textures of the adjacent cameras corresponding to the feathering triangular facet as the final texture; if the feathering weight value of one adjacent camera exceeds twice the feathering weight value of every other adjacent camera, use the texture of that adjacent camera as the texture of the feathering triangular facet.
Further, the acquisition of the 3D mesh model comprises the following steps:
1) extract feature information of the photographed object from each frame of the image sequence;
2) according to the feature information and the spatial smoothness between features, construct a residual equation that fuses contour, edge, color and depth information;
3) solve the residual equation by global optimization to obtain point cloud data;
4) triangulate the reconstructed point cloud data to obtain the 3D mesh model.
Further, the multi-camera synchronous calibration method comprises the following steps:
1) in the shooting area of the multi-camera image sequence acquisition system, wave and move a calibration bar with a calibration marker mounted at each end past every calibrated camera, obtaining calibration image sequences;
2) detect and locate the calibration markers in each frame of the calibration image sequence obtained by each calibrated camera, and extract the contour of the calibration markers in each frame;
3) estimate approximate values of the parameters of each calibrated camera from the calibration marker feature points in any two adjacent calibration frames and the distance between the same calibration marker in those two frames;
4) using the approximate values as initial values, compute the precise camera parameters of each calibrated camera by iterative bundle adjustment optimization;
the camera parameters include the camera intrinsic parameters, the position and orientation parameters relative to a common spatial coordinate system, and the white balance gain parameters.
Further, the multi-camera image sequence acquisition system comprises the N calibrated cameras and a curtain; the calibrated cameras are fixed on a frame and arranged around the photographed object, the imaging background of each calibrated camera is the curtain, and the curtain is arranged outside the frame.
Technical effects of the present invention:
The present invention provides a 4D video rendering method that, according to the virtual viewpoint from which the user observes, selects the adjacent cameras supplying the corresponding images, performs visible triangular facet extraction and texture extraction on the images obtained by those cameras and their neighbors, recomputes the texture and color values of the extracted textures according to the visibility of each adjacent camera, and applies texture extraction and feathered rendering to the visible triangular facets. This both improves processing efficiency and yields a display result with high fidelity. The texture rendering of an entire image sequence can be completed at high speed, producing a 4D video that restores the photographed object with high fidelity. The resulting 3D rendered model can be freely rotated according to the user's viewing needs, giving a highly lifelike 3D rendered model.
The present invention provides a 4D video rendering method that applies multiple weight settings to the visible triangular facets of the adjacent cameras chosen according to the virtual viewpoint, improving the image restoration of the region visible to the user, improving texture mapping quality and rendering speed, and enabling efficient, high-quality texture mapping and real-time rendering of 3D models from multiple views.
Reference is made to the following description of the various embodiments of the 4D video rendering method according to the present invention, which will make the above and other aspects of the present invention apparent.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the 4D video rendering method for 4D capture provided by the present invention;
Fig. 2 is a schematic diagram of the camera arrangement in a preferred embodiment of the present invention;
Fig. 3 is a schematic diagram of the visible patch computation in a preferred embodiment of the present invention;
Fig. 4 is the 3D patch model of the main object in a 4D video in a preferred embodiment of the present invention;
Fig. 5 is a schematic diagram of the rendering result in a preferred embodiment of the present invention.
Embodiment
The accompanying drawings, which form a part of the present application, are provided for a further understanding of the present invention; the schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an undue limitation of the present invention.
Referring to Fig. 1, the 4D video rendering method provided by the present invention comprises the following steps:
Step S100: calibrate the camera parameters of N calibrated cameras using a multi-camera synchronous calibration method, then set the color correction matrix parameters of each calibrated camera and obtain the projection matrix P(i) of each calibrated camera; arrange the calibrated cameras around the photographed object and capture an image sequence of the photographed object. The intrinsic parameters include internal parameters such as the equivalent focal length of the camera and the principal point coordinates of the optical center.
Step S200: extract sequence feature information of the photographed object from each frame of the image sequence; using a multi-camera 3D reconstruction method based on the sequence feature information, obtain the 3D point cloud data of the photographed object in each frame; triangulate the 3D point cloud data to obtain the 3D mesh model of the photographed object in each frame;
Step S300: take the m calibrated cameras whose viewing directions form a sufficiently small angle with the virtual viewpoint as the adjacent cameras C = {c1, c2, ..., cm}; compute the triangular facets of the 3D mesh model that the adjacent cameras C can photograph, as visible triangular facets; traverse the visible triangular facet set S = {p1, p2, ..., pn} and obtain the texture list and color value list of each visible triangular facet p(i);
Step S400: after computing the texture synthesis weight values of the image sequences acquired by the adjacent cameras, perform texture synthesis for each visible triangular facet; traverse all visible triangular facets to obtain the texture synthesis result;
Step S400 comprises the following steps:
Step S410: according to the closeness between each adjacent camera and the virtual viewpoint, compute the synthesis weight value of each texture in the texture list, such that an adjacent camera closer to the virtual viewpoint receives a higher weight;
Step S420: traverse the visible triangular facet set S = {p1, p2, ..., pn}; take the weighted average of the textures in the texture list using the synthesis weight values as the texture synthesis result of the visible triangular facet p(i), and take the weighted average of the pixel color values in the aforementioned color value list as the color value of each vertex of the visible triangular facet p(i);
Step S430: traverse the visible triangular facet set S = {p1, p2, ..., pn} and judge whether each visible triangular facet has a complete texture on any adjacent camera; if a visible triangular facet has no complete texture belonging to any adjacent camera, perform texture synthesis from the weighted average color values of its three vertices; if it has a complete texture belonging to some adjacent camera, process it according to step S420;
Step S500: feather the edges of the texture synthesis result, then render and display the synthesized texture together with the 3D mesh model, obtaining the textured 3D rendered model corresponding to the virtual viewpoint.
Steps S100-S500 are repeated for each frame contained in the image sequence of the photographed object; playing back the rendered subject images frame by frame in sequence yields the rendered 4D holographic video. In use, the user can freely rotate the 3D mesh model as needed and, after each rotation, obtain the 3D mesh model of the photographed object at a different angle.
The calibrated cameras used here are arranged in a multi-camera image sequence acquisition system. In the present invention, the virtual viewpoint refers to the observation viewpoint corresponding, at a given moment, to the user who observes the model interactively. "Visible" here refers to the part that the user can see when observing the 3D mesh model from the virtual viewpoint. Texture mapping only needs to be performed for the triangular facets that can be seen; a "visible" triangular facet is one that is seen from the virtual viewpoint and is not occluded. Processing in this way effectively improves processing efficiency and reduces the amount of computation. The selection principle for the adjacent cameras is that the angle between a calibrated camera and the virtual viewpoint direction is sufficiently small.
In order that each visible triangular facet can have a texture synthesized, it is preferred that the number of adjacent cameras m >= 3.
Preferably, step S300 comprises the following steps:
Step S310: compute the normal vector. Let the three vertex coordinates of any triangular facet of the 3D mesh model be V1, V2 and V3; the normal vector N of the triangular facet is then
N = Norm((V2 - V1) × (V1 - V3))
where × denotes the vector cross product and Norm denotes vector normalization;
Step S320: compute the angle between the normal vector N and the viewing direction of the adjacent camera, and judge whether the angle exceeds a preset threshold; when the angle is greater than the preset threshold, the triangular facet is considered a visible triangular facet;
Step S330: repeat steps S310-S320 for all triangular facets of the 3D mesh model to obtain the visible triangular facet set S = {p1, p2, ..., pn}, where pi denotes the i-th triangular facet and n is the number of facets in the set.
The threshold here can be set to 90°; it can also be set as needed.
Preferably, the visible triangular facet texture extraction comprises the following steps:
S340: compute the projection of any visible triangular facet p(i) onto the adjacent camera ci; if the resulting projection is not covered by the projection of another visible triangular facet, then p(i) is visible to the adjacent camera ci; extract the image region covered by the projection of p(i) on ci as an entry of the texture list of pi on ci, and record the pixel color values at the projections of the three vertices of p(i) in the corresponding image as the color value list; traverse the visible triangular facet set S = {p1, p2, ..., pn} to obtain the texture list and color value list of each visible triangular facet p(i).
Specifically, the projection of pi onto ci is computed with the projection matrix P(i) of ci; if this projection is not covered by the projection of any other triangular facet, then pi is visible to ci, and the image region covered by the projection is taken as the texture of pi on ci; at the same time, the pixel color values at the positions where the three vertices project onto the image of pi are recorded. Through this step, for each triangular facet belonging to the set S = {p1, p2, ..., pn}, the texture list on the corresponding adjacent camera images and the color value list corresponding to each vertex can be found.
Preferably, step S430 comprises the following steps: judge how many of the vertices of the visible triangular facet have color values, i.e. are visible to some adjacent calibrated camera; if two or three vertices are visible, take the linear weighted average of the three vertex color values, weighted by vertex distance, as the texture pixel value of the visible triangular facet; if only one vertex is visible, use the weighted average color value of that vertex as the texture pixel value of the visible triangular facet.
Preferably, step S500 comprises the following steps:
Step S510: examine the texture synthesis result of each triangular facet belonging to the set S = {p1, p2, ..., pn}; if the synthesized textures of any two adjacent visible triangular facets come from images of different adjacent cameras, feather the texture synthesis results of these two visible triangular facets to obtain the final texture, and treat the two visible triangular facets as feathering triangular facets;
Step S520: compute the feathering weight value of each texture object of the feathering triangular facet, and take the weighted average, using the feathering weight values, of the textures of the adjacent cameras corresponding to the feathering triangular facet as the final texture; if the feathering weight value of one adjacent camera exceeds twice the feathering weight value of every other adjacent camera, use the texture of that adjacent camera as the texture of the feathering triangular facet.
Preferably, the feathering weight value in step S520 is obtained by averaging three weight factors:
weight factor one is the closeness between the calibrated camera adjoining the feathering triangular facet and the virtual viewpoint;
weight factor two is the degree of visibility of the feathering triangular facet object to the corresponding adjacent camera;
weight factor three is the relative size of the projected area of the triangular facet object on the image plane of each adjacent camera;
these three weight factor values are all normalized to between 0 and 1, and their average is taken as the feathering weight value. The feathering weight of the feathering triangular facet on each adjacent camera is thereby obtained.
This processing effectively reduces the texture blurring caused by texture blending and improves the lifelikeness of the texture.
Specifically, the method provided by the present invention comprises the following steps:
First, the video images of the main object (for example a human body) captured synchronously by the multiple cameras and the 3D patch model data of the main object must be obtained. Let the number of cameras be N; the multi-camera images at each moment t are called the multi-view of moment t and are denoted {I(t,i) | i = 1, 2, ..., N}, where i denotes the i-th camera and I(t,i) is the image acquired by the i-th camera at moment t. The cameras have been calibrated, i.e. the projection matrix P(i) of each camera is known. The known 3D object data generally use a triangular patch representation, i.e. the 3D model is composed of many triangular facets (including data such as vertices and normal vectors). Let the 3D object obtained at moment t be S(t). The goal of 4D video rendering is to perform lifelike texture mapping of the 3D object from the multi-view data and the 3D reconstruction data.
Step S100, camera selection: according to the virtual viewpoint direction, select the several closest cameras and use their images as the basic data for model texture synthesis. Specifically, according to the virtual viewpoint direction, select m camera images as the data for texture synthesis, obtaining a set of adjacent cameras C = {c1, c2, ..., cm}. The selection principle is that the angle between each camera ci and the virtual viewpoint direction should be sufficiently small, where the camera direction is the direction of the camera optical axis towards the photographed object. In order that every visible triangular facet lies within the range where a texture can be synthesized, the number m of selected cameras should be large enough; generally m >= 3 is chosen. Fig. 2 is a schematic diagram of camera selection. As shown in the figure, the user's interactive observation angle corresponds to a virtual viewpoint, which in turn corresponds to the illustrated adjacent cameras. Here, the three cameras whose directions form the smallest angles with the virtual viewpoint direction, namely calibrated camera 1, calibrated camera 2 and calibrated camera 3, are selected as the image sources for texture extraction and rendering.
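A minimal sketch of the camera selection in step S100, assuming each camera direction and the virtual viewpoint direction are available as 3D vectors; the function name and data layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def select_adjacent_cameras(camera_dirs, view_dir, m=3):
    """Select the m cameras whose optical-axis directions form the smallest
    angle with the virtual viewpoint direction (step S100).

    camera_dirs: (N, 3) array of camera direction vectors, one per calibrated camera.
    view_dir:    (3,) virtual viewpoint direction vector.
    """
    camera_dirs = np.asarray(camera_dirs, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    dirs = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    # Angle between each camera direction and the viewpoint direction; smaller is closer.
    angles = np.arccos(np.clip(dirs @ view_dir, -1.0, 1.0))
    return np.argsort(angles)[:m]   # indices of the m adjacent cameras
```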
Step S200, visible patch calculation: compute the set of 3D model triangular facets that can be "seen" from the virtual viewpoint direction. Texture mapping only needs to be performed for the triangular facets that can be seen. First compute the normal vector of each triangular facet: let the three vertex coordinates of a triangular facet be V1, V2 and V3; its normal vector N is computed as
N = Norm((V2 - V1) × (V1 - V3))
where × denotes the vector cross product and Norm denotes vector normalization. For each triangular facet, compute the angle between its normal vector and the viewing direction of the adjacent camera; when the angle is greater than the set threshold, the triangular facet is considered potentially visible to the adjacent camera. Generally the angle threshold is taken as 90 degrees. As shown in Fig. 3, triangular facets P1 and P2 have normal vectors N1 and N2 respectively, and their angles with the adjacent camera viewing direction are θ1 and θ2 respectively. Because θ1 > 90° and θ2 < 90°, triangular facet P1 is visible to the adjacent camera while triangular facet P2 is not. Following this method, a set of visible triangular facets S = {p1, p2, ..., pn} is obtained for the current virtual viewpoint, where pi denotes the i-th triangular facet and n is the number of facets in the set;
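A small sketch of the facet visibility test described above (normal computation plus angle threshold); the 90° threshold and the sign convention follow the text, while the array layout is an assumption.

```python
import numpy as np

def visible_facets(vertices, faces, cam_dir, threshold_deg=90.0):
    """Return indices of triangular facets considered visible to an adjacent camera.

    vertices: (V, 3) array of vertex coordinates.
    faces:    (F, 3) array of vertex indices per triangular facet.
    cam_dir:  (3,) adjacent camera viewing direction.
    """
    vertices = np.asarray(vertices, dtype=float)
    faces = np.asarray(faces, dtype=int)
    v1, v2, v3 = (vertices[faces[:, k]] for k in range(3))
    normals = np.cross(v2 - v1, v1 - v3)                        # N = (V2-V1) x (V1-V3)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # Norm(...)
    cam_dir = np.asarray(cam_dir, dtype=float)
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    angles = np.degrees(np.arccos(np.clip(normals @ cam_dir, -1.0, 1.0)))
    return np.nonzero(angles > threshold_deg)[0]                # visible set S
```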
Step S300, visible patch texture extraction: compute, for each visible triangular facet, the corresponding texture on the image plane of each selected adjacent camera and the color value of each vertex. First, for each triangular facet pi belonging to the set, compute its visibility with respect to each selected adjacent camera ci (ci belonging to the set C = {c1, c2, ..., cm}). Specifically, compute the projection of pi onto ci through the projection matrix P(i) of ci; if this projection is not covered by the projection of any other triangular facet, then pi is visible to ci, and the image region covered by the projection is taken as the texture of pi on ci; at the same time, the pixel color values at the projections of the three vertices are recorded. Through this step, for each triangular facet belonging to the set S = {p1, p2, ..., pn}, the texture list on the corresponding adjacent camera images and the color value list corresponding to each vertex can be found.
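A small sketch of projecting a facet's vertices into an adjacent camera with its 3×4 projection matrix and sampling the vertex colors, as used in step S300. The occlusion test (checking that the projection is not covered by another facet) is omitted here; in practice it would typically be done with a depth buffer. Function names and conventions are illustrative assumptions.

```python
import numpy as np

def project_vertices(P, verts):
    """Project 3D vertices with a 3x4 projection matrix P; returns (K, 2) pixel coordinates."""
    verts = np.asarray(verts, dtype=float)
    homo = np.hstack([verts, np.ones((len(verts), 1))])   # homogeneous coordinates
    uvw = homo @ P.T                                        # (K, 3)
    return uvw[:, :2] / uvw[:, 2:3]                         # perspective divide

def vertex_colors(image, P, tri_verts):
    """Sample the pixel color at each projected vertex of one triangular facet."""
    uv = np.round(project_vertices(P, tri_verts)).astype(int)
    h, w = image.shape[:2]
    uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)
    uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)
    return image[uv[:, 1], uv[:, 0]]                        # colors of the three vertices
```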
Step S400, texture fusion weight map calculation: compute the weight of each camera participating in texture synthesis at each pixel of the synthesized texture. According to the closeness between each adjacent camera and the virtual viewpoint, compute the texture synthesis weight value corresponding to each adjacent camera. For each triangular facet belonging to the set S = {p1, p2, ..., pn}, its texture is the linear weighted average, according to the texture synthesis weights, of the textures in the texture list found for it; each vertex color value of the triangular facet is obtained as the weighted average of the projected color values on the adjacent cameras to which the vertex is visible. If no texture belonging entirely to one adjacent camera is found for a triangular facet, texture synthesis is performed from the weighted average color values of its three vertices: if two or three vertices are visible to adjacent cameras, the texture pixel value is obtained as the linear weighted average of the three vertex color values according to the vertex distances; if only one vertex is visible to an adjacent camera, the texture pixel value is set directly to the weighted average color value of that vertex.
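A compact sketch of the per-facet blending rules in step S400, assuming the per-camera textures have already been resampled to a common patch resolution; the closeness-based weighting and the vertex-color fallback follow the text, while the normalization details are assumptions.

```python
import numpy as np

def synthesis_weights(cam_dirs, view_dir):
    """Higher weight for adjacent cameras closer to the virtual viewpoint (step S410)."""
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    closeness = np.clip(np.asarray(cam_dirs, dtype=float) @ view_dir, 0.0, None)  # cosine as closeness
    return closeness / closeness.sum()

def blend_facet_texture(textures, weights):
    """Weighted average of the facet textures from each adjacent camera (step S420).

    textures: list of (H, W, 3) patches, one per adjacent camera that sees the facet.
    weights:  matching synthesis weight for each patch.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(textures), axes=1)

def fallback_from_vertices(vertex_colors, vertex_weights):
    """Step S430 fallback: no complete texture on any camera, so fill the facet with
    the weighted average of its visible vertex colors."""
    w = np.asarray(vertex_weights, dtype=float)
    w = w / w.sum()
    return (w[:, None] * np.asarray(vertex_colors, dtype=float)).sum(axis=0)
```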
Step S500, edge feathering: the synthesized texture is feathered at the edges so that the transitions are smooth. Select the feathering triangular facet texture objects: for the texture corresponding to each triangular facet belonging to the set S = {p1, p2, ..., pn}, if the textures of any two adjacent triangular facets come from images of different adjacent cameras, apply feathering to the textures of the two triangular facets. For each triangular facet texture object selected in step S500, compute the corresponding feathering weight. The feathering weight value is obtained by averaging three weight components: weight factor one is the closeness between the corresponding adjacent camera and the virtual viewpoint; weight factor two is the degree of visibility of the selected triangular facet object to the corresponding adjacent camera; weight factor three is the relative size of the projected area of the selected triangular facet object on the image plane of each adjacent camera. These three weight factor values are all normalized to between 0 and 1, and their average is taken as the final feathering weight value, yielding the feathering weight value of each adjacent camera for the selected triangular facet object. According to the feathering weight values of the adjacent cameras, the texture objects are fused: the textures corresponding to the adjacent cameras are fused into the final texture by a weighted average using the feathering weight values; if the feathering weight value of one adjacent camera exceeds twice that of every other adjacent camera, the texture of that adjacent camera is used directly as the texture of the corresponding selected triangular facet.
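A brief sketch of the feathering weight and fusion rule in step S500; the three factors are assumed to be precomputed and already normalized to [0, 1] as the text specifies, and the 2× dominance shortcut follows the text. All names are illustrative.

```python
import numpy as np

def feather_weights(closeness, visibility, rel_area):
    """Average of the three normalized factors, one value per adjacent camera (step S520)."""
    return (np.asarray(closeness) + np.asarray(visibility) + np.asarray(rel_area)) / 3.0

def fuse_feathered_texture(textures, weights):
    """Fuse the per-camera textures of a feathering facet using the feathering weights."""
    w = np.asarray(weights, dtype=float)
    dominant = int(np.argmax(w))
    others = np.delete(w, dominant)
    if others.size and w[dominant] > 2.0 * others.max():
        return textures[dominant]                 # one camera dominates: use its texture directly
    w = w / w.sum()
    return np.tensordot(w, np.stack(textures), axes=1)
```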
Composite rendering: the synthesized texture is rendered and displayed. The obtained model and texture are rendered and displayed by an OpenGL rendering method.
Preferably, the multi-camera image sequence acquisition system comprises multiple calibrated cameras and a curtain; the calibrated cameras are fixed on a frame and arranged around the photographed object, the imaging background of each calibrated camera is the curtain, and the curtain is arranged outside the frame.
The multiple calibrated cameras, once fixed, cooperate to form the multi-camera image sequence acquisition system. These calibrated cameras are mounted around the shooting area with fixed poses and positions and cover the photographed object from all angles. The calibrated cameras can photograph the object synchronously via an external trigger device. Photographic lamps are installed above the shooting area or at other suitable positions. Preferably, the multi-camera image sequence acquisition system further comprises multiple photographic lamps for providing the photographed object with sufficient and uniform illumination; the photographic lamps are arranged above the photographed object so that the illumination of the photographed subject is sufficient and uniform, and the installation position of each lamp must avoid the field of view of each calibrated camera. The photographed object can be any kind of object, in motion or at rest.
The calibrated cameras here are a series of high-resolution cameras; preferably, they are remotely controlled industrial cameras with a resolution of 800 × 600, lenses of a suitable focal length, and a supported acquisition frame rate of 30 frames/s.
Preferably, there are more than 8 calibrated cameras; more preferably, there are 12 calibrated cameras. The calibrated cameras can be mounted and arranged at uniform angles. The mounting height of the calibrated cameras is set according to the height of the photographed object; for example, when photographing a human body, the calibrated cameras are 1 to 2 meters above the ground. The calibrated cameras are connected to a computer that performs the subsequent image processing. In general, one computer can support 4 to 8 calibrated cameras; preferably, USB 3.0 is used as the data interface between camera and computer, and the USB data cable is connected to the computer motherboard through a PCIE expansion card.
To provide sufficient load-bearing capacity, the frame is preferably assembled from firm and stable aluminum alloy or iron bars. The shooting site is designed as a cylindrical space with a radius of 3 meters and a height of 3 meters. The photographed object can move within this space, and the fields of view of the calibrated cameras installed around it at different angles can cover the photographed object and its area of activity. Preferably, the curtain is a shading curtain dedicated to green-screen matting.
The installation position of each photographic lamp must avoid the field of view of each calibrated camera. Preferably, professional photographic fill lights, usually white LED lamps, are used; for example, for a shooting area with a radius of 5 meters and a height of 3 meters, six LED lamps with a rated power of 60 W are installed. During shooting, the photographed object should be located at the center of the shooting area so that every camera can see it, and the illumination brightness is adjusted so that the photographed object is bright enough and free of shadows. A moving photographed object should keep its range and amplitude of motion within the visible region of every calibrated camera.
Preferably, the multi-camera synchronous calibration method comprises the following steps:
1) in the shooting area of the multi-camera image sequence acquisition system, wave and move a calibration bar with a calibration marker mounted at each end past every calibrated camera, obtaining calibration image sequences;
2) detect and locate the calibration markers in each frame of the calibration image sequence obtained by each calibrated camera, and extract the contour of the calibration markers in each frame;
3) estimate approximate values of the parameters of each camera from the calibration marker feature points in any two adjacent calibration frames and the distance between the same calibration marker in those two frames;
4) using the approximate values as initial values, compute the precise camera parameters of each calibrated camera by iterative bundle adjustment optimization;
the camera parameters include the camera intrinsic parameters, the position and orientation parameters relative to a common spatial coordinate system, and the white balance gain parameters.
This calibration method requires the calibrated cameras to have overlapping fields of view, which ensures that the calibration process is fast and accurate.
The calibration bar comprises a connecting rod of length L, with a first marker and a second marker arranged at its two ends; the colors of the first marker, the second marker and the curtain are all different from one another. This color arrangement makes it easy to perform accurate spherical shape detection and contour extraction of the markers.
Specifically, let the number of cameras be N and the 3 × 4 projection matrix of each camera be Pi = [pi1; pi2; pi3], where pi1, pi2 and pi3 are 1 × 3 vectors; these parameters are the parameters to be calibrated. Let the number of obtained feature points be M; each feature point has a corresponding extracted pixel on the calibration images of each camera. Let uij = (uij, vij) be the extracted pixel coordinates of the j-th feature point on the i-th camera, and let Xj be the corresponding 3D spatial position of the feature point, which is an unknown quantity. A residual equation is established over the reprojection errors, i.e. the differences between the extracted pixel coordinates uij and the projections of Xj through the matrices Pi.
The above projection matrices are solved by iterative bundle adjustment optimization, yielding the position and orientation parameters of each camera.
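The exact residual used in the patent is not reproduced in this text; the sketch below shows a standard reprojection-error residual of the kind minimized by bundle adjustment, under the assumption that this is what the omitted equation expresses.

```python
import numpy as np

def reprojection_residuals(P_list, X, uv, visible):
    """Stack reprojection errors u_ij - proj(P_i, X_j) over all visible observations.

    P_list:  list of N 3x4 projection matrices P_i (the calibration unknowns).
    X:       (M, 3) array of feature point positions X_j (also unknown).
    uv:      (N, M, 2) array of extracted pixel coordinates u_ij.
    visible: (N, M) boolean mask of which points were detected in which camera.
    """
    X = np.asarray(X, dtype=float)
    Xh = np.hstack([X, np.ones((len(X), 1))])            # homogeneous coordinates
    res = []
    for i, P in enumerate(P_list):
        proj = Xh @ np.asarray(P).T                       # (M, 3)
        proj = proj[:, :2] / proj[:, 2:3]                 # perspective divide
        res.append((uv[i] - proj)[visible[i]].ravel())
    # The stacked residual vector would be fed to a nonlinear least-squares
    # solver, e.g. scipy.optimize.least_squares, to refine P_list and X.
    return np.concatenate(res)
```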
Color calibration of the calibrated cameras can also be carried out as needed. Preferably, the color calibration of the calibrated cameras comprises the following steps:
1) under the normal shooting lighting environment, photograph a sheet of white paper;
2) extract the white paper pixel region and obtain the pixel set of that region;
3) compute the pixel sums of the R, G and B channels respectively, obtaining the three values sumR, sumG and sumB;
4) compute the RGB gain coefficients by the following formulas:
gainR = maxSum / sumR
gainG = maxSum / sumG
gainB = maxSum / sumB
where maxSum is the maximum of sumR, sumG and sumB.
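A short sketch of the white balance gain computation just described, assuming the white paper region has already been segmented into a boolean mask; only the gain formulas themselves come from the text, the rest is illustrative.

```python
import numpy as np

def white_balance_gains(image, white_mask):
    """Compute per-channel gains from a photographed white paper region.

    image:      (H, W, 3) RGB image captured under the shooting illumination.
    white_mask: (H, W) boolean mask of the white paper pixels.
    """
    pixels = image[white_mask].astype(np.float64)   # pixel set of the white region
    sums = pixels.sum(axis=0)                       # sumR, sumG, sumB
    max_sum = sums.max()                            # maxSum
    return max_sum / sums                           # gainR, gainG, gainB

def apply_gains(image, gains):
    """Adjust the RGB channels of a camera image with the computed gains."""
    return np.clip(image.astype(np.float64) * gains, 0, 255).astype(np.uint8)
```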
The white paper is photographed by each camera, the white region of the paper is extracted, and the RGB gain coefficients of each camera are computed and used to adjust the RGB channels of each camera. According to the selected color temperature level, the corresponding color correction matrix parameters are set, so that the color of each camera is accurate, saturated and spatially consistent. Calibrating the white balance gain parameters of each camera by the above color calibration method and setting the color correction matrix parameters keeps the colors of the cameras consistent, which facilitates the texture rendering processing.
Preferably, the acquisition of the 3D mesh model comprises the following steps:
1) extract feature information of the photographed object from each frame of the image sequence;
2) according to the feature information and the spatial smoothness between features, construct a residual equation that fuses contour, edge, color and depth information;
3) solve the residual equation by global optimization to obtain point cloud data;
4) triangulate the point cloud data to obtain the 3D mesh model.
The multi-camera 3D reconstruction method used here follows existing method steps. The 3D point cloud data of the photographed object are obtained, and the point cloud data are triangulated to obtain the 3D mesh model of the main object. A multi-view stereo vision approach is used to reconstruct the 3D point cloud data of the main object: the method establishes a system of constraint equations based on depth estimation data, color data, contour data and edge detection data, and optimizes this system with a global optimization method, from which the 3D point cloud data set that accurately describes all the observations and constraints is computed. After the point cloud data set is obtained, a triangular patch mesh model creation method is used to reconstruct the 3D mesh model of the main object from the point cloud data.
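As one possible realization of the final triangulation step, the sketch below uses the Open3D library's Poisson surface reconstruction; the patent does not name a library or a specific meshing algorithm, so this is only an illustrative choice.

```python
import numpy as np
import open3d as o3d

def mesh_from_point_cloud(points):
    """Build a triangle mesh from reconstructed 3D point cloud data (illustrative only)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    pcd.estimate_normals()                                   # normals are required for Poisson
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    return mesh                                              # vertices plus triangular facets
```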
The method provided by the present invention is described in detail below with reference to a specific example.
The above steps were applied, in a multi-camera image sequence acquisition system, to captured footage of a human body waving freely. Fig. 4 and Fig. 5 show one frame of one group of 4D video rendering: Fig. 4 is the 3D patch model of the main object in the 4D video, and Fig. 5 is the result rendered by the method provided by the present invention. As can be seen from Fig. 4, the 3D patch model contains reconstruction errors, and the depth data errors of some 3D points are large; the result in Fig. 5 obtained by the rendering method provided by the present invention is nevertheless clear and lifelike, with a high degree of restoration of the photographed subject. The method effectively overcomes the errors introduced by reconstruction and achieves a comparatively good texture rendering effect.
Those skilled in the art will appreciate that the scope of the present invention is not restricted to the examples discussed above, and that several changes and modifications may be made to them without departing from the scope of the present invention as defined by the appended claims. Although the present invention has been illustrated and described in detail in the drawings and the description, such illustration and description are explanatory or schematic only and are not restrictive; the present invention is not limited to the disclosed embodiments.
By studying the drawings, the description and the claims, those skilled in the art can understand and implement variations of the disclosed embodiments when practicing the present invention. In the claims, the term "comprising" does not exclude other steps or elements, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope of the present invention.
Claims (8)
- 1. A 4D video rendering method, characterized in that it comprises the following steps:
Step S100: calibrate the camera parameters of N calibrated cameras using a multi-camera synchronous calibration method, then set the color correction matrix parameters of each calibrated camera and obtain the projection matrix P(i) of each calibrated camera; arrange the calibrated cameras around the photographed object and capture an image sequence of the photographed object;
Step S200: extract sequence feature information of the photographed object from each frame of the image sequence; using a multi-camera 3D reconstruction method based on the sequence feature information, obtain the 3D point cloud data of the photographed object in each frame; triangulate the 3D point cloud data to obtain the 3D mesh model of the photographed object in each frame;
Step S300: take the m calibrated cameras whose viewing directions form a sufficiently small angle with the virtual viewpoint as the adjacent cameras C = {c1, c2, ..., cm}; compute the triangular facets of the 3D mesh model that the adjacent cameras C can photograph, as visible triangular facets; traverse the visible triangular facet set S = {p1, p2, ..., pn} and obtain the texture list and color value list of each visible triangular facet p(i);
Step S400: after computing the texture synthesis weight values of the image sequences acquired by the adjacent cameras, perform texture synthesis for each visible triangular facet; traverse all visible triangular facets to obtain the texture synthesis result;
Step S400 comprises the following steps:
Step S410: according to the closeness between each adjacent camera and the virtual viewpoint, compute the synthesis weight value of each texture in the texture list, such that an adjacent camera closer to the virtual viewpoint receives a higher weight;
Step S420: traverse the visible triangular facet set S = {p1, p2, ..., pn}; take the weighted average of the textures in the texture list using the synthesis weight values as the texture synthesis result of the visible triangular facet p(i), and take the weighted average of the pixel color values in the aforementioned color value list as the color value of each vertex of the visible triangular facet p(i);
Step S430: traverse the visible triangular facet set S = {p1, p2, ..., pn} and judge whether each visible triangular facet has a complete texture on any adjacent camera; if a visible triangular facet has no complete texture belonging to any adjacent camera, perform texture synthesis from the weighted average color values of its three vertices; if it has a complete texture belonging to some adjacent camera, process it according to step S420;
Step S500: feather the edges of the texture synthesis result, then render and display the synthesized texture together with the 3D mesh model, obtaining the textured 3D rendered model corresponding to the virtual viewpoint.
- 2. The 4D video rendering method according to claim 1, characterized in that step S300 comprises the following steps:
Step S310: compute the normal vector: let the three vertex coordinates of any triangular facet of the 3D mesh model be V1, V2 and V3; the normal vector N of the triangular facet is then
N = Norm((V2 - V1) × (V1 - V3))
where × denotes the vector cross product and Norm denotes vector normalization;
Step S320: compute the angle between the normal vector N and the viewing direction of the adjacent camera, and judge whether the angle exceeds a preset threshold; when the angle is greater than the preset threshold, the triangular facet is considered a visible triangular facet;
Step S330: repeat steps S310-S320 for all triangular facets of the 3D mesh model to obtain the visible triangular facet set S = {p1, p2, ..., pn}, where pi denotes the i-th triangular facet and n is the number of facets in the set.
- 3. The 4D video rendering method according to claim 2, characterized in that the visible triangular facet texture extraction comprises the following steps: S340: compute the projection of any visible triangular facet p(i) onto the adjacent camera ci; if the resulting projection is not covered by the projection of another visible triangular facet, then p(i) is visible to the adjacent camera ci; extract the image region covered by the projection of p(i) on ci as an entry of the texture list of pi on ci, and record the pixel color values at the projections of the three vertices of p(i) in the corresponding image as the color value list; traverse the visible triangular facet set S = {p1, p2, ..., pn} to obtain the texture list and color value list of each visible triangular facet p(i).
- 4. The 4D video rendering method according to claim 3, characterized in that step S430 comprises the following steps: judge how many of the vertices of the visible triangular facet have color values, i.e. are visible to some adjacent calibrated camera; if two or three vertices are visible, take the linear weighted average of the three vertex color values, weighted by vertex distance, as the texture pixel value of the visible triangular facet; if only one vertex is visible, use the weighted average color value of that vertex as the texture pixel value of the visible triangular facet.
- 5. The 4D video rendering method according to claim 4, characterized in that step S500 comprises the following steps:
Step S510: examine the texture synthesis result of each triangular facet belonging to the set S = {p1, p2, ..., pn}; if the synthesized textures of any two adjacent visible triangular facets come from images of different adjacent cameras, feather the texture synthesis results of these two visible triangular facets to obtain the final texture, and treat the two visible triangular facets as feathering triangular facets;
Step S520: compute the feathering weight value of each texture object of the feathering triangular facet, and take the weighted average, using the feathering weight values, of the textures of the adjacent cameras corresponding to the feathering triangular facet as the final texture; if the feathering weight value of one adjacent camera exceeds twice the feathering weight value of every other adjacent camera, use the texture of that adjacent camera as the texture of the feathering triangular facet.
- 6. The 4D video rendering method according to claim 5, characterized in that the acquisition of the 3D mesh model comprises the following steps:
1) extract feature information of the photographed object from each frame of the image sequence;
2) according to the feature information and the spatial smoothness between features, construct a residual equation that fuses contour, edge, color and depth information;
3) solve the residual equation by global optimization to obtain point cloud data;
4) triangulate the point cloud data to obtain the 3D mesh model.
- 7. The 4D video rendering method according to claim 1, characterized in that the multi-camera synchronous calibration method comprises the following steps:
1) in the shooting area of the multi-camera image sequence acquisition system, wave and move a calibration bar with a calibration marker mounted at each end past every calibrated camera, obtaining calibration image sequences;
2) detect and locate the calibration markers in each frame of the calibration image sequence obtained by each calibrated camera, and extract the contour of the calibration markers in each frame;
3) estimate approximate values of the parameters of each calibrated camera from the calibration marker feature points in any two adjacent calibration frames and the distance between the same calibration marker in those two frames;
4) using the approximate values as initial values, compute the precise camera parameters of each calibrated camera by iterative bundle adjustment optimization;
the camera parameters include the camera intrinsic parameters, the position and orientation parameters relative to a common spatial coordinate system, and the white balance gain parameters.
- 8. The 4D video rendering method according to claim 7, characterized in that the multi-camera image sequence acquisition system comprises the N calibrated cameras and a curtain; the calibrated cameras are fixed on a frame and arranged around the photographed object, the imaging background of each calibrated camera is the curtain, and the curtain is arranged outside the frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711061009.3A CN107918948B (en) | 2017-11-02 | 2017-11-02 | 4D video rendering method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711061009.3A CN107918948B (en) | 2017-11-02 | 2017-11-02 | 4D video rendering method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107918948A true CN107918948A (en) | 2018-04-17 |
CN107918948B CN107918948B (en) | 2021-04-16 |
Family
ID=61895152
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711061009.3A Active CN107918948B (en) | 2017-11-02 | 2017-11-02 | 4D video rendering method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107918948B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110536076A (en) * | 2018-05-23 | 2019-12-03 | 福建天晴数码有限公司 | A kind of method and terminal that Unity panoramic video is recorded |
CN110675506A (en) * | 2019-08-21 | 2020-01-10 | 佳都新太科技股份有限公司 | System, method and equipment for realizing three-dimensional augmented reality of multi-channel video fusion |
CN111768452A (en) * | 2020-06-30 | 2020-10-13 | 天津大学 | Non-contact automatic mapping method based on deep learning |
CN111988535A (en) * | 2020-08-10 | 2020-11-24 | 山东金东数字创意股份有限公司 | System and method for optically positioning fusion picture |
CN111986296A (en) * | 2020-08-20 | 2020-11-24 | 叠境数字科技(上海)有限公司 | CG animation synthesis method for bullet time |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101763649A (en) * | 2009-12-30 | 2010-06-30 | 北京航空航天大学 | Method for drawing enhanced model contour surface point |
EP2315181A1 (en) * | 2009-10-09 | 2011-04-27 | Deutsche Telekom AG | Method and system for reconstructing the surface of 3D objects in a multi-camera system |
CN103345771A (en) * | 2013-06-28 | 2013-10-09 | 中国科学技术大学 | Efficient image rendering method based on modeling |
CN103456039A (en) * | 2013-08-30 | 2013-12-18 | 西北工业大学 | Large-scale scene virtual sky modeling method under condition of multi-viewpoint multi-view-angle view field displaying |
CN104599243A (en) * | 2014-12-11 | 2015-05-06 | 北京航空航天大学 | Virtual and actual reality integration method of multiple video streams and three-dimensional scene |
CN104616243A (en) * | 2015-01-20 | 2015-05-13 | 北京大学 | Effective GPU three-dimensional video fusion drawing method |
CN104809759A (en) * | 2015-04-03 | 2015-07-29 | 哈尔滨工业大学深圳研究生院 | Large-area unstructured three-dimensional scene modeling method based on small unmanned helicopter |
CN104837000A (en) * | 2015-04-17 | 2015-08-12 | 东南大学 | Virtual viewpoint synthesis method using contour perception |
US20150310135A1 (en) * | 2014-04-24 | 2015-10-29 | The Board Of Trustees Of The University Of Illinois | 4d vizualization of building design and construction modeling with photographs |
CN105205866A (en) * | 2015-08-30 | 2015-12-30 | 浙江中测新图地理信息技术有限公司 | Dense-point-cloud-based rapid construction method of urban three-dimensional model |
WO2016085571A2 (en) * | 2014-09-30 | 2016-06-02 | Washington University | Compressed-sensing ultrafast photography (cup) |
CN105701857A (en) * | 2014-12-10 | 2016-06-22 | 达索系统公司 | Texturing a 3d modeled object |
KR20160114818A (en) * | 2015-03-25 | 2016-10-06 | (주) 지오씨엔아이 | Automated 3D modeling and rendering method on real-time moving view point of camera for flood simulation |
CN106157354A (en) * | 2015-05-06 | 2016-11-23 | 腾讯科技(深圳)有限公司 | A kind of three-dimensional scenic changing method and system |
CN106469448A (en) * | 2015-06-26 | 2017-03-01 | 康耐视公司 | Carry out automatic industrial inspection using 3D vision |
CN106537458A (en) * | 2014-03-12 | 2017-03-22 | 利弗环球有限责任公司 | Systems and methods for reconstructing 3-dimensional model based on vertices |
CN106651870A (en) * | 2016-11-17 | 2017-05-10 | 山东大学 | Method for segmenting out-of-focus fuzzy regions of images in multi-view three-dimensional reconstruction |
WO2017121926A1 (en) * | 2016-01-15 | 2017-07-20 | Nokia Technologies Oy | Method and apparatus for calibration of a multi-camera system |
CN107170043A (en) * | 2017-06-19 | 2017-09-15 | 电子科技大学 | A kind of three-dimensional rebuilding method |
CN107197318A (en) * | 2017-06-19 | 2017-09-22 | 深圳市望尘科技有限公司 | A kind of real-time, freedom viewpoint live broadcasting method shot based on multi-cam light field |
US20170287196A1 (en) * | 2016-04-01 | 2017-10-05 | Microsoft Technology Licensing, Llc | Generating photorealistic sky in computer generated animation |
- 2017-11-02 CN CN201711061009.3A patent/CN107918948B/en active Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2315181A1 (en) * | 2009-10-09 | 2011-04-27 | Deutsche Telekom AG | Method and system for reconstructing the surface of 3D objects in a multi-camera system |
CN101763649A (en) * | 2009-12-30 | 2010-06-30 | 北京航空航天大学 | Method for drawing enhanced model contour surface point |
CN103345771A (en) * | 2013-06-28 | 2013-10-09 | 中国科学技术大学 | Efficient image rendering method based on modeling |
CN103456039A (en) * | 2013-08-30 | 2013-12-18 | 西北工业大学 | Large-scale scene virtual sky modeling method under condition of multi-viewpoint multi-view-angle view field displaying |
CN106537458A (en) * | 2014-03-12 | 2017-03-22 | 利弗环球有限责任公司 | Systems and methods for reconstructing 3-dimensional model based on vertices |
US20150310135A1 (en) * | 2014-04-24 | 2015-10-29 | The Board Of Trustees Of The University Of Illinois | 4d vizualization of building design and construction modeling with photographs |
WO2016085571A2 (en) * | 2014-09-30 | 2016-06-02 | Washington University | Compressed-sensing ultrafast photography (cup) |
CN105701857A (en) * | 2014-12-10 | 2016-06-22 | 达索系统公司 | Texturing a 3d modeled object |
CN104599243A (en) * | 2014-12-11 | 2015-05-06 | 北京航空航天大学 | Virtual and actual reality integration method of multiple video streams and three-dimensional scene |
CN104616243A (en) * | 2015-01-20 | 2015-05-13 | 北京大学 | Effective GPU three-dimensional video fusion drawing method |
KR20160114818A (en) * | 2015-03-25 | 2016-10-06 | (주) 지오씨엔아이 | Automated 3D modeling and rendering method on real-time moving view point of camera for flood simulation |
CN104809759A (en) * | 2015-04-03 | 2015-07-29 | 哈尔滨工业大学深圳研究生院 | Large-area unstructured three-dimensional scene modeling method based on small unmanned helicopter |
CN104837000A (en) * | 2015-04-17 | 2015-08-12 | 东南大学 | Virtual viewpoint synthesis method using contour perception |
CN106157354A (en) * | 2015-05-06 | 2016-11-23 | 腾讯科技(深圳)有限公司 | A kind of three-dimensional scenic changing method and system |
CN106469448A (en) * | 2015-06-26 | 2017-03-01 | 康耐视公司 | Carry out automatic industrial inspection using 3D vision |
CN105205866A (en) * | 2015-08-30 | 2015-12-30 | 浙江中测新图地理信息技术有限公司 | Dense-point-cloud-based rapid construction method of urban three-dimensional model |
WO2017121926A1 (en) * | 2016-01-15 | 2017-07-20 | Nokia Technologies Oy | Method and apparatus for calibration of a multi-camera system |
US20170287196A1 (en) * | 2016-04-01 | 2017-10-05 | Microsoft Technology Licensing, Llc | Generating photorealistic sky in computer generated animation |
CN106651870A (en) * | 2016-11-17 | 2017-05-10 | 山东大学 | Method for segmenting out-of-focus fuzzy regions of images in multi-view three-dimensional reconstruction |
CN107170043A (en) * | 2017-06-19 | 2017-09-15 | 电子科技大学 | A kind of three-dimensional rebuilding method |
CN107197318A (en) * | 2017-06-19 | 2017-09-22 | 深圳市望尘科技有限公司 | A kind of real-time, freedom viewpoint live broadcasting method shot based on multi-cam light field |
Non-Patent Citations (5)
Title |
---|
MINGSONG DOU et al.: "Fusion4D: Real-time Performance Capture of Challenging Scenes", ACM Trans. Graph. *
ROBERTO SONNINO et al.: "Fusion4D: 4D Unencumbered Direct Manipulation and Visualization", 2013 XV Symposium on Virtual and Augmented Reality *
王守尊 et al.: "A Viewpoint-Based Texture Rendering Method", Journal of Hubei University of Technology *
程龙 et al.: "Dynamic 3D Object Reconstruction Based on Light Field Rendering", Journal of the Graduate School of the Chinese Academy of Sciences *
陈晓琳: "Research on Texture Techniques in Multi-View Video Generation", China Master's Theses Full-text Database, Information Science and Technology Series (Monthly) *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110536076A (en) * | 2018-05-23 | 2019-12-03 | 福建天晴数码有限公司 | A kind of method and terminal that Unity panoramic video is recorded |
CN110675506A (en) * | 2019-08-21 | 2020-01-10 | 佳都新太科技股份有限公司 | System, method and equipment for realizing three-dimensional augmented reality of multi-channel video fusion |
CN111768452A (en) * | 2020-06-30 | 2020-10-13 | 天津大学 | Non-contact automatic mapping method based on deep learning |
CN111768452B (en) * | 2020-06-30 | 2023-08-01 | 天津大学 | Non-contact automatic mapping method based on deep learning |
CN111988535A (en) * | 2020-08-10 | 2020-11-24 | 山东金东数字创意股份有限公司 | System and method for optically positioning fusion picture |
CN111986296A (en) * | 2020-08-20 | 2020-11-24 | 叠境数字科技(上海)有限公司 | CG animation synthesis method for bullet time |
CN111986296B (en) * | 2020-08-20 | 2024-05-03 | 叠境数字科技(上海)有限公司 | CG animation synthesis method for bullet time |
Also Published As
Publication number | Publication date |
---|---|
CN107918948B (en) | 2021-04-16 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN108961395B (en) | A method of three dimensional spatial scene is rebuild based on taking pictures | |
Debevec et al. | Modeling and rendering architecture from photographs: A hybrid geometry-and image-based approach | |
CN107918948A (en) | 4D Video Rendering methods | |
KR100950169B1 (en) | Method for multiple view synthesis | |
CN106327532B (en) | A kind of three-dimensional registration method of single image | |
CN107862718B (en) | 4D holographic video capture method | |
CN105427385B (en) | A kind of high-fidelity face three-dimensional rebuilding method based on multilayer deformation model | |
CN106133796B (en) | For indicating the method and system of virtual objects in the view of true environment | |
CN105574922B (en) | A kind of threedimensional model high quality texture mapping method of robust | |
KR101183000B1 (en) | A system and method for 3D space-dimension based image processing | |
US7573475B2 (en) | 2D to 3D image conversion | |
CN108876926A (en) | Navigation methods and systems, AR/VR client device in a kind of panoramic scene | |
CN106373178A (en) | Method and apparatus for generating an artificial picture | |
JP2006053694A (en) | Space simulator, space simulation method, space simulation program and recording medium | |
GB2464453A (en) | Determining Surface Normals from Three Images | |
CN103607584A (en) | Real-time registration method for depth maps shot by kinect and video shot by color camera | |
JP2019046077A (en) | Video synthesizing apparatus, program and method for synthesizing viewpoint video by projecting object information onto plural surfaces | |
WO2014081394A1 (en) | Method, apparatus and system for virtual clothes modelling | |
CN108629828B (en) | Scene rendering transition method in the moving process of three-dimensional large scene | |
CN109218706A (en) | A method of 3 D visual image is generated by single image | |
US9897806B2 (en) | Generation of three-dimensional imagery to supplement existing content | |
Cheung et al. | Markerless human motion transfer | |
CN109462748A (en) | A kind of three-dimensional video-frequency color correction algorithm based on homography matrix | |
JP6799468B2 (en) | Image processing equipment, image processing methods and computer programs | |
CN113763544A (en) | Image determination method, image determination device, electronic equipment and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20230621
Address after: 410008 Room 107-B77, Building 18, No. 118, Luodaozui, Jinxia Road, Furongbeilu, Kaifu District, Changsha, Hunan Province
Patentee after: Changsha Stereoscopic Vision Technology Co.,Ltd.
Address before: 518000 13-1, Lianhe Road, Henggang Street, Longgang District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN FREEDOM4D TECHNOLOGY CO.,LTD.
TR01 | Transfer of patent right |