CN106131535B - Video capture method and device, video generation method and device - Google Patents
- Publication number
- CN106131535B CN106131535B CN201610614498.XA CN201610614498A CN106131535B CN 106131535 B CN106131535 B CN 106131535B CN 201610614498 A CN201610614498 A CN 201610614498A CN 106131535 B CN106131535 B CN 106131535B
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- data
- dimensional
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
Abstract
The present invention relates to a video capture method and device and a video generation method and device, including: obtaining the video frames of each shooting direction needed to build a multi-angle video; mapping each video frame to a two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space; mapping each video frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain the three-dimensional model data of each video frame in the three-dimensional space; and sending the video frame data and the model data to a server to build the multi-angle video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data. Based on the acquired video frame data and the two-dimensional model data, three-dimensional model data and three-dimensional model type of each video frame, the invention can generate panoramic videos, arbitrary non-panoramic videos, and other continuous or discontinuous multi-angle videos, while reducing distortion and deformation to the greatest extent, thereby improving the user experience.
Description
Technical field
The present invention relates to the field of video technology, and in particular to a video capture method and device and a video generation method and device.
Background technology
The collection and transmission of video information is an important direction in the development of information technology. Because a conventional video comes from a single camera with a limited field of view, it can capture only one part of a scene and cannot let different users simultaneously watch the different angles each is interested in, so it fails to meet users' individual needs. In contrast, the panoramic video that has come into vogue in recent years overcomes these drawbacks.
Panoramic video technology involves computer graphics, human-computer interaction, sensing technology, artificial intelligence and other fields. It uses computational methods to generate realistic three-dimensional visual and auditory sensations, provides the user with a simulation of vision, hearing and other senses, and allows the user, through various devices, to "project" himself into this virtual environment and observe the scene in the three-dimensional space as if immersed in it. A panoramic video can be regarded as a special case of a multi-angle video, namely a video that includes all viewing angles over 360° horizontally and 360° vertically.
At present, the video acquisition side usually supplies only the collected two-dimensional video data to the video generation side, which builds a three-dimensional panoramic video from that data according to a specific model (such as a sphere, regular hexahedron or cone) for users to watch. However, the two-dimensional video data provided by the acquisition side is usually suitable only for generating panoramic video, and the generation side cannot easily and flexibly generate multi-angle videos with various discontinuous viewing angles from it as required. Moreover, because different video generation sides use different models, the two-dimensional video data gathered by the acquisition side is not necessarily adapted to the model used by the generation side, so the displayed video may be unsatisfactory (for example, distorted or deformed). Also, the prior art can usually generate panoramic video only from existing three-dimensional models such as spheres or regular hexahedrons; with a self-defined model of another type, the display effect is even more likely to suffer because the two-dimensional video data cannot be adapted to the three-dimensional model.
Summary of the invention
Technical problem
In view of this, the present invention proposes a video capture method and device and a video generation method and device that can freely and flexibly generate multi-angle videos with continuous or discontinuous viewing angles and improve the display effect of the video, thereby improving the user experience.
Solution
In one aspect, a video capture method is proposed, including: obtaining the video frames of each shooting direction needed to build a multi-angle video; mapping each video frame to a two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space; mapping each video frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain the three-dimensional model data of each video frame in the three-dimensional space; and sending the video frame data of the video frames and the model data to a server to build the multi-angle video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
In another aspect, a video generation method is proposed, including: obtaining the video frame data and the model data of the video frames of each shooting direction needed to build a multi-angle video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space; performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model; and generating the multi-angle video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
In yet another aspect, a video acquisition device is proposed, including: an acquiring unit for obtaining the video frames of each shooting direction needed to build a multi-angle video; a first mapping unit for mapping each video frame to a two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space; a second mapping unit for mapping each video frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain the three-dimensional model data of each video frame in the three-dimensional space; and a sending unit for sending the video frame data of the video frames and the model data to a server to build the multi-angle video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
In still another aspect, a video generation device is proposed, including: an acquiring unit for obtaining the video frame data and the model data of the video frames of each shooting direction needed to build a multi-angle video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space; a modeling unit for performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model; and a generation unit for generating the multi-angle video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
Beneficial effects
According to the aspects of the invention, the video frames of each shooting direction are obtained, and the video frame data, the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model, and the type of the three-dimensional model are sent together to the server, so that the video generation side can build a multi-angle video. Associating the provided video frame data with model data such as its two-dimensional model data, three-dimensional model data and three-dimensional model type makes the way of generating multi-angle video more flexible: it is no longer limited to panoramic video and can include multi-angle videos with arbitrary continuous or discontinuous viewing angles. Moreover, the collected video frame data can be appropriately applied to a three-dimensional model of any type (including custom types), and the model data corresponding to the video frame data can serve as the basis during multi-angle video generation, so distortion, deformation and the like are reduced to the greatest extent, improving the user experience.
Further features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present invention together with the specification, and serve to explain the principles of the invention.
Fig. 1 shows a flowchart of a video capture method according to an embodiment of the present invention.
Fig. 2 shows another flowchart of a video capture method according to an embodiment of the present invention.
Fig. 3 shows yet another flowchart of a video capture method according to an embodiment of the present invention.
Fig. 4 shows schematic diagrams of alternative predetermined three-dimensional models.
Fig. 5 shows a flowchart of a video generation method according to an embodiment of the present invention.
Fig. 6 shows a structural diagram of a video acquisition device according to an embodiment of the present invention.
Fig. 7 shows a structural diagram of a video generation device according to an embodiment of the present invention.
Fig. 8 shows a structural diagram of a video generation apparatus according to an embodiment of the present invention.
Embodiments
Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the accompanying drawings. In the drawings, identical reference numerals denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" should not be construed as preferred over or advantageous to other embodiments.
In addition, numerous specific details are given in the embodiments below to better illustrate the present invention. Those skilled in the art will appreciate that the present invention can equally be implemented without some of these details. In some instances, devices, means, elements and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the present invention.
Embodiment 1
Fig. 1 shows a flowchart of a video capture method according to an embodiment of the present invention. As shown in Fig. 1, the method mainly includes:
Step 101: obtain the video frames of each shooting direction needed to build a multi-angle video;
Step 102: map each video frame to a two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space;
Step 103: map each video frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain the three-dimensional model data of each video frame in the three-dimensional space; and
Step 104: send the video frame data of the video frames and the model data to a server to build the multi-angle video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
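The four steps above can be sketched end to end. The following is a minimal, hypothetical Python sketch; the function names, callback structure and payload field names are all assumptions for illustration, not part of the patent:

```python
# Minimal sketch of steps 101-104. All names (capture_pipeline, map_2d,
# map_3d, send, the payload field names) are illustrative assumptions.

def capture_pipeline(frames, model_type, map_2d, map_3d, send):
    for frame in frames:
        units_2d = map_2d(frame)                  # step 102: 2D model data
        units_3d = map_3d(units_2d, model_type)   # step 103: 3D model data
        send({                                    # step 104: frame + model data
            "frame_data": frame,
            "model_data": {
                "model_type": model_type,         # e.g. sphere, cube, custom
                "model_2d": units_2d,
                "model_3d": units_3d,
            },
        })

# Stub mappings stand in for the real 2D/3D mapping of steps 102-103.
sent = []
capture_pipeline(
    frames=["frame-0"],
    model_type="sphere",
    map_2d=lambda f: [[(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]],
    map_3d=lambda units, t: [[(0.0, 0.0, 1.0)] * 3 for _ in units],
    send=sent.append,
)
print(sent[0]["model_data"]["model_type"])   # sphere
```

The point the sketch illustrates is that the video frame data and its model data travel together in one payload, which is the association the embodiment relies on.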
According to the method for the embodiment, by obtaining the frame of video of each shooting direction, each video data, each video will be included
Each frame of video that frame maps to the two dimensional model data that two-dimensional space obtains, maps to two-dimensional space is reflected by predetermined threedimensional model
The common transmission of type of three-dimensional modeling data and threedimensional model that three dimensions obtains is incident upon to server, so that video is given birth to
Into the structure multi-angle video such as side.By the video requency frame data and its two dimensional model data, three-dimensional modeling data and three that are provided
The model datas such as the type of dimension module are associated so that it is more flexible to generate the mode of multi-angle video, and is not limited only to panorama
Video, arbitrary continuation or the multi-angle video at discontinuous visual angle can be included.Also, the video requency frame data of collection can be suitably
Applied to the threedimensional model of any type (including customization type), and can be with video during multi-angle video generates
Model data corresponding to frame data is foundation, so as to reduce distortion and deformation etc. to the full extent, and then lifts Consumer's Experience.
The multi-angle video built by this embodiment may also be called a "space video": any section of real space can be combined according to freely defined model rules, even across the time dimension, to generate the desired video image.
The various possible implementations of this embodiment are illustrated below with some specific examples. These examples are only illustrative and explanatory, and are not intended to limit the present invention.
In one example, the video frames of each shooting direction needed to build the multi-angle video can be obtained from video capture devices. A capture device can be any device, or combination of devices, that can collect the video frames of each shooting direction (each viewing angle) needed to build the multi-angle video, such as a camera or a sensor; one or more capture devices can be laid out at the desired positions in a distributed manner. In another example, the video frames of each shooting direction needed to build the multi-angle video can also be received from a third party. Those skilled in the art can obtain these video frames by known prior art means; the present invention is not limited in this respect.
In one example, the shooting directions can be continuous or discontinuous in real space.
In one example, the video frame data of the collected video frames can be compressed in an appropriate way to reduce the data volume.
In one example, mapping each video frame to the two-dimensional space can be based on the shooting direction of each video frame. For example, two video frames taken at the same time point in discontinuous shooting directions can be mapped to correspondingly discontinuous positions in the two-dimensional space, in order to generate a multi-angle video with discontinuous viewing angles. Of course, those skilled in the art can also set the relation between the shooting direction of a video frame and its mapping position in the two-dimensional space as needed, for example by splicing video frames with discontinuous shooting directions so that their mapping positions in the two-dimensional space are continuous, to obtain the desired composite video image.
In one example, the two-dimensional model data may include the coordinates of feature points of a video frame in the two-dimensional space, and the three-dimensional model data may include the coordinates of feature points of a video frame in the three-dimensional space. The type of the three-dimensional model can be a conventional type such as a sphere or regular hexahedron; for a self-defined three-dimensional model, the type can be represented by the model parameters of the custom three-dimensional model.
In one example, as shown in Fig. 2 step 102 can include:
Step 201: in the two-dimensional space, divide each video frame into multiple polygonal units.
Specifically, each obtained video frame can be divided into multiple polygonal units in the two-dimensional space; in other words, each divided video frame is spliced together from multiple polygonal units. The units a video frame is divided into can be polygons of the same type, for example all triangles or all some other polygon, or of different types, for example a mix of triangles and other polygons; the invention is not limited in this regard.
In one example, each divided video frame can be compressed after division.
Dividing each video frame into multiple polygonal units makes it easier to obtain the two-dimensional model data and the three-dimensional model data, and hence to build the multi-angle video from the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model. Moreover, by cutting a video frame into smaller units, the two-dimensional image can be mapped into three dimensions piece by piece, which adapts better to different three-dimensional models (including self-defined ones), further reduces image distortion and improves the imaging effect.
Step 202: obtain the number of units in each video frame and the vertex count and vertex positions of each unit in the two-dimensional space, as the two-dimensional model data.
For example, a two-dimensional space with x ∈ [0, 1], y ∈ [0, 1] can be established. After each video frame is divided into multiple polygonal units, it is mapped to this two-dimensional space (or the video frame is first mapped to the two-dimensional space and then divided into units), and the position of each unit's vertices in the two-dimensional space is determined; a position can be represented by the vertex's coordinates in the two-dimensional space. The number of units each video frame is divided into, the number of vertices of each unit and the vertex positions (for example coordinates) in the two-dimensional space can be used as the two-dimensional model data.
Mapping a video frame, or the units obtained by dividing it, to the two-dimensional space can be realized by any mapping method known to those skilled in the art; the present invention is not limited in this respect.
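As an illustration of steps 201-202, the sketch below divides a frame into triangular units on a regular grid of the normalized space x, y ∈ [0, 1] and records each unit's vertices. The grid-based triangulation is an assumption chosen for the example; the patent allows any polygonal division.

```python
# Hypothetical division of one frame into triangular units (step 201) and
# collection of the 2D model data (step 202): number of units plus each
# unit's vertex coordinates in the space x, y in [0, 1].

def frame_to_2d_model(cols, rows):
    """Return a list of triangle units; each unit is a list of (x, y) vertices."""
    units = []
    for r in range(rows):
        for c in range(cols):
            x0, x1 = c / cols, (c + 1) / cols
            y0, y1 = r / rows, (r + 1) / rows
            # two triangles per grid cell
            units.append([(x0, y0), (x1, y0), (x0, y1)])
            units.append([(x1, y0), (x1, y1), (x0, y1)])
    return units

model_2d = frame_to_2d_model(4, 2)
print(len(model_2d))   # 16 units
print(model_2d[0])     # [(0.0, 0.0), (0.25, 0.0), (0.0, 0.5)]
```

The unit count, per-unit vertex count (here always 3) and the vertex coordinates together form the two-dimensional model data described above.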
In one example, as shown in Fig. 3, step 103 can include:
Step 301: for each video frame, map each unit into the three-dimensional space according to the predetermined three-dimensional model;
Step 302: obtain the vertex count and vertex positions of each unit in the three-dimensional space, as the three-dimensional model data.
For example, a three-dimensional space with x ∈ [-1, 1], y ∈ [-1, 1], z ∈ [-1, 1] can be established. For each video frame in step 102, each unit of the video frame can be mapped into this three-dimensional space according to the predetermined three-dimensional model. The position of each unit's vertices in the three-dimensional space is determined; a position can be represented by the vertex's coordinates in the three-dimensional space. The number of vertices of each unit and the vertex positions (for example coordinates) in the three-dimensional space can be used as the three-dimensional model data. The predetermined three-dimensional model can be a sphere, regular hexahedron, cone or the like, or a custom three-dimensional model of another type. In one example, for a self-defined three-dimensional model, the model data may also include the model parameters of the custom model, so that the video generation side can reconstruct the multi-angle video.
Mapping the video frames mapped to the two-dimensional space, or the units obtained by dividing them, into the three-dimensional space can be realized by any mapping method known to those skilled in the art; the present invention is not limited in this respect.
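As one concrete possibility for step 301, assuming a spherical predetermined model, a 2D vertex can be treated as equirectangular coordinates (longitude, latitude) and projected onto the unit sphere inside the space x, y, z ∈ [-1, 1]. This particular projection is an assumed example, not the patent's prescribed mapping:

```python
import math

# Assumed 2D-to-3D vertex mapping for a spherical model: (x, y) in [0,1]^2
# is read as equirectangular (longitude, latitude) and placed on the unit
# sphere, which fits inside the space x, y, z in [-1, 1].

def vertex_2d_to_sphere(x, y):
    lon = (x - 0.5) * 2.0 * math.pi   # -pi .. pi
    lat = (y - 0.5) * math.pi         # -pi/2 .. pi/2
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

def unit_to_3d(unit):
    """Map one polygonal unit's 2D vertices to its 3D model data."""
    return [vertex_2d_to_sphere(x, y) for (x, y) in unit]

# the centre of the frame maps to the point (0, 0, 1) on the sphere
print(unit_to_3d([(0.5, 0.5)]))   # [(0.0, 0.0, 1.0)]
```

For a cube, cone or custom model, only the per-vertex function would change; the resulting vertex counts and positions are what step 302 records as three-dimensional model data.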
Fig. 4 shows schematic diagrams of alternative predetermined three-dimensional models; the types of the predetermined three-dimensional model include but are not limited to the model examples shown in Fig. 4.
In one example, for step 104, the video frame data of the video frames and the model data can be sent to a server (for example a transcoding server) for the subsequent building of the multi-angle video. The video frame data can be video frame data compressed by conventional coding, and the model data can include the two-dimensional model data of, for example, step 202, the type of the three-dimensional model of, for example, step 301, and the three-dimensional model data of, for example, step 302. The sending can use wired or wireless transmission; the invention is not limited in this regard.
The server can package the received video frame data and model data and send them to the video generation side; the packaged information may include the above two-dimensional model data and three-dimensional model data. These packets can be transmitted as protocol data at the application layer, or can be stored in the video compression layer, for example the data transfer layer or coding layer.
In one example, the video frame data can include the video frame data of the units obtained by the division in step 201.
In one example, in order to realize special video effects, each video frame can be appropriately processed, for example cropped or stretched. Taking stretching as an example, any method known to those skilled in the art that can stretch a video frame, such as bilinear interpolation, can be used. These processing steps can be carried out while mapping the video frames or their units to the two-dimensional space.
In one example, once the two-dimensional model data and the three-dimensional model data have been determined (that is, the number of units in each video frame and the vertex counts and vertex positions of each unit in the two-dimensional and three-dimensional spaces are fixed), the determined two-dimensional and three-dimensional model data can be compressed in association with the corresponding video frame data before sending. This reduces the amount of data transmitted while ensuring that the video frame data matches the related data in the model data, which helps improve the quality of the subsequently built multi-angle video.
According to the above examples, the video acquisition side can choose, according to the characteristics of the collected video frames, an appropriate way to compress, stretch, crop or otherwise specially process the video, perform the two-dimensional and three-dimensional mapping of the video frames based on an appropriate model and mapping method, and pass the information related to this processing and mapping (such as the model data) to the video generation side, so that the video generation side can generate the multi-angle video on this basis. This makes the generation of multi-angle video more flexible, more adaptable and better in imaging effect; at the same time, it is easy to establish a unified data transmission format between the video acquisition side and the video generation side, and the processing pressure on the video generation side is also reduced.
Embodiment 2
Fig. 5 shows a flowchart of a video generation method according to an embodiment of the present invention. As shown in Fig. 5, the method mainly includes:
Step 501: obtain the video frame data and the model data of the video frames of each shooting direction needed to build a multi-angle video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space;
Step 502: perform three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model;
Step 503: generate the multi-angle video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
According to the method for the embodiment, regarded by the frame of video for obtaining each shooting direction needed for structure multi-angle video
Frequency frame data and model data, threedimensional model is rebuild according to the model data, and according to model data, the video requency frame data
Multi-angle video is generated with the threedimensional model of the reconstruction.Two-dimensional space is mapped to due to including each frame of video in model data
Obtained two dimensional model data, map to two-dimensional space each frame of video map to three-dimensional modeling data that three dimensions obtains with
And the type of the threedimensional model corresponding to the three dimensions, with these model datas when can enable generation multi-angle video
For foundation, generating process and the suitability of the video data provided are improved, at utmost to be gone back to video requency frame data
Original, so as to obtain the more excellent multi-angle video of display effect, lift Consumer's Experience.In addition, can be easily using model data
The multi-angle video of generation continuously or discontinuously, generating mode are more flexible.
In one example, generating the multi-angle video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model can be realized by mapping (for example texture mapping): the rebuilt three-dimensional model can be texture-mapped according to the two-dimensional model data and the video frame data to generate the multi-angle video.
For the description of the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model, refer to Embodiment 1; it is not repeated here.
In one example, a video frame includes multiple polygonal units; the two-dimensional model data includes the number of units in each video frame and the vertex positions of each unit in the two-dimensional space; and the three-dimensional model data includes the vertex count and vertex positions of each unit in the three-dimensional space. The division of the video frames into units and the two-dimensional and three-dimensional model data are similar to Embodiment 1 and are not repeated here.
In one example, the two-dimensional model data can correspond to the shooting directions of the video frames, to easily build a video with continuous or discontinuous viewing angles corresponding to the shooting directions. For the manner of correspondence, refer to Embodiment 1.
In one example, step 502 can include: according to the type of the three-dimensional model and the vertex count and vertex positions of each unit in the three-dimensional space, perform three-dimensional modeling on each unit to obtain the rebuilt three-dimensional model of each unit.
Specifically, a three-dimensional space with x ∈ [-1, 1], y ∈ [-1, 1], z ∈ [-1, 1] can first be established; then, according to the received type of the three-dimensional model (for a self-defined model, this may include model parameters) and the vertex counts and vertex positions (for example coordinates) of each unit in the three-dimensional space, a three-dimensional model consistent with that type is re-established in this space as the rebuilt three-dimensional model.
In one example, step 503 can include: for each unit, carry out texture mapping on the rebuilt three-dimensional model of the unit according to the unit's vertex count and vertex positions in the two-dimensional space and the unit's video frame data, to obtain the multi-angle video.
By rebuilding a three-dimensional model consistent with the model type in the model data and texture-mapping it with the video frame data on the basis of the frame's two-dimensional and three-dimensional model data, the multi-angle video is obtained with distortion and deformation reduced to the maximum. Moreover, the optimizations done by the video acquisition side during mapping and processing, such as stretching and cropping, are also reflected in the rebuilt multi-angle video, so the display effect of the multi-angle video can be enhanced and the user experience improved.
In one example, before the texture mapping, some or all of the video data in each unit of each data frame can be processed (for example cropped or stretched) as needed. Taking stretching as an example, any method known to those skilled in the art that can stretch a video frame, such as bilinear interpolation, can be used to stretch each unit.
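Bilinear interpolation, named above as one possible stretching method, blends the four source pixels surrounding a fractional sample position. A minimal scalar-image version:

```python
# Bilinear sampling of a scalar image at fractional pixel coordinates:
# blend the four surrounding pixels by their horizontal and vertical
# fractional distances. img is a list of rows.

def bilinear_sample(img, x, y):
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)   # clamp at the right/bottom edges
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0.0, 10.0],
       [20.0, 30.0]]
print(bilinear_sample(img, 0.5, 0.5))   # 15.0
```

Stretching a unit then amounts to sampling the source unit at the fractional positions that the enlarged unit's pixels map back to.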
In one example, the video generation side can freely and flexibly choose, as needed, any video frame or even any unit among the received video frames, to generate a multi-angle video with continuous or discontinuous viewing angles.
In one example, after texture mapping is complete, the generated multi-angle video can be shown to the user according to the principle of perspective projection, i.e., the fact that an object produces different visual effects at different depths: the same object placed at different depths in front of the eye appears with a different size and/or at a different angle.
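The perspective effect described above can be sketched with a simple pinhole projection, in which a point's on-screen position scales inversely with its depth; the `project` helper and its focal-length parameter are illustrative assumptions:

```python
def project(point, focal=1.0):
    """Pinhole perspective projection: screen position scales as 1/depth."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# The same object point viewed at two different depths.
near = project((1.0, 1.0, 2.0))   # closer to the eye
far = project((1.0, 1.0, 4.0))    # twice as far away
```

Doubling the depth halves the projected coordinates, which is exactly the "same object looks smaller farther away" effect the display step relies on.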
In one example, the generated multi-angle video can be superimposed on other videos or images to produce the picture ultimately presented to the user. For example, suppose the picture to be displayed consists of a fixed background image plus video pictures of different viewing angles at certain positions, e.g., two discontinuous positions in a static background image each need to present the video of a corresponding viewing angle. In that case, the video pictures of the different angles can be generated at the two discontinuous positions from the received video data and model data and superimposed on the static background. Because the data volume of a static image is far smaller than that of video frames, this superposition approach can effectively reduce the bit rate, and thus improve the user experience, without affecting the display effect of the generated multi-angle video.
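The superposition described above can be sketched as follows, assuming (purely for illustration) that each per-angle video picture is a small rectangular patch pasted onto the static background at its discontinuous position:

```python
import numpy as np

def superimpose(background, patches):
    """Overlay per-view video pictures onto a static background frame.

    patches: list of (row, col, patch) tuples, where patch is a small
             H x W array generated for one viewing angle and (row, col)
             is its discontinuous position in the background.
    """
    out = background.copy()
    for r, c, patch in patches:
        ph, pw = patch.shape
        out[r:r + ph, c:c + pw] = patch
    return out

bg = np.zeros((4, 8))             # static background picture
view_a = np.ones((2, 2))          # video picture for the first viewing angle
view_b = np.full((2, 2), 2.0)     # video picture for the second viewing angle
frame = superimpose(bg, [(1, 1, view_a), (1, 5, view_b)])
```

Only the two small patches change per frame, while the background is sent once, which is the source of the bit-rate saving described above.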
Embodiment 3
Fig. 6 shows a structural diagram of a video capture apparatus according to an embodiment of the present invention. As shown in Fig. 6, the apparatus can be used to carry out the steps of the method of Embodiment 1, and mainly includes:
an acquiring unit 601 for obtaining the video frames of each shooting direction required to build the multi-angle video;
a first mapping unit 602 for mapping each video frame into a two-dimensional space, obtaining the two-dimensional model data of each video frame in the two-dimensional space;
a second mapping unit 603 for mapping each video frame that has been mapped into the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model, obtaining the three-dimensional model data of each video frame in the three-dimensional space; and
a sending unit 604 for sending the video frame data and the model data of the video frames to a server so that the multi-angle video can be built, wherein the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
According to the video capture apparatus of this embodiment, the video frames of each shooting direction are obtained, and the video frame data, the two-dimensional model data obtained by mapping each frame into a two-dimensional space, the three-dimensional model data obtained by mapping each frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model, and the type of that model are sent together to the server so that a video generation side can build the multi-angle video. Because the supplied video frame data is associated with model data such as its two-dimensional model data, three-dimensional model data and the model type, the multi-angle video can be generated more flexibly and is not limited to panoramic video; it may cover arbitrary continuous or discontinuous viewing angles. Moreover, the captured video frame data can be applied to a three-dimensional model of any type (including custom types), and the generation process can rely on the model data corresponding to the video frame data, thereby reducing distortion and deformation to the greatest extent and improving the user experience.
In one example, the acquiring unit 601 may be any acquiring component available to those skilled in the art that can obtain the video frames of each shooting direction required to build the multi-angle video, such as capture devices (cameras, sensors, etc.), any other device that can capture those frames, or a combination thereof. There may be one or more capture devices, and they may be distributed at the desired positions. In another embodiment, the acquiring unit 601 may be a component that receives the video frames of each shooting direction from a third party; the present invention is not limited in this regard.
In one example, the shooting directions may be continuous or discontinuous in real space.
In one example, the video capture apparatus may further include a compression unit, which can compress the video frame data of the captured video frames in an appropriate way to reduce the data volume.
In one example, the first mapping unit 602 may map each video frame into the two-dimensional space according to its shooting direction. For example, two video frames captured at the same time in discontinuous shooting directions can be mapped to correspondingly discontinuous positions in the two-dimensional space, so as to generate a multi-angle video with discontinuous viewing angles. Of course, those skilled in the art may arrange the relationship between a frame's shooting direction and its mapping position in the two-dimensional space as required; for example, video frames with discontinuous shooting directions may be spliced together in the two-dimensional space so that their mapping positions become continuous, yielding the desired composite video image.
In one example, the two-dimensional model data may include the coordinates of feature points of the video frame in the two-dimensional space, the three-dimensional model data may include the coordinates of feature points of the video frame in the three-dimensional space, and the type of the three-dimensional model may be a conventional type such as a sphere or a regular hexahedron (cube). For a custom three-dimensional model, the type may be represented by the model parameters of that custom model.
In one example, the first mapping unit 602 may map each video frame into the two-dimensional space and obtain its two-dimensional model data in the following way.
For example, first, each video frame is divided, in the two-dimensional space, into multiple polygonal units. Specifically, each acquired video frame can be divided into multiple polygonal units in the two-dimensional space; in other words, each divided frame is pieced together from multiple polygonal units. The units a frame is divided into may be polygons of the same type, e.g., all triangles or all some other polygon, or of different types, e.g., a mixture of triangles and other polygons; the present invention is not limited in this regard. In one example, each divided video frame can then be compressed.
Dividing each video frame into multiple polygonal units facilitates obtaining the two-dimensional and three-dimensional model data and, in turn, building the multi-angle video from that data and the type of the three-dimensional model. Moreover, by cutting a frame into smaller units, the two-dimensional image can be mapped into three dimensions unit by unit, which adapts better to different three-dimensional models (including custom models), further reduces image distortion, and improves the imaging effect.
Next, the number of units in each video frame and the number and positions of each unit's vertices in the two-dimensional space are obtained as the two-dimensional model data.
For example, a two-dimensional space with x ∈ [0,1], y ∈ [0,1] can be established. After each video frame is divided into multiple polygonal units, it is mapped into that space (alternatively, the frame can be mapped into the space first and then divided into units), and the position of each unit's vertices in the space is determined; a position can be represented by the vertex's coordinates in the two-dimensional space. The number of units each frame is divided into and the number and positions (e.g., coordinates) of each unit's vertices in the two-dimensional space can then serve as the two-dimensional model data.
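A minimal sketch of this division step, assuming (for illustration only) a regular grid in the x ∈ [0,1], y ∈ [0,1] space with each cell split into two triangular units; the returned dictionary simply mirrors the two-dimensional model data described above, i.e., the unit count plus each unit's vertex count and positions:

```python
def divide_into_triangles(rows, cols):
    """Divide the [0,1]^2 two-dimensional space into triangular units and
    return the 2D model data: unit count, per-unit vertex counts/positions."""
    units = []
    for i in range(rows):
        for j in range(cols):
            x0, x1 = j / cols, (j + 1) / cols
            y0, y1 = i / rows, (i + 1) / rows
            # Each grid cell is split into two triangular units.
            units.append([(x0, y0), (x1, y0), (x0, y1)])
            units.append([(x1, y0), (x1, y1), (x0, y1)])
    return {"unit_count": len(units),
            "units": [{"vertex_count": len(u), "vertices": u} for u in units]}

model2d = divide_into_triangles(2, 2)   # 4 cells -> 8 triangular units
```

Any polygonal tiling would serve equally well; triangles are used here because the patent's claim 5 names them as one concrete choice.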
The first mapping unit 602 may be any component known to those skilled in the art that can map each video frame into a two-dimensional space; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
In one example, the second mapping unit 603 may map each video frame that has been mapped into the two-dimensional space into the three-dimensional space according to the predetermined three-dimensional model, and obtain the three-dimensional model data of each video frame, in the following way.
For example, for each video frame, each unit is mapped into the three-dimensional space according to the predetermined three-dimensional model, and the number and positions of each unit's vertices in that space are obtained as the three-dimensional model data. Specifically, a three-dimensional space with x ∈ [-1,1], y ∈ [-1,1], z ∈ [-1,1] can be established, and each unit of each video frame that has been mapped into the two-dimensional space can be mapped into that three-dimensional space according to the predetermined three-dimensional model. The position of each unit's vertices in the three-dimensional space is determined; a position can be represented by the vertex's coordinates in the three-dimensional space. The number and positions (e.g., coordinates) of each unit's vertices in the three-dimensional space can then serve as the three-dimensional model data. The predetermined three-dimensional model may be a sphere, a regular hexahedron (cube), a cone, etc., or a custom model of some other type. In one example, for a custom three-dimensional model, the model data may also include the model parameters of that custom model so that the video generation side can reconstruct the multi-angle video.
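A minimal sketch of mapping one unit's 2D vertices onto a predetermined spherical model inside the x, y, z ∈ [-1,1] space. The longitude/latitude parameterization used here is one possible assumption, not the mapping mandated by the patent:

```python
import math

def map_vertex_to_sphere(u, v):
    """Map a vertex (u, v) from the [0,1]^2 two-dimensional space onto a
    unit sphere inside the [-1,1]^3 three-dimensional space, treating u as
    longitude and v as latitude (an equirectangular-style assumption)."""
    lon = 2 * math.pi * u
    lat = math.pi * (v - 0.5)
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)

def map_unit(unit_2d):
    """Produce the 3D model data of one unit from its 2D vertex positions."""
    verts3d = [map_vertex_to_sphere(u, v) for u, v in unit_2d]
    return {"vertex_count": len(verts3d), "vertices": verts3d}

unit3d = map_unit([(0.0, 0.5), (0.25, 0.5), (0.0, 0.75)])
```

For a cube or a custom model, only `map_vertex_to_sphere` would change; the per-unit vertex counts and positions recorded as three-dimensional model data stay the same shape.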
The second mapping unit 603 may be any component known to those skilled in the art that can map each video frame from the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
In one example, the sending unit 604 may send the video frame data and the model data of the video frames to a server (e.g., a transcoding server) so that the multi-angle video can subsequently be built. The video frame data may be data that has been compressed with conventional coding, and the model data may include, for example, the two-dimensional model data obtained by the first mapping unit 602, the type of the three-dimensional model used by the second mapping unit 603, and the three-dimensional model data obtained by the second mapping unit 603. The data may be sent over a wired or wireless connection; the present invention is not limited in this regard.
The sending unit 604 may be any component known to those skilled in the art that can send the video frame data and the model data of the video frames to a server; it may, for example, be implemented by general-purpose transmission hardware together with the relevant logic modules, and the present invention is not limited in this regard.
The server can packetize the received video frame data and model data and send them to the video generation side; the packet information may include the two-dimensional model data and the three-dimensional model data described above. The packet information can be transmitted as protocol data at the application layer, and can also be stored in a video compression layer such as the data transfer layer or the coding layer.
In one example, the video frame data may include the video frame data of the units obtained after division.
In one example, the video capture apparatus may further include a processing unit so that special video effects can be achieved; the processing unit can process each video frame appropriately, e.g., crop or stretch it. Taking stretching as an example, those skilled in the art can use the processing unit to stretch each video frame by, for example, bilinear interpolation. Such processing can be carried out while the video frames or their units are being mapped into the two-dimensional space. The processing unit may be any component known to those skilled in the art that can crop, stretch or otherwise process each video frame; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
In one example, the video capture apparatus may further include a compression unit. Once the two-dimensional and three-dimensional model data have been determined, that is, once the number of units in each video frame and the number and positions of each unit's vertices in the two-dimensional and three-dimensional spaces are fixed, the video frame data can, before sending, be compressed in association with the determined two-dimensional and three-dimensional model data. This reduces the volume of transmitted data while ensuring that the video frame data matches the related model data, which helps improve the quality of the subsequently built multi-angle video. The compression unit may be any component known to those skilled in the art that can compress video frame data; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
According to the above example, the video capture side can, based on the characteristics of the captured video frames, choose suitable special processing such as compressing, stretching or cropping the video, map the frames into two and three dimensions based on a suitable model and mapping mode, and pass the information associated with this processing and mapping (e.g., the model data) to the video generation side, which can then generate the multi-angle video on that basis. This makes the generation of the multi-angle video more flexible, more widely applicable and better in imaging effect; at the same time, it makes it easy to establish a unified data transfer format between the video capture side and the video generation side, and also reduces the processing load on the video generation side.
Embodiment 4
Fig. 7 shows a structural diagram of a video generation apparatus according to an embodiment of the present invention. As shown in Fig. 7, the apparatus can be used to carry out the steps of the method of Embodiment 2, and mainly includes:
an acquiring unit 701 for obtaining the video frame data and the model data of the video frames of each shooting direction required to build the multi-angle video, wherein the model data includes the two-dimensional model data obtained by mapping the video frames into a two-dimensional space, the three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to that three-dimensional space;
a modeling unit 702 for performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model, obtaining the reconstructed three-dimensional model; and
a generation unit 703 for generating the multi-angle video according to the two-dimensional model data, the video frame data and the reconstructed three-dimensional model.
According to the video generation apparatus of this embodiment, the video frame data and model data of the video frames of each shooting direction required to build the multi-angle video are obtained, a three-dimensional model is rebuilt from the model data, and the multi-angle video is generated from the model data, the video frame data and the rebuilt three-dimensional model. Because the model data includes the two-dimensional model data obtained by mapping each video frame into a two-dimensional space, the three-dimensional model data obtained by mapping each frame into a three-dimensional space, and the type of the corresponding three-dimensional model, the multi-angle video can be generated with this model data as its basis. This improves how well the generation process fits the supplied video data, restores the video frame data to the greatest extent, yields a multi-angle video with a better display effect, and improves the user experience. In addition, the model data makes it convenient to generate multi-angle video with continuous or discontinuous viewing angles, so the generation is more flexible.
The acquiring unit 701 may be any component known to those skilled in the art that can obtain the video frame data and model data of the video frames of each shooting direction required to build the multi-angle video; it may, for example, be implemented by a general-purpose processor together with logic instructions, or by a dedicated hardware circuit. The model data includes the two-dimensional model data obtained by mapping each video frame into a two-dimensional space, the three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to that three-dimensional space.
In one example, the generation unit 703 may generate the multi-angle video from the two-dimensional model data, the video frame data and the reconstructed three-dimensional model by way of texture mapping; for example, it can texture the reconstructed three-dimensional model according to the two-dimensional model data and the video frame data to generate the multi-angle video.
The descriptions of the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model are the same as in Embodiment 1 or 3 and are not repeated here.
In one example, a video frame includes multiple polygonal units; the two-dimensional model data includes the number of units in each video frame and the number and positions of each unit's vertices in the two-dimensional space; and the three-dimensional model data includes the number and positions of each unit's vertices in the three-dimensional space. The way the video frames are divided into units and the two-dimensional and three-dimensional model data are similar to Embodiment 1 or 3 and are not repeated here.
In one example, the two-dimensional model data may correspond to the shooting directions of the video frames, which makes it convenient for the generation unit 703 to generate video with continuous or discontinuous viewing angles corresponding to the shooting directions. For the manner of correspondence, see Embodiment 1 or 3.
In one example, the modeling unit 702 may perform three-dimensional modeling on each unit according to the type of the three-dimensional model and the number and positions of each unit's vertices in the three-dimensional space, obtaining the reconstructed three-dimensional model of each unit, in the following way.
Specifically, a three-dimensional space with x ∈ [-1,1], y ∈ [-1,1], z ∈ [-1,1] can first be established; then, according to the received type of the three-dimensional model (which, for a custom model, may include its model parameters) and the number and positions (e.g., coordinates) of each unit's vertices in the three-dimensional space, a three-dimensional model consistent with that type is re-established in the created space and taken as the reconstructed three-dimensional model.
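A minimal sketch of this reconstruction step; the `rebuild_model` helper and its consistency check for a spherical model type are illustrative assumptions about how the received type and per-unit vertex data might be combined into the reconstructed model:

```python
def rebuild_model(model_type, units_3d, tol=1e-6):
    """Re-establish, in the [-1,1]^3 space, a 3D model consistent with the
    received type from each unit's vertex counts and vertex positions."""
    if model_type == "sphere":
        # For a spherical model, every received vertex should lie on the
        # unit sphere; a mismatch would mean the data and type disagree.
        for unit in units_3d:
            for x, y, z in unit["vertices"]:
                assert abs(x * x + y * y + z * z - 1.0) < tol
    return {"type": model_type, "units": units_3d}

# Received model data: one triangular unit whose vertices lie on the sphere.
units = [{"vertex_count": 3,
          "vertices": [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]}]
rebuilt = rebuild_model("sphere", units)
```

A custom model type would branch on its received model parameters instead; the output structure feeding the texture-mapping step stays the same.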
The modeling unit 702 may be any component known to those skilled in the art that can perform three-dimensional modeling on each unit according to the type of the three-dimensional model and the number and positions of each unit's vertices in the three-dimensional space; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
In one example, the generation unit 703 may, for each unit, perform texture-mapping processing on the reconstructed three-dimensional model of that unit according to the number and positions of the unit's vertices in the two-dimensional space and the unit's video frame data, so as to obtain the multi-angle video.
The generation unit 703 may be any component known to those skilled in the art that can generate the multi-angle video from the acquired video frame data and model data; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
By rebuilding a three-dimensional model consistent with the type specified in the model data and texture-mapping the video frame data onto it based on each frame's two-dimensional and three-dimensional model data, distortion and deformation can be reduced to the greatest extent. Moreover, any optimizations the video capture side applied to the video data during mapping and processing, such as stretching or cropping, are also reflected in the reconstructed multi-angle video, which enhances the display effect of the multi-angle video and improves the user experience.
In one example, the video generation apparatus may further include a processing unit which, before texture mapping, can process (e.g., crop or stretch) part or all of the video data in each unit of each data frame as needed. Taking stretching as an example, the processing unit can use any method known to those skilled in the art for stretching a video frame, such as bilinear interpolation, to stretch each unit. The processing unit may be any component known to those skilled in the art that can crop, stretch or otherwise process each video frame; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
In one example, the video generation side may, using the video generation apparatus, freely and flexibly select any of the received video frames, or even any individual unit, as needed to generate a multi-angle video with continuous or discontinuous viewing angles.
In one example, after texture mapping is complete, the generated multi-angle video can be shown to the user according to the principle of perspective projection, i.e., the fact that an object produces different visual effects at different depths: the same object placed at different depths in front of the eye appears with a different size and/or at a different angle.
In one example, the video generation apparatus may further include a superposition unit that superimposes the generated multi-angle video on other videos or images to produce the picture ultimately presented to the user. For example, suppose the picture to be displayed consists of a fixed background image plus video pictures of different viewing angles at certain positions, e.g., two discontinuous positions in a static background image each need to present the video of a corresponding viewing angle. In that case, the video pictures of the different angles can be generated at the two discontinuous positions from the received video data and model data and superimposed on the static background. Because the data volume of a static image is far smaller than that of video frames, this superposition approach can effectively reduce the bit rate, and thus improve the user experience, without affecting the display effect of the generated multi-angle video. The superposition unit may be any component known to those skilled in the art that can superimpose the generated multi-angle video on other videos or images; it may, for example, be implemented by a general-purpose processor together with logic instructions, and the present invention is not limited in this regard.
Embodiment 5
Fig. 8 shows a structural block diagram of a video processing device according to another embodiment of the present invention. The device 1100 may be a host server with computing capability, a personal computer (PC), a portable computer, a terminal, or the like. The specific embodiments of the present invention do not limit the specific implementation of the computing node.
The device 1100 includes a processor 1110, a communications interface 1120, a memory 1130 and a bus 1140. The processor 1110, the communications interface 1120 and the memory 1130 communicate with one another through the bus 1140.
The communications interface 1120 is used to communicate with network devices, which include, for example, a virtual machine management center, shared storage, etc.
The processor 1110 is used to execute programs. The processor 1110 may be a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present invention.
The memory 1130 is used to store files. The memory 1130 may include a high-speed RAM memory, and may also include a non-volatile memory, e.g., at least one magnetic disk memory. The memory 1130 may also be a memory array. The memory 1130 may also be divided into blocks, and the blocks may be combined into virtual volumes according to certain rules.
In a possible embodiment, the above program may be program code including computer operation instructions. The program may be used, in particular, to carry out the method described in Embodiment 1 or 2.
Those of ordinary skill in the art will appreciate that the exemplary components and algorithm steps of the embodiments described herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. A skilled person may use different means to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
If the functions are implemented in the form of computer software and sold or used as an independent product, then, to a certain extent, all or part of the technical solution of the present invention (for example, the part contributing over the prior art) may be considered to be embodied in the form of a computer software product. The computer software product is usually stored in a computer-readable non-volatile storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the devices of the embodiments of the present invention. The aforementioned storage medium includes a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc or any other medium that can store program code.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention should be determined by the protection scope of the claims.
Claims (24)
1. A video capture method, comprising:
obtaining the video frames of each shooting direction required to build a multi-angle video;
mapping each video frame into a two-dimensional space, obtaining two-dimensional model data of each video frame in the two-dimensional space;
mapping each video frame that has been mapped into the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model, obtaining three-dimensional model data of each video frame in the three-dimensional space; and
sending video frame data of the video frames and model data to a server so as to build the multi-angle video, wherein the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data,
wherein the multi-angle video includes at least one of a panoramic video, a multi-angle video of continuous viewing angles and a multi-angle video of discontinuous viewing angles.
2. The method according to claim 1, wherein mapping each video frame into the two-dimensional space and obtaining the two-dimensional model data of each video frame in the two-dimensional space includes:
dividing each video frame, in the two-dimensional space, into multiple polygonal units; and
obtaining the number of units in each video frame and the number of vertices and vertex positions of each unit in the two-dimensional space as the two-dimensional model data.
3. The method according to claim 2, wherein mapping each video frame that has been mapped into the two-dimensional space into the three-dimensional space according to the predetermined three-dimensional model and obtaining the three-dimensional model data of each video frame in the three-dimensional space includes:
for each video frame, mapping each unit into the three-dimensional space according to the predetermined three-dimensional model; and
obtaining the number of vertices and vertex positions of each unit in the three-dimensional space as the three-dimensional model data.
4. The method according to claim 2, wherein the video frame data includes the video frame data of the units.
5. The method according to claim 2, wherein the units are triangles.
6. The method according to any one of claims 1 to 5, wherein the shooting directions are continuous or discontinuous in real space.
7. The method according to any one of claims 1 to 5, wherein mapping each video frame into the two-dimensional space includes: mapping each video frame into the two-dimensional space according to the shooting direction of the video frame.
8. A video generation method, comprising:
obtaining video frame data and model data of the video frames of each shooting direction required to build a multi-angle video, the model data including two-dimensional model data obtained by mapping each video frame into a two-dimensional space, three-dimensional model data obtained by mapping each video frame from the two-dimensional space into a three-dimensional space, and the type of a three-dimensional model corresponding to the three-dimensional space;
performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model, obtaining a reconstructed three-dimensional model; and
generating the multi-angle video according to the two-dimensional model data, the video frame data and the reconstructed three-dimensional model,
wherein the multi-angle video includes at least one of a panoramic video, a multi-angle video of continuous viewing angles and a multi-angle video of discontinuous viewing angles.
9. The method according to claim 8, wherein:
the video frames include a plurality of polygonal units;
the two-dimensional model data includes the number of units in each video frame and the number of vertices and the vertex positions of each unit in the two-dimensional space; and
the three-dimensional model data includes the number of vertices and the vertex positions of each unit in the three-dimensional space.
10. The method according to claim 9, wherein performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain the reconstructed three-dimensional model comprises:
performing three-dimensional modeling on each unit according to the type of the three-dimensional model and the number of vertices and the vertex positions of the unit in the three-dimensional space, to obtain a reconstructed three-dimensional model of each unit.
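The per-unit reconstruction of claim 10 can be sketched as assembling the received per-unit 3D vertex data into one indexed mesh. This is a minimal illustration, not the patented implementation; merging shared vertices by rounded coordinates is an assumption made here:

```python
def rebuild_mesh(units_3d):
    """Assemble per-unit 3D vertex data into an indexed mesh.

    Each unit (a list of 3D vertex positions) becomes one face of the
    reconstructed model; vertices shared between units are merged,
    keyed on coordinates rounded to 6 decimals (an assumption).
    Returns (vertex list, face index list).
    """
    vertices, index_of, faces = [], {}, []
    for unit in units_3d:
        face = []
        for v in unit:
            key = tuple(round(c, 6) for c in v)
            if key not in index_of:
                index_of[key] = len(vertices)
                vertices.append(key)
            face.append(index_of[key])
        faces.append(face)
    return vertices, faces
```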
11. The method according to claim 10, wherein generating the multi-angle video according to the two-dimensional model data, the video frame data, and the reconstructed three-dimensional model comprises:
for each unit, performing texture mapping on the reconstructed three-dimensional model of the unit according to the number of vertices and the vertex positions of the unit in the two-dimensional space and the video frame data of the unit, to obtain the multi-angle video.
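The texture-mapping step of claim 11 hinges on turning each unit's 2D vertex positions into texture coordinates for the frame. A minimal sketch (illustrative only; the top-left origin with v increasing downward is an assumption):

```python
def unit_uv_coordinates(unit_2d, width, height):
    """Convert a unit's 2D vertex positions into normalized texture
    coordinates (UVs), so the unit's video frame data can be texture
    mapped onto the unit's reconstructed 3D model.
    """
    return [(x / width, y / height) for (x, y) in unit_2d]
```

A renderer would then draw each face of the reconstructed model while sampling the video frame at these UVs, which is what assembles the units back into the multi-angle video.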
12. The method according to any one of claims 8 to 11, wherein the two-dimensional model data corresponds to the shooting directions of the video frames.
13. A video capture device, comprising:
an acquiring unit configured to obtain video frames of each shooting direction required for constructing a multi-angle video;
a first mapping unit configured to map each video frame to a two-dimensional space to obtain two-dimensional model data of each video frame in the two-dimensional space;
a second mapping unit configured to map each video frame that has been mapped to the two-dimensional space to a three-dimensional space according to a predetermined three-dimensional model, to obtain three-dimensional model data of each video frame in the three-dimensional space; and
a transmitting unit configured to send video frame data of the video frames and model data to a server for constructing the multi-angle video, wherein the model data includes a type of the three-dimensional model, the two-dimensional model data, and the three-dimensional model data,
wherein the multi-angle video includes at least one of a panoramic video, a multi-angle video with continuous viewing angles, and a multi-angle video with discontinuous viewing angles.
14. The device according to claim 13, wherein mapping each video frame to the two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space includes:
dividing each video frame into a plurality of polygonal units in the two-dimensional space; and
obtaining the number of units in each video frame and the number of vertices and the vertex positions of each unit in the two-dimensional space as the two-dimensional model data.
15. The device according to claim 14, wherein mapping each video frame that has been mapped to the two-dimensional space to the three-dimensional space according to the predetermined three-dimensional model, to obtain the three-dimensional model data of each video frame in the three-dimensional space, includes:
for each video frame, mapping each unit to the three-dimensional space according to the predetermined three-dimensional model; and
obtaining the number of vertices and the vertex positions of each unit in the three-dimensional space as the three-dimensional model data.
16. The device according to claim 15, wherein the video frame data comprises video frame data of the units.
17. The device according to claim 15, wherein the units are triangles.
18. The device according to any one of claims 13 to 17, wherein the shooting directions are continuous or discontinuous in real space.
19. The device according to any one of claims 13 to 17, wherein mapping each video frame to the two-dimensional space includes:
mapping each video frame to the two-dimensional space according to the shooting direction of the video frame.
20. A video generation device, comprising:
an acquiring unit configured to obtain video frame data and model data of video frames of each shooting direction required for constructing a multi-angle video, the model data including two-dimensional model data obtained by mapping each video frame to a two-dimensional space, three-dimensional model data obtained by mapping each video frame that has been mapped to the two-dimensional space to a three-dimensional space, and a type of the three-dimensional model corresponding to the three-dimensional space;
a modeling unit configured to perform three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a reconstructed three-dimensional model; and
a generating unit configured to generate the multi-angle video according to the two-dimensional model data, the video frame data, and the reconstructed three-dimensional model,
wherein the multi-angle video includes at least one of a panoramic video, a multi-angle video with continuous viewing angles, and a multi-angle video with discontinuous viewing angles.
21. The device according to claim 20, wherein:
the video frames include a plurality of polygonal units;
the two-dimensional model data includes the number of units in each video frame and the number of vertices and the vertex positions of each unit in the two-dimensional space; and
the three-dimensional model data includes the number of vertices and the vertex positions of each unit in the three-dimensional space.
22. The device according to claim 21, wherein performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain the reconstructed three-dimensional model includes:
performing three-dimensional modeling on each unit according to the type of the three-dimensional model and the number of vertices and the vertex positions of the unit in the three-dimensional space, to obtain a reconstructed three-dimensional model of each unit.
23. The device according to claim 22, wherein generating the multi-angle video according to the two-dimensional model data, the video frame data, and the reconstructed three-dimensional model includes:
for each unit, performing texture mapping on the reconstructed three-dimensional model of the unit according to the number of vertices and the vertex positions of the unit in the two-dimensional space and the video frame data of the unit, to obtain the multi-angle video.
24. The device according to any one of claims 21 to 23, wherein the two-dimensional model data corresponds to the shooting directions of the video frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610614498.XA CN106131535B (en) | 2016-07-29 | 2016-07-29 | Video capture method and device, video generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106131535A CN106131535A (en) | 2016-11-16 |
CN106131535B true CN106131535B (en) | 2018-03-02 |
Family
ID=57255376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610614498.XA Expired - Fee Related CN106131535B (en) | 2016-07-29 | 2016-07-29 | Video capture method and device, video generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106131535B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109547766B (en) * | 2017-08-03 | 2020-08-14 | 杭州海康威视数字技术股份有限公司 | Panoramic image generation method and device |
CN110536076A (en) * | 2018-05-23 | 2019-12-03 | 福建天晴数码有限公司 | Method and terminal for recording Unity panoramic video |
EP3833029A4 (en) * | 2018-08-02 | 2021-09-01 | Sony Group Corporation | Image processing apparatus and method |
CN112738010B (en) * | 2019-10-28 | 2023-08-22 | 阿里巴巴集团控股有限公司 | Data interaction method and system, interaction terminal and readable storage medium |
CN110910504A (en) * | 2019-11-28 | 2020-03-24 | 北京世纪高通科技有限公司 | Method and device for determining three-dimensional model of region |
CN112465939B (en) * | 2020-11-25 | 2023-01-24 | 上海哔哩哔哩科技有限公司 | Panoramic video rendering method and system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100443552B1 (en) * | 2002-11-18 | 2004-08-09 | 한국전자통신연구원 | System and method for embodying virtual reality |
US7499586B2 (en) * | 2005-10-04 | 2009-03-03 | Microsoft Corporation | Photographing big things |
US8009178B2 (en) * | 2007-06-29 | 2011-08-30 | Microsoft Corporation | Augmenting images for panoramic display |
JP5024410B2 (en) * | 2010-03-29 | 2012-09-12 | カシオ計算機株式会社 | 3D modeling apparatus, 3D modeling method, and program |
CN104050714B (en) * | 2014-06-03 | 2017-03-15 | 崔岩 | System and method for digital three-dimensional reconstruction of objects based on optical scanning |
CN104599305B (en) * | 2014-12-22 | 2017-07-14 | 浙江大学 | Method for producing combined two- and three-dimensional animation |
CN104851080B (en) * | 2015-05-08 | 2017-11-17 | 浙江大学 | Method for reconstructing three-dimensional PET images based on TV |
CN105678748B (en) * | 2015-12-30 | 2019-01-15 | 清华大学 | Interactive calibration method and device in a three-dimensional monitoring system based on three-dimensional reconstruction |
2016-07-29: CN application CN201610614498.XA filed; granted as CN106131535B (status: not active, Expired - Fee Related)
Similar Documents

Publication | Title
---|---
CN106131535B (en) | Video capture method and device, video generation method and device
CN107392984A (en) | Method and computing device for generating animation based on face images
CN111247562A (en) | Point cloud compression using hybrid transforms
CN108227916A (en) | Method and apparatus for determining points of interest in immersive content
EP3992919A1 (en) | Three-dimensional facial model generation method and apparatus, device, and medium
CN109675315A (en) | Generation method, device, processor and terminal for an avatar model
CN110335343A (en) | Human body three-dimensional reconstruction method and device based on a single RGBD image
CN108352082B (en) | Techniques to crowd 3D objects into a plane
US20160217610A1 | Method, apparatus and terminal for reconstructing a three-dimensional object
GB2448233A | Producing image data representing retail packages
TWI502546B | System, method, and computer program product for extruding a model through a two-dimensional scene
US9754398B1 | Animation curve reduction for mobile application user interface objects
CN112138386A (en) | Volume rendering method and device, storage medium and computer equipment
CN112288665A (en) | Image fusion method and device, storage medium and electronic equipment
WO2012097556A1 | Three-dimensional (3D) icon processing method, device and mobile terminal
EP3533218A1 | Simulating depth of field
CN111583372B (en) | Virtual character facial expression generation method and device, storage medium and electronic equipment
CN104519289B (en) | Unpacking method, device and system for packed picture frames
CN106127862A (en) | Graphics processing method and apparatus
CN111710020A (en) | Animation rendering method and device, and storage medium
CN107077746A (en) | System, method and computer program product for network transmission and automatic optimization of 3D texture models for real-time rendering
US20220058833A1 | Complexity reduction of video-based point cloud compression encoding using grid-based segmentation
CN110751026B (en) | Video processing method and related device
CN109697748A (en) | Model compression processing method, model texture processing method and device, storage medium
KR102417959B1 | Apparatus and method for providing three-dimensional volumetric contents
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
2020-05-11 | TR01 | Transfer of patent right | Patentee after: Alibaba (China) Co., Ltd., Room 508, 5th floor, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang Province, 310052. Patentee before: Transmission network technology (Shanghai) Co., Ltd., Room 2, Floor 02, Building, No. 555 Dongchuan Road, Minhang District, Shanghai, 200241.
2020-07-29 | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2018-03-02