CN106131535A - Video capture method and device, video generation method and device - Google Patents

Video capture method and device, video generation method and device

Info

Publication number
CN106131535A
Authority
CN
China
Prior art keywords
video
frame
data
dimensional
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610614498.XA
Other languages
Chinese (zh)
Other versions
CN106131535B (en)
Inventor
王涛
柯金杰
赵刚
顾思斌
潘柏宇
王冀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Chuanxian Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chuanxian Network Technology Shanghai Co Ltd
Priority to CN201610614498.XA
Publication of CN106131535A
Application granted
Publication of CN106131535B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/282: Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a video capture method and device and a video generation method and device, including: obtaining the video frames of each shooting direction needed to build a multi-view video; mapping each video frame to a two-dimensional space to obtain its two-dimensional model data in the two-dimensional space; mapping each video frame that has been mapped to the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain its three-dimensional model data in the three-dimensional space; and sending the video frame data and the model data of the video frames to a server to build the multi-view video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data. Based on the obtained video frame data and the two-dimensional model data, three-dimensional model data and three-dimensional model associated with each video frame, the present invention can generate panoramic videos, arbitrary non-panoramic videos, or other continuous or discontinuous multi-view videos, while reducing distortion and deformation to the greatest extent and thereby improving the user experience.

Description

Video capture method and device, video generation method and device
Technical field
The present invention relates to the field of video technology, and in particular to a video capture method and device and a video generation method and device.
Background
The collection and transmission of video information is an important direction in the development of information technology. Because a conventional video is taken by a single camera with a limited field of view, it can capture only one part of a scene; different users cannot simultaneously watch the angles of the scene they are each interested in, so conventional video cannot meet users' individual needs. Panoramic video, which has become popular in recent years, overcomes these drawbacks of conventional video.
Panoramic video technology involves fields such as computer graphics, human-computer interaction, sensing technology and artificial intelligence. It uses computational methods to generate realistic three-dimensional visual and auditory sensations and provides users with simulated vision, hearing and other senses, allowing users to "project" themselves into the virtual environment through various devices, as if they were immersively observing the scene inside a three-dimensional space. Panoramic video can be regarded as a special case of multi-view video, namely a video that includes all viewing angles over 360° horizontally and 360° vertically.
At present, the video capture side usually only supplies the collected two-dimensional video data to the video generation side, and the video generation side builds a three-dimensional panoramic video from this two-dimensional video data using a specific model (for example a sphere, a regular hexahedron or a cone) for users to watch. However, the two-dimensional video data provided by the capture side is usually suitable only for generating panoramic video, and the generation side cannot conveniently and flexibly generate multi-view videos with various discontinuous viewing angles from the provided two-dimensional video data as required. Moreover, because different video generation sides use different models, the two-dimensional video data collected by the capture side does not necessarily fit the model used by the generation side, which can make the displayed video unsatisfactory (for example distorted or deformed). Furthermore, the prior art can usually only generate panoramic video based on existing three-dimensional models such as spheres or regular hexahedra; if another type of model is customized, the display effect is even more likely to degrade because the two-dimensional video data cannot adapt to that three-dimensional model.
Summary of the invention
Technical problem
In view of this, the present invention proposes a video capture method and device and a video generation method and device that can freely and flexibly generate multi-view videos with continuous or discontinuous viewing angles and improve the display effect of the video, thereby improving the user experience.
Solution
In one aspect, a video capture method is proposed, including: obtaining the video frames of each shooting direction needed to build a multi-view video; mapping each video frame to a two-dimensional space to obtain its two-dimensional model data in the two-dimensional space; mapping each video frame that has been mapped to the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain its three-dimensional model data in the three-dimensional space; and sending the video frame data and model data of the video frames to a server for building the multi-view video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
In another aspect, a video generation method is proposed, including: obtaining the video frame data and model data of the video frames of each shooting direction needed to build a multi-view video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space; performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model; and generating the multi-view video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
In yet another aspect, a video capture device is proposed, including: an obtaining unit for obtaining the video frames of each shooting direction needed to build a multi-view video; a first mapping unit for mapping each video frame to a two-dimensional space to obtain its two-dimensional model data in the two-dimensional space; a second mapping unit for mapping each video frame in the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain its three-dimensional model data in the three-dimensional space; and a sending unit for sending the video frame data and model data of the video frames to a server for building the multi-view video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
In a further aspect, a video generation device is proposed, including: an obtaining unit for obtaining the video frame data and model data of the video frames of each shooting direction needed to build a multi-view video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space; a modeling unit for performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model; and a generating unit for generating the multi-view video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
Beneficial effect
According to the aspects of the present invention, the video frames of each shooting direction are obtained, and the video frame data, the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model, and the type of the three-dimensional model are sent together to the server, so that the video generation side can build the multi-view video. Because the provided video frame data is associated with model data such as its two-dimensional model data, its three-dimensional model data and the type of the three-dimensional model, the multi-view video can be generated in a very flexible way and is not limited to panoramic video; it may be a multi-view video with arbitrary continuous or discontinuous viewing angles. Furthermore, the collected video frame data can be properly applied to a three-dimensional model of any type (including customized types), and the generation of the multi-view video can rely on the model data corresponding to the video frame data, so distortion and deformation are reduced to the greatest extent and the user experience is improved.
Other features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are included in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present invention together with the description, and serve to explain the principles of the present invention.
Fig. 1 is a flowchart of a video capture method according to an embodiment of the present invention.
Fig. 2 is another flowchart of the video capture method according to an embodiment of the present invention.
Fig. 3 is a further flowchart of the video capture method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of alternative predetermined three-dimensional models.
Fig. 5 is a flowchart of a video generation method according to an embodiment of the present invention.
Fig. 6 is a block diagram of a video capture device according to an embodiment of the present invention.
Fig. 7 is a block diagram of a video generation device according to an embodiment of the present invention.
Fig. 8 is a block diagram of video generation equipment according to an embodiment of the present invention.
Detailed description of the invention
Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" used here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" should not be construed as preferred or superior to other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the present invention. Those skilled in the art will understand that the present invention can be implemented without some of these details. In some instances, devices, means, elements and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the present invention.
Embodiment 1
Fig. 1 is a flowchart of a video capture method according to an embodiment of the present invention. As shown in Fig. 1, the method mainly includes:
Step 101: obtain the video frames of each shooting direction needed to build a multi-view video;
Step 102: map each video frame to a two-dimensional space to obtain its two-dimensional model data in the two-dimensional space;
Step 103: map each video frame that has been mapped to the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model, to obtain its three-dimensional model data in the three-dimensional space; and
Step 104: send the video frame data and the model data of the video frames to a server for building the multi-view video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
According to the method of this embodiment, the video frames of each shooting direction are obtained, and the video frame data, the two-dimensional model data obtained by mapping each video frame to the two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into the three-dimensional space according to the predetermined three-dimensional model, and the type of the three-dimensional model are sent together to the server, so that the video generation side and others can build the multi-view video. Because the provided video frame data is associated with model data such as its two-dimensional model data, three-dimensional model data and the type of the three-dimensional model, the multi-view video can be generated very flexibly and is not limited to panoramic video; it may have arbitrary continuous or discontinuous viewing angles. Furthermore, the collected video frame data can be properly applied to a three-dimensional model of any type (including customized types), and the generation of the multi-view video can rely on the model data corresponding to the video frame data, so distortion and deformation are reduced to the greatest extent and the user experience is improved.
The multi-view video built by this embodiment may also be called a "space video": any cut plane of real space can be combined arbitrarily according to free rules and models, and the combinations can even span the time dimension, to produce the desired video picture.
The various possible implementations of this embodiment are illustrated below with some concrete examples. These examples are only illustrative and explanatory and are not intended to limit the present invention.
In one example, the video frames of each shooting direction needed to build the multi-view video can be obtained from video capture equipment. The capture equipment may be any device, or combination of devices, that can collect the video frames of each shooting direction (each viewing angle) needed to build the multi-view video, such as cameras or sensors. There may be one or more capture devices, and they can be laid out at the desired positions in a distributed manner. In another example, the video frames of each shooting direction needed to build the multi-view video can also be received from a third party. Those skilled in the art can obtain the video frames of each shooting direction needed to build the multi-view video by known prior-art means; the present invention does not limit this.
In one example, the shooting directions may be continuous or discontinuous in real space.
In one example, the video frame data of the collected video frames can be compressed in a suitable way to reduce the amount of data.
In one example, each video frame can be mapped to the two-dimensional space based on its shooting direction. For example, two video frames taken at the same time point from discontinuous shooting directions can be mapped to correspondingly discontinuous positions in the two-dimensional space, so that a multi-view video with discontinuous viewing angles can be generated. Of course, those skilled in the art can also set the relationship between the shooting direction of a video frame and its mapping position in the two-dimensional space as required, including splicing video frames with discontinuous shooting directions together in the two-dimensional space so that their mapping positions in the two-dimensional space are continuous, in order to obtain the desired composite video image.
In one example, the two-dimensional model data may include the coordinates of feature points of the video frame in the two-dimensional space, the three-dimensional model data may include the coordinates of feature points of the video frame in the three-dimensional space, and the type of the three-dimensional model may be a conventional type such as a sphere or a regular hexahedron. If the three-dimensional model is customized, its type can be represented by the model parameters of the customized three-dimensional model.
In one example, as shown in Fig. 2, step 102 may include:
Step 201: in the two-dimensional space, divide each video frame into multiple polygonal units.
Specifically, each obtained video frame can be divided in the two-dimensional space into multiple polygonal units. In other words, each divided video frame is spliced together from multiple polygonal units. The units a video frame is divided into may be polygons of the same type, for example all triangles or all of some other polygon, or they may be polygons of different types, for example a mixture of triangles and other polygons; the present invention does not limit this. In one example, each video frame can be compressed after the division.
Dividing each video frame into multiple polygonal units makes it convenient to obtain the two-dimensional model data and the three-dimensional model data, which in turn makes it easier to build the multi-view video according to the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model. Moreover, by cutting a video frame into smaller units, the two-dimensional image can be mapped to three dimensions unit by unit, which adapts better to different three-dimensional models (including customized three-dimensional models), further reduces image distortion and improves the imaging effect.
Step 202: obtain the number of units in each video frame and the number and positions of the vertices of each unit in the two-dimensional space as the two-dimensional model data.
For example, a two-dimensional space with x ∈ [0, 1], y ∈ [0, 1] can be established. After each video frame is divided into multiple polygonal units, the units are mapped into this two-dimensional space (or the division into units is performed after the video frame has been mapped to the two-dimensional space), and the position of each vertex of each unit in the two-dimensional space is determined; this position can be represented by the vertex's coordinates in the two-dimensional space. The number of units each video frame is divided into, the number of vertices of each unit and the positions (for example coordinates) of those vertices in the two-dimensional space can then be used as the two-dimensional model data.
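To make the two-dimensional model data concrete, the following is a minimal Python sketch of one possible division, assuming each frame is split along a uniform grid into triangular units; the function name, grid size and data layout are illustrative assumptions and are not taken from the patent.
```python
import numpy as np

def split_frame_into_units(frame, rows=4, cols=4):
    """Split one video frame into triangular units and record, for each unit,
    its vertex positions in the normalized 2D space x, y in [0, 1]."""
    h, w = frame.shape[:2]
    units = []          # per-unit pixel blocks (the per-unit video frame data)
    vertices_2d = []    # per-unit vertex coordinates in [0, 1] x [0, 1]
    for r in range(rows):
        for c in range(cols):
            # Pixel block covered by this grid cell
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = frame[y0:y1, x0:x1]
            # Normalized corner coordinates of the cell
            u0, u1 = c / cols, (c + 1) / cols
            v0, v1 = r / rows, (r + 1) / rows
            # Split the rectangular cell into two triangular units
            for tri in ([(u0, v0), (u1, v0), (u1, v1)],
                        [(u0, v0), (u1, v1), (u0, v1)]):
                units.append(block)
                vertices_2d.append(tri)
    two_d_model_data = {
        "unit_count": len(vertices_2d),
        "vertex_counts": [len(v) for v in vertices_2d],
        "vertex_positions": vertices_2d,
    }
    return units, two_d_model_data

frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder frame
units, model_2d = split_frame_into_units(frame)
print(model_2d["unit_count"])                       # 32 triangular units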
Mapping the video frame, or the units obtained by dividing it, into the two-dimensional space can be realized by any mapping method known to those skilled in the art; the present invention does not limit this.
In one example, as shown in Fig. 3, step 103 may include:
Step 301: for each video frame, map each of the units into the three-dimensional space according to the predetermined three-dimensional model;
Step 302: obtain the number and positions of the vertices of each unit in the three-dimensional space as the three-dimensional model data.
For example, a three-dimensional space with x ∈ [-1, 1], y ∈ [-1, 1], z ∈ [-1, 1] can be established. For each video frame processed in step 102, each unit of the video frame can be mapped into this three-dimensional space according to the predetermined three-dimensional model, and the position of each vertex of each unit in the three-dimensional space is determined; this position can be represented by the vertex's coordinates in the three-dimensional space. The number of vertices of each unit and the positions (for example coordinates) of those vertices in the three-dimensional space can be used as the three-dimensional model data. The predetermined three-dimensional model may be a sphere, a regular hexahedron, a cone or the like, or may be a customized three-dimensional model of another type. In one example, for a customized three-dimensional model, the model data may also include the model parameters of the customized three-dimensional model, which helps the video generation side reconstruct the multi-view video.
Mapping the video frames that have been mapped to the two-dimensional space, or the units obtained by dividing them, into the three-dimensional space can be realized by any mapping method known to those skilled in the art; the present invention does not limit this.
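As one concrete illustration of such a mapping, the sketch below maps the 2D vertex coordinates produced above onto a spherical predetermined model, treating the normalized 2D coordinates as an equirectangular-style parameterization; this is only one assumed mapping, and other model types (regular hexahedron, cone, custom models) would use their own formulas. The dictionary layout follows the earlier capture-side sketch and is hypothetical.
```python
import math

def map_vertex_to_sphere(u, v):
    """Map a 2D vertex (u, v) in [0,1] x [0,1] onto a unit sphere, treating u as
    longitude and v as latitude (an equirectangular-style parameterization)."""
    lon = 2.0 * math.pi * u            # 0 .. 2*pi
    lat = math.pi * (v - 0.5)          # -pi/2 .. pi/2
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)                   # each component lies in [-1, 1]

def build_3d_model_data(two_d_model_data, model_type="sphere"):
    """Derive the 3D model data from the 2D model data for a spherical model."""
    vertices_3d = [[map_vertex_to_sphere(u, v) for (u, v) in unit]
                   for unit in two_d_model_data["vertex_positions"]]
    return {
        "model_type": model_type,
        "vertex_counts": two_d_model_data["vertex_counts"],
        "vertex_positions": vertices_3d,
    }

# Toy usage with a single triangular unit
model_3d = build_3d_model_data({
    "unit_count": 1,
    "vertex_counts": [3],
    "vertex_positions": [[(0.0, 0.0), (0.5, 0.0), (0.5, 0.5)]],
})
```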
Fig. 4 shows a schematic diagram of alternative predetermined three-dimensional models; the types of the predetermined three-dimensional model include but are not limited to the model examples shown in Fig. 4.
In one example, in step 104 the video frame data and model data of the video frames can be sent to a server (for example a transcoding server) so that the multi-view video can be built subsequently. The video frame data may be video frame data that has been compressed and encoded in a conventional way, and the model data may include, for example, the two-dimensional model data from step 202, the type of the three-dimensional model from step 301 and the three-dimensional model data from step 302. The data may be sent over a wired or wireless connection; the present invention does not limit this.
The server can package the received video frame data and model data and send them to the video generation side. The packaged information may include the two-dimensional model data and the three-dimensional model data described above. This packaged information can be transmitted in the protocol data/application layer, or it can be stored in the video compression layers, for example a data transport layer or a coding layer.
In one example, the video frame data may include the video frame data of the units obtained after the division in step 201.
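A rough sketch of how the per-frame payload sent to the server might be organized is shown below, assuming JSON for the model data and zlib as a stand-in for real video compression; the actual packaging format, protocol layers and field names are not specified by the patent and are assumptions here.
```python
import json
import zlib

def package_for_server(frame_bytes, two_d_model_data, three_d_model_data, model_type):
    """Bundle (compressed) video frame data with its model data so that the
    generation side can associate each unit with its 2D and 3D vertices."""
    model_info = {
        "model_type": model_type,                 # e.g. "sphere" or a custom type
        "two_d_model_data": two_d_model_data,
        "three_d_model_data": three_d_model_data,
    }
    header = json.dumps(model_info).encode("utf-8")
    body = zlib.compress(frame_bytes)             # stand-in for real video coding
    return header, body

# Toy usage with a single triangular unit
header, body = package_for_server(
    b"\x00" * 1024,
    {"unit_count": 1, "vertex_counts": [3],
     "vertex_positions": [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]]},
    {"vertex_counts": [3],
     "vertex_positions": [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]]},
    "sphere",
)
```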
In one example, to achieve special video effects, each video frame can be suitably processed, for example cropped or stretched. Taking stretching as an example, each video frame can be stretched by any method known to those skilled in the art, such as bilinear interpolation. Such processing can be performed while the video frame or its units are being mapped to the two-dimensional space.
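For the stretching example, a small numpy sketch of bilinear interpolation is given below, assuming an H x W x C image array; a real implementation would typically use an existing image-processing library instead.
```python
import numpy as np

def stretch_bilinear(img, out_h, out_w):
    """Stretch an H x W x C image to (out_h, out_w) using bilinear interpolation."""
    in_h, in_w = img.shape[:2]
    ys = np.linspace(0.0, in_h - 1.0, out_h)
    xs = np.linspace(0.0, in_w - 1.0, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :, None]          # horizontal interpolation weights
    img = img.astype(np.float64)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    out = top * (1 - wy) + bottom * wy
    return out.astype(np.uint8)

small = np.random.randint(0, 256, (90, 160, 3), dtype=np.uint8)
stretched = stretch_bilinear(small, 180, 320)   # stretch to twice the size
```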
In one example, once the two-dimensional model data and the three-dimensional model data have been determined, that is, once the number of units in each video frame, the number and positions of the vertices of each unit in the two-dimensional space, and the number and positions of the vertices of each unit in the three-dimensional space have all been determined, the determined two-dimensional model data and three-dimensional model data can be compressed together with the corresponding video frame data before sending. This reduces the amount of transmitted data and ensures that the video frame data matches the related data in the model data, which helps improve the quality of the multi-view video built later.
According to the above examples, the video capture side can choose suitable ways to compress, stretch, crop or otherwise specially process the video according to the characteristics of the collected video frames, map the video frames in two and three dimensions based on suitable models and mapping methods, and pass the information related to this processing and mapping (for example the model data) to the video generation side, so that the video generation side can generate the multi-view video on this basis. This makes the generation of the multi-view video more flexible, more adaptable and better in imaging effect, makes it easy to establish a unified data transmission format between the video capture side and the video generation side, and also reduces the processing load of the video generation side.
Embodiment 2
Fig. 5 is a flowchart of a video generation method according to an embodiment of the present invention. As shown in Fig. 5, the method mainly includes:
Step 501: obtain the video frame data and model data of the video frames of each shooting direction needed to build a multi-view video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space;
Step 502: perform three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model;
Step 503: generate the multi-view video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
According to the method of this embodiment, the video frame data and model data of the video frames of each shooting direction needed to build the multi-view video are obtained, the three-dimensional model is rebuilt according to the model data, and the multi-view video is generated according to the model data, the video frame data and the rebuilt three-dimensional model. Because the model data includes the two-dimensional model data obtained by mapping each video frame to the two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into the three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space, the generation of the multi-view video can rely on this model data, which improves the adaptability of the generation process to the provided video data and restores the video frame data to the greatest extent, so a multi-view video with a better display effect is obtained and the user experience is improved. In addition, continuous or discontinuous multi-view videos can be generated conveniently with this model data, so the way of generation is more flexible.
In one example, generating the multi-view video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model can be realized by mapping (for example texture mapping): the rebuilt three-dimensional model can be textured according to the two-dimensional model data and the video frame data to generate the multi-view video.
The descriptions of the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model can be found in Embodiment 1 and are not repeated here.
In one example, the video frame includes multiple polygonal units; the two-dimensional model data includes the number of units in each video frame and the vertex positions of each unit in the two-dimensional space; and the three-dimensional model data includes the number and positions of the vertices of each unit in the three-dimensional space. The way the video frames are divided into units and the two-dimensional and three-dimensional model data are similar to Embodiment 1 and are not repeated here.
In one example, the two-dimensional model data can correspond to the shooting directions of the video frames, so that a video with continuous or discontinuous viewing angles corresponding to the shooting directions can be built easily. See Embodiment 1 for the correspondence.
In one example, step 502 may include: performing three-dimensional modeling on each unit according to the type of the three-dimensional model and the number and positions of the vertices of each unit in the three-dimensional space, to obtain a rebuilt three-dimensional model for each unit.
Specifically, a three-dimensional space with x ∈ [-1, 1], y ∈ [-1, 1], z ∈ [-1, 1] can first be established, and a three-dimensional model consistent with the type of the received three-dimensional model (including its model parameters if it is a customized three-dimensional model) is re-established in this three-dimensional space according to the received number and positions (for example coordinates) of the vertices of each unit in the three-dimensional space; this serves as the rebuilt three-dimensional model.
In one example, step 503 may include: for each unit, texturing the rebuilt three-dimensional model of the unit according to the number and positions of the unit's vertices in the two-dimensional space and the unit's video frame data, to obtain the multi-view video.
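A simplified sketch of steps 502-503 is shown below, assuming the model data layout from the earlier capture-side sketches: the rebuilt model is represented as a list of textured triangles, where each unit's 3D vertex positions give the geometry and its 2D vertex positions serve as texture (UV) coordinates. An actual renderer (for example an OpenGL texture-mapping pipeline) would then draw these textured units and project them according to the desired viewing angle; the function and field names here are illustrative assumptions.
```python
def rebuild_textured_model(unit_textures, two_d_model_data, three_d_model_data, model_type):
    """Rebuild the 3D model from the received vertex data and attach each unit's
    frame pixels as its texture, using the 2D vertex positions as UV coordinates."""
    units = []
    for i in range(two_d_model_data["unit_count"]):
        units.append({
            "geometry": three_d_model_data["vertex_positions"][i],  # triangle in [-1,1]^3
            "uv": two_d_model_data["vertex_positions"][i],          # triangle in [0,1]^2
            "texture": unit_textures[i],                            # the unit's pixel block
        })
    return {"model_type": model_type, "units": units}

rebuilt = rebuild_textured_model(
    unit_textures=["pixels-of-unit-0"],      # placeholder for decoded unit pixels
    two_d_model_data={"unit_count": 1, "vertex_counts": [3],
                      "vertex_positions": [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]]},
    three_d_model_data={"vertex_counts": [3],
                        "vertex_positions": [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]]},
    model_type="sphere",
)
```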
By rebuilding a three-dimensional model consistent with the type of the three-dimensional model in the model data and texturing it with the video frame data based on the two-dimensional model data and three-dimensional model data of the video frames, distortion and deformation are reduced to the greatest extent. Moreover, the various optimizations that the video capture side applied to the video data during mapping and processing, such as stretching and cropping, are also reflected in the rebuilt multi-view video, so the display effect of the multi-view video is enhanced and the user experience is improved.
In one example, before the texturing, part or all of the video data in each unit of each video frame can be processed as required (for example cropped or stretched). Taking stretching as an example, each unit can be stretched by any method known to those skilled in the art that can stretch a video frame, such as bilinear interpolation.
In one example, the video generation side can freely and flexibly choose any video frames, or even any units, from the received video frames as required, to generate a multi-view video with continuous or discontinuous viewing angles.
In one example, after the texturing is completed, the generated multi-view video can be shown to the user according to the principle of perspective projection, namely that the visual effect an object gives a viewer differs with depth: the same object at different depths in front of the eyes is presented with different sizes and/or angles.
In one example, the generated multi-view video can be superimposed on other videos or images to generate the video picture finally presented to the user. For example, the picture to be shown may consist of a fixed background picture with video pictures of different viewing angles at certain positions, such as a static background picture in which two discontinuous positions each need to show a video picture of a corresponding viewing angle. In that case, the video pictures of the different viewing angles can be generated at these two discontinuous positions based on the received video data and model data and superimposed on the static background picture. Because the amount of data of a static picture is far smaller than that of video frames, this superposition can effectively reduce the bit rate without affecting the display effect of the generated multi-view video, thereby improving the user experience.
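A minimal numpy sketch of this superposition follows, assuming the generated view pictures have already been rendered as rectangular image patches that are simply pasted onto the static background at their target positions; real systems would also handle timing and blending, and the positions used here are made up for illustration.
```python
import numpy as np

def composite_onto_background(background, patches):
    """Overlay generated view pictures onto a static background picture.
    `patches` is a list of (top, left, image) entries giving where each
    generated view should appear on the background."""
    out = background.copy()
    for top, left, img in patches:
        h, w = img.shape[:2]
        out[top:top + h, left:left + w] = img
    return out

bg = np.zeros((720, 1280, 3), dtype=np.uint8)                 # static background
view_a = np.full((200, 300, 3), 255, dtype=np.uint8)          # view at position A
view_b = np.full((200, 300, 3), 128, dtype=np.uint8)          # view at position B
frame = composite_onto_background(bg, [(100, 100, view_a), (100, 800, view_b)])
```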
Embodiment 3
Fig. 6 is a block diagram of a video capture device according to an embodiment of the present invention. As shown in Fig. 6, the device can be used to perform the steps of the method in Embodiment 1 and mainly includes:
an obtaining unit 601 for obtaining the video frames of each shooting direction needed to build a multi-view video;
a first mapping unit 602 for mapping each video frame to a two-dimensional space to obtain its two-dimensional model data in the two-dimensional space;
a second mapping unit 603 for mapping each video frame that has been mapped to the two-dimensional space into a three-dimensional space according to a predetermined three-dimensional model to obtain its three-dimensional model data in the three-dimensional space; and
a sending unit 604 for sending the video frame data and model data of the video frames to a server for building the multi-view video, where the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
According to the video capture device of this embodiment, the video frames of each shooting direction are obtained, and the video frame data, the two-dimensional model data obtained by mapping each video frame to the two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into the three-dimensional space according to the predetermined three-dimensional model, and the type of the three-dimensional model are sent together to the server, so that the video generation side and others can build the multi-view video. Because the provided video frame data is associated with model data such as its two-dimensional model data, three-dimensional model data and the type of the three-dimensional model, the multi-view video can be generated very flexibly and is not limited to panoramic video; it may have arbitrary continuous or discontinuous viewing angles. Furthermore, the collected video frame data can be properly applied to a three-dimensional model of any type (including customized types), and the generation of the multi-view video can rely on the model data corresponding to the video frame data, so distortion and deformation are reduced to the greatest extent and the user experience is improved.
In one example, the obtaining unit 601 can be any component known to those skilled in the art that can obtain the video frames of each shooting direction needed to build the multi-view video, for example capture equipment (such as cameras or sensors), or any device or combination of devices that can collect these video frames. There may be one or more capture devices, and they can be distributed at the desired positions. In another embodiment, the obtaining unit 601 can also be a component that receives the video frames of each shooting direction from a third party; the present invention does not limit this.
In one example, the shooting directions may be continuous or discontinuous in real space.
In one example, the video capture device may also include a compression unit, which can compress the video frame data of the collected video frames in a suitable way to reduce the amount of data.
In one example, the first mapping unit 602 can map each video frame to the two-dimensional space based on its shooting direction. For example, two video frames taken at the same time point from discontinuous shooting directions can be mapped to correspondingly discontinuous positions in the two-dimensional space, so that a multi-view video with discontinuous viewing angles can be generated. Of course, those skilled in the art can also set the relationship between the shooting direction of a video frame and its mapping position in the two-dimensional space as required, including splicing video frames with discontinuous shooting directions together in the two-dimensional space so that their mapping positions in the two-dimensional space are continuous, in order to obtain the desired composite video image.
In one example, the two-dimensional model data may include the coordinates of feature points of the video frame in the two-dimensional space, the three-dimensional model data may include the coordinates of feature points of the video frame in the three-dimensional space, and the type of the three-dimensional model may be a conventional type such as a sphere or a regular hexahedron. If the three-dimensional model is customized, its type can be represented by the model parameters of the customized three-dimensional model.
In one example, the first mapping unit 602 can map each video frame to the two-dimensional space and obtain its two-dimensional model data in the two-dimensional space in the following way.
First, in the two-dimensional space, each video frame is divided into multiple polygonal units. Specifically, each obtained video frame can be divided in the two-dimensional space into multiple polygonal units; in other words, each divided video frame is spliced together from multiple polygonal units. The units a video frame is divided into may be polygons of the same type, for example all triangles or all of some other polygon, or they may be polygons of different types, for example a mixture of triangles and other polygons; the present invention does not limit this. In one example, each video frame can be compressed after the division.
Dividing each video frame into multiple polygonal units makes it convenient to obtain the two-dimensional model data and the three-dimensional model data, which in turn makes it easier to build the multi-view video according to the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model. Moreover, by cutting a video frame into smaller units, the two-dimensional image can be mapped to three dimensions unit by unit, which adapts better to different three-dimensional models (including customized three-dimensional models), further reduces image distortion and improves the imaging effect.
Next, the number of units in each video frame and the number and positions of the vertices of each unit in the two-dimensional space are obtained as the two-dimensional model data.
For example, a two-dimensional space with x ∈ [0, 1], y ∈ [0, 1] can be established. After each video frame is divided into multiple polygonal units, the units are mapped into this two-dimensional space (or the division into units is performed after the video frame has been mapped to the two-dimensional space), and the position of each vertex of each unit in the two-dimensional space is determined; this position can be represented by the vertex's coordinates in the two-dimensional space. The number of units each video frame is divided into, the number of vertices of each unit and the positions (for example coordinates) of those vertices in the two-dimensional space can then be used as the two-dimensional model data.
The first mapping unit 602 can be any component known to those skilled in the art that can map each video frame to the two-dimensional space, for example a general-purpose processor combined with logic instructions; the present invention does not limit this.
In one example, the second mapping unit 603 can map each video frame in the two-dimensional space into the three-dimensional space according to the predetermined three-dimensional model and obtain its three-dimensional model data in the three-dimensional space in the following way.
For example, for each video frame, each unit is mapped into the three-dimensional space according to the predetermined three-dimensional model, and the number and positions of the vertices of each unit in the three-dimensional space are obtained as the three-dimensional model data. Specifically, a three-dimensional space with x ∈ [-1, 1], y ∈ [-1, 1], z ∈ [-1, 1] can be established. For each video frame that has been mapped to the two-dimensional space, each unit of the video frame can be mapped into this three-dimensional space according to the predetermined three-dimensional model, and the position of each vertex of each unit in the three-dimensional space is determined; this position can be represented by the vertex's coordinates in the three-dimensional space. The number of vertices of each unit and the positions (for example coordinates) of those vertices in the three-dimensional space can be used as the three-dimensional model data. The predetermined three-dimensional model may be a sphere, a regular hexahedron, a cone or the like, or may be a customized three-dimensional model of another type. In one example, for a customized three-dimensional model, the model data may also include the model parameters of the customized three-dimensional model, which helps the video generation side reconstruct the multi-view video.
The second mapping unit 603 can be any component known to those skilled in the art that can map each video frame in the two-dimensional space into the three-dimensional space according to the predetermined three-dimensional model, for example a general-purpose processor combined with logic instructions; the present invention does not limit this.
In one example, the sending unit 604 can send the video frame data and model data of the video frames to a server (for example a transcoding server) so that the multi-view video can be built subsequently. The video frame data may be video frame data that has been compressed and encoded in a conventional way, and the model data may include, for example, the two-dimensional model data obtained by the first mapping unit 602, the type of the three-dimensional model used by the second mapping unit 603 and the three-dimensional model data obtained by the second mapping unit 603. The data may be sent over a wired or wireless connection; the present invention does not limit this.
The sending unit 604 can be any component known to those skilled in the art that can send the video frame data and model data of the video frames to the server, for example a general-purpose transmission hardware module combined with the related logic module; the present invention does not limit this.
The server can package the received video frame data and model data and send them to the video generation side. The packaged information may include the two-dimensional model data and the three-dimensional model data described above. This packaged information can be transmitted in the protocol data/application layer, or it can be stored in the video compression layers, for example a data transport layer or a coding layer.
In one example, the video frame data may include the video frame data of the units obtained after the division.
In one example, the video capture device may also include a processing unit. To achieve special video effects, the processing unit can suitably process each video frame, for example crop or stretch it. Taking stretching as an example, for each video frame, the processing unit can stretch the video frame by a method such as bilinear interpolation. Such processing can be performed while the video frame or its units are being mapped to the two-dimensional space. The processing unit can be any component known to those skilled in the art that can crop, stretch or otherwise process each video frame, for example a general-purpose processor combined with logic instructions; the present invention does not limit this.
In one example, the video capture device may also include a compression unit. Once the two-dimensional model data and the three-dimensional model data have been determined, that is, once the number of units in each video frame, the number and positions of the vertices of each unit in the two-dimensional space, and the number and positions of the vertices of each unit in the three-dimensional space have all been determined, the determined two-dimensional model data and three-dimensional model data can be compressed together with the corresponding video frame data before sending. This reduces the amount of transmitted data and ensures that the video frame data matches the related data in the model data, which helps improve the quality of the multi-view video built later. The compression unit can be any component known to those skilled in the art that can compress video frame data, for example a general-purpose processor combined with logic instructions; the present invention does not limit this.
According to the above examples, the video capture side can choose suitable ways to compress, stretch, crop or otherwise specially process the video according to the characteristics of the collected video frames, map the video frames in two and three dimensions based on suitable models and mapping methods, and pass the information related to this processing and mapping (for example the model data) to the video generation side, so that the video generation side can generate the multi-view video on this basis. This makes the generation of the multi-view video more flexible, more adaptable and better in imaging effect, makes it easy to establish a unified data transmission format between the video capture side and the video generation side, and also reduces the processing load of the video generation side.
Embodiment 4
Fig. 7 is a block diagram of a video generation device according to an embodiment of the present invention. As shown in Fig. 7, the device can be used to perform the steps of the method in Embodiment 2 and mainly includes:
an obtaining unit 701 for obtaining the video frame data and model data of the video frames of each shooting direction needed to build a multi-view video, where the model data includes the two-dimensional model data obtained by mapping each video frame to a two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space;
a modeling unit 702 for performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a rebuilt three-dimensional model; and
a generating unit 703 for generating the multi-view video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model.
According to the video generation device of this embodiment, the video frame data and model data of the video frames of each shooting direction needed to build the multi-view video are obtained, the three-dimensional model is rebuilt according to the model data, and the multi-view video is generated according to the model data, the video frame data and the rebuilt three-dimensional model. Because the model data includes the two-dimensional model data obtained by mapping each video frame to the two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into the three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space, the generation of the multi-view video can rely on this model data, which improves the adaptability of the generation process to the provided video data and restores the video frame data to the greatest extent, so a multi-view video with a better display effect is obtained and the user experience is improved. In addition, continuous or discontinuous multi-view videos can be generated conveniently with this model data, so the way of generation is more flexible.
The obtaining unit 701 can be any component known to those skilled in the art that can obtain the video frame data and model data of the video frames of each shooting direction needed to build the multi-view video, for example a general-purpose processor combined with logic instructions, or a dedicated hardware circuit. The model data includes the two-dimensional model data obtained by mapping each video frame to the two-dimensional space, the three-dimensional model data obtained by mapping each video frame in the two-dimensional space into the three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space.
In one example, the generating unit 703 can generate the multi-view video according to the two-dimensional model data, the video frame data and the rebuilt three-dimensional model by mapping (for example texture mapping): the rebuilt three-dimensional model can be textured according to the two-dimensional model data and the video frame data to generate the multi-view video.
The descriptions of the two-dimensional model data, the three-dimensional model data and the type of the three-dimensional model can be found in Embodiment 1 or 3 and are not repeated here.
In one example, the video frame includes multiple polygonal units; the two-dimensional model data includes the number of units in each video frame and the vertex positions of each unit in the two-dimensional space; and the three-dimensional model data includes the number and positions of the vertices of each unit in the three-dimensional space. The way the video frames are divided into units and the two-dimensional and three-dimensional model data are similar to Embodiment 1 or 3 and are not repeated here.
In one example, the two-dimensional model data can correspond to the shooting directions of the video frames, so that the generating unit 703 can conveniently generate a video with continuous or discontinuous viewing angles corresponding to the shooting directions. See Embodiment 1 or 3 for the correspondence.
In one example, the modeling unit 702 can perform three-dimensional modeling on each unit according to the type of the three-dimensional model and the number and positions of the vertices of each unit in the three-dimensional space in the following way, to obtain a rebuilt three-dimensional model for each unit.
Specifically, a three-dimensional space with x ∈ [-1, 1], y ∈ [-1, 1], z ∈ [-1, 1] can first be established, and a three-dimensional model consistent with the type of the received three-dimensional model (including its model parameters if it is a customized three-dimensional model) is re-established in this three-dimensional space according to the received number and positions (for example coordinates) of the vertices of each unit in the three-dimensional space; this serves as the rebuilt three-dimensional model.
The modeling unit 702 can be any component known to those skilled in the art that can perform three-dimensional modeling on each unit according to the type of the three-dimensional model and the number and positions of the vertices of each unit in the three-dimensional space, for example a general-purpose processor combined with logic instructions; the present invention does not limit this.
In one example, for each unit, the generating unit 703 can texture the rebuilt three-dimensional model of the unit according to the number and positions of the unit's vertices in the two-dimensional space and the unit's video frame data, to obtain the multi-view video.
The generating unit 703 can be any component known to those skilled in the art that can generate the multi-view video according to the obtained video frame data and model data, for example a general-purpose processor combined with logic instructions; the present invention does not limit this.
By rebuilding the threedimensional model consistent with the type of the threedimensional model in model data, two-dimentional mould based on frame of video Type data and three-dimensional modeling data, utilize video requency frame data that this threedimensional model is carried out stick picture disposing and obtain multi-angle video, can With the maximum phenomenon reducing distortion and deformation, and, video acquisition side is mapping and in processing procedure to video data The stretching done, the various optimizations such as cut out and process and also be able to reflect in the multi-angle video rebuild, therefore can strengthen and regard more The display effect of angle video, promotes Consumer's Experience.
In one example, video-generating device can also include processing unit, and before stick picture disposing, processing unit is permissible As required the video data partly or completely in each unit of each Frame is processed (such as cut out or stretch). As a example by stretching, those skilled in the art are used to utilize processing unit based on known any can realization, frame of video to be drawn The method stretching process realizes the stretch processing to each unit, such as bilinear interpolation etc..Processing unit can be this area Known to the skilled person each frame of video such as can be cut out or the parts of the process such as stretching, such as, can pass through general procedure Device combines logical order and realizes, and the present invention is without limitation.
In one example, the video generation device may, as required, freely and flexibly select any video frame, or even any unit, among the received video frames, and generate a multi-angle video of continuous or discontinuous viewing angles.
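Continuing the illustrative FrameRecord sketch above (again, the angle encoding is an assumption), selecting frames for a continuous sweep or for a few discontinuous viewing angles could look like this:

```python
from typing import List

def select_frames(frames: List["FrameRecord"],
                  wanted_directions: List[float],
                  tolerance: float = 1.0) -> List["FrameRecord"]:
    """Pick the received frames whose shooting direction matches any requested
    viewing angle; the request may describe a continuous sweep (0, 1, 2, ...)
    or a handful of discontinuous angles (e.g. 30 and 210)."""
    selected = []
    for direction in wanted_directions:
        for frame in frames:
            if abs(frame.shooting_direction - direction) <= tolerance:
                selected.append(frame)
    return selected
```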
In one example, after texture mapping is completed, the generated multi-angle video may be displayed to the user according to the principle of perspective projection. The principle of perspective projection simply means that an object produces different visual effects at different depths: for example, when the same object is placed at different depths in front of the human eye, the size and/or angle presented to the viewer differ.
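The depth effect described here can be illustrated with a simple pinhole projection, in which points farther from the eye are drawn smaller and closer to the image centre. A minimal sketch; the focal length is an arbitrary illustrative value.

```python
from typing import Tuple

def project_perspective(point: Tuple[float, float, float],
                        focal_length: float = 1.0) -> Tuple[float, float]:
    """Project a 3D point in eye space onto the image plane.
    The division by depth z makes the same object appear smaller
    (and nearer the centre) as its depth increases."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the eye (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# The same unit-offset point at depth 2 lands at half the offset it has at depth 1.
near = project_perspective((1.0, 0.0, 1.0))   # -> (1.0, 0.0)
far = project_perspective((1.0, 0.0, 2.0))    # -> (0.5, 0.0)
```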
In one example, the video generation device may further include a superposition unit, which superimposes the generated multi-angle video onto other videos or images to produce the video picture finally presented to the user. For example, suppose the picture to be displayed consists of video pictures of different viewing angles at certain positions on a fixed background picture, such as a static background picture in which two discontinuous positions each need to present a video picture of the corresponding viewing angle. In this case, the video pictures of the different viewing angles at these two discontinuous positions can be generated from the received video data and model data and then superimposed onto the static background picture. Because the amount of data in a static picture is far smaller than that of video frames, this superposition can effectively reduce the bit rate without affecting the display effect of the generated multi-angle video, thereby improving the user experience. The superposition unit may be any component known to those skilled in the art that can superimpose the generated multi-angle video onto other videos or images; for example, it may be implemented by a general-purpose processor in combination with logic instructions. The present invention is not limited in this respect.
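A hedged sketch of such a superposition step: pasting independently generated view patches onto a static background image held as nested RGB rows. Positions, sizes and the commented helper names are illustrative assumptions only.

```python
from typing import List, Tuple

Image = List[List[Tuple[int, int, int]]]   # rows of RGB pixels

def overlay(background: Image, patch: Image, top: int, left: int) -> Image:
    """Copy a rendered view patch onto the static background at the given
    position; pixels outside the patch are untouched, so only the patch
    areas need to carry video-rate data."""
    out = [row[:] for row in background]
    for dy, patch_row in enumerate(patch):
        for dx, pixel in enumerate(patch_row):
            y, x = top + dy, left + dx
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = pixel
    return out

# Two discontinuous view windows composited onto one background frame, e.g.:
# background = load_background(); view_a, view_b = render_views(...)   # hypothetical helpers
# frame = overlay(overlay(background, view_a, 100, 80), view_b, 100, 620)
```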
Embodiment 5
Fig. 8 shows a structural block diagram of a video processing device according to another embodiment of the present invention. The device 1100 may be a host server with computing capability, a personal computer (PC), a portable computer, a terminal, or the like. The specific embodiments of the present invention do not limit the concrete implementation of the computing node.
The device 1100 includes a processor 1110, a communications interface 1120, a memory 1130 and a bus 1140. The processor 1110, the communications interface 1120 and the memory 1130 communicate with one another through the bus 1140.
The communications interface 1120 is used to communicate with network devices, which include, for example, a virtual machine management center and shared storage.
The processor 1110 is used to execute a program. The processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 1130 is used to store files. The memory 1130 may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one disk memory. The memory 1130 may also be a memory array. The memory 1130 may also be divided into blocks, and the blocks may be combined into a virtual volume according to certain rules.
In a possible embodiment, the above program may be program code including computer operation instructions, and may specifically be used to implement the method described in Embodiment 1 or 2.
Those of ordinary skill in the art will appreciate that the exemplary components and algorithm steps described in the embodiments herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different means to implement the described functions for each specific application, but such implementations should not be regarded as going beyond the scope of the present invention.
If the described functions are implemented in the form of computer software and sold or used as an independent product, it may to some extent be considered that all or part of the technical solution of the present invention (for example, the part contributing to the prior art) is embodied in the form of a computer software product. Such a computer software product is usually stored in a computer-readable non-volatile storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the devices of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.

Claims (24)

1. A video capture method, comprising:
obtaining video frames of each shooting direction required for constructing a multi-angle video;
mapping each video frame to a two-dimensional space to obtain two-dimensional model data of each video frame in the two-dimensional space;
mapping each video frame that has been mapped to the two-dimensional space to a three-dimensional space according to a predetermined three-dimensional model, to obtain three-dimensional model data of each video frame in the three-dimensional space; and
sending video frame data and model data of the video frames to a server to construct the multi-angle video, wherein the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
2. The method according to claim 1, wherein mapping each video frame to the two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space comprises:
dividing, in the two-dimensional space, each video frame into a plurality of polygonal units;
obtaining the number of units in each video frame and the vertex count and vertex positions of each unit in the two-dimensional space as the two-dimensional model data.
3. The method according to claim 2, wherein mapping each video frame that has been mapped to the two-dimensional space to the three-dimensional space according to the predetermined three-dimensional model, to obtain the three-dimensional model data of each video frame in the three-dimensional space, comprises:
for each video frame, mapping each of the units to the three-dimensional space according to the predetermined three-dimensional model;
obtaining the vertex count and vertex positions of each of the units in the three-dimensional space as the three-dimensional model data.
4. The method according to claim 2, wherein the video frame data includes video frame data of the units.
5. The method according to claim 2, wherein the units are triangles.
6. The method according to any one of claims 1 to 5, wherein the shooting directions are continuous or discontinuous in real space.
7. The method according to any one of claims 1 to 5, wherein mapping each video frame to the two-dimensional space comprises:
mapping each video frame to the two-dimensional space according to the shooting direction of the video frame.
8. A video generation method, comprising:
obtaining video frame data and model data of video frames of each shooting direction required for constructing a multi-angle video, the model data including two-dimensional model data obtained by mapping each video frame to a two-dimensional space, three-dimensional model data obtained by mapping each video frame that has been mapped to the two-dimensional space to a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space;
performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a reconstructed three-dimensional model;
generating the multi-angle video according to the two-dimensional model data, the video frame data and the reconstructed three-dimensional model.
9. The method according to claim 8, wherein
the video frames include a plurality of polygonal units;
the two-dimensional model data includes the number of units in each video frame and the vertex count and vertex positions of each unit in the two-dimensional space;
the three-dimensional model data includes the vertex count and vertex positions of each of the units in the three-dimensional space.
10. The method according to claim 9, wherein performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain the reconstructed three-dimensional model comprises:
performing three-dimensional modeling of each unit according to the type of the three-dimensional model and the vertex count and vertex positions of each of the units in the three-dimensional space, to obtain a reconstructed three-dimensional model of each unit.
11. The method according to claim 10, wherein generating the multi-angle video according to the two-dimensional model data, the video frame data and the reconstructed three-dimensional model comprises:
for each unit, performing texture mapping on the reconstructed three-dimensional model of the unit according to the vertex count and vertex positions of the unit in the two-dimensional space and the video frame data of the unit, to obtain the multi-angle video.
12. The method according to any one of claims 8 to 11, wherein the two-dimensional model data corresponds to the shooting directions of the video frames.
13. A video capture device, comprising:
an acquiring unit, configured to obtain video frames of each shooting direction required for constructing a multi-angle video;
a first mapping unit, configured to map each video frame to a two-dimensional space to obtain two-dimensional model data of each video frame in the two-dimensional space;
a second mapping unit, configured to map each video frame that has been mapped to the two-dimensional space to a three-dimensional space according to a predetermined three-dimensional model, to obtain three-dimensional model data of each video frame in the three-dimensional space; and
a sending unit, configured to send video frame data and model data of the video frames to a server to construct the multi-angle video, wherein the model data includes the type of the three-dimensional model, the two-dimensional model data and the three-dimensional model data.
14. The device according to claim 13, wherein mapping each video frame to the two-dimensional space to obtain the two-dimensional model data of each video frame in the two-dimensional space comprises:
dividing, in the two-dimensional space, each video frame into a plurality of polygonal units;
obtaining the number of units in each video frame and the vertex count and vertex positions of each unit in the two-dimensional space as the two-dimensional model data.
15. The device according to claim 14, wherein mapping each video frame that has been mapped to the two-dimensional space to the three-dimensional space according to the predetermined three-dimensional model, to obtain the three-dimensional model data of each video frame in the three-dimensional space, comprises:
for each video frame, mapping each of the units to the three-dimensional space according to the predetermined three-dimensional model;
obtaining the vertex count and vertex positions of each of the units in the three-dimensional space as the three-dimensional model data.
16. The device according to claim 15, wherein the video frame data includes video frame data of the units.
17. The device according to claim 15, wherein the units are triangles.
18. The device according to any one of claims 13 to 17, wherein the shooting directions are continuous or discontinuous in real space.
19. The device according to any one of claims 13 to 17, wherein mapping each video frame to the two-dimensional space comprises:
mapping each video frame to the two-dimensional space according to the shooting direction of the video frame.
20. A video generation device, comprising:
an acquiring unit, configured to obtain video frame data and model data of video frames of each shooting direction required for constructing a multi-angle video, the model data including two-dimensional model data obtained by mapping the video frames to a two-dimensional space, three-dimensional model data obtained by mapping each video frame that has been mapped to the two-dimensional space to a three-dimensional space, and the type of the three-dimensional model corresponding to the three-dimensional space;
a modeling unit, configured to perform three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain a reconstructed three-dimensional model;
a generating unit, configured to generate the multi-angle video according to the two-dimensional model data, the video frame data and the reconstructed three-dimensional model.
21. The device according to claim 20, wherein
the video frames include a plurality of polygonal units;
the two-dimensional model data includes the number of units in each video frame and the vertex count and vertex positions of each unit in the two-dimensional space;
the three-dimensional model data includes the vertex count and vertex positions of each of the units in the three-dimensional space.
22. The device according to claim 21, wherein performing three-dimensional modeling according to the three-dimensional model data and the type of the three-dimensional model to obtain the reconstructed three-dimensional model comprises:
performing three-dimensional modeling of each unit according to the type of the three-dimensional model and the vertex count and vertex positions of each of the units in the three-dimensional space, to obtain a reconstructed three-dimensional model of each unit.
23. The device according to claim 22, wherein generating the multi-angle video according to the two-dimensional model data, the video frame data and the reconstructed three-dimensional model comprises:
for each unit, performing texture mapping on the reconstructed three-dimensional model of the unit according to the vertex count and vertex positions of the unit in the two-dimensional space and the video frame data of the unit, to obtain the multi-angle video.
24. The device according to any one of claims 21 to 23, wherein the two-dimensional model data corresponds to the shooting directions of the video frames.
CN201610614498.XA 2016-07-29 2016-07-29 Video capture method and device, video generation method and device Expired - Fee Related CN106131535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610614498.XA CN106131535B (en) 2016-07-29 2016-07-29 Video capture method and device, video generation method and device

Publications (2)

Publication Number Publication Date
CN106131535A 2016-11-16
CN106131535B CN106131535B (en) 2018-03-02

Family

ID=57255376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610614498.XA Expired - Fee Related CN106131535B (en) 2016-07-29 2016-07-29 Video capture method and device, video generation method and device

Country Status (1)

Country Link
CN (1) CN106131535B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040095385A1 (en) * 2002-11-18 2004-05-20 Bon-Ki Koo System and method for embodying virtual reality
US20070076016A1 (en) * 2005-10-04 2007-04-05 Microsoft Corporation Photographing big things
US20090002394A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Augmenting images for panoramic display
CN102208116A (en) * 2010-03-29 2011-10-05 卡西欧计算机株式会社 3D modeling apparatus and 3D modeling method
CN104050714A (en) * 2014-06-03 2014-09-17 崔岩 Object digitized three-dimensional reconstruction system and method based on raster scanning
CN104599305A (en) * 2014-12-22 2015-05-06 浙江大学 Two-dimension and three-dimension combined animation generation method
CN104851080A (en) * 2015-05-08 2015-08-19 浙江大学 TV-based 3D positron emission tomography (PET) image reconstruction method
CN105678748A (en) * 2015-12-30 2016-06-15 清华大学 Interactive calibration method and apparatus based on three dimensional reconstruction in three dimensional monitoring system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109547766A (en) * 2017-08-03 2019-03-29 杭州海康威视数字技术股份有限公司 A kind of panorama image generation method and device
CN109547766B (en) * 2017-08-03 2020-08-14 杭州海康威视数字技术股份有限公司 Panoramic image generation method and device
US11012620B2 (en) 2017-08-03 2021-05-18 Hangzhou Hikvision Digital Technology Co., Ltd. Panoramic image generation method and device
CN110536076A (en) * 2018-05-23 2019-12-03 福建天晴数码有限公司 A kind of method and terminal that Unity panoramic video is recorded
CN112514396A (en) * 2018-08-02 2021-03-16 索尼公司 Image processing apparatus and image processing method
WO2021083176A1 (en) * 2019-10-28 2021-05-06 阿里巴巴集团控股有限公司 Data interaction method and system, interaction terminal and readable storage medium
CN110910504A (en) * 2019-11-28 2020-03-24 北京世纪高通科技有限公司 Method and device for determining three-dimensional model of region
CN112465939A (en) * 2020-11-25 2021-03-09 上海哔哩哔哩科技有限公司 Panoramic video rendering method and system
CN112465939B (en) * 2020-11-25 2023-01-24 上海哔哩哔哩科技有限公司 Panoramic video rendering method and system

Also Published As

Publication number Publication date
CN106131535B (en) 2018-03-02


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20200511
Address after: Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310052
Patentee after: Alibaba (China) Co., Ltd.
Address before: Room 2, Floor 02, Building 555, Dongchuan Road, Minhang District, Shanghai, 200241
Patentee before: Chuanxian Network Technology (Shanghai) Co., Ltd.
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20180302
Termination date: 20200729