CN114449247B - Multi-channel video 3D (three-dimensional) superposition method and system - Google Patents

Multi-channel video 3D (three-dimensional) superposition method and system

Info

Publication number
CN114449247B
Authority
CN
China
Prior art keywords
video
dimensional
data
information data
dimensional space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210371703.XA
Other languages
Chinese (zh)
Other versions
CN114449247A (en)
Inventor
赵开勇 (Zhao Kaiyong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qiyu Innovation Technology Co ltd
Original Assignee
Shenzhen Qiyu Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qiyu Innovation Technology Co ltd
Priority to CN202210371703.XA
Publication of CN114449247A
Application granted
Publication of CN114449247B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/156: Mixing image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 2013/0074: Stereoscopic image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of virtual production special effects, in particular to a multi-channel video 3D superposition method and system. A video is extracted; the shooting track and angle are calculated from the video content and three-dimensional information data are acquired from the video; the three-dimensional space data of the video are obtained from the three-dimensional information data; and the three-dimensional space data of different videos are overlaid and fused to obtain a fused three-dimensional space scene. The invention removes the separate modeling stage and derives images directly from the original video for analysis and superposition, which greatly reduces process complexity and improves production efficiency.

Description

Multi-channel video 3D superposition method and system
Technical Field
The invention relates to the technical field of virtual production special effects, in particular to a multi-channel video 3D (three-dimensional) superposition method and a multi-channel video 3D superposition system.
Background
As viewers' expectations of television programs keep rising, video productions rely on a large amount of special-effects work to achieve the desired rendering of material.
In the existing workflow, special effects are overlaid on green-screen footage: 3D spatial data must first be captured and a 3D model built before the video itself is shot and the overlay is performed. The shooting process is therefore overly complicated, and filming becomes considerably more expensive.
Disclosure of Invention
The invention mainly solves the technical problem of providing a multi-channel video 3D superposition method: a video is first extracted, its three-dimensional space is analyzed to obtain three-dimensional information data and three-dimensional space data, and multiple videos are then overlaid and fused to obtain a fused three-dimensional space scene, thereby achieving the special effect. A multi-channel video 3D superposition system is also provided.
In order to solve the above technical problems, the invention adopts the following technical scheme: a multi-channel video 3D superposition method is provided, comprising the following steps:
step S1, extracting a video;
step S2, calculating the shooting track and angle according to the video content, and acquiring three-dimensional information data in the video;
step S3, obtaining three-dimensional space data of the video according to the three-dimensional information data of the video;
and step S4, overlapping and fusing the three-dimensional space data of different videos to obtain a fused three-dimensional space scene.
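The four steps can be read as a simple linear pipeline, as sketched below in Python. The function names (extract_video, estimate_trajectory_and_info, build_spatial_data, fuse_scenes) are hypothetical placeholders for the operations named in steps S1 to S4, not code disclosed in this specification.

```python
from typing import Any, List

# Hypothetical stubs for steps S1-S4; each would be realized by a concrete
# technique such as frame extraction, pose estimation, triangulation, and fusion.
def extract_video(path: str) -> List[Any]:                    # S1
    raise NotImplementedError

def estimate_trajectory_and_info(frames: List[Any]) -> Any:   # S2
    raise NotImplementedError

def build_spatial_data(info_3d: Any) -> Any:                  # S3
    raise NotImplementedError

def fuse_scenes(spatial_data_list: List[Any]) -> Any:         # S4
    raise NotImplementedError

def overlay_videos_3d(video_paths: List[str]) -> Any:
    """Run steps S1-S3 per video, then fuse the per-video results in step S4."""
    spatial_data_list = []
    for path in video_paths:
        frames = extract_video(path)                           # S1: extract the video
        info_3d = estimate_trajectory_and_info(frames)         # S2: shooting track/angle -> 3D information data
        spatial_data_list.append(build_spatial_data(info_3d))  # S3: integrate into 3D spatial data
    return fuse_scenes(spatial_data_list)                      # S4: overlay, fuse and re-render illumination
```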
As an improvement of the present invention, in step S1, the captured video is imported.
As a further improvement of the present invention, in step S2, image information in each direction is obtained based on the shooting trajectory and angle of the video.
As a further improvement of the present invention, in step S2, the image information of each direction is converted into three-dimensional information data, and the three-dimensional information data includes shape, illumination, rendering, and texture.
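A common way to recover the shooting track and angle from ordinary footage, and thereby obtain the viewing direction of each image, is feature-based relative pose estimation between frame pairs. The sketch below is only an assumption about how step S2 might be realized with OpenCV (ORB features and essential-matrix decomposition); the intrinsic matrix K is assumed to be known from calibration and is not mentioned in the text.

```python
import cv2
import numpy as np

def estimate_relative_pose(frame_a, frame_b, K):
    """Estimate the camera rotation R and translation direction t between two frames.

    frame_a, frame_b: BGR frames from the video; K: 3x3 intrinsic matrix (assumed calibrated).
    """
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = orb.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)

    # Match descriptors and keep the corresponding pixel coordinates in both frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the relative camera geometry; decompose it into R and t.
    E, _ = cv2.findEssentialMat(pts_a, pts_b, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K)
    return R, t
```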
As a further improvement of the present invention, in step S3, the three-dimensional information data is integrated to obtain three-dimensional space data.
As a further improvement of the present invention, in step S4, the three-dimensional spatial data of different videos are collected, all the three-dimensional spatial data are then overlaid and fused, and the illumination is re-rendered, so as to obtain a fused three-dimensional spatial scene.
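As one possible realization of step S4, the point clouds recovered from each video can be brought into a shared world frame, concatenated, and re-shaded under a single light source. The 4x4 registration transforms and the simple Lambertian relighting below are illustrative assumptions, not steps mandated by the specification.

```python
import numpy as np

def fuse_point_clouds(clouds, transforms):
    """Overlay 3D spatial data from different videos in one world frame.

    clouds:     list of (N_i, 3) point arrays, one per video
    transforms: list of 4x4 homogeneous matrices mapping each cloud into the world frame
    """
    fused = []
    for pts, T in zip(clouds, transforms):
        homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])
        fused.append((homogeneous @ T.T)[:, :3])
    return np.vstack(fused)

def relight_lambertian(normals, albedo, light_dir):
    """Re-render the illumination of the fused scene with a simple Lambertian model.

    normals: (N, 3) unit normals; albedo: (N, 3) per-point color; light_dir: 3-vector.
    """
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ light_dir, 0.0, 1.0)
    return albedo * shading[:, None]

# Example: merge two clouds that are already in the same frame (identity transforms).
cloud_a = np.random.rand(100, 3)
cloud_b = np.random.rand(80, 3) + np.array([1.0, 0.0, 0.0])
scene = fuse_point_clouds([cloud_a, cloud_b], [np.eye(4), np.eye(4)])
```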
A multi-channel video 3D overlay system, comprising:
the extraction module is used for extracting a video;
the analysis module is used for calculating shooting tracks and angles according to the video content and acquiring three-dimensional information data in the video;
the acquisition module is used for acquiring three-dimensional spatial data of the video according to the three-dimensional information data of the video;
and the fusion module is used for superposing and fusing the three-dimensional space data of different videos to obtain a fused three-dimensional space.
As an improvement of the invention, the analysis module comprises:
and the derivation unit is used for obtaining image information of each direction according to the shooting track and the shooting angle of the video.
As a further improvement of the present invention, the analysis module further comprises:
and the conversion unit is used for converting the image information in each direction into three-dimensional information data.
As a further improvement of the present invention, the acquisition module comprises:
and the integration unit is used for integrating the three-dimensional information data to obtain three-dimensional space data.
The beneficial effects of the invention are as follows: compared with the prior art, the method extracts the video, calculates the shooting track and angle according to the video content, obtains three-dimensional information data in the video, obtains the three-dimensional space data of the video according to the three-dimensional information data, and overlays and fuses the three-dimensional space data of different videos to obtain a fused three-dimensional space scene.
The method analyzes the three-dimensional space of the video to obtain three-dimensional information data and three-dimensional space data, and then performs multi-video superposition and fusion to obtain a fused three-dimensional space scene, thereby achieving the special effect.
The invention removes the separate modeling stage and derives images directly from the original video for analysis and superposition, which greatly reduces process complexity and improves production efficiency.
Drawings
Fig. 1 is a block diagram of the steps of the multi-channel video 3D overlaying method of the present invention.
Detailed Description
At present, special-effect videos are produced by overlaying effects on a green screen: 3D spatial data are first captured and a 3D model is built, the video is then shot, and the superposition is performed afterwards; this shooting process is overly complicated.
The invention provides a multi-channel video 3D (three-dimensional) superposition method, which comprises the following steps:
step S1, extracting a video;
step S2, calculating shooting track and angle according to the video content, and acquiring three-dimensional information data in the video;
step S3, obtaining three-dimensional space data of the video according to the three-dimensional information data of the video;
and S4, overlapping and fusing the three-dimensional space data of different videos to obtain a fused three-dimensional space scene.
The method comprises the steps of extracting a video, calculating shooting tracks and angles according to video contents, obtaining three-dimensional information data in the video, obtaining three-dimensional space data of the video according to the three-dimensional information data of the video, and overlapping and fusing the three-dimensional space data of different videos to obtain a fused three-dimensional space scene.
In step S1, the captured video is imported so that it can be analyzed as a sequence of two-dimensional images.
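A minimal sketch of this step, assuming OpenCV is used to import the captured video as a sequence of two-dimensional images; the sampling stride is an arbitrary illustrative choice rather than something the text specifies.

```python
import cv2

def extract_frames(video_path, stride=5):
    """S1: import the captured video and return every `stride`-th frame for 2D analysis."""
    capture = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames
```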
In the present invention, in step S2, the image information in each direction is converted into three-dimensional information data, which includes shape, illumination, rendering, and texture.
In the present invention, in step S3, three-dimensional information data is integrated to obtain three-dimensional spatial data.
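One plausible concrete form of this integration is to triangulate matched image points from two views into 3D spatial points once the relative pose from step S2 is available. The sketch uses OpenCV's triangulatePoints and presumes the intrinsics K and the pose (R, t) have already been estimated; neither detail is specified in the text.

```python
import cv2
import numpy as np

def triangulate_view_pair(pts_a, pts_b, K, R, t):
    """S3: lift matched 2D points from two views into 3D spatial data.

    pts_a, pts_b: (N, 2) matched pixel coordinates in the two frames
    K:            3x3 intrinsic matrix; R, t: relative pose estimated in step S2
    """
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera placed at the origin
    P1 = K @ np.hstack([R, t.reshape(3, 1)])           # second camera from the shooting track
    points_4d = cv2.triangulatePoints(P0, P1,
                                      pts_a.T.astype(np.float64),
                                      pts_b.T.astype(np.float64))
    return (points_4d[:3] / points_4d[3]).T            # homogeneous -> Euclidean, shape (N, 3)
```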
In the invention, in step S4, the three-dimensional spatial data of different videos are collected, all the three-dimensional spatial data are then overlaid and fused, and the illumination is re-rendered, so as to obtain a fused three-dimensional spatial scene.
The invention also provides a multi-channel video 3D superposition system, which comprises:
the extraction module is used for extracting a video;
the analysis module is used for calculating shooting tracks and angles according to the video content and acquiring three-dimensional information data in the video;
the acquisition module is used for acquiring three-dimensional spatial data of the video according to the three-dimensional information data of the video;
and the fusion module is used for superposing and fusing the three-dimensional space data of different videos to obtain a fused three-dimensional space.
The analysis module comprises a derivation unit and a conversion unit, wherein the derivation unit is used for obtaining image information in each direction according to the shooting track and the angle of the video; the conversion unit is used for converting the image information in each direction into three-dimensional information data.
In the invention, the acquisition module comprises an integration unit for integrating the three-dimensional information data to obtain the three-dimensional spatial data.
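The module and unit split described above maps naturally onto a class-per-module structure. The class and method names in the sketch below are hypothetical stand-ins for the extraction, analysis (derivation and conversion units), acquisition (integration unit), and fusion modules named in the text.

```python
class ExtractionModule:
    """Extracts the video (step S1)."""
    def extract(self, video_path):
        ...

class AnalysisModule:
    """Computes the shooting track/angle and 3D information data (step S2)."""
    def derive(self, frames):
        """Derivation unit: image information for each direction from the track and angle."""
        ...
    def convert(self, directional_images):
        """Conversion unit: directional image information -> shape, illumination, rendering, texture."""
        ...

class AcquisitionModule:
    """Integration unit: 3D information data -> 3D spatial data (step S3)."""
    def integrate(self, info_3d):
        ...

class FusionModule:
    """Overlays and fuses spatial data from different videos into one scene (step S4)."""
    def fuse(self, spatial_data_list):
        ...
```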
The invention has the following advantages:
1. A video is extracted, the shooting track and angle are calculated according to the video content, three-dimensional information data are acquired from the video, the three-dimensional space data of the video are obtained from the three-dimensional information data, and the three-dimensional space data of different videos are overlaid and fused to obtain a fused three-dimensional space scene.
2. The three-dimensional space of the video is analyzed to obtain three-dimensional information data and three-dimensional space data, and multi-video superposition and fusion are then performed to obtain a fused three-dimensional space scene, thereby achieving the special effect.
3. The separate modeling stage is removed: images are derived directly from the original video for analysis and superposition, which greatly reduces process complexity and improves production efficiency.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the invention; all equivalent structures or equivalent process transformations made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the invention.

Claims (2)

1. A multi-channel video 3D superposition method is characterized by comprising the following steps:
step S1, extracting a video;
step S2, calculating shooting track and angle according to video content, and acquiring three-dimensional information data in the video;
step S3, obtaining three-dimensional space data of the video according to the three-dimensional information data of the video;
step S4, overlapping and fusing the three-dimensional space data of different videos to obtain a fused three-dimensional space scene;
in step S2, image information in each direction is obtained according to the shooting track and angle of the video, and the image information in each direction is converted into three-dimensional information data, where the three-dimensional information data includes shape, illumination, rendering, and texture;
in step S1, the captured video is imported;
in step S3, integrating the three-dimensional information data to obtain three-dimensional spatial data;
in step S4, three-dimensional spatial data of different videos are collected, and then all the three-dimensional spatial data are overlaid and fused and re-rendered by illumination, so as to obtain a fused three-dimensional spatial scene.
2. A system for implementing the multi-channel video 3D overlay method of claim 1, comprising:
the extraction module is used for extracting videos;
the analysis module is used for calculating shooting tracks and angles according to the video content and acquiring three-dimensional information data in the video;
the acquisition module is used for acquiring three-dimensional spatial data of the video according to the three-dimensional information data of the video;
the fusion module is used for superposing and fusing three-dimensional space data of different videos to obtain a fused three-dimensional space;
the analysis module includes:
the deriving unit is used for obtaining image information of each direction according to the shooting track and the shooting angle of the video;
the analysis module further comprises:
the conversion unit is used for converting the image information in each direction into three-dimensional information data;
the acquisition module comprises:
and the integration unit is used for integrating the three-dimensional information data to obtain three-dimensional space data.
CN202210371703.XA (priority date 2022-04-11, filing date 2022-04-11) Multi-channel video 3D (three-dimensional) superposition method and system, Active, granted as CN114449247B (en)

Priority Applications (1)

Application Number: CN202210371703.XA (CN114449247B) | Priority Date: 2022-04-11 | Filing Date: 2022-04-11 | Title: Multi-channel video 3D (three-dimensional) superposition method and system

Applications Claiming Priority (1)

Application Number: CN202210371703.XA (CN114449247B) | Priority Date: 2022-04-11 | Filing Date: 2022-04-11 | Title: Multi-channel video 3D (three-dimensional) superposition method and system

Publications (2)

Publication Number Publication Date
CN114449247A (en) 2022-05-06
CN114449247B (en) 2022-07-22

Family

ID=81360416

Family Applications (1)

Application Number: CN202210371703.XA | Status: Active | Publication: CN114449247B (en) | Title: Multi-channel video 3D (three-dimensional) superposition method and system

Country Status (1)

Country Link
CN (1) CN114449247B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572962A (en) * 2021-07-28 2021-10-29 北京大学 (Peking University) Outdoor natural scene illumination estimation method and device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010021090A1 (en) * 2008-08-20 2010-02-25 パナソニック株式会社 (Panasonic Corporation) Distance estimating device, distance estimating method, program, integrated circuit, and camera
JP2012043396A (en) * 2010-08-13 2012-03-01 Hyundai Motor Co Ltd System and method for managing vehicle consumables using augmented reality
CN106251399B (en) * 2016-08-30 2019-04-16 广州市绯影信息科技有限公司 Outdoor-scene three-dimensional reconstruction method and implementation device based on LSD-SLAM
CN106373148A (en) * 2016-08-31 2017-02-01 中国科学院遥感与数字地球研究所 (Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences) Equipment and method for registering and fusing multi-channel video images into a three-dimensional digital Earth system
CN108335344A (en) * 2017-01-19 2018-07-27 北京佳士乐动漫科技有限公司 Method and system for three-dimensional image and video fusion
CN108765298A (en) * 2018-06-15 2018-11-06 中国科学院遥感与数字地球研究所 (Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences) Unmanned aerial vehicle image stitching method and system based on three-dimensional reconstruction
GB2584276B (en) * 2019-05-22 2023-06-07 Sony Interactive Entertainment Inc Capture of a three-dimensional representation of a scene
GB2584282B (en) * 2019-05-24 2021-08-25 Sony Interactive Entertainment Inc Image acquisition system and method
CN112489121A (en) * 2019-09-11 2021-03-12 丰图科技(深圳)有限公司 Video fusion method, device, equipment and storage medium
CN114143528A (en) * 2020-09-04 2022-03-04 北京大视景科技有限公司 Multi-video stream fusion method, electronic device and storage medium
CN113920270B (en) * 2021-12-15 2022-08-19 深圳市其域创新科技有限公司 Layout reconstruction method and system based on multi-view panorama

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572962A (en) * 2021-07-28 2021-10-29 北京大学 (Peking University) Outdoor natural scene illumination estimation method and device

Also Published As

Publication number Publication date
CN114449247A (en) 2022-05-06

Similar Documents

Publication Publication Date Title
AU760594B2 (en) System and method for creating 3D models from 2D sequential image data
CN101883291B (en) Method for drawing viewpoints by reinforcing interested region
US11037308B2 (en) Intelligent method for viewing surveillance videos with improved efficiency
WO2021093584A1 (en) Free viewpoint video generation and interaction method based on deep convolutional neural network
CN102592275B (en) Virtual viewpoint rendering method
CN104219584A (en) Reality augmenting based panoramic video interaction method and system
CN102857739A (en) Distributed panorama monitoring system and method thereof
CN110706155B (en) Video super-resolution reconstruction method
Bleyer et al. Temporally consistent disparity maps from uncalibrated stereo videos
CN102333234A (en) Binocular stereo video state information monitoring method and device
CN114449247B (en) Multi-channel video 3D (three-dimensional) superposition method and system
CN109523508B (en) Dense light field quality evaluation method
CN103248910B (en) Three-dimensional imaging system and image reproducing method thereof
US20200154046A1 (en) Video surveillance system
Liu et al. Deep view synthesis via self-consistent generative network
Yamamoto et al. LIFLET: Light field live with thousands of lenslets
WO2024007182A1 (en) Video rendering method and system in which static nerf model and dynamic nerf model are fused
CN108335344A (en) The method and system of three-dimensional image and video fusion
CN110379130B (en) Medical nursing anti-falling system based on multi-path high-definition SDI video
CN112887589A (en) Panoramic shooting method and device based on unmanned aerial vehicle
Chen et al. Automatic 2d-to-3d video conversion using 3d densely connected convolutional networks
CN114885147B (en) Fusion production and broadcast system and method
Li et al. CCF-Net: Composite context fusion network with inter-slice correlative fusion for multi-disease lesion detection
CN114818992B (en) Image data analysis method, scene estimation method and 3D fusion method
WO2024031251A1 (en) Volume rendering method and system for embedding 2d/three-dimensional (3d) video during nerf 3d scenario reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant