CN115209123A - Splicing method for VR multi-view camera video - Google Patents
Splicing method for VR multi-view camera video
- Publication number
- CN115209123A (application CN202110375290.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
- H04N13/268—Image signal generators with monoscopic-to-stereoscopic image conversion based on depth image-based rendering [DIBR]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20208—High dynamic range [HDR] image processing
Abstract
The invention discloses a splicing method for VR multi-view camera video, comprising the following steps: empirical splicing parameters for VR cameras with 2-8 lenses are taken as preset conditions; in a given scene, an adaptive algorithm is started with the default parameters as initial values; image splicing is then performed, and after several frames have been spliced a deviation value is calculated to estimate the splicing quality at the seam; the algorithm then automatically fine-tunes the parameters. This process is repeated for a cumulative N trials, where N can be preset. If the standard is reached within the N calibration trials, the best trial's parameters are taken as the target value; if not, the next round of parameter calibration continues until the standard is reached, and the parameters that finally reach the standard are taken as the target value. If the standard is still not reached after m rounds of calibration, the best parameters found are taken as the target value and calibration stops. Standard splicing and the fusion process are then carried out, using the target value as reference. The invention has the advantages of a good splicing effect and little seam separation.
Description
Technical Field
The invention relates to the technical field of VR (virtual reality), and in particular to a splicing method for VR multi-view camera video.
Background
Virtual reality (VR) technology is a practical technology developed in the 20th century that integrates computer, electronic-information, and simulation technologies. Its basic implementation is a computer-simulated virtual environment that gives the user a sense of immersion.
Virtual reality, as the name implies, combines the virtual and the real. In theory, VR is a computer simulation system that can create and let users experience a virtual world: a computer generates a simulated environment in which the user is immersed. VR technology takes data from real life and, through electronic signals produced by computer technology and various output devices, turns them into phenomena people can perceive. These phenomena may be real objects or things invisible to the naked eye, expressed as three-dimensional models. Because they are not observed directly but simulated by computer, this is called virtual reality.
A multi-lens camera is, in software terms, a simple camera enhancement: it lets a user directly shoot several different pictures, which are combined into a grid-style spliced image.
Existing panoramic cameras have the following defects:
1. The splicing parameters of prior-art panoramic cameras are essentially fixed once the camera leaves the factory, and professional manual calibration is needed if deviation appears;
2. Adaptability to different scenes is poor, and visible splicing seams easily appear;
3. The whole pipeline splices frame by frame, which consumes enormous computing power.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a splicing method for VR multi-view camera video that realizes variable splicing calibration. It overcomes the defect that obvious splicing seams appear in many scenes when a typical VR camera splices with fixed calibration, reduces the amount of frame-by-frame splicing computation, and solves the problem that manually adjusting the splicing parameters of a VR panoramic camera rarely achieves an ideal effect in a real scene.
(II) technical scheme
In order to achieve the purpose, the invention provides the following technical scheme:
a splicing method of VR multi-view camera videos comprises the following steps:
1) Selecting a shooting site, selecting VR multi-view cameras to be used, setting the number of lenses of the VR multi-view cameras to be 2-8, erecting the VR multi-view cameras on the site, and adjusting and calibrating the height and angle of the VR multi-view cameras;
2) Using a VR multi-view camera to shoot scenes, and obtaining 2-6 groups of videos for later use;
3) The method comprises the steps that empirical splicing parameters of VR cameras with 2-8 lenses are used as preset conditions, an adaptive algorithm is started under a specific scene, default parameters are used as initial parameters, then image splicing is carried out, after several frames of pictures are spliced, a deviation value is calculated, splicing quality of a seam is estimated, then the process is automatically adjusted in a fine mode according to the algorithm, and the process is repeated for N times, wherein N can be preset;
4) If the calibration reaches the standard in the N times of calibration, taking the parameter of the best time as a target value;
5) If the standard is not achieved in the N times, continuing the next round of parameter calibration until the parameter reaches the standard, and taking the final parameter reaching the standard as a target value;
6) If m times of calibration do not reach the standard in m rounds, taking the optimal parameter as a target value, and stopping calibration;
7) And then standard splicing and fusion processes are carried out, and the target value is used as a reference.
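The iterative calibration in steps 3)-7) can be sketched as a simple search loop. The sketch below is a toy model, not the patent's implementation: the splicing parameters are reduced to a single scalar, and `seam_deviation`, `fine_tune`, and the threshold values are illustrative stand-ins for the camera's real parameter set and seam-quality estimate.

```python
# Toy model of the adaptive calibration loop (steps 3-7).
# All names and values here are illustrative assumptions.

def seam_deviation(param, optimum=5.0):
    # Stand-in for estimating splicing quality at the seam:
    # smaller is better, 0 means a perfect seam.
    return abs(param - optimum)

def fine_tune(param, step=1.0):
    # Stand-in for automatic fine-tuning: try a small perturbation
    # and keep whichever direction reduces the seam deviation.
    candidate = param + step
    if seam_deviation(candidate) < seam_deviation(param):
        return candidate
    return param - step

def calibrate(initial, n=5, m=3, standard=0.5):
    """Run up to m rounds of n calibration trials each (steps 3-6).
    Returns the first parameter meeting the standard (step 4-5),
    otherwise the best parameter seen after m rounds (step 6)."""
    best, best_dev = initial, seam_deviation(initial)
    param = initial
    for _ in range(m):                      # extra rounds (steps 5-6)
        for _ in range(n):                  # N trials per round (step 3)
            param = fine_tune(param)
            dev = seam_deviation(param)
            if dev < best_dev:
                best, best_dev = param, dev
            if best_dev <= standard:        # standard reached (step 4)
                return best
    return best                             # best effort (step 6)

print(calibrate(0.0))  # converges to the optimum → 5.0
```

Starting far from the optimum (e.g. `calibrate(100.0)`) exhausts all m×n trials and returns the best parameter found, mirroring the best-effort stop in step 6).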
Preferably, a default parameter is a parameter that is assigned a default value when the function's parameters are declared.
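In programming-language terms this is the usual notion of a default argument; a minimal illustration (the function name and values are arbitrary):

```python
# A "default parameter" in the patent's sense: a parameter given a
# default value in the function declaration, used when no value is passed.
def splice(frames, blend_bands=5):   # blend_bands defaults to 5
    return (len(frames), blend_bands)

print(splice(["a", "b"]))        # uses the default
print(splice(["a", "b"], 3))     # caller overrides it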
Preferably, image splicing means: according to the depth information of the overlapping area in each panoramic video image, image data at several depth levels are obtained from the corresponding overlapping area, and image data at the same depth level are spliced together.
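One way to read this clause is that pixels in the overlap region are grouped by depth level and each level is composited separately. A minimal NumPy sketch under that reading — the depth-bin edges and the 50/50 blend per level are assumptions, not taken from the patent:

```python
import numpy as np

def splice_overlap_by_depth(left, right, depth,
                            levels=(0.0, 0.33, 0.66, 1.0)):
    """Composite two aligned overlap-region images level by level.
    Pixels whose normalized depth falls in the same bin are blended
    together; bin edges and the equal-weight blend are illustrative."""
    out = np.zeros_like(left, dtype=np.float64)
    for lo, hi in zip(levels[:-1], levels[1:]):
        if hi == levels[-1]:
            mask = (depth >= lo) & (depth <= hi)   # include top edge
        else:
            mask = (depth >= lo) & (depth < hi)
        # blend only the pixels belonging to this depth level
        out[mask] = 0.5 * (left[mask] + right[mask])
    return out
```

In a real pipeline each level would use its own alignment (parallax differs with depth) before compositing; here the per-level step is reduced to a blend to keep the structure visible.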
Preferably, the stitching parameters are stored in a memory of the multi-view camera or a memory card.
Preferably, each fisheye lens of the VR multi-view camera produces an input with an FOV greater than 120° (at a resolution of 7680×3840 or 3840×1920), and one channel of high-definition panoramic 8K video is generated as output.
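Producing the panorama from such fisheye inputs requires remapping each lens image into equirectangular coordinates. A hedged sketch for a single lens, assuming the common equidistant projection model (r ∝ θ) with the lens axis along +z — the patent does not specify a projection model, and the 190° FOV default is a typical value for panoramic rigs, not a figure from the text:

```python
import numpy as np

def fisheye_to_equirect(img, fov_deg=190.0, out_w=7680, out_h=3840):
    """Map one equidistant-projection fisheye image onto the part of an
    equirectangular panorama it covers. Projection model is an assumption."""
    h, w = img.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0
    fov = np.radians(fov_deg)

    # Unit direction vector for every output (longitude, latitude) pixel.
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from the lens axis
    r = radius * theta / (fov / 2.0)          # equidistant: r proportional to theta
    phi = np.arctan2(y, x)
    u = (cx + r * np.cos(phi)).astype(int)
    v = (cy - r * np.sin(phi)).astype(int)

    out = np.zeros((out_h, out_w) + img.shape[2:], img.dtype)
    ok = (theta <= fov / 2.0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[ok] = img[v[ok], u[ok]]               # nearest-neighbour sampling
    return out
```

A full rig would run this per lens (with each lens's orientation applied to the direction vectors) and then splice the overlapping equirectangular strips.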
(III) advantageous effects
Compared with the prior art, the invention provides a splicing method of a VR multi-view camera video, which has the following beneficial effects:
according to the splicing method of the VR multi-view camera videos, the rapid splicing of the panoramic images of the VR camera can be used for high-definition VR live broadcasting, the splicing effect is good through automatic calibration, the seam separation phenomenon rarely occurs, the frame-by-frame splicing calculation amount is reduced, and the problem that the ideal effect is difficult to achieve by manually adjusting the splicing parameters of the VR panoramic camera in an actual scene is solved.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application:
FIG. 1 is a diagram of the operating flow of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Please refer to fig. 1:
Example one:
a splicing method of VR multi-view camera videos comprises the following steps:
1) Selecting a shooting site, selecting VR multi-view cameras to be used, setting the number of lenses of the VR multi-view cameras to be 2, erecting the VR multi-view cameras on the site, and adjusting and calibrating the height and the angle of the VR multi-view cameras;
2) Shoot the scene with the VR multi-view camera to obtain 2 groups of video; each fisheye lens of the camera produces an input with an FOV greater than 120° (at a resolution of 7680×3840 or 3840×1920), and one channel of high-definition panoramic 8K video is generated as output for later use;
3) Take the empirical splicing parameters of a 2-lens VR camera as preset conditions; in the given scene, start the adaptive algorithm with the default parameters as initial values (a default parameter is a parameter assigned a default value when the function's parameters are declared), then perform image splicing, which means: according to the depth information of the overlapping area in each panoramic video image, image data at several depth levels are obtained from the corresponding overlapping area, and image data at the same depth level are spliced together; after several frames have been spliced, calculate a deviation value to estimate the splicing quality at the seam, then let the algorithm automatically fine-tune the parameters (the splicing parameters are stored in a memory of the multi-view camera or a memory card); repeat this process for a cumulative N trials, where N can be preset;
4) If the standard is reached within the N calibration trials, take the best trial's parameters as the target value;
5) If the standard is not reached within the N trials, continue with the next round of parameter calibration until it is reached, and take the parameters that finally reach the standard as the target value;
6) If the standard is still not reached after m rounds of calibration, take the best parameters found as the target value and stop calibration;
7) Then carry out standard splicing and the fusion process, using the target value as reference.
Example two:
a splicing method of VR multi-view camera videos comprises the following steps:
1) Selecting a shooting site, selecting VR multi-view cameras to be used, setting the number of lenses of the VR multi-view cameras to be 4, erecting the VR multi-view cameras on the site, and adjusting and calibrating the height and the angle of the VR multi-view cameras;
2) Shoot the scene with the VR multi-view camera to obtain 3 groups of video; each fisheye lens of the camera produces an input with an FOV greater than 120° (at a resolution of 7680×3840 or 3840×1920), and one channel of high-definition panoramic 8K video is generated as output for later use;
3) Take the empirical splicing parameters of a 4-lens VR camera as preset conditions; in the given scene, start the adaptive algorithm with the default parameters as initial values (a default parameter is a parameter assigned a default value when the function's parameters are declared), then perform image splicing, which means: according to the depth information of the overlapping area in each panoramic video image, image data at several depth levels are obtained from the corresponding overlapping area, and image data at the same depth level are spliced together; after several frames have been spliced, calculate a deviation value to estimate the splicing quality at the seam, then let the algorithm automatically fine-tune the parameters (the splicing parameters are stored in a memory of the multi-view camera or a memory card); repeat this process for a cumulative N trials, where N can be preset;
4) If the standard is reached within the N calibration trials, take the best trial's parameters as the target value;
5) If the standard is not reached within the N trials, continue with the next round of parameter calibration until it is reached, and take the parameters that finally reach the standard as the target value;
6) If the standard is still not reached after m rounds of calibration, take the best parameters found as the target value and stop calibration;
7) Then carry out standard splicing and the fusion process, using the target value as reference.
Example three:
a splicing method of VR multi-view camera videos comprises the following steps:
1) Selecting a shooting site, selecting VR multi-view cameras to be used, setting the number of lenses of the VR multi-view cameras to be 6, erecting the VR multi-view cameras on the site, and adjusting and calibrating the height and the angle of the VR multi-view cameras;
2) Shoot the scene with the VR multi-view camera to obtain 5 groups of video; each fisheye lens of the camera produces an input with an FOV greater than 120° (at a resolution of 7680×3840 or 3840×1920), and one channel of high-definition panoramic 8K video is generated as output for later use;
3) Take the empirical splicing parameters of a 6-lens VR camera as preset conditions; in the given scene, start the adaptive algorithm with the default parameters as initial values (a default parameter is a parameter assigned a default value when the function's parameters are declared), then perform image splicing, which means: according to the depth information of the overlapping area in each panoramic video image, image data at several depth levels are obtained from the corresponding overlapping area, and image data at the same depth level are spliced together; after several frames have been spliced, calculate a deviation value to estimate the splicing quality at the seam, then let the algorithm automatically fine-tune the parameters (the splicing parameters are stored in a memory of the multi-view camera or a memory card); repeat this process for a cumulative N trials, where N can be preset;
4) If the standard is reached within the N calibration trials, take the best trial's parameters as the target value;
5) If the standard is not reached within the N trials, continue with the next round of parameter calibration until it is reached, and take the parameters that finally reach the standard as the target value;
6) If the standard is still not reached after m rounds of calibration, take the best parameters found as the target value and stop calibration;
7) Then carry out standard splicing and the fusion process, using the target value as reference.
Example four:
a splicing method of VR multi-view camera videos comprises the following steps:
1) Selecting a shooting site, selecting VR multi-view cameras to be used, setting the number of lenses of the VR multi-view cameras to be 8, erecting the VR multi-view cameras on the site, and adjusting and calibrating the height and the angle of the VR multi-view cameras;
2) Shoot the scene with the VR multi-view camera to obtain 6 groups of video; each fisheye lens of the camera produces an input with an FOV greater than 120° (at a resolution of 7680×3840 or 3840×1920), and one channel of high-definition panoramic 8K video is generated as output for later use;
3) Take the empirical splicing parameters of an 8-lens VR camera as preset conditions; in the given scene, start the adaptive algorithm with the default parameters as initial values (a default parameter is a parameter assigned a default value when the function's parameters are declared), then perform image splicing, which means: according to the depth information of the overlapping area in each panoramic video image, image data at several depth levels are obtained from the corresponding overlapping area, and image data at the same depth level are spliced together; after several frames have been spliced, calculate a deviation value to estimate the splicing quality at the seam, then let the algorithm automatically fine-tune the parameters (the splicing parameters are stored in a memory of the multi-view camera or a memory card); repeat this process for a cumulative N trials, where N can be preset;
4) If the standard is reached within the N calibration trials, take the best trial's parameters as the target value;
5) If the standard is not reached within the N trials, continue with the next round of parameter calibration until it is reached, and take the parameters that finally reach the standard as the target value;
6) If the standard is still not reached after m rounds of calibration, take the best parameters found as the target value and stop calibration;
7) Then carry out standard splicing and the fusion process, using the target value as reference.
The invention has the beneficial effects that:
With the splicing method of the invention, rapid splicing of VR-camera panoramic images can be used for high-definition VR live broadcasting. Automatic calibration gives a good splicing effect, the seam-separation phenomenon rarely occurs, the amount of frame-by-frame splicing computation is reduced, and the problem that manually adjusting the splicing parameters of a VR panoramic camera rarely achieves an ideal effect in a real scene is solved.
The invention adopts a method of dynamic calibration parameters: the splicing calibration parameters are fine-tuned through training in the specific scene; splicing is carried out with the splicing algorithm and multiband fusion; the spliced images are automatically tested to check whether the splicing effect reaches the standard; and the calibration parameters are then fine-tuned by the algorithm until they match the optimal value within the range.
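Multiband fusion conventionally means Laplacian-pyramid blending: low image frequencies are mixed over a wide transition region and high frequencies over a narrow one, which hides the seam. A self-contained sketch — the box-filter pyramid and nearest-neighbour upsampling are simplifications of the usual Gaussian pyramid, and the band count is an arbitrary choice:

```python
import numpy as np

def _down(img):
    # 2x box-filter downsample (stand-in for a Gaussian pyramid step).
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def _up(img):
    # Nearest-neighbour upsample back to twice the size.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def multiband_blend(a, b, mask, bands=3):
    """Blend float images a and b (same shape, side lengths divisible by
    2**bands). mask is 1.0 where a should win. Each frequency band is
    blended with a mask smoothed to that band's scale."""
    if bands == 0:
        return mask * a + (1 - mask) * b          # coarsest level: direct mix
    da, db, dm = _down(a), _down(b), _down(mask)
    la = a - _up(da)                              # high-frequency band of a
    lb = b - _up(db)                              # high-frequency band of b
    low = multiband_blend(da, db, dm, bands - 1)  # blend coarser levels first
    return _up(low) + mask * la + (1 - mask) * lb # add blended detail back
```

Used on two overlapping panorama strips with a mask marking the seam side, this produces the smooth transition the fusion step describes; real implementations (e.g. OpenCV's multiband blender) use Gaussian filtering and more bands.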
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (5)
1. A splicing method for VR multi-view camera video, characterized by comprising the following steps:
1) Selecting a shooting site and the VR multi-view camera to be used, the camera having 2-8 lenses; erecting the camera on site and adjusting and calibrating its height and angle;
2) Shooting the scene with the VR multi-view camera to obtain 2-6 groups of video for later use;
3) Taking the empirical splicing parameters of a VR camera with 2-8 lenses as preset conditions; in the given scene, starting the adaptive algorithm with the default parameters as initial values, then performing image splicing; after several frames have been spliced, calculating a deviation value to estimate the splicing quality at the seam, then letting the algorithm automatically fine-tune the parameters; repeating this process for a cumulative N trials, where N can be preset;
4) If the standard is reached within the N calibration trials, taking the best trial's parameters as the target value;
5) If the standard is not reached within the N trials, continuing with the next round of parameter calibration until it is reached, and taking the parameters that finally reach the standard as the target value;
6) If the standard is still not reached after m rounds of calibration, taking the best parameters found as the target value and stopping calibration;
7) Then carrying out standard splicing and the fusion process, using the target value as reference.
2. The method of claim 1, wherein a default parameter is a parameter that is assigned a default value when the function's parameters are declared.
3. The method of claim 1, wherein image splicing means: according to the depth information of the overlapping area in each panoramic video image, obtaining image data at several depth levels from the corresponding overlapping area, and splicing together the image data at the same depth level.
4. The method for splicing VR multi-view camera video of claim 1, wherein the splicing parameters are stored in a memory of the multi-view camera or a memory card.
5. The method of claim 1, wherein each fisheye lens of the VR multi-view camera produces an input with an FOV greater than 120° (at a resolution of 7680×3840 or 3840×1920), and one channel of high-definition panoramic 8K video is generated as output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110375290.8A CN115209123A (en) | 2021-04-08 | 2021-04-08 | Splicing method for VR multi-view camera video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115209123A true CN115209123A (en) | 2022-10-18 |
Family
ID=83570314
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118368527A (en) * | 2024-06-20 | 2024-07-19 | 青岛珞宾通信有限公司 | Multi-camera panoramic camera image calibration method and system |
CN118368527B (en) * | 2024-06-20 | 2024-08-16 | 青岛珞宾通信有限公司 | Multi-camera panoramic camera image calibration method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20221018 |