CN116760963A - Video panorama stitching and three-dimensional fusion method and device - Google Patents

Video panorama stitching and three-dimensional fusion method and device

Info

Publication number
CN116760963A
Authority
CN
China
Prior art keywords
dimensional
video
fusion
stitching
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310699313.XA
Other languages
Chinese (zh)
Inventor
马平
张小二
孙靖
严姗姗
任鹏辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Film Digital Production Base Co ltd
Original Assignee
China Film Digital Production Base Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Film Digital Production Base Co ltd filed Critical China Film Digital Production Base Co ltd
Priority to CN202310699313.XA priority Critical patent/CN116760963A/en
Publication of CN116760963A publication Critical patent/CN116760963A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/282 Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to the field of video stitching and fusion, and in particular to a method and device for video panorama stitching and three-dimensional fusion. The method comprises the following steps: S1, shooting video from different angles with a plurality of cameras to obtain a plurality of video streams; S2, performing denoising and image-enhancement processing on each video stream to obtain clearer images; S3, performing panoramic stitching on the processed video streams to obtain a panoramic video; S4, constructing a three-dimensional scene model with three-dimensional modeling software; and S5, performing three-dimensional reconstruction fusion of the stitched panoramic video with the three-dimensional scene model to obtain a fused three-dimensional panoramic video. The invention combines captured video streams with a virtually constructed three-dimensional scene model to produce a three-dimensional panoramic video with higher realism and a better degree of integration.

Description

Video panorama stitching and three-dimensional fusion method and device
Technical Field
The invention relates to the field of video stitching and fusion, and in particular to a method and device for video panorama stitching and three-dimensional fusion.
Background
Panoramic video provides a display covering a 360-degree shooting angle. It is generally implemented with video stitching technology: for example, a plurality of cameras capture a scene, and the videos acquired by the cameras are then stitched in real time to form the panoramic video.
Chinese patent publication No. CN111583116A discloses a method and system for video panorama stitching and fusion based on multi-camera cross photography. It acquires, from the video streams, images taken at the same moment by two cameras with overlapping fields of view; sequentially applies distortion correction and orthographic correction to the two images to obtain corrected images; extracts feature points from the two corrected images and forms feature-point matching pairs; derives a perspective transformation matrix from the matching pairs and obtains two perspective-transformed images; computes masks for the two transformed images; and stitches the two perspective-transformed images according to the final mask and the feature-point matching pairs. This technical scheme achieves stitching of images between cameras whose main optical axes cross at a large included angle.
However, the above technical solution has the following disadvantages:
panoramic video is limited to shooting a real environment or real objects. When the object to be captured is not present in the real scene at the time of shooting, a panoramic video containing that object cannot be captured. In addition, when the object to be captured cannot be moved as desired, even if a panoramic video containing it can be shot, the resulting panoramic video may not meet the user's requirements, and its expressiveness is insufficient.
Disclosure of Invention
The invention aims to solve the problems described in the background and provides a video panorama stitching and three-dimensional fusion method and device that combine captured video streams with a virtually constructed three-dimensional scene model to produce a three-dimensional panoramic video with higher realism and a better degree of integration.
In one aspect, the invention provides a video panorama stitching and three-dimensional fusion method, which comprises the following steps:
S1, shooting video from different angles with a plurality of cameras to obtain a plurality of video streams;
S2, performing denoising and image-enhancement processing on each of the plurality of video streams to obtain clearer images;
S3, performing panoramic stitching on the processed video streams to obtain a panoramic video;
S4, constructing a three-dimensional scene model with three-dimensional modeling software;
and S5, performing three-dimensional reconstruction fusion of the stitched panoramic video with the three-dimensional scene model to obtain a fused three-dimensional panoramic video.
Preferably, the method further comprises S6, in which the user controls the three-dimensional panoramic video interactively.
Preferably, in step S2, the image processing uses the Gaussian filter formula
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),
where G(x, y) denotes the Gaussian function, σ denotes the standard deviation, and x and y denote the pixel distances in the horizontal and vertical directions, respectively.
Preferably, in step S3, the panorama stitching uses a perspective transformation to map the image coordinates before stitching to the image coordinates after stitching. The transformation maps each coordinate point (x, y) in the original image to a transformed coordinate point (x′, y′) and is parameterized by the camera focal length f, the camera rotation angle θ, and the camera translation distances t_x and t_y.
Preferably, the three-dimensional reconstruction fusion uses a triangulation formula in which d denotes the distance of the object from the camera, b denotes the length of the object on the image, and x_b and x_a denote the abscissa values of the two endpoints of the object on the image.
Preferably, in step S1, the plurality of cameras capture the video streams while all in a stationary state or all in a moving state at the same time.
In another aspect, the invention provides a video panorama stitching and three-dimensional fusion device for implementing the above video panorama stitching and three-dimensional fusion method, the device comprising:
video stream shooting module: shooting videos with different angles by using a plurality of cameras to obtain a plurality of video streams;
video stream processing module: respectively carrying out denoising and image enhancement image processing processes on a plurality of video streams to obtain a clearer image;
panoramic video stitching module: panoramic stitching is carried out on the processed video streams to obtain panoramic video;
the three-dimensional scene model building module: constructing a three-dimensional scene model by utilizing three-dimensional modeling software;
and a three-dimensional reconstruction fusion module: carrying out three-dimensional reconstruction fusion on the spliced panoramic video and the three-dimensional scene model to obtain a three-dimensional panoramic video after three-dimensional reconstruction fusion;
and the interaction control module is used for: and the user controls the three-dimensional panoramic video in an interactive mode.
Preferably, the video stream shooting module comprises a plurality of cameras and is carried on an unmanned aerial vehicle platform. The unmanned aerial vehicle platform comprises a mounting ball seat, a counterweight, a guide ring, a rotating ring, a connecting rod, an unmanned aerial vehicle body, a signal transmitter and a remote controller. The mounting ball seat is a hollow spherical structure; the counterweight is arranged at the inner bottom of the mounting ball seat; the plurality of cameras are evenly distributed over the spherical surface of the mounting ball seat; the guide ring is horizontally arranged on the outer circumferential surface of the mounting ball seat; the rotating ring is rotatably arranged on the guide ring; the two ends of the connecting rod are respectively connected to the rotating ring and the unmanned aerial vehicle body; the signal transmitter is arranged on the unmanned aerial vehicle body and is in communication connection with the unmanned aerial vehicle body and the cameras respectively; and the remote controller is in communication connection with the signal transmitter.
Preferably, during flight the unmanned aerial vehicle body flies by horizontally circling the mounting ball seat.
Compared with the prior art, the invention has the following beneficial technical effects:
according to the invention, the real video stream pictures are subjected to panoramic stitching to obtain the panoramic video, and then the panoramic video is subjected to fusion reconstruction with the built virtual three-dimensional scene model, so that the three-dimensional panoramic video obtained by reconstruction is higher in authenticity and better in combination degree compared with a single panoramic video and a single three-dimensional scene model. The manufactured three-dimensional panoramic video can bring convenience to users for brand new and more real virtual experience, and shows better display effect.
Drawings
FIG. 1 is a schematic flow chart of a video panorama stitching and three-dimensional fusion method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system architecture of a video panorama stitching and three-dimensional fusion device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of the video stream shooting module carried on the unmanned aerial vehicle platform;
fig. 4 is a schematic block diagram of the signaling principle of the unmanned aerial vehicle platform;
fig. 5 is a flight path diagram of the unmanned aerial vehicle body when the video stream shooting module rises or descends vertically in a straight line, wherein the solid line represents the moving path of the video stream shooting module and the dotted line represents the flight path of the unmanned aerial vehicle body;
fig. 6 is a flight path diagram of the unmanned aerial vehicle body when the video stream shooting module moves horizontally, wherein the solid line represents the moving path of the video stream shooting module and the dotted line represents the flight path of the unmanned aerial vehicle body;
fig. 7 is a flight path diagram of the unmanned aerial vehicle body when the video stream shooting module moves in an oblique direction, wherein the solid line represents the moving path of the video stream shooting module and the dotted line represents the flight path of the unmanned aerial vehicle body.
Reference numerals: 1. mounting ball seat; 2. guide ring; 3. rotating ring; 4. connecting rod; 5. unmanned aerial vehicle body; 6. signal transmitter.
Detailed Description
Example 1
As shown in fig. 1, the video panorama stitching and three-dimensional fusion method provided in this embodiment comprises the following steps:
s1, shooting videos with different angles by using a plurality of cameras to obtain a plurality of video streams, wherein the plurality of cameras are in a static state or in a moving state at the same time to shoot the video streams;
s2, respectively carrying out denoising and image enhancement image processing processes on a plurality of video streams to obtain a clearer image, wherein a Gaussian filter formula is used in the image processing process:
wherein G (x, y) represents a Gaussian function, sigma represents a standard deviation, sigma value is inversely proportional to image definition, the higher the sigma value is, the lower the image definition is, the lower the sigma value is, the higher the image definition is, and a proper sigma value can be selected according to actual conditions; x and y represent the distance of the pixel in the horizontal and vertical directions, respectively;
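A minimal sketch of this denoising step is shown below, assuming OpenCV's standard Gaussian blur as the filter implementation; the example σ value and the zero kernel size (which lets OpenCV derive the kernel from σ) are illustrative choices, not parameters prescribed above.

```python
import cv2

def denoise_frame(frame, sigma=1.5):
    """Smooth one video frame with a Gaussian filter of standard deviation sigma.

    A smaller sigma keeps more detail (higher sharpness); a larger sigma removes
    more noise at the cost of sharpness, matching the trade-off described above.
    """
    # ksize=(0, 0) tells OpenCV to derive the kernel size from sigma.
    return cv2.GaussianBlur(frame, (0, 0), sigmaX=sigma, sigmaY=sigma)

# Illustrative usage on a single frame decoded from one of the camera streams:
# clean = denoise_frame(frame, sigma=1.5)
```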
s3, carrying out panoramic stitching on the processed video streams to obtain panoramic video, wherein the panoramic stitching uses perspective transformation to transform the image coordinates before stitching into the image coordinates after stitching, and a perspective transformation formula is used:
wherein ,representing the transformed coordinate point +.>Represents a coordinate point in an original image, f represents a focal length of a camera, θ represents a rotation angle of the camera, and when the camera is not rotated, θ is 0, t x and ty Representing camera translation distance;
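Because the exact transformation matrix appears in the original drawings rather than in this text, the sketch below assumes a simplified planar model in which a homography is assembled from the focal length f, rotation angle θ, and translation (t_x, t_y) named above and applied with OpenCV; it illustrates the coordinate mapping, not the patent's precise formula.

```python
import numpy as np
import cv2

def build_homography(f, theta, tx, ty):
    """Assumed simplified perspective model: a rotation by theta (radians) scaled
    by f, plus a translation (tx, ty), expressed as a 3x3 homography."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[f * c, -f * s, tx],
                     [f * s,  f * c, ty],
                     [0.0,    0.0,   1.0]], dtype=np.float64)

def warp_to_panorama(frame, H, out_size):
    """Map pre-stitching image coordinates into the post-stitching panorama canvas."""
    return cv2.warpPerspective(frame, H, out_size)

# Illustrative usage: no rotation (theta = 0), shift the frame 1920 px to the right
# on a 3840x1080 panorama canvas.
# H = build_homography(f=1.0, theta=0.0, tx=1920.0, ty=0.0)
# warped = warp_to_panorama(frame, H, out_size=(3840, 1080))
```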
s4, constructing a three-dimensional scene model by utilizing three-dimensional modeling software, wherein 3DS Max, maya and other three-dimensional modeling software can be adopted to construct the three-dimensional scene model;
s5, carrying out three-dimensional reconstruction fusion on the spliced panoramic video and the three-dimensional scene model to obtain a three-dimensional panoramic video after three-dimensional reconstruction fusion, and matching the three-dimensional scene model to a corresponding position of the panoramic video during three-dimensional reconstruction fusion, wherein a triangulation formula is used in the three-dimensional reconstruction fusion:
where d represents the distance of the object from the camera, b represents the length of the object on the image, x b and xa Respectively representing the abscissa values of two endpoints of the object on the image;
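The triangulation formula itself appears in the original drawings; as an assumed stand-in, the sketch below uses the standard relation in which the distance d grows with f·b and shrinks with the disparity x_b − x_a. Reusing the focal length f from step S3 and interpreting b as a baseline-like length are assumptions made here for the example, not definitions taken from the text above.

```python
def triangulate_distance(f, b, x_a, x_b, eps=1e-9):
    """Assumed standard triangulation: d = f * b / (x_b - x_a).

    f        -- focal length in pixels (assumption; reused from step S3)
    b        -- baseline-like length in scene units (assumption; see lead-in)
    x_a, x_b -- abscissas of the two endpoints on the image, in pixels
    """
    disparity = x_b - x_a
    if abs(disparity) < eps:
        raise ValueError("Disparity is effectively zero; the point is too far away.")
    return f * b / disparity

# Illustrative numbers: f = 800 px, b = 0.12 scene units, x_a = 410 px, x_b = 455 px
# => d = 800 * 0.12 / 45 ≈ 2.13 scene units.
```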
and S6, the user controls the three-dimensional panoramic video interactively, which improves the user experience and provides more realistic viewing of the three-dimensional panoramic video.
In this embodiment, the real video stream pictures are panoramically stitched to obtain a panoramic video, which is then fused and reconstructed with the constructed virtual three-dimensional scene model, so that the reconstructed three-dimensional panoramic video has higher realism and a better degree of integration than either a panoramic video alone or a three-dimensional scene model alone. The resulting three-dimensional panoramic video conveniently gives users a brand-new, more realistic virtual experience and achieves a better display effect.
Example 2
As shown in figs. 1 to 7, the video panorama stitching and three-dimensional fusion device provided in this embodiment is configured to implement the video panorama stitching and three-dimensional fusion method of the first embodiment. The device comprises a video stream shooting module, a video stream processing module, a panoramic video stitching module, a three-dimensional scene model construction module, a three-dimensional reconstruction fusion module, and an interaction control module.
The video stream shooting module shoots video from different angles with a plurality of cameras to obtain a plurality of video streams. The video streams overlap at their edges, so their pictures completely cover the omnidirectional three-dimensional space and no shooting blind spots exist.
The video stream processing module performs denoising and image-enhancement processing on each of the video streams to obtain clearer images.
The panoramic video stitching module performs panoramic stitching on the processed video streams to obtain a panoramic video. During stitching, the overlapping portions between adjacent video streams are cropped away and only the non-overlapping picture content is retained, as sketched below.
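The sketch below is a rough illustration of this cropping-based approach; it assumes the per-pair overlap widths are already known (for example, from calibration), which is an assumption, since the module description above does not state how the overlaps are measured.

```python
import numpy as np

def stitch_by_cropping(frames, overlap_px):
    """Concatenate horizontally adjacent frames, dropping the overlapping strip
    between each pair so that only non-overlapping picture content is kept.

    frames     -- list of HxWx3 arrays from adjacent cameras, ordered left to right
    overlap_px -- list of overlap widths in pixels between frame i and frame i+1
    """
    parts = [frames[0]]
    for frame, overlap in zip(frames[1:], overlap_px):
        # Drop the left strip already covered by the previous frame.
        parts.append(frame[:, overlap:])
    return np.concatenate(parts, axis=1)

# pano = stitch_by_cropping([left, middle, right], overlap_px=[120, 120])
```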
The three-dimensional scene model construction module constructs a three-dimensional scene model with three-dimensional modeling software, such as 3ds Max or Maya; the model corresponds to the captured video stream pictures.
The three-dimensional reconstruction fusion module performs three-dimensional reconstruction fusion of the stitched panoramic video with the three-dimensional scene model to obtain a fused three-dimensional panoramic video. Because the panoramic video and the three-dimensional scene model are fused at corresponding positions, the resulting three-dimensional panoramic video is more realistic.
The interaction control module enables the user to control the three-dimensional panoramic video interactively, giving the user a better, immersive interactive experience of the three-dimensional panoramic video.
As shown in figs. 2 to 4, the video stream shooting module comprises a plurality of cameras and is carried on an unmanned aerial vehicle platform. The unmanned aerial vehicle platform comprises a mounting ball seat 1, a counterweight, a guide ring 2, a rotating ring 3, a connecting rod 4, an unmanned aerial vehicle body 5, a signal transmitter 6 and a remote controller. The mounting ball seat 1 is a hollow spherical structure, and the counterweight is arranged at the inner bottom of the mounting ball seat 1. The plurality of cameras are evenly distributed over the spherical surface of the mounting ball seat 1. The guide ring 2 is horizontally arranged on the outer circumferential surface of the mounting ball seat 1, and the rotating ring 3 is rotatably arranged on the guide ring 2. The two ends of the connecting rod 4 are respectively connected to the rotating ring 3 and the unmanned aerial vehicle body 5. The signal transmitter 6 is arranged on the unmanned aerial vehicle body 5 and is in communication connection with the unmanned aerial vehicle body 5 and the cameras respectively, and the remote controller is in communication connection with the signal transmitter 6. The remote controller can send signals to the unmanned aerial vehicle body 5 and the cameras through the signal transmitter 6 to control the flight of the unmanned aerial vehicle body 5 and the shooting of the cameras; the video streams shot by the cameras can be returned through the signal transmitter 6 to the remote controller for storage; and the user can retrieve the stored video streams from the remote controller through a reader.
During flight, the unmanned aerial vehicle body 5 flies by horizontally circling the mounting ball seat 1. Because the counterweight is arranged inside the mounting ball seat 1, when the unmanned aerial vehicle body 5 revolves around the mounting ball seat 1, it drives the connecting rod 4 and the rotating ring 3 to rotate around the mounting ball seat 1, while the guide ring 2 guides the rotating ring 3 so that the rotation is smoother. The plurality of cameras on the mounting ball seat 1 thus keep their shooting positions unchanged, ensuring continuity and stability of the captured pictures.
As shown in fig. 5, when the video stream shooting module rises or descends vertically, the mounting ball seat 1 and the plurality of cameras rise or descend vertically, and the unmanned aerial vehicle body 5 superimposes on this lifting motion a composite flight motion revolving around the mounting ball seat 1, so that the plurality of cameras can shoot continuous video streams.
As shown in fig. 6, when the video stream shooting module moves horizontally, the mounting ball seat 1 and the plurality of cameras move horizontally, and the unmanned aerial vehicle body 5 superimposes on the horizontal motion a composite flight motion revolving around the mounting ball seat 1, so that the plurality of cameras can shoot continuous video streams.
As shown in fig. 7, when the video stream shooting module moves in an oblique direction, the mounting ball seat 1 and the plurality of cameras also move in the oblique direction, and the unmanned aerial vehicle body 5 superimposes on the oblique motion a composite flight motion revolving around the mounting ball seat 1, so that the plurality of cameras can shoot continuous video streams.
Similarly, for any form of movement of the video stream shooting module, the mounting ball seat 1 and the plurality of cameras move together, and the unmanned aerial vehicle body 5 superimposes on that movement a composite flight motion revolving around the mounting ball seat 1, so that cameras facing different directions can shoot continuous video streams and the completeness and continuity of the final panoramic video are ensured. When the plurality of cameras of the video stream shooting module shoot video while stationary at a fixed position, the unmanned aerial vehicle body 5 performs only the revolving motion around the mounting ball seat 1.
When the unmanned aerial vehicle body 5 is in front of a camera, it blocks that camera's view and creates a blind area, so the camera cannot fully capture the video stream in that direction. When the unmanned aerial vehicle body 5 moves away, the camera can shoot normally and obtain footage captured while unobstructed; that footage is then used to simulate and fill in the frames that could not be captured while the camera was blocked, as in the sketch below, so that the continuity and completeness of the video streams from all cameras over the whole shooting process are achieved overall.
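One simple way to realize this fill-in idea is to substitute, for each occluded frame, the most recent unoccluded frame from the same camera. The sketch below assumes a per-frame occlusion flag is available as input; the description above does not specify how occlusion is detected, so that input is an assumption.

```python
def fill_occluded_frames(frames, occluded):
    """Replace frames blocked by the drone body with the most recent unblocked frame.

    frames   -- list of frames from one camera, in time order
    occluded -- list of booleans, True where the drone blocks this camera's view
    """
    filled = []
    last_clear = None
    for frame, blocked in zip(frames, occluded):
        if not blocked:
            last_clear = frame
            filled.append(frame)
        else:
            # Fall back to the original frame if the clip starts while occluded.
            filled.append(last_clear if last_clear is not None else frame)
    return filled
```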
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited thereto, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (9)

1. A video panorama stitching and three-dimensional fusion method, characterized by comprising the following steps:
S1, shooting video from different angles with a plurality of cameras to obtain a plurality of video streams;
S2, performing denoising and image-enhancement processing on each of the plurality of video streams to obtain clearer images;
S3, performing panoramic stitching on the processed video streams to obtain a panoramic video;
S4, constructing a three-dimensional scene model with three-dimensional modeling software;
and S5, performing three-dimensional reconstruction fusion of the stitched panoramic video with the three-dimensional scene model to obtain a fused three-dimensional panoramic video.
2. The video panorama stitching and three-dimensional fusion method according to claim 1, further comprising S6, in which the user controls the three-dimensional panoramic video interactively.
3. The video panorama stitching and three-dimensional fusion method according to claim 1, wherein in step S2, the image processing uses the Gaussian filter formula
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),
where G(x, y) denotes the Gaussian function, σ denotes the standard deviation, and x and y denote the pixel distances in the horizontal and vertical directions, respectively.
4. The video panorama stitching and three-dimensional fusion method according to claim 3, wherein in step S3, the panorama stitching uses a perspective transformation to map the image coordinates before stitching to the image coordinates after stitching, the transformation mapping each coordinate point (x, y) in the original image to a transformed coordinate point (x′, y′) and being parameterized by the camera focal length f, the camera rotation angle θ, and the camera translation distances t_x and t_y.
5. The video panorama stitching and three-dimensional fusion method according to claim 4, wherein the three-dimensional reconstruction fusion uses a triangulation formula in which d denotes the distance of the object from the camera, b denotes the length of the object on the image, and x_b and x_a denote the abscissa values of the two endpoints of the object on the image.
6. The video panorama stitching and three-dimensional fusion method according to claim 1, wherein in step S1, the plurality of cameras shoot the video streams while all in a stationary state or all in a moving state at the same time.
7. A video panorama stitching and three-dimensional fusion device for implementing the video panorama stitching and three-dimensional fusion method according to claim 6, characterized in that the video panorama stitching and three-dimensional fusion device comprises:
a video stream shooting module: shooting video from different angles with a plurality of cameras to obtain a plurality of video streams;
a video stream processing module: performing denoising and image-enhancement processing on each of the plurality of video streams to obtain clearer images;
a panoramic video stitching module: performing panoramic stitching on the processed video streams to obtain a panoramic video;
a three-dimensional scene model construction module: constructing a three-dimensional scene model with three-dimensional modeling software;
a three-dimensional reconstruction fusion module: performing three-dimensional reconstruction fusion of the stitched panoramic video with the three-dimensional scene model to obtain a fused three-dimensional panoramic video;
and an interaction control module: allowing the user to control the three-dimensional panoramic video interactively.
8. The video panorama stitching and three-dimensional fusion device according to claim 7, wherein the video stream shooting module comprises a plurality of cameras and is carried on an unmanned aerial vehicle platform; the unmanned aerial vehicle platform comprises a mounting ball seat (1), a counterweight, a guide ring (2), a rotating ring (3), a connecting rod (4), an unmanned aerial vehicle body (5), a signal transmitter (6) and a remote controller; the mounting ball seat (1) is a hollow spherical structure; the counterweight is arranged at the inner bottom of the mounting ball seat (1); the cameras are evenly distributed over the spherical surface of the mounting ball seat (1); the guide ring (2) is horizontally arranged on the outer circumferential surface of the mounting ball seat (1); the rotating ring (3) is rotatably arranged on the guide ring (2); the two ends of the connecting rod (4) are respectively connected to the rotating ring (3) and the unmanned aerial vehicle body (5); the signal transmitter (6) is arranged on the unmanned aerial vehicle body (5) and is in communication connection with the unmanned aerial vehicle body (5) and the cameras respectively; and the remote controller is in communication connection with the signal transmitter (6).
9. The video panorama stitching and three-dimensional fusion device according to claim 8, wherein during flight the unmanned aerial vehicle body (5) flies by horizontally circling the mounting ball seat (1).
CN202310699313.XA 2023-06-13 2023-06-13 Video panorama stitching and three-dimensional fusion method and device Pending CN116760963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310699313.XA CN116760963A (en) 2023-06-13 2023-06-13 Video panorama stitching and three-dimensional fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310699313.XA CN116760963A (en) 2023-06-13 2023-06-13 Video panorama stitching and three-dimensional fusion method and device

Publications (1)

Publication Number Publication Date
CN116760963A (en) 2023-09-15

Family

ID=87954718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310699313.XA Pending CN116760963A (en) 2023-06-13 2023-06-13 Video panorama stitching and three-dimensional fusion method and device

Country Status (1)

Country Link
CN (1) CN116760963A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678722A (en) * 2014-11-20 2016-06-15 深圳英飞拓科技股份有限公司 Panoramic stitched image bending correction method and panoramic stitched image bending correction device
CN105739525A (en) * 2016-02-14 2016-07-06 普宙飞行器科技(深圳)有限公司 System of matching somatosensory operation to realize virtual flight
CN111583116A (en) * 2020-05-06 2020-08-25 上海瀚正信息科技股份有限公司 Video panorama stitching and fusing method and system based on multi-camera cross photography
CN112017222A (en) * 2020-09-08 2020-12-01 北京正安维视科技股份有限公司 Video panorama stitching and three-dimensional fusion method and device
CN112383746A (en) * 2020-10-29 2021-02-19 北京软通智慧城市科技有限公司 Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN101689292B (en) Banana codec
CN110730296B (en) Image processing apparatus, image processing method, and computer readable medium
WO2019104641A1 (en) Unmanned aerial vehicle, control method therefor and recording medium
CN107660337A (en) For producing the system and method for assembled view from fish eye camera
US11233944B2 (en) Method for achieving bullet time capturing effect and panoramic camera
WO2019041276A1 (en) Image processing method, and unmanned aerial vehicle and system
WO2014162324A1 (en) Spherical omnidirectional video-shooting system
WO2018133589A1 (en) Aerial photography method, device, and unmanned aerial vehicle
CN105072314A (en) Virtual studio implementation method capable of automatically tracking objects
WO2018035764A1 (en) Method for taking wide-angle pictures, device, cradle heads, unmanned aerial vehicle and robot
EP2612491A1 (en) Rotary image generator
CN108600607A (en) A kind of fire-fighting panoramic information methods of exhibiting based on unmanned plane
JP2008005450A (en) Method of grasping and controlling real-time status of video camera utilizing three-dimensional virtual space
WO2019023914A1 (en) Image processing method, unmanned aerial vehicle, ground console, and image processing system thereof
WO2022047701A1 (en) Image processing method and apparatus
CN109525816A (en) A kind of more ball fusion linked systems of multiple gun based on three-dimensional geographic information and method
CN115330594A (en) Target rapid identification and calibration method based on unmanned aerial vehicle oblique photography 3D model
TWI696147B (en) Method and system for rendering a panoramic image
KR20160102845A (en) Flight possible omnidirectional image-taking camera system
CN113163107B (en) Panoramic picture timing acquisition triggering system and method
CN213126145U (en) AR-virtual interactive system
CN115935011A (en) Data processing method of mirroring platform based on BIM (building information modeling)
CN116760963A (en) Video panorama stitching and three-dimensional fusion method and device
CN113454980A (en) Panorama shooting method, electronic device and storage medium
CN112334853A (en) Course adjustment method, ground end equipment, unmanned aerial vehicle, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination