CN108769569B - 360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle - Google Patents

Info

Publication number
CN108769569B
CN108769569B CN201810314065.1A CN201810314065A
Authority
CN
China
Prior art keywords
image data
module
image
unmanned aerial
observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810314065.1A
Other languages
Chinese (zh)
Other versions
CN108769569A (en
Inventor
晁涌耀
梁艳菊
常嘉义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Microelectronics Technology Research Institute
Original Assignee
Kunshan Microelectronics Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Microelectronics Technology Research Institute filed Critical Kunshan Microelectronics Technology Research Institute
Priority to CN201810314065.1A priority Critical patent/CN108769569B/en
Publication of CN108769569A publication Critical patent/CN108769569A/en
Application granted granted Critical
Publication of CN108769569B publication Critical patent/CN108769569B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

A 360-degree stereoscopic panoramic observation system for an unmanned aerial vehicle, comprising an observation subsystem and a processing subsystem, wherein the observation subsystem is composed of a plurality of cameras observing the six directions of the space in which the unmanned aerial vehicle is located, and the processing subsystem communicates with the observation subsystem. Compared with the prior art, the system can observe the surrounding environmental information at the unmanned aerial vehicle's altitude as well as the environmental image information above it, providing an omnidirectional field of view covering 360 degrees both horizontally and vertically. In addition, a position and elevation sensing module is added, compatible with BeiDou/GPS satellite information and inertial-sensor information, so that position and elevation within the observed environment can be detected more accurately. The observed images are modeled in 2D and 3D, and combining the elevation information with the environmental image information describes the environment more accurately.

Description

360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle
Technical Field
The invention relates to the field of unmanned aerial vehicle aerial photography, and in particular to a 360-degree three-dimensional panoramic observation system and method for an unmanned aerial vehicle, used to realize 360-degree three-dimensional panoramic observation from the unmanned aerial vehicle.
Background
With the rapid development of the economy and the continuous progress of society, environment-detection technology based on unmanned aerial vehicles carrying various sensor systems has been widely applied. In environmental monitoring for environmental protection, the unmanned aerial vehicle's strong timeliness, good maneuverability and wide patrol range make it well suited to surveying environmental conditions over a large area. In surveying and mapping, the mapping unmanned aerial vehicle, as a low-cost, high-precision, easy-to-operate and easily transported remote-sensing image acquisition device, has performed well in traditional surveying and mapping, digital city construction, geographic condition monitoring, disaster emergency response and other areas.
Current unmanned aerial vehicle observation systems mainly shoot and stitch scenes looking down at the ground. Such a system can comprehensively observe the situation on the ground, but it cannot acquire images surrounding the unmanned aerial vehicle or images above it, so during environmental monitoring it cannot reflect the three-dimensional state of the environment. When the flying height is low, it is difficult to observe the environmental situation in all directions.
Application No. 201610969823.4 discloses an unmanned aerial vehicle panoramic vision tracking method, an unmanned aerial vehicle and a control terminal, wherein the method comprises the following steps: acquiring images shot by a plurality of cameras at the same time point; stitching the images shot by the plurality of cameras at the same time point to form a panoramic image; and transmitting each stitched panoramic image to a control terminal wirelessly connected with the unmanned aerial vehicle.
The above patent has two problems: 1. All cameras are arranged below the unmanned aerial vehicle, so image information at and above the vehicle's own altitude cannot be perceived; panoramic information is provided only in the downward-looking plane, not in the vertical direction. 2. The image information is not combined with the altitude information of the unmanned aerial vehicle, so the three-dimensional information at each altitude in the actual environment cannot be detected in detail.
Disclosure of Invention
In order to solve the above problems, according to one aspect of the present invention, a 360-degree stereoscopic panoramic observation system for an unmanned aerial vehicle is disclosed, comprising an observation subsystem and a processing subsystem, wherein the observation subsystem is composed of a plurality of cameras observing below, above and around the unmanned aerial vehicle, the processing subsystem communicates with the observation subsystem, and the processing subsystem is used for processing the images obtained by the plurality of cameras in the observation subsystem.
Preferably, the observation subsystem further comprises: a structure for fixing the plurality of cameras.
Preferably, the processing subsystem comprises: the system comprises a core processor module, a system storage module, a video storage module, an image data synchronization module and a position and elevation sensing module, wherein the system storage module, the video storage module, the image data synchronization module and the position and elevation sensing module are respectively connected with the core processor module.
More preferably, the core processor module is a DSP, a GPU, an FPGA, or a CPU, and is configured to control reading-in of scene image data, image stitching, image processing, and user interaction.
More preferably, the image data synchronization module is configured to synchronize the multiple camera channels, the position and elevation sensing module is configured to obtain the longitude, latitude and elevation of the system, the communication module is configured to communicate with the outside, and the system storage module and the video storage module are respectively configured to store the control program and the synthesized panoramic image data, wherein the video storage module is an SD card or a TF card.
According to another aspect of the invention, a method for using the 360-degree stereoscopic panoramic observation system for the unmanned aerial vehicle is disclosed, which is characterized by comprising the following steps:
the observation subsystem collects image data;
the processing subsystem carries out image preprocessing on the image data;
the processing subsystem carries out image correction on the preprocessed image data according to the internal and external parameter data of the cameras;
the processing subsystem carries out panoramic projection model modeling according to the corrected image data and the internal and external parameter data of the camera to obtain a panoramic projection model;
the processing subsystem carries out panoramic projection texture mapping on the panoramic projection model to obtain a panoramic projection texture mapping image;
and the processing subsystem carries out image texture fusion on the panoramic projection texture mapping image.
Further, the internal and external parameters of the cameras include: the focal lengths of the plurality of cameras and the relative position information among the plurality of cameras.
Further, the modeling of the panoramic projection model includes: according to observation needs, performing 2D or 3D modeling on the image data according to the relative position information among the plurality of cameras, wherein the 2D model is composed of a square matrix of image cells, each cell in the matrix corresponding to one pixel, and the 3D model is a stereo model composed of a plurality of faces, each component face corresponding to one pixel, the pixels coming from the corrected image data.
Further, performing panoramic projection texture mapping on the panoramic projection model includes: according to the relative position information of the cameras, mapping the positions of the cells or faces composing the 2D or 3D model through the pixels to the corrected image information and then to the original image information, thereby completing the panoramic projection texture mapping, wherein each position corresponds to one or more pieces of original image information.
Further, performing image texture fusion on the panoramic projection texture-mapped image includes: fusing the plurality of original images corresponding to a pixel position by assigning them different weights.
Compared with the prior art, the invention can observe the surrounding environmental information at the unmanned aerial vehicle's altitude as well as the environmental image information above it, providing an omnidirectional field of view covering 360 degrees both horizontally and vertically. In addition, a position and elevation sensing module is added, compatible with BeiDou/GPS satellite information and inertial-sensor information, so that position and elevation within the observed environment can be detected more accurately. The observed images are modeled in 2D and 3D, and combining the elevation information with the environmental image information describes the environment more accurately.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the specific embodiments. The drawings are only for purposes of illustrating the particular embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a structural diagram of the system of the present invention.
FIG. 2 is a flow chart of image processing according to the present invention.
FIG. 3 is a diagram illustrating a projected texture map according to the present invention.
FIG. 4 is a diagram of an observation subsystem architecture in accordance with an alternative embodiment of the present invention.
FIG. 5 is a diagram of an observation subsystem configuration in accordance with another alternative embodiment of the present invention.
FIG. 6 is a schematic diagram of an embodiment of estimating an area of an observation area in a downward view by using an elevation information sensing module according to the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention discloses a 360-degree stereoscopic panoramic observation system for an unmanned aerial vehicle, comprising an observation subsystem and a processing subsystem. A plurality of cameras observe the six directions of the space in which the unmanned aerial vehicle is located, and the pictures from these six directions are combined to realize 360-degree stereoscopic panoramic observation of that space. The cameras form a camera group, which serves as the observation subsystem of the system. The observation subsystem sends the collected image information for each direction to the processing subsystem for modeling and image fusion, finally realizing 360-degree stereoscopic panoramic observation of the space in which the unmanned aerial vehicle is located.
As shown in fig. 1, the system of the present invention is composed of an observation subsystem (camera group) and a processing subsystem. The observation subsystem consists of a plurality of cameras observing below, above and around the unmanned aerial vehicle. The processing subsystem comprises a core processor module together with a system storage module, a video storage module, an image data synchronization module and a position and elevation sensing module, each connected to the core processor module.
Specifically, the core processor module is the central processing unit and control center of the whole system, responsible for controlling the reading-in of scene image data, the processing and stitching of images, and interaction with the user. The central processing unit can be a DSP digital signal processor, a GPU image processing unit, an FPGA programmable logic array or a CPU. To ensure normal operation of the processor, the invention also comprises a power supply module, a clock module and a communication interface module, all connected to the core processor module: the power supply module transforms voltage to provide a stable supply to the other modules, the clock module provides timing and counting functions, and the communication interface module connects the core processor module with the observation subsystem (camera group). The system storage module and the video storage module are each an SD card, an SDHC (high-capacity SD memory) card or a TF card; the system storage module stores the control program and temporary data, while the video storage module stores video data, including the original video data, the corrected video data and the video data after image texture fusion. The image data synchronization module performs comprehensive synchronization of the multiple camera channels, such as reading, synchronizing, encoding and transmission control of multi-channel camera data.
In addition, when the image information output by the cameras is an analog signal, the image data synchronization module also performs analog-to-digital conversion on the images transmitted by the multiple cameras. The position and elevation sensing module acquires the longitude, latitude and elevation of the system from satellites and the relevant sensors; it is compatible with BeiDou satellites, GPS satellites, and inertial sensors such as accelerometers and gyroscopes. By introducing the position and elevation sensing module, the invention can calculate the area of an observation region, or the height of the unmanned aerial vehicle can be accurately controlled by a remote control device external to the invention. The communication module communicates with control or display equipment external to the invention. The working process is as follows: under the control of the clock module, the plurality of cameras transmit image data through the image data synchronization module to the core processor module, which simultaneously receives elevation information from the position and elevation sensing module; the core processor module invokes the control program in the system storage module, processes the image data and combines it with the elevation information to generate 360-degree stereoscopic image data; the original image data, the corrected image data and the generated fused image data are stored, and the fused image data is transmitted through the communication module to a remote monitoring device outside the unmanned aerial vehicle for display, thereby realizing 360-degree panoramic stereoscopic observation of the space in which the unmanned aerial vehicle is located. The implementation of the invention is explained below.
As shown in fig. 2, the image processing flow of the present invention comprises the following steps: the observation subsystem collects image data; the processing subsystem performs image preprocessing on the image data; the processing subsystem performs image correction on the preprocessed image data according to the internal and external parameter data of the cameras; the processing subsystem performs panoramic projection model modeling according to the corrected image data and the internal and external parameter data of the cameras to obtain a panoramic projection model; the processing subsystem performs panoramic projection texture mapping on the panoramic projection model to obtain a panoramic projection texture-mapped image; and the processing subsystem performs image texture fusion on the panoramic projection texture-mapped image.
Specifically, the invention uses a plurality of cameras to observe the six directions of the space in which the unmanned aerial vehicle is located simultaneously, obtaining image data for all six directions. The processing subsystem receives the image data for the six directions through the communication interface module and performs the following processing. Image preprocessing mainly performs sharpness enhancement, image denoising and video de-interlacing. Denoising uses a Gaussian filtering method, whose principle is to take a weighted average over the whole image so that the image becomes smoother. De-interlacing can use an inter-field median filtering algorithm, which proceeds as follows: for six lines of the image (row1-row6), taking the first three as an example, let the row1 odd-field pixel values be A, B and C, the row2 even-field pixels be D, E and F, and the row3 odd-field pixels be G, H and I. After de-interlacing, the first line keeps its pixels A, B, C, and the second line is filled column by column with median values:
value1 = median(A, D, G)
value2 = median(B, E, H)
value3 = median(C, F, I)
That is, the three values in each column are arranged from small to large and the middle value is taken, as shown in the following table:
(table image omitted in the original publication)
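The inter-field median fill described above can be sketched as follows. This is a minimal reconstruction from the description, not the patent's implementation; the row layout (odd-field lines at even indices) and pixel-value types are assumptions:

```python
from statistics import median

def deinterlace_median(frame):
    """Inter-field median de-interlacing sketch.

    frame: list of scanlines (lists of pixel values). Odd-field lines
    (indices 0, 2, 4, ...) are kept unchanged; every even-field line
    is refilled column by column with the median of the pixel above,
    the original pixel, and the pixel below.
    """
    out = [row[:] for row in frame]
    for r in range(1, len(frame) - 1, 2):          # even-field lines
        for c in range(len(frame[r])):
            out[r][c] = median((frame[r - 1][c], frame[r][c], frame[r + 1][c]))
    return out
```

With row1 = [A, B, C], row2 = [D, E, F] and row3 = [G, H, I], the refilled second line is [median(A, D, G), median(B, E, H), median(C, F, I)], i.e. value1 to value3.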
In image correction: most images shot by a camera exhibit distortion, and the distortion is generally more severe the larger the camera's field of view. Therefore, by observing the distortion characteristics and using a method based on a calibration object, a mathematical model of the image distortion is established, from which the distortion law is obtained; the distorted image is then restored by inverting the distortion process according to the known camera distortion law. In this process the distortion is affected by the camera's internal parameters, such as the focal length, and its external parameters, such as the relative positions of the cameras, so the internal and external parameter data of the cameras are also needed during image correction.
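As a concrete illustration of inverting a known distortion law, the sketch below assumes a one-parameter radial model, x_d = x_u(1 + k1*r^2) in normalized coordinates, and undoes it by fixed-point iteration. The model, parameter names and function names are illustrative assumptions, not the patent's calibration method:

```python
def distort_point(xu, yu, fx, fy, cx, cy, k1):
    """Apply a one-parameter radial distortion model to an ideal pixel."""
    xn = (xu - cx) / fx                  # normalize with intrinsics
    yn = (yu - cy) / fy
    r2 = xn * xn + yn * yn
    return cx + fx * xn * (1 + k1 * r2), cy + fy * yn * (1 + k1 * r2)

def undistort_point(xd, yd, fx, fy, cx, cy, k1, iters=20):
    """Invert the model above by fixed-point iteration, i.e. follow the
    'inverse process of the image distortion' mentioned in the text."""
    xn = (xd - cx) / fx
    yn = (yd - cy) / fy
    xu, yu = xn, yn                      # initial guess: no distortion
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu = xn / (1 + k1 * r2)
        yu = yn / (1 + k1 * r2)
    return cx + fx * xu, cy + fy * yu
```

Round-tripping a pixel through distort_point and then undistort_point recovers the original position, which is a useful sanity check for any such inverse mapping.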
Panoramic projection modeling: to facilitate panoramic observation, the panoramic image can be processed in two ways according to observation requirements, with either 2D modeling or 3D modeling. The 2D model in the invention is composed of a grid of cells, each cell representing one pixel. The 3D model is built in a virtual 3D world coordinate system so that richer viewpoints can be observed; it is presented as a 3-dimensional point cloud, and the 3D projection model can be a sphere or another solid. In the invention the surface of the 3D model is segmented into many small unit faces, each corresponding to one pixel. The planar 2D model is suitable for observing a planar panoramic scene in a top view or bottom view. When 2D modeling is selected with upward observation, the invention stitches the images shot above the space in which the unmanned aerial vehicle is located with the images shot around it to form a complete upward-looking scene picture; similarly, when downward observation is selected, the images shot below the vehicle are stitched with the images shot around it to form a complete downward-looking scene picture.
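The segmentation of a spherical 3D projection model into small unit faces can be sketched by a latitude/longitude subdivision, one point per unit face, giving the point-cloud representation described above. The subdivision scheme and counts are assumptions for illustration:

```python
import math

def sphere_model_points(n_lat, n_lon, radius=1.0):
    """Return one 3-D point per unit face of a lat/long subdivision of
    a sphere, as a simple point-cloud stand-in for the 3D model."""
    points = []
    for i in range(1, n_lat):                    # skip the two poles
        theta = math.pi * i / n_lat              # polar angle
        for j in range(n_lon):
            phi = 2.0 * math.pi * j / n_lon      # azimuth
            points.append((radius * math.sin(theta) * math.cos(phi),
                           radius * math.sin(theta) * math.sin(phi),
                           radius * math.cos(theta)))
    return points
```

Each returned point later receives one texture value during texture mapping, exactly like the "small unit faces" in the text.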
Panoramic projection texture mapping assigns each pixel on the panoramic projection model a specific texture value. As described above, each cell or small unit face on the panoramic projection model represents one pixel. The position of that pixel can be mapped, through the camera's internal and external parameters, into the distortion-corrected image and then into the original image (the unprocessed image shot by the camera), finally yielding the corresponding pixel position in the original camera image; this position is also called the texture coordinate. The invention uses multiple cameras observing simultaneously, and to guarantee a complete picture their observation areas overlap, so a pixel position in the panoramic model may have more than one corresponding texture coordinate among the original images; the texture mapping process is shown in fig. 3. Therefore, in the present invention, image texture fusion is performed for the case where one model pixel corresponds to multiple texture coordinates, as follows.
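The mapping from a model pixel to a texture coordinate can be sketched with a minimal pinhole projection, where extrinsics (R, t) and intrinsics (fx, fy, cx, cy) stand in for the "internal and external parameters"; the matrix layout and names are assumptions, and lens distortion is omitted for brevity:

```python
def texture_coordinate(point, R, t, fx, fy, cx, cy):
    """Project a 3-D model point into one camera's image plane.

    point: (X, Y, Z) in world coordinates; R: 3x3 rotation as nested
    lists; t: translation. Returns the pixel (texture coordinate), or
    None if the point lies behind the camera.
    """
    # camera coordinates: p_c = R @ p_w + t
    xc = R[0][0]*point[0] + R[0][1]*point[1] + R[0][2]*point[2] + t[0]
    yc = R[1][0]*point[0] + R[1][1]*point[1] + R[1][2]*point[2] + t[1]
    zc = R[2][0]*point[0] + R[2][1]*point[1] + R[2][2]*point[2] + t[2]
    if zc <= 0:
        return None                      # not visible to this camera
    return (cx + fx * xc / zc, cy + fy * yc / zc)
```

Running a model face's pixel through each camera's projection yields zero, one or several texture coordinates; several coordinates is exactly the overlap case that the texture fusion step handles.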
Image texture fusion: for each model pixel with multiple texture coordinates, the weight of each texture at that pixel is determined, and the multiple textures are then fused according to their weights. Mathematically:
F(x,y) = w1·f1(x,y) + w2·f2(x,y) + … + wn·fn(x,y)
where F(x,y) represents the fused texture, fi(x,y) represents the i-th texture at a pixel, and wi represents the weight of the i-th texture at that pixel.
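The weighted sum above can be sketched directly. Normalizing the weights so they sum to 1 is an added assumption (the text does not state it) that keeps brightness unchanged when weights are given on an arbitrary scale:

```python
def fuse_textures(textures, weights):
    """Fuse the texture values f_i sampled at one model pixel using
    weights w_i: F = sum(w_i * f_i) / sum(w_i)."""
    if not textures or len(textures) != len(weights):
        raise ValueError("need one weight per texture sample")
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, textures)) / total
```

For example, two overlapping cameras sampling values 100 and 200 with weights 1 and 3 fuse to 175, so the more heavily weighted camera dominates.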
Detailed Description of Embodiments
As shown in fig. 4, a structural diagram of an observation subsystem according to an alternative embodiment of the present invention: in this embodiment, 6 individual cameras are fixed on a hexahedral structure to form a camera group, and panoramic stereoscopic image capture is realized by this camera group. Specifically, one camera is installed on each of the top surface, the bottom surface and the four side surfaces of the hexahedral structure; the cameras on the 4 side surfaces cover a horizontal 360-degree field of view, the top or bottom camera together with some of the side cameras forms a 2D field of view covering the scene in the vertical direction, and the 6 cameras together form a 3D stereoscopic scene. The cameras can be digital or analog.
Fig. 5 is a structural diagram of an observation subsystem according to another alternative embodiment of the present invention, in which the observation subsystem (camera group) adopts a pentahedral structure and the horizontal 360 degrees are covered by three wide-angle cameras with observation angles greater than 120 degrees.
As shown in fig. 6, a schematic diagram of an embodiment in which the position and elevation information sensing module is used to estimate the area of the downward observation region: α represents the observation angle of the camera, h represents the height of the unmanned aerial vehicle, and r represents the radius of the observation region. Since the height of the unmanned aerial vehicle is much greater than the diameter of the camera lens, α and β are approximately equal, so the radius of the observation region is approximately r ≈ h·tan(α), and the area of the observation region is approximately π[h·tan(α)]².
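The footprint estimate can be sketched as follows, assuming the standard pinhole geometry r = h·tan(α) for an observation half-angle α measured from the vertical; the function name, degree units and returned pair are illustrative assumptions:

```python
import math

def downward_footprint_area(h, alpha_deg):
    """Approximate the downward observation region of the camera.

    h: flight height; alpha_deg: observation half-angle from the
    vertical, in degrees. Returns (radius, area) with r = h*tan(alpha)
    and S = pi * r**2.
    """
    alpha = math.radians(alpha_deg)
    r = h * math.tan(alpha)
    return r, math.pi * r * r
```

At h = 100 m with a 45-degree half-angle the footprint radius equals the height, so doubling the flight height quadruples the observed area.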
The above description is only an exemplary embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (1)

1. A method of using a 360-degree stereoscopic panoramic observation system for an unmanned aerial vehicle, the system comprising an observation subsystem and a processing subsystem, wherein the observation subsystem is composed of a plurality of cameras observing below, above and around the unmanned aerial vehicle, the processing subsystem communicates with the observation subsystem, and the processing subsystem is used for processing the images obtained by the plurality of cameras in the observation subsystem, the method comprising:
the observation subsystem collects image data;
the processing subsystem carries out image preprocessing on the image data;
the processing subsystem carries out image correction on the preprocessed image data according to the internal and external parameter data of the cameras; the internal and external parameters of the cameras include: the focal lengths of the plurality of cameras and the relative position information among the plurality of cameras;
the processing subsystem carries out panoramic projection model modeling according to the corrected image data and the internal and external parameter data of the cameras to obtain a panoramic projection model; the modeling of the panoramic projection model comprises: according to observation needs, performing 2D or 3D modeling on the image data according to the relative position information among the plurality of cameras, wherein the 2D model is composed of a square matrix of image cells, each cell in the matrix corresponding to one pixel, and the 3D model is a stereo model composed of a plurality of faces, each component face corresponding to one pixel, the pixels coming from the corrected image data;
the processing subsystem performs panoramic projection texture mapping on the panoramic projection model to obtain a panoramic projection texture-mapped image; according to the relative position information of the cameras, the position information of the cells or faces constituting the 2D or 3D model is mapped through the pixels to the corrected image information and then to the original image information, thereby completing the panoramic projection texture mapping, wherein each item of pixel position information corresponds to one or more items of original image information;
the processing subsystem performs image texture fusion on the panoramic projection texture-mapped image by assigning different weights to the plurality of original images corresponding to the pixel position information, expressed mathematically as:

F(x,y) = w₁f₁(x,y) + w₂f₂(x,y) + … + wₙfₙ(x,y)

where F(x,y) denotes the fused texture, fᵢ(x,y) denotes the i-th texture at a pixel, and wᵢ denotes the weight of the i-th texture at that pixel;
the processing subsystem includes: a core processor module, a system storage module, a video storage module, an image data synchronization module and a position and elevation sensing module, wherein the system storage module, the video storage module, the image data synchronization module and the position and elevation sensing module are each connected to the core processor module;
the image data synchronization module is used for synchronizing the multiple camera channels, the position and elevation sensing module is used for obtaining the longitude and latitude information and the elevation information of the system, the communication module is used for communicating with the outside, and the system storage module and the video storage module are used for storing the control program and the synthesized panoramic image data respectively, wherein the video storage module is an SD (Secure Digital) card or a TF (TransFlash) card;
under the control of the clock module, the plurality of cameras transmit image data to the core processor module through the image data synchronization module, while the core processor module receives elevation information from the position and elevation sensing module; the core processor module invokes the control program in the system storage module, processes the image data and combines it with the elevation information to generate 360-degree stereoscopic image data; the original image data, the corrected image data and the generated fused image data are stored, and the fused image data is transmitted through the communication module to a remote monitoring device for display, thereby achieving 360-degree panoramic stereoscopic observation of the space in which the unmanned aerial vehicle is located.
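As a non-authoritative illustration of the weighted texture fusion described in the claim, the sketch below blends n candidate textures for the same pixel grid according to F(x,y) = w₁f₁(x,y) + … + wₙfₙ(x,y). It is a minimal reading of the formula, not the patented implementation: the function name, the array shapes, and the assumption that the weights sum to 1 (so the fused value stays in the input range) are all choices made here for illustration.

```python
import numpy as np

def fuse_textures(textures, weights):
    """Weighted blend of n textures sampled for the same pixel grid:
    F(x, y) = w1*f1(x, y) + w2*f2(x, y) + ... + wn*fn(x, y).

    textures: list of n arrays of identical shape, e.g. (H, W) or
              (H, W, C), each sampled from one source camera image.
    weights:  list of n scalars; assumed here to sum to 1.
    """
    if len(textures) != len(weights):
        raise ValueError("need exactly one weight per texture")
    fused = np.zeros_like(textures[0], dtype=np.float64)
    for f_i, w_i in zip(textures, weights):
        fused += w_i * np.asarray(f_i, dtype=np.float64)
    return fused

# Two overlapping 2x2 textures blended 70/30, e.g. where two adjacent
# cameras both observe the same region of the panorama.
f1 = np.array([[10.0, 20.0], [30.0, 40.0]])
f2 = np.array([[20.0, 40.0], [60.0, 80.0]])
F = fuse_textures([f1, f2], [0.7, 0.3])  # F[0, 0] = 0.7*10 + 0.3*20 = 13.0
```

In practice the weights would vary per pixel (for example, tapering toward the seam between two camera views), but per-pixel weight arrays broadcast through the same expression unchanged.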
CN201810314065.1A 2018-04-10 2018-04-10 360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle Active CN108769569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810314065.1A CN108769569B (en) 2018-04-10 2018-04-10 360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810314065.1A CN108769569B (en) 2018-04-10 2018-04-10 360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN108769569A CN108769569A (en) 2018-11-06
CN108769569B true CN108769569B (en) 2021-04-13

Family

ID=63981565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810314065.1A Active CN108769569B (en) 2018-04-10 2018-04-10 360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN108769569B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113571A (en) * 2019-05-07 2019-08-09 合肥芃明科技有限公司 A kind of approaches to IM based on virtual reality and video fusion
CN112712462A (en) * 2019-10-24 2021-04-27 上海宗保科技有限公司 Unmanned aerial vehicle image acquisition system based on image splicing
CN111064947A (en) * 2019-12-04 2020-04-24 广东康云科技有限公司 Panoramic-based video fusion method, system, device and storage medium
WO2021146972A1 (en) * 2020-01-21 2021-07-29 深圳市大疆创新科技有限公司 Airspace detection method, movable platform, device, and storage medium
WO2022140970A1 (en) * 2020-12-28 2022-07-07 深圳市大疆创新科技有限公司 Panoramic image generation method and apparatus, movable platform and storage medium
CN115861070A (en) * 2022-12-14 2023-03-28 湖南凝服信息科技有限公司 Three-dimensional video fusion splicing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105775151A (en) * 2016-01-29 2016-07-20 上海云舞网络科技有限公司 360 degree panoramic aerial photographing and video recording unmanned aerial vehicle and rack frame
CN206251247U (en) * 2016-11-10 2017-06-13 广西师范大学 Three-dimensional panoramic video long distance control system based on unmanned plane
CN107240065A (en) * 2017-04-19 2017-10-10 中科院微电子研究所昆山分所 A kind of 3D full view image generating systems and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8462209B2 (en) * 2009-06-26 2013-06-11 Keyw Corporation Dual-swath imaging system
CN104834784B (en) * 2015-05-13 2018-06-19 西南交通大学 A kind of railway, which is met an urgent need, assists rescue three-dimensional goods electronic sand map system
CN105139350A (en) * 2015-08-12 2015-12-09 北京航空航天大学 Ground real-time reconstruction processing system for unmanned aerial vehicle reconnaissance images
CN105627991B (en) * 2015-12-21 2017-12-12 武汉大学 A kind of unmanned plane image real time panoramic joining method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105775151A (en) * 2016-01-29 2016-07-20 上海云舞网络科技有限公司 360 degree panoramic aerial photographing and video recording unmanned aerial vehicle and rack frame
CN206251247U (en) * 2016-11-10 2017-06-13 广西师范大学 Three-dimensional panoramic video long distance control system based on unmanned plane
CN107240065A (en) * 2017-04-19 2017-10-10 中科院微电子研究所昆山分所 A kind of 3D full view image generating systems and method

Also Published As

Publication number Publication date
CN108769569A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108769569B (en) 360-degree three-dimensional panoramic observation system and method for unmanned aerial vehicle
CN110244282B (en) Multi-camera system and laser radar combined system and combined calibration method thereof
KR102046032B1 (en) Image capturing apparatus, image capture system, image processing method, information processing apparatus, and computer-readable storage medium
JP6687204B2 (en) Projection image generation method and apparatus, and mapping method between image pixels and depth values
US10176595B2 (en) Image processing apparatus having automatic compensation function for image obtained from camera, and method thereof
KR100988872B1 (en) Method and imaging system for obtaining complex images using rotationally symmetric wide-angle lens and image sensor for hardwired image processing
CN101814181B (en) Unfolding method for restoration of fisheye image
CN101606177B (en) Information processing method
US20180160045A1 (en) Method and device of image processing and camera
CN107169924B (en) Method and system for establishing three-dimensional panoramic image
CN102005039B (en) Fish-eye camera stereo vision depth measuring method based on Taylor series model
KR102295809B1 (en) Apparatus for acquisition distance for all directions of vehicle
KR101915729B1 (en) Apparatus and Method for Generating 360 degree omni-directional view
CN113570721A (en) Method and device for reconstructing three-dimensional space model and storage medium
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
KR20090012290A (en) Methods of optaining panoramic images using rotationally symmetric wide-angle lenses and devices thereof
CN105096252B (en) A kind of preparation method of the comprehensive streetscape striograph of banding
CN107705252A (en) Splice the method and system of expansion correction suitable for binocular fish eye images
KR20120099952A (en) Sensor system, and system and method for preparing environment map using the same
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN102692806A (en) Methods for acquiring and forming free viewpoint four-dimensional space video sequence
CN102831816B (en) Device for providing real-time scene graph
CN110675484A (en) Dynamic three-dimensional digital scene construction method with space-time consistency based on compound eye camera
WO2018052100A1 (en) Image processing device, image processing method, and image processing program
CN108269234A (en) A kind of lens of panoramic camera Attitude estimation method and panorama camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 707, complex building, Kunshan Industrial Technology Research Institute, No. 1699, Zuchongzhi South Road, Kunshan, Suzhou, Jiangsu, 215399

Applicant after: Kunshan Microelectronics Technology Research Institute

Address before: 215347 7th floor, IIR complex, 1699 Weicheng South Road, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: KUNSHAN BRANCH, INSTITUTE OF MICROELECTRONICS OF CHINESE ACADEMY OF SCIENCES

CB02 Change of applicant information
GR01 Patent grant