WO2022140970A1 - Method and apparatus for generating panoramic images, movable platform, and storage medium - Google Patents

Method and apparatus for generating panoramic images, movable platform, and storage medium

Info

Publication number
WO2022140970A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
auxiliary camera
camera device
processor
Prior art date
Application number
PCT/CN2020/140351
Other languages
English (en)
Chinese (zh)
Inventor
谭卓
郭浩铭
李琛
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/140351
Publication of WO2022140970A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the present application relates to the technical field of movable platforms, and in particular, to a method, device, movable platform and storage medium for generating panoramic images.
  • the maximum limit of the pitch axis of the gimbal is generally not more than 30 degrees.
  • the present application provides a panorama image generating method, device, movable platform and storage medium, so as to improve the quality of panorama images captured and generated based on the movable platform.
  • the present application provides a method for generating a panoramic image, the method comprising:
  • the field of view of the auxiliary camera device includes the movable platform.
  • the present application further provides a panoramic image generating device, the panoramic image generating device comprising a memory and a processor;
  • the memory is used to store computer programs
  • the processor is configured to execute the computer program and implement the following steps when executing the computer program:
  • the field of view of the auxiliary camera device includes the movable platform.
  • the present application further provides a movable platform, where the movable platform includes the above-mentioned panoramic image generating apparatus.
  • the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor can realize the above-mentioned panoramic image generation method.
  • the panorama image generating method, device, movable platform and storage medium disclosed in the present application acquire at least one first image captured, within the field of view of the main camera device, by the main camera device mounted on the movable platform, and at least one second image captured by at least one auxiliary camera device whose field of view overlaps the field of view of the main camera device; image fusion processing is then performed on the acquired at least one first image and at least one second image to generate a corresponding panoramic image.
  • the panoramic image obtained in this way is more natural and real, and the quality of the panoramic image is improved.
  • FIG. 1 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of steps of a method for generating a panoramic image provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of an auxiliary camera device selection interface provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of steps of another panoramic image generation method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of steps of another panoramic image generation method provided by an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of a panoramic image generating apparatus provided by an embodiment of the present application.
  • the embodiments of the present application provide a panorama image generating method, device, movable platform and storage medium, which are used to improve the quality of panorama images captured and generated based on the movable platform.
  • the movable platform includes but is not limited to unmanned aerial vehicles, such as rotary-wing aircraft, including single-rotor, dual-rotor, tri-rotor, quad-rotor, hexa-rotor, octa-rotor, ten-rotor, and twelve-rotor aircraft, etc.
  • the movable platform may also be other types of unmanned aerial vehicles or movable devices, such as fixed-wing unmanned aerial vehicles, and the embodiment of the present application is not limited thereto.
  • the movable platform 1000 may include a body 100 , a power system 200 provided in the body 100 , a panoramic image generating device 300 , a main camera device 400 and at least one auxiliary camera device 500 .
  • the power system 200 is used to provide power for the movable platform 1000;
  • the main camera device 400 and at least one auxiliary camera device 500 include but are not limited to cameras, which are used to capture images in different fields of view;
  • the panoramic image generating device 300 is used to generate panoramic images from the images captured by the main camera device 400 and the at least one auxiliary camera device 500 in different fields of view.
  • at least one auxiliary camera device 500 includes two or more up-view cameras; the up-view cameras are usually used for obstacle avoidance, and the images they capture are generally of low resolution, poor signal-to-noise ratio, and small dynamic range.
  • the power system 200 may include one or more electronic speed controllers (referred to as ESCs for short), one or more propellers, and one or more motors corresponding to the one or more propellers, wherein each motor is connected between the electronic speed controller and the propeller.
  • the electronic speed controller is used to provide driving current to the motor to control the speed of the motor.
  • the motor is used to drive the propeller to rotate, thereby providing power for the flight of the movable platform 1000, which enables the movable platform 1000 to achieve one or more degrees of freedom movement.
  • the movable platform 1000 can rotate about one or more axes of rotation.
  • the motor may be a DC motor or an AC motor.
  • the motor may be a brushless motor or a brushed motor.
  • the unmanned aerial vehicle includes a plurality of propellers, an anti-shake gimbal, a camera and at least one upward-looking camera (not shown).
  • operations such as UAV attitude adjustment and UAV hovering can be realized by controlling the rotation of the propeller.
  • the first image can be captured by a camera, and the second image can be captured by at least one upward-looking camera.
  • the panoramic image generation method provided by the embodiments of the present application will be described in detail below based on the movable platform 1000 . It should be noted that the movable platform 1000 in FIG. 1 is only used to explain the panoramic image generation method provided by the embodiment of the present application, but does not constitute a limitation on the application scenario of the panoramic image generation method provided by the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for generating a panoramic image provided by an embodiment of the present application.
  • the method can be used in any movable platform provided by the above embodiments, so as to improve the quality of the panoramic image captured and generated based on the movable platform.
  • the panoramic image generation method specifically includes steps S101 to S103 .
  • S102: Acquire at least one second image captured by at least one auxiliary camera device mounted on the movable platform within the field of view of the auxiliary camera device, where the field of view of the auxiliary camera device includes the area above the movable platform, and there is an overlapping range between the field of view of the main camera device and the field of view of the auxiliary camera device.
  • the movable platform is loaded with a main camera device and one or more auxiliary camera devices.
  • the field of view of the main camera device includes the front, rear, left, and right regions of the movable platform and the movable platform.
  • the field of view of the auxiliary camera device includes the upper region of the movable platform, wherein the field of view of the main camera device and the field of view of the auxiliary camera device have overlapping and non-overlapping ranges.
  • One or more images are taken within the field of view of the main camera by using the main camera, and one or more images are taken within the field of view of the corresponding auxiliary camera by using one or more auxiliary cameras.
  • the image captured by the main camera device is hereinafter referred to as the first image
  • the image captured by the auxiliary camera device is referred to as the second image.
  • the panoramic image generation method may further include: when the main camera device captures the first image of each column within the field of view of the main camera device, the at least one auxiliary camera device captures a second image within the field of view of the auxiliary camera device.
  • that is, the main camera device captures the first image of each column within its field of view, and the one or more auxiliary camera devices capture second images within their fields of view, so as to obtain a plurality of first images and a plurality of second images.
  • the panoramic image generated by using the multiple first images and the multiple second images improves the image quality of the panoramic image compared to the panoramic image generated only by using the single first image and the single second image.
  • the panoramic image generation method may further include: selecting the at least one auxiliary camera device from the plurality of auxiliary camera devices according to the FOV (field of view) information corresponding to the plurality of auxiliary camera devices loaded on the movable platform and the FOV information corresponding to the main camera device.
  • one or more auxiliary camera devices are selected from the plurality of auxiliary camera devices to capture the second image according to the FOV information corresponding to the main camera device and the FOV information corresponding to each auxiliary camera device.
  • there are overlapping and non-overlapping ranges between the field of view of the selected one or more auxiliary camera devices and the field of view of the main camera device, so that the captured second image may overlap with the first image, and a panoramic image can be generated from the second image and the first image.
  • at the same time, the number of second images is kept small, thereby reducing the workload of subsequent processing of the second images.
  • selecting the at least one auxiliary camera device from the plurality of auxiliary camera devices according to the FOV information corresponding to the plurality of auxiliary camera devices loaded on the movable platform and the FOV information corresponding to the main camera device may include: selecting a single auxiliary camera device from the plurality of auxiliary camera devices if the FOV of that auxiliary camera device is greater than or equal to a preset FOV threshold.
  • when the FOV of a certain auxiliary camera device is greater than or equal to the preset FOV threshold, that is, when the FOV of the auxiliary camera device is large, the second image it captures also covers a large area, so a single such auxiliary camera device is selected for capturing the second image.
  • the movable platform can be rotated to cover -180 to 180 degrees in the longitude direction.
  • together, the auxiliary camera device and the main camera device can completely cover the latitude direction of -90 to 90 degrees, that is, the area corresponding to the entire 360x180 spherical panorama.
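The FOV-threshold rule above can be sketched as a small selection routine. This is an illustrative sketch only: the function name, camera identifiers, and the 100-degree threshold are assumptions, not values from the patent.

```python
def select_aux_cameras(aux_fovs, fov_threshold=100.0):
    """If some auxiliary camera's FOV reaches a preset threshold, a single
    camera suffices to cover the area above the platform; otherwise fall
    back to using several auxiliary cameras."""
    # aux_fovs: {camera_id: FOV in degrees}
    wide = [cid for cid, fov in aux_fovs.items() if fov >= fov_threshold]
    if wide:
        return [wide[0]]          # one wide-FOV camera covers the top
    return sorted(aux_fovs)       # otherwise use every auxiliary camera

print(select_aux_cameras({"up_1": 120.0, "up_2": 80.0}))  # ['up_1']
print(select_aux_cameras({"up_1": 80.0, "up_2": 80.0}))   # ['up_1', 'up_2']
```

In a real system the threshold would be derived from how much of the upper hemisphere remains uncovered by the main camera's field of view.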
  • acquiring at least one second image captured by at least one auxiliary camera device mounted on the movable platform within the field of view of the auxiliary camera device may include: if the FOV of the auxiliary camera device is greater than or equal to the preset FOV threshold, obtaining a single second image captured by a single auxiliary camera device; if the FOV of the auxiliary camera device is less than the preset FOV threshold, obtaining a plurality of second images captured by rotating a single auxiliary camera device to different orientations, or a plurality of second images captured by more than one auxiliary camera device.
  • a single second image is captured by the auxiliary camera device.
  • the main camera device captures the first image of the first column within the field of view of the main camera device
  • a second image is captured by the auxiliary camera device.
  • when the FOV of a single auxiliary camera device is less than the preset FOV threshold, that is, when the FOV of a single auxiliary camera device is small (for example, a single auxiliary camera device can only capture a local area within a preset angle above the movable platform at a time), a plurality of second images are captured at different orientations by rotating the single auxiliary camera device, or a plurality of second images are captured by using a plurality of auxiliary camera devices.
  • an area within a preset angle above the movable platform can be covered.
  • each second image is captured by a single auxiliary camera device, thereby obtaining two second images.
  • each of the plurality of auxiliary camera devices can also capture a second image, so as to obtain a plurality of second images; for example, if two auxiliary camera devices each capture one second image at both the first column and the fifth column, four second images are obtained.
  • the panoramic image generating method may further include: displaying an auxiliary camera device selection interface for a user to select from a plurality of auxiliary camera devices mounted on the movable platform displayed on the auxiliary camera device selection interface , select a target auxiliary camera device; and determine the target auxiliary camera device selected by the user as the at least one auxiliary camera device.
  • a preset auxiliary camera device selection interface is displayed, wherein the auxiliary camera device selection interface displays a plurality of auxiliary camera devices loaded on the movable platform.
  • the user can select one or more target auxiliary camera devices from the plurality of auxiliary camera devices mounted on the movable platform displayed on the auxiliary camera device selection interface; based on the user's selection operation, the one or more target auxiliary camera devices selected by the user are determined as the at least one auxiliary camera device for capturing the second image.
  • for example, as shown in FIG. 4, the auxiliary camera device 1, the auxiliary camera device 2, the auxiliary camera device 3, and the auxiliary camera device i loaded on the movable platform are displayed on the auxiliary camera device selection interface.
  • if the user selects the auxiliary camera device 1 and the auxiliary camera device 2, they are determined as the auxiliary camera devices for capturing the second image. Determining the at least one auxiliary camera device through a user operation further improves the user's interactive experience.
  • the panoramic image generating method may further include: selecting the at least one auxiliary camera device from a plurality of auxiliary camera devices loaded on the movable platform according to the current shooting mode.
  • Shooting based on a movable platform includes a variety of shooting modes, including but not limited to vertical panorama mode, spherical panorama mode, and the like.
  • different auxiliary camera devices are used for shooting the second image.
  • one or more auxiliary camera devices are selected from the plurality of auxiliary camera devices mounted on the movable platform for shooting the second image.
  • the selecting the at least one auxiliary camera device from the plurality of auxiliary camera devices loaded on the movable platform according to the current shooting mode may include: if the current shooting mode is the vertical panorama mode, from the A single auxiliary camera device is selected from the plurality of auxiliary camera devices; if the current shooting mode is a spherical panorama mode, more than one auxiliary camera device is selected from the plurality of auxiliary camera devices.
  • if the current shooting mode is the vertical panorama mode, a single auxiliary camera device is selected from the plurality of auxiliary camera devices loaded on the movable platform for capturing the second image.
  • if the current shooting mode is the spherical panorama mode, one or more auxiliary camera devices are selected from the plurality of auxiliary camera devices mounted on the movable platform for capturing the second image. Selecting the auxiliary camera device according to the current shooting mode not only meets the shooting requirements, but also avoids using too many auxiliary camera devices and wasting resources.
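The mode-based selection just described reduces to a small dispatch function. The mode strings and helper name below are illustrative assumptions, not identifiers from the patent.

```python
def cameras_for_mode(mode, aux_camera_ids):
    # Vertical panorama needs only one upward view; spherical panorama may
    # need several auxiliary cameras to cover the area above the platform.
    if mode == "vertical":
        return aux_camera_ids[:1]
    if mode == "spherical":
        return list(aux_camera_ids)
    raise ValueError(f"unknown shooting mode: {mode}")

print(cameras_for_mode("vertical", ["up_1", "up_2"]))   # ['up_1']
print(cameras_for_mode("spherical", ["up_1", "up_2"]))  # ['up_1', 'up_2']
```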
  • image fusion processing is performed on the at least one first image and the at least one second image. For example, if a first image is captured by the main camera device, and a second image is captured by each of the two auxiliary camera devices, image fusion processing is performed on the first image and the two second images obtained by shooting to generate the corresponding panoramic image.
  • step S104 may be included before step S103 , and step S103 may include sub-step S1031 .
  • S104: Perform image preprocessing on the at least one first image or the at least one second image, so that the resolution of the at least one first image matches the resolution of the at least one second image.
  • image preprocessing is performed on the at least one first image or the at least one second image.
  • the preprocessing matches the resolution of the at least one first image with the resolution of the at least one second image.
  • the image preprocessing includes but is not limited to image upsampling processing, image downsampling processing, and the like.
  • image fusion processing is performed on the at least one first image and the at least one second image whose resolutions have been matched; compared with directly performing image fusion processing on the originally obtained at least one first image and at least one second image, this further improves the image quality of the generated panoramic image.
  • performing image preprocessing on the at least one first image or the at least one second image may include: if the resolution of the at least one second image is smaller than the resolution of the at least one first image, performing image upsampling processing on the at least one second image; if the resolution of the at least one second image is larger than the resolution of the at least one first image, performing image downsampling processing on the at least one second image, or performing image upsampling processing on the at least one first image.
  • the resolution of the second image is smaller than the resolution of the first image, that is, the resolution of the first image is high, and the resolution of the second image is low.
  • the upsampling process increases the resolution of the second image, so that the resolution of the at least one first image matches the resolution of the at least one second image.
  • the resolution of the second image is greater than the resolution of the first image, that is, the resolution of the first image is low, and the resolution of the second image is high.
  • the downsampling process decreases the resolution of the second image; alternatively, image upsampling processing is performed on each first image, so that the resolution of the at least one first image matches the resolution of the at least one second image.
  • performing image upsampling processing on the at least one second image may include: inputting the at least one second image into a neural network model for super-resolution upsampling processing; or, performing image upsampling processing on the at least one second image using a bicubic interpolation algorithm.
  • for example, a neural network super-resolution technique can be used: at least one second image is input into a corresponding neural network model for super-resolution upsampling processing, and a high-resolution second image is output.
  • alternatively, an interpolation algorithm, such as a bicubic interpolation algorithm, is used to perform image upsampling processing on the at least one second image to obtain at least one high-resolution second image.
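For reference, bicubic interpolation builds on the Keys cubic convolution kernel. Below is a minimal 1-D sketch in pure Python with illustrative names; a real 2-D bicubic upsampler (as in any image library) applies this separably along rows and then columns.

```python
def cubic_kernel(x, a=-0.5):
    # Keys cubic convolution kernel, the basis of bicubic interpolation
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def upsample_row(row, factor):
    # 1-D bicubic upsampling of a list of pixel values
    n = len(row)
    out = []
    for j in range(n * factor):
        src = j / factor                       # position in source coordinates
        base = int(src)
        val = wsum = 0.0
        for k in range(base - 1, base + 3):    # 4 nearest source samples
            w = cubic_kernel(src - k)
            val += w * row[min(max(k, 0), n - 1)]   # clamp at the borders
            wsum += w
        out.append(val / wsum if wsum else val)
    return out

print(upsample_row([0.0, 10.0], 2))  # [0.0, 5.0, 10.0, 10.625]
```

Note the slight overshoot past 10.0 at the edge: cubic kernels have negative lobes, which is what makes bicubic output sharper than bilinear.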
  • step S105 may be included before step S103 , and step S103 may include sub-step S1032 .
  • S105 Perform HDR fusion processing on a plurality of the second images captured by each of the auxiliary camera devices, respectively, to generate a high dynamic range image corresponding to each of the auxiliary camera devices.
  • HDR (high dynamic range) fusion processing is performed on the plurality of second images to generate a high dynamic range image corresponding to each auxiliary camera device.
  • for example, if the auxiliary camera device 1 and the auxiliary camera device 2 are each used to capture multiple second images, the multiple second images captured by the auxiliary camera device 1 are subjected to HDR fusion processing to generate a corresponding first high dynamic range image, and the multiple second images captured by the auxiliary camera device 2 generate a corresponding second high dynamic range image.
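The patent does not spell out the HDR fusion algorithm. One common approach is exposure fusion in the style of Mertens et al.: weight each pixel by how well exposed it is and blend. A toy grayscale sketch (hypothetical names, pixels in [0, 1]):

```python
import math

def exposure_weight(p):
    # Well-exposedness weight: favor pixels near mid-gray (0.5)
    return math.exp(-((p - 0.5) ** 2) / (2 * 0.2 ** 2))

def hdr_fuse(images):
    # images: equally sized grayscale images (lists of rows), e.g. the
    # several second images one auxiliary camera captured at different
    # exposures; returns a per-pixel weighted blend.
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ws = [exposure_weight(img[i][j]) for img in images]
            total = sum(ws) or 1.0
            out[i][j] = sum(wt * img[i][j]
                            for wt, img in zip(ws, images)) / total
    return out
```

Fusing a dark pixel (0.1) with a bright one (0.9) yields 0.5 here, since both are equally far from mid-gray; production pipelines also weight by contrast and saturation and blend in a Laplacian pyramid.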
  • image fusion processing is performed on at least one first image captured by the main camera device and a high dynamic range image generated by performing HDR fusion processing.
  • image fusion processing is performed on at least one first image, the first high dynamic range image, and the second high dynamic range image.
  • the image quality of the generated panoramic image is further improved.
  • HDR fusion processing is performed on the plurality of second images captured by each auxiliary camera device to generate a high dynamic range image corresponding to each auxiliary camera device. If the resolution of the at least one high dynamic range image is smaller than the resolution of the at least one first image, image upsampling processing is performed on the at least one high dynamic range image; if the resolution of the at least one high dynamic range image is greater than the resolution of the at least one first image, image downsampling processing is performed on the at least one high dynamic range image, or image upsampling processing is performed on the at least one first image.
  • before performing image fusion processing on the at least one first image and the at least one second image, the process may include: when there are multiple first images, performing image fusion processing on the multiple first images to obtain a corresponding first fused image; when there are multiple second images, performing image fusion processing on the multiple second images to obtain a corresponding second fused image. Performing image fusion processing on the at least one first image and the at least one second image may then include performing image fusion processing on the first fused image and the second fused image.
  • multiple first images are captured by the main camera device within its field of view, and image fusion processing is performed on them to obtain the corresponding first fused image.
  • image fusion processing is also performed on the plurality of second images to obtain corresponding second fusion images. Then, image fusion processing is performed on the first fused image and the second fused image to generate a corresponding panoramic image.
  • illumination compensation is performed on the plurality of first images so that their brightness is consistent, thereby avoiding brightness differences in the first fused image obtained by fusing the plurality of first images.
  • illumination compensation is performed on the plurality of second images, so that the brightness of the plurality of second images is consistent.
  • illumination compensation is first performed on the first fused image and the second fused image so that their brightness is consistent, so as to ensure the quality of the resulting panoramic image.
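One common way to make brightness consistent before blending is gain compensation: scale one image by the ratio of mean brightness in the overlapping region. A toy sketch with hypothetical names; the patent does not specify this exact method:

```python
def gain_compensate(img_b, overlap_a, overlap_b):
    # overlap_a / overlap_b: pixel samples from the overlapping region of
    # images A and B. Scale image B so its overlap brightness matches A's.
    mean_a = sum(overlap_a) / len(overlap_a)
    mean_b = sum(overlap_b) / len(overlap_b)
    gain = mean_a / mean_b if mean_b else 1.0
    return [[p * gain for p in row] for row in img_b]

# If B's overlap is half as bright as A's, B is scaled up by 2x:
print(gain_compensate([[0.2]], [0.4], [0.2]))  # [[0.4]]
```

Stitchers with many images typically solve for all gains jointly in a least-squares sense rather than pairwise, but the idea is the same.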
  • performing image fusion processing on the plurality of second images to obtain the corresponding second fused image may include: performing latitude and longitude mapping on the plurality of second images to obtain the corresponding plurality of second local area images; performing a first seam finding operation on the plurality of second local area images; and, according to the result of the first seam finding operation, performing multi-band fusion (multi-band blending) processing on the plurality of second local area images to obtain the second fused image.
  • the second image includes the img_up_1 image and the img_up_2 image
  • a seam finding operation is performed on the two second local area images sph_up_1 and sph_up_2.
  • the seam finding operation of the two second local area images sph_up_1 and sph_up_2 is hereinafter referred to as the first seam finding operation.
  • performing the first seam finding operation on the plurality of second local area images may include: acquiring a first overlapping area of the plurality of second local area images, and performing the first seam finding operation according to the first overlapping area.
  • the overlapping area of the two second local area images sph_up_1 and sph_up_2 is hereinafter referred to as the first overlapping area.
  • before performing latitude and longitude mapping on the plurality of second images, the method may include optimizing the internal parameter matrices and the external parameter matrices corresponding to the plurality of second images; performing latitude and longitude mapping may then include mapping the plurality of second images to latitude and longitude according to the optimized internal parameter matrices and external parameter matrices corresponding to the plurality of second images, to obtain the plurality of second local area images and the mask images corresponding to the second local area images. Acquiring the first overlapping area of the multiple second local area images may include obtaining the first overlapping area from the mask images corresponding to the multiple second local area images.
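As a sketch of the two operations just described: latitude/longitude (equirectangular) mapping sends a viewing direction to (longitude, latitude) coordinates, and the first overlapping area is simply the pixelwise intersection of the two mask images. Illustrative code, not the patent's implementation:

```python
import math

def dir_to_latlong(x, y, z):
    # Map a 3-D viewing direction to (longitude, latitude) in radians;
    # these angles index directly into the equirectangular panorama.
    lon = math.atan2(x, z)
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))
    return lon, lat

def overlap_mask(mask1, mask2):
    # The overlapping area is where both binary mask images are set
    return [[a & b for a, b in zip(r1, r2)]
            for r1, r2 in zip(mask1, mask2)]

print(dir_to_latlong(0.0, 0.0, 1.0))          # (0.0, 0.0): panorama center
print(overlap_mask([[1, 1, 0]], [[0, 1, 1]]))  # [[0, 1, 0]]
```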
  • the internal and external parameters of the main camera device and the auxiliary camera device are first calibrated; with the main camera device centered, that is, with the yaw, pitch, and roll angles all set to 0, the internal and external parameters of the auxiliary camera device are calibrated to obtain the auxiliary camera device's internal parameter matrix K and external parameter matrix (comprising the rotation matrix R and the translation matrix T); the translation matrix T is generally ignored.
  • the auxiliary camera device includes the auxiliary camera device 1 and the auxiliary camera device 2, the internal parameter matrix K_up_1 and the external parameter matrix (R_up_1, T_up_1) of the auxiliary camera device 1 and the internal parameter matrix K_up_2 and the external parameter matrix ( R_up_2, T_up_2).
  • the internal parameter matrices corresponding to the first image captured by the main camera device and the second image captured by the auxiliary camera device are known; the pose relationship between two first images can be obtained directly from their IMU (inertial measurement unit) data, and the pose relationship between a first image and a second image can be derived from the IMU data or the calibrated rotation matrix R, so as to obtain the external parameter matrix of the second image.
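Deriving one view's pose relative to another from their rotation matrices works because a rotation matrix's inverse is its transpose. A small self-contained sketch with hypothetical helper names:

```python
def mat_mul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def relative_rotation(R1, R2):
    # Pose of view 2 relative to view 1: R_rel = R2 * R1^T
    # (valid because R1 is a rotation, so R1^-1 == R1^T)
    return mat_mul(R2, transpose(R1))

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90-degree yaw rotation
print(relative_rotation(Rz90, Rz90) == I3)  # True: same pose, no rotation
```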
  • optimizing the internal parameter matrices and the external parameter matrices corresponding to the plurality of second images may include: determining, based on the at least one first image, a target first image to be matched with the plurality of second images by image feature points; performing image feature point matching between the determined target first image and the plurality of second images to obtain corresponding image feature point matching information; and inputting the image feature point matching information, the internal parameter matrix and external parameter matrix corresponding to the target first image, and the internal parameter matrices and external parameter matrices corresponding to the plurality of second images into a bundle adjustment model, to obtain the optimized internal parameter matrices and external parameter matrices corresponding to the plurality of second images.
  • if there is a single first image, that first image is determined as the target first image; if there are multiple first images, all of the first images are determined as target first images, or several first images are selected from the plurality of first images according to preset rules and determined as the target first images.
  • the main camera device captures first images of multiple rows and columns within the field of view of the main camera device to obtain first images of multiple rows and columns.
  • the first image of the interval column from the first images of the plurality of columns, it is determined as the target first image.
  • For example, the first images in columns 1, 3, 5, and 7 are selected and determined as the target first images.
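The alternate-column selection rule can be sketched as (1-based column indexing, matching the example above):

```python
def select_target_columns(num_columns: int, step: int = 2) -> list:
    """Pick the first images of alternate columns (1, 3, 5, ...) as the
    target first images for feature-point matching."""
    return list(range(1, num_columns + 1, step))
```

For an eight-column capture this yields columns 1, 3, 5 and 7, as in the example.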
  • The external parameter matrix (rotation matrix R) corresponding to each second image includes the pose data between that second image and the other second images, and the pose data between that second image and the target first image.
  • The pose data between each second image and the target first image is derived from the calibrated rotation matrix R, or from the IMU data of that second image together with the IMU data of the target first image.
  • The second images include the img_up_1 image and the img_up_2 image.
  • the optimized internal parameter matrix K_up_refine2 and the external parameter matrix R_up_refine_2 corresponding to the img_up_2 image are obtained.
  • Using the optimized internal parameter matrices and external parameter matrices corresponding to the img_up_1 image and the img_up_2 image, the img_up_1 image and the img_up_2 image are mapped to latitude-longitude coordinates, obtaining the second local area image sph_up_1 corresponding to the img_up_1 image, the mask image mask_up_1 corresponding to sph_up_1, and the second local area image sph_up_2 corresponding to the img_up_2 image with its mask image mask_up_2.
  • Based on the mask image mask_up_1 and the mask image mask_up_2, the first overlapping area of the two second local area images sph_up_1 and sph_up_2 is extracted.
  • The first seam finding operation is performed on the first overlapping area, and the curve with the smallest difference is found within it. Then, based on that curve, multi-band blend processing is performed on the two second local area images sph_up_1 and sph_up_2 to obtain the second fusion image sph_up and the mask image mask_up corresponding to sph_up.
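The seam finding step — locating the curve with the smallest difference inside the overlapping area — can be sketched as a dynamic-programming seam search over the per-pixel difference map (an illustration only: the disclosure does not fix the exact algorithm, and the multi-band blend that follows is omitted here):

```python
import numpy as np

def find_min_seam(diff: np.ndarray) -> np.ndarray:
    """Return one column index per row tracing the path of least accumulated
    difference through `diff` (the seam-carving recurrence: each row may
    shift at most one column from the row above)."""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()   # best predecessor
    seam = np.empty(h, dtype=int)
    seam[-1] = int(cost[-1].argmin())                # cheapest endpoint
    for y in range(h - 2, -1, -1):                   # backtrack upwards
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(cost[y, lo:hi].argmin())
    return seam
```

The resulting curve separates the contributions of sph_up_1 and sph_up_2 before the blend is applied.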
  • ERP (equirectangular projection), a spherical projection, is used for the mapping.
  • the second overlapping area of the first fused image sph_side and the second fused image sph_up is extracted.
  • a second seam finding operation is performed according to the second overlapping area, and a curve with the smallest difference is found in the second overlapping area.
  • multi-band fusion processing is performed on the first fusion image sph_side and the second fusion image sph_up to obtain a corresponding panoramic image.
  • At least one first image captured by the main camera device mounted on the movable platform within the field of view of the main camera device is acquired, and at least one second image captured by at least one auxiliary camera device mounted on the movable platform within the field of view of the auxiliary camera device is acquired.
  • FIG. 7 is a schematic block diagram of a panoramic image generating apparatus provided by an embodiment of the present application.
  • The panoramic image generating apparatus 300 may include a processor 311 and a memory 312, where the processor 311 and the memory 312 are connected through a bus, such as an I2C (Inter-Integrated Circuit) bus.
  • the processor 311 may be a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP) or the like.
  • The memory 312 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • Various computer programs to be executed by the processor 311 are stored in the memory 312 .
  • The processor is used for running the computer program stored in the memory and implements the following steps when executing the computer program:
  • the field of view of the auxiliary camera device includes the movable platform.
  • Before performing the image fusion processing on the at least one first image and the at least one second image, the processor is configured to:
  • When implementing the image fusion processing on the at least one first image and the at least one second image, the processor is configured to implement:
  • When implementing the image preprocessing on the at least one first image or the at least one second image, the processor is configured to implement:
  • if the resolution of the at least one second image is smaller than the resolution of the at least one first image, perform image upsampling processing on the at least one second image;
  • if the resolution of the at least one second image is greater than the resolution of the at least one first image, perform image downsampling processing on the at least one second image, or perform image upsampling processing on the at least one first image.
  • When implementing the image upsampling processing on the at least one second image, the processor is configured to implement:
  • An image upsampling process is performed on the at least one second image by using a bicubic interpolation algorithm.
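Bicubic interpolation weights neighbouring samples with Keys' cubic convolution kernel; a one-dimensional sketch of the idea (assuming the common a = -0.5 kernel; a 2-D image would apply this along each axis, and production code would normally call a library routine):

```python
import numpy as np

def cubic_kernel(t: np.ndarray, a: float = -0.5) -> np.ndarray:
    """Keys' cubic convolution kernel, the weighting function behind
    common bicubic-interpolation implementations."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t <= 1
    m2 = (t > 1) & (t < 2)
    out[m1] = (a + 2) * t[m1] ** 3 - (a + 3) * t[m1] ** 2 + 1
    out[m2] = a * t[m2] ** 3 - 5 * a * t[m2] ** 2 + 8 * a * t[m2] - 4 * a
    return out

def upsample_1d(signal: np.ndarray, factor: int) -> np.ndarray:
    """Upsample a 1-D signal by an integer factor using cubic convolution
    over the four nearest samples (edges clamped)."""
    n = len(signal)
    xs = np.arange(n * factor) / factor
    out = np.empty(len(xs))
    for i, x in enumerate(xs):
        base = int(np.floor(x))
        acc = w = 0.0
        for k in range(base - 1, base + 3):           # 4-sample neighbourhood
            wk = cubic_kernel(np.array([x - k]))[0]
            acc += wk * signal[min(max(k, 0), n - 1)]  # clamp at the edges
            w += wk
        out[i] = acc / w if w else acc
    return out
```

The kernel evaluates to 1 at t = 0 and to 0 at the other sample positions, so original samples are preserved exactly while in-between values are smoothly interpolated.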
  • Before performing the image fusion processing on the at least one first image and the at least one second image, the processor is configured to:
  • When implementing the image fusion processing on the at least one first image and the at least one second image, the processor is configured to implement:
  • Before performing the image fusion processing on the at least one first image and the at least one second image, the processor is configured to:
  • the first images include multiple images
  • image fusion processing is performed on the multiple first images to obtain corresponding first fusion images
  • image fusion processing is performed on the multiple second images to obtain corresponding second fusion images
  • When implementing the image fusion processing on the at least one first image and the at least one second image, the processor is used to implement:
  • When implementing the image fusion processing on the plurality of second images to obtain the corresponding second fusion images, the processor is configured to implement:
  • multi-band fusion processing is performed on a plurality of the second local area images to obtain the second fusion image.
  • When implementing the first seam finding operation on the plurality of second partial region images, the processor is configured to implement:
  • the first seam finding operation is performed according to the first overlapping area.
  • Before implementing the latitude-longitude mapping on the plurality of second images, the processor is configured to implement:
  • When implementing the latitude-longitude mapping on the plurality of second images, the processor is configured to implement:
  • the optimized internal parameter matrices and external parameter matrices corresponding to the plurality of second images are used to perform latitude-longitude mapping on the plurality of second images, obtaining a plurality of second local area images and the mask images corresponding to the plurality of second local area images;
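The latitude-longitude mapping of a single pixel through its internal and external parameter matrices can be sketched as follows (the ray-to-longitude/latitude axis convention is an assumption of this sketch; implementations differ):

```python
import numpy as np

def pixel_to_lonlat(u: float, v: float, K: np.ndarray, R: np.ndarray):
    """Back-project pixel (u, v) through the internal parameter matrix K,
    rotate the ray into the panorama frame with the external rotation R,
    and convert the ray direction to (longitude, latitude) in radians."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    x, y, z = R @ ray
    lon = np.arctan2(x, z)                   # angle around the vertical axis
    lat = np.arctan2(y, np.hypot(x, z))      # elevation above the horizon
    return lon, lat
```

Applying this per pixel (or, in practice, inverting it per output pixel) produces each second local area image together with its mask image.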
  • When implementing the acquiring of the first overlapping area of the plurality of second partial area images, the processor is configured to implement:
  • the first overlapping area of the plurality of second partial area images is acquired according to the mask images corresponding to the plurality of second partial area images.
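Acquiring the first overlapping area from the mask images reduces to a logical AND of valid-pixel masks (a minimal sketch):

```python
import numpy as np

def overlap_region(mask_a: np.ndarray, mask_b: np.ndarray) -> np.ndarray:
    """Pixels covered by both warped images: the intersection of the two
    mask images produced by the latitude-longitude mapping."""
    return np.logical_and(mask_a > 0, mask_b > 0)
```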
  • When implementing the optimization of the internal parameter matrices and external parameter matrices corresponding to the plurality of second images, the processor is configured to implement:
  • the image feature point matching information, the internal and external parameter matrices corresponding to the target first image, and the internal and external parameter matrices corresponding to the plurality of second images are input into a bundle adjustment model for processing, to obtain the optimized internal and external parameter matrices corresponding to the plurality of second images.
  • The extrinsic parameter matrix corresponding to each second image includes pose data between that second image and the other second images, and pose data between that second image and the target first image, where the pose data between each second image and the target first image is determined by a calibrated rotation matrix, or is generated from the IMU data of that second image and the IMU data of the target first image.
  • When implementing the determining, based on the at least one first image, of the target first image to be matched against the image feature points of the plurality of second images, the processor is configured to implement:
  • the at least one first image includes a single image, determining the single first image as the target first image;
  • if the at least one first image includes multiple images, determining all the first images as the target first images, or selecting several first images from the multiple first images according to a preset rule and determining them as the target first images.
  • The plurality of first images include multiple columns, and when selecting several first images from the plurality of first images according to the preset rule and determining them as the target first images, the processor is used to achieve:
  • the first images in alternate columns are selected and determined as the target first images.
  • the processor is further configured to:
  • The at least one auxiliary camera device is controlled to perform second image capture within the field of view of the auxiliary camera device.
  • the processor is further configured to:
  • the at least one auxiliary camera device is selected from the plurality of auxiliary camera devices according to the FOV information corresponding to the plurality of auxiliary camera devices mounted on the movable platform and the FOV information corresponding to the main camera device.
  • When implementing the selecting of the at least one auxiliary camera device from the plurality of auxiliary camera devices according to the FOV information corresponding to the plurality of auxiliary camera devices mounted on the movable platform and the FOV information corresponding to the main camera device, the processor is used to realize:
  • a single auxiliary camera device is selected from the plurality of auxiliary camera devices.
  • When implementing the acquiring of at least one second image captured by at least one auxiliary camera device mounted on the movable platform within the field of view of the auxiliary camera device, the processor is used to achieve:
  • if the FOV of the auxiliary camera device is greater than or equal to a preset FOV threshold, acquiring a single second image captured by a single auxiliary camera device;
  • if the FOV of the auxiliary camera device is less than the preset FOV threshold, acquiring multiple second images shot by a single auxiliary camera device rotated in different directions, or acquiring multiple second images shot by more than one auxiliary camera device.
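The FOV-threshold rule above can be sketched as a small capture plan (the 180-degree threshold is an assumed example value; the disclosure only speaks of a preset threshold):

```python
import math

def plan_second_captures(fov_deg: float, threshold_deg: float = 180.0) -> dict:
    """Decide how many second-image shots are needed: one wide-FOV shot, or
    enough narrow-FOV shots in different directions to cover the threshold."""
    if fov_deg >= threshold_deg:
        return {"cameras": 1, "shots": 1}
    return {"cameras": 1, "shots": math.ceil(threshold_deg / fov_deg)}
```

The multi-shot case corresponds to rotating a single auxiliary camera device between shots, or distributing the shots over several auxiliary camera devices.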
  • the processor is further configured to:
  • An auxiliary camera device selection interface is displayed, for the user to select a target auxiliary camera device from the plurality of auxiliary camera devices mounted on the movable platform shown on the auxiliary camera device selection interface;
  • the target auxiliary camera device selected by the user is determined as the at least one auxiliary camera device.
  • the processor is further configured to:
  • the at least one auxiliary camera device is selected from a plurality of auxiliary camera devices loaded on the movable platform according to the current shooting mode.
  • When selecting the at least one auxiliary camera device from the plurality of auxiliary camera devices mounted on the movable platform according to the current shooting mode, the processor specifically implements:
  • the current shooting mode is a vertical panorama mode, selecting a single auxiliary camera device from the plurality of auxiliary camera devices;
  • the current shooting mode is a spherical panorama mode, selecting one or more of the auxiliary camera devices from the plurality of auxiliary camera devices.
  • The embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, the computer program includes program instructions, and the processor executes the program instructions to implement the steps of the panoramic image generating method provided by the embodiments of the present application.
  • the computer-readable storage medium may be the internal storage unit of the movable platform or the panoramic image generating device described in the foregoing embodiments, such as a hard disk or memory of the movable platform or the panoramic image generating device.
  • The computer-readable storage medium can also be an external storage device of the movable platform or the panoramic image generating device, such as a plug-in hard disk equipped on the movable platform or the panoramic image generating device, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Panoramic image generation method and apparatus, movable platform, and storage medium. The method comprises the following steps: acquiring at least one first image captured, by means of a main camera device mounted on a movable platform, within the field of view of the main camera device (S101); acquiring at least one second image captured, by means of at least one auxiliary camera device mounted on the movable platform, within the field of view of the auxiliary camera device(s), the field of view of the auxiliary camera device(s) comprising an area above the movable platform, and there being an overlapping range between the field of view of the main camera device and the field of view of the auxiliary camera device(s) (S102); and performing image fusion processing on the first image(s) and the second image(s) so as to generate a corresponding panoramic image (S103).
PCT/CN2020/140351 2020-12-28 2020-12-28 Panoramic image generation method and apparatus, movable platform, and storage medium WO2022140970A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/140351 WO2022140970A1 (fr) 2020-12-28 2020-12-28 Panoramic image generation method and apparatus, movable platform, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/140351 WO2022140970A1 (fr) 2020-12-28 2020-12-28 Panoramic image generation method and apparatus, movable platform, and storage medium

Publications (1)

Publication Number Publication Date
WO2022140970A1 true WO2022140970A1 (fr) 2022-07-07

Family

ID=82259838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140351 WO2022140970A1 (fr) 2020-12-28 2020-12-28 Panoramic image generation method and apparatus, movable platform, and storage medium

Country Status (1)

Country Link
WO (1) WO2022140970A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369128A (zh) * 2017-05-30 2017-11-21 A 720-degree panoramic image shooting method
WO2018081924A1 (fr) * 2016-11-01 2018-05-11 Photographing method, system and device for generating a panoramic image
CN108513567A (zh) * 2017-03-23 2018-09-07 Image fusion method and unmanned aerial vehicle
CN108769569A (zh) * 2018-04-10 2018-11-06 360-degree stereoscopic panoramic observation system and method for an unmanned aerial vehicle

Similar Documents

Publication Publication Date Title
WO2019161813A1 (fr) Method, apparatus and system for three-dimensional reconstruction of a dynamic scene, server, and medium
US10594941B2 Method and device of image processing and camera
CN205263655U (zh) System for automatically generating panoramic photos, unmanned aerial vehicle, and ground station
CN105045279A (zh) System and method for automatically generating panoramic photos by aerial photography with an unmanned aerial vehicle
WO2017136231A1 (fr) System and method for using a multi-camera array to capture static and/or moving scenes
US20200221062A1 Image processing method and device
WO2019041276A1 (fr) Image processing method, unmanned aerial vehicle, and system
KR101896654B1 (ko) 3D image processing system and method using a drone
CN111344644A (zh) Techniques for motion-based automatic image capture
WO2019155335A1 (fr) Unmanned aerial vehicle comprising an omnidirectional aerial depth-sensing and obstacle-avoidance system, and method of operating same
CN113273172A (zh) Panoramic shooting method, apparatus, system, and computer-readable storage medium
WO2019128275A1 (fr) Photographing control method and device, and aircraft
US20220103799A1 Image data processing method and apparatus, image processing chip, and aircraft
JP2019110462A (ja) Control device, system, control method, and program
WO2019191940A1 (fr) Methods and system for composing and capturing images
JP2019216343A (ja) Determination device, moving body, determination method, and program
JP2018201119A (ja) Mobile platform, aircraft, support device, portable terminal, imaging assistance method, program, and recording medium
WO2021251441A1 (fr) Method, system, and program
CN114529800A (zh) Obstacle avoidance method, system, apparatus, and medium for a rotary-wing unmanned aerial vehicle
US20230177707A1 Post-processing of mapping data for improved accuracy and noise-reduction
US20210404840A1 Techniques for mapping using a compact payload in a movable object environment
WO2020019175A1 (fr) Image processing method and device, photographing device, and unmanned aerial vehicle
WO2022077218A1 Online point cloud processing of lidar and camera data
JP7501535B2 (ja) Information processing device, information processing method, and information processing program
US20220113423A1 Representation data generation of three-dimensional mapping data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967321

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967321

Country of ref document: EP

Kind code of ref document: A1