WO2022028594A1 - 图像处理方法、装置、计算机可读存储介质及计算机设备 - Google Patents

图像处理方法、装置、计算机可读存储介质及计算机设备 Download PDF

Info

Publication number
WO2022028594A1
WO2022028594A1 (Application PCT/CN2021/111264)
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
sequences
video
rotation amount
frame sequences
Prior art date
Application number
PCT/CN2021/111264
Other languages
English (en)
French (fr)
Inventor
陈聪
袁文亮
姜文杰
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Publication of WO2022028594A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application belongs to the field of image processing, and in particular, relates to an image processing method, apparatus, computer-readable storage medium, and computer equipment.
  • the embodiments of the present application provide an image processing method, an apparatus, a computer-readable storage medium, a computer device, a terminal, and a camera, aiming to solve one of the above problems.
  • an embodiment of the present application provides an image processing method, the method comprising:
  • the multiple video frame sequences are captured by multiple cameras respectively;
  • the smoothed video frames corresponding to each group of synchronized video frames after registration are fused to generate fused video frames and/or videos.
  • an embodiment of the present application provides an image processing apparatus, and the apparatus includes:
  • a synchronization module configured to acquire a plurality of video frame sequences, and synchronize the plurality of video frame sequences, and the plurality of video frame sequences are respectively photographed by a plurality of cameras;
  • a motion estimation module for estimating the motion rotation amount of each video frame relative to the reference coordinate system in the plurality of video frame sequences
  • a smoothing module configured to smooth the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system to obtain a smooth rotation amount corresponding to each video frame in the plurality of video frame sequences;
  • a rendering module configured to rotate and render the corresponding video frame using the smooth rotation amount corresponding to each video frame in the plurality of video frame sequences, and to output a smooth video frame corresponding to each video frame in the plurality of video frame sequences;
  • a registration module configured to register smooth video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences
  • the fusion module is used to fuse the smooth video frames corresponding to each group of synchronized video frames after registration to generate fused video frames and/or videos.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the image processing method as described above are implemented.
  • an embodiment of the present application provides a computer device, including:
  • one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors;
  • the processor implements the steps of the image processing method when executing the computer program.
  • an embodiment of the present application provides a camera, including:
  • one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors;
  • the processor implements the steps of the image processing method when executing the computer program.
  • an embodiment of the present application provides a terminal, including:
  • one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors;
  • the processor implements the steps of the image processing method when executing the computer program.
  • since the multiple video frame sequences captured by multiple cameras are synchronized, the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system is estimated, the motion rotation amount is smoothed to obtain a smooth rotation amount, and the video frames are rotated and rendered with the smooth rotation amount to output smooth video frames, high-definition, stable video frames and/or videos can be generated. Since the smooth video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences are registered, and the registered smooth video frames corresponding to each group of synchronized video frames are fused to generate the fused video frame and/or video, video frames and/or videos with a wider viewing angle can also be generated.
  • the image processing method of the present application has fast processing speed, low power consumption and strong robustness.
  • FIG. 1 , FIG. 2 and FIG. 3 are schematic diagrams of application scenarios of the image processing method provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 6 is a specific structural block diagram of a computer device provided by an embodiment of the present application.
  • FIG. 7 is a specific structural block diagram of a terminal provided by an embodiment of the present application.
  • FIG. 8 is a specific structural block diagram of a camera provided by an embodiment of the present application.
  • An application scenario of the image processing method provided by an embodiment of the present application may be a terminal including multiple cameras or a camera including multiple cameras.
  • a terminal including multiple cameras or a camera including multiple cameras executes the image processing method provided by an embodiment of the present application to process multiple images captured by the multiple cameras.
  • An application scenario of the image processing method provided by an embodiment of the present application may also include a connected computer device 100 and a camera 200 including a plurality of cameras (as shown in FIG. 1 ).
  • the application scenario of the image processing method provided by an embodiment of the present application may also include a connected computer device 100 and a plurality of cameras 300 each including one or more cameras (as shown in FIG. 2 );
  • the application scenario of the image processing method provided by an embodiment of the present application may also include a connected computer device 100 and a plurality of terminals 400 each including one or more cameras (as shown in FIG. 3 );
  • the application scenario of the image processing method provided by an embodiment of the present application may also include a computer device together with a plurality of terminals each including one or more cameras and a plurality of cameras (not shown) each including one or more cameras, each connected to the computer device.
  • the computer device 100 , the camera 200 including a plurality of cameras, the camera 300 including one or more cameras, and the terminal 400 including the one or more cameras may run at least one application program.
  • Computer device 100 may be a server, desktop computer, tablet computer, laptop computer, personal digital assistant, or the like.
  • the computer device 100 executes the image processing method provided by an embodiment of the present application to process multiple images captured by one camera 200 including multiple cameras, or multiple images captured by multiple cameras 300 each including one or more cameras, or multiple images captured by terminals 400 each including one or more cameras.
  • FIG. 4 is a flowchart of an image processing method provided by an embodiment of the present application.
  • This embodiment mainly takes the application of the image processing method to a computer device, a terminal, or a camera as an example for illustration.
  • the image processing method includes the following steps; it should be noted that, provided substantially the same results are obtained, the image processing method provided by an embodiment of the present application is not limited to the order of the flow shown in FIG. 4 .
  • the synchronization of the multiple video frame sequences only needs to be performed before step S105; it may be performed before any of the four steps S102, S103, S104, and S105.
  • the number of the multiple cameras is n, where n is an integer greater than or equal to 2.
  • the multiple cameras are located in one terminal or camera, or may be located in multiple terminals and/or cameras, which are not specifically limited in this application.
  • the camera is used to obtain images or videos, and may include components such as lenses and image sensors.
  • the lens of the camera may be a standard lens, a wide-angle lens, an ultra-wide-angle lens, or another lens; if the multiple cameras are located in multiple cameras or terminals, the distance between the lenses of two adjacent cameras may be, but is not limited to, within 5 cm, and the motion states of the multiple cameras may be, but are not limited to being, kept consistent;
  • if the multiple cameras are located in one terminal or camera, the positional relationship of the multiple cameras is fixed, and the distance between the lenses of two adjacent cameras may be, but is not limited to, within 5 cm.
  • the synchronization of the multiple video frame sequences is specifically:
  • the timestamps of the multiple video frame sequences are respectively extracted, and the multiple video frame sequences are synchronized by the timestamps of the multiple video frame sequences.
  • the synchronization of the multiple video frame sequences is specifically:
  • the gyroscope signals corresponding to the multiple video frame sequences are respectively extracted, and the multiple video frame sequences are synchronized by the gyroscope signals corresponding to the multiple video frame sequences.
  • the synchronizing of the multiple video frame sequences by using the timestamps of the multiple video frame sequences is specifically:
  • keeping the timestamps of the multiple video frame sequences synchronized with a reference time, where the reference time may include, but is not limited to, the system time of the terminal or camera in which the multiple cameras are located, or the timestamp of any one of the video frame sequences.
  • for each video frame, the reference coordinate system may be the reference frame of the first video frame captured by the camera that captured the video frame, the reference frame of the IMU (inertial measurement unit) state when the first video frame was captured, or the Earth coordinate system;
  • when the reference coordinate system is the reference frame of the first video frame, a visual motion estimation algorithm may be used, such as a structure-from-motion (SfM) algorithm or a simultaneous localization and mapping (SLAM) algorithm.
  • S102 may specifically include:
  • updating the key frame may specifically be: setting the first video frame as the key frame; judging in real time the degree of overlap between the fields of view of the current video frame and the key frame, as well as the number of associated feature points; when the degree of overlap and the number of associated feature points are greater than or equal to preset values, keeping the first video frame as the key frame; and when they are smaller than the preset values, updating the key frame by setting the current video frame as the key frame;
  • the bundle adjustment method takes the pose of the camera or terminal and the three-dimensional coordinates of the measurement points as unknown parameters, and uses the coordinates of the feature points detected on the images for forward intersection as observation data, performing an adjustment to obtain the optimal camera parameters and world point coordinates.
  • S102 may specifically be: using an IMU method to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system; the IMU method may adopt the following motion estimation approach: obtain in real time the current state timestamp, accelerometer values, and angular velocity values of the gyroscope in the terminal or camera, and use an extended Kalman filter to combine the accelerometer values and angular velocity values to estimate the motion rotation amount of each fused video frame relative to the reference coordinate system.
  • S102 may specifically be: combining the visual motion estimation algorithm and the IMU method to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system .
  • S103 may specifically be:
  • the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system is smoothed by controlling the cropping margin, so as to obtain a smoothing corresponding to each video frame in the plurality of video frame sequences amount of rotation.
  • S104 may specifically be:
  • S105 may specifically be: performing pairwise registration on the two smoothed video frames that have an overlapping region among the smoothed video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences.
  • the pairwise registration can be implemented by methods including but not limited to the following:
  • align each pair of images that have an overlapping region; the specific methods include but are not limited to the following: perform operations such as distortion correction, scale transformation, and epipolar rectification on each pair of images according to the calibrated camera parameters, so that the points of the same name in each pair of images lie in the same row or the same column;
  • each pair of images with overlapping areas is registered; specifically, the following methods may be used, including but not limited to: detecting and matching feature points for each pair of images, and performing registration using an affine transformation model.
  • the feature point detection may use algorithms such as Oriented FAST and Rotated BRIEF (ORB), the scale-invariant feature transform (SIFT), or Speeded-Up Robust Features (SURF); the matching may be computed from the feature point descriptors with the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm, and incorrect matches may be eliminated according to the affine transformation model using RANSAC (random sample consensus).
  • S106 may specifically adopt a traditional image stitching and fusion algorithm, or may adopt the following image fusion method: acquire several aligned images; compute the gradient information of each image; set a mask for each image and generate a target gradient image; perform a gradient operation on the target gradient image to obtain a target Laplacian image; and apply a deconvolution transform to the Laplacian image to generate the fused panoramic image.
  • the image processing apparatus provided by an embodiment of the present application may be a computer program or a piece of program code running in a computer device, a terminal, or a camera; for example, the image processing apparatus may be application software; the image processing apparatus may be used to execute the corresponding steps of the image processing method provided by the embodiments of the present application.
  • An image processing apparatus provided by an embodiment of the present application includes:
  • a synchronization module 11 configured to acquire a plurality of video frame sequences, and synchronize the plurality of video frame sequences, and the plurality of video frame sequences are respectively photographed by a plurality of cameras;
  • a motion estimation module 12 configured to estimate the motion rotation amount of each video frame relative to the reference coordinate system in the plurality of video frame sequences
  • the smoothing module 13 is configured to smooth the motion rotation amount of each video frame relative to the reference coordinate system in the plurality of video frame sequences to obtain a smoothed corresponding to each video frame in the plurality of video frame sequences. amount of rotation;
  • the rendering module 14 is configured to rotate and render the corresponding video frame by using the smooth rotation amount corresponding to each video frame in the plurality of video frame sequences, and output the corresponding video frame in the plurality of video frame sequences. smooth video frame corresponding to each video frame;
  • a registration module configured to register smooth video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences
  • the fusion module 16 is configured to fuse the smooth video frames corresponding to each group of synchronized video frames after registration to generate fused video frames and/or videos.
  • the image processing apparatus provided by an embodiment of the present application and the image processing method provided by an embodiment of the present application belong to the same concept, and the specific implementation process thereof can be found in the full text of the specification, which will not be repeated here.
  • An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the image processing method provided by an embodiment of the present application. step.
  • FIG. 6 shows a specific structural block diagram of a computer device provided by an embodiment of the present application.
  • the computer device may be the computer device shown in FIG. 1 , FIG. 2 , and FIG. 3 .
  • a computer device 100 includes: one or more processors 101, a memory 102, and one or more computer programs, wherein the processor 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processor 101 implements the steps of the image processing method provided by an embodiment of the present application when executing the computer program.
  • Computer device 100 may be a desktop computer, tablet computer, laptop computer, personal digital assistant, or the like.
  • FIG. 7 shows a specific structural block diagram of a terminal provided by an embodiment of the present application.
  • a terminal 500 includes: one or more processors 201, a memory 202, and one or more computer programs, wherein the processor 201 and the memory 202 are connected by a bus, the one or more computer programs are stored in the memory 202 and configured to be executed by the one or more processors 201, and the processor 201 implements the steps of the image processing method provided by an embodiment of the present application when executing the computer program.
  • FIG. 8 shows a specific structural block diagram of a camera provided by an embodiment of the present application.
  • a camera 600 includes: one or more processors 301, a memory 302, and one or more computer programs, wherein the processor 301 and the memory 302 are connected by a bus, the one or more computer programs are stored in the memory 302 and configured to be executed by the one or more processors 301, and the processor 301 implements the steps of the image processing method provided by an embodiment of the present application when executing the computer program.
  • since the multiple video frame sequences captured by multiple cameras are synchronized, the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system is estimated, the motion rotation amount is smoothed to obtain a smooth rotation amount, and the video frames are rotated and rendered with the smooth rotation amount to output smooth video frames, high-definition, stable video frames and/or videos can be generated. Since the smooth video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences are registered, and the registered smooth video frames corresponding to each group of synchronized video frames are fused to generate the fused video frame and/or video, video frames and/or videos with a wider viewing angle can also be generated.
  • the image processing method of the present application has fast processing speed, low power consumption and strong robustness.
  • the steps in the embodiments of the present application are not necessarily executed sequentially in the order indicated by the step numbers. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in each embodiment may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential, as they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method and apparatus, a computer-readable storage medium, and a computer device. The image processing method comprises: acquiring multiple video frame sequences and synchronizing the multiple video frame sequences; estimating the motion rotation amount of each video frame in the multiple video frame sequences relative to a reference coordinate system; smoothing the motion rotation amount to obtain a smooth rotation amount; rotating and rendering the corresponding video frame using the smooth rotation amount, and outputting a smooth video frame; registering the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences; and fusing the registered smooth video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or video. The method can generate high-definition, stable video frames and/or videos with a wider field of view, is fast and low in power consumption, and is highly robust.

Description

Image processing method and apparatus, computer-readable storage medium, and computer device
Technical Field
The present application belongs to the field of image processing, and in particular relates to an image processing method and apparatus, a computer-readable storage medium, and a computer device.
Background
Most current mobile phones have dual or multiple cameras. While dual or multiple cameras bring a better photographing experience, some functions remain imperfect. For example, some phones can capture wide-angle images with a large field of view, but the image definition is low; others support telephoto shooting of extremely sharp images, but cannot capture wide-angle, large-field-of-view images. In addition, existing methods for fusing multiple images captured separately by the cameras of multiple terminals likewise fail to generate wide-angle, large-field-of-view, high-definition images or videos.
Technical Problem
The embodiments of the present application provide an image processing method and apparatus, a computer-readable storage medium, a computer device, a terminal, and a camera, aiming to solve one of the above problems.
Technical Solution
In a first aspect, an embodiment of the present application provides an image processing method, the method comprising:
acquiring multiple video frame sequences and synchronizing the multiple video frame sequences, the multiple video frame sequences being captured by multiple cameras respectively;
estimating the motion rotation amount of each video frame in the multiple video frame sequences relative to a reference coordinate system;
smoothing the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system to obtain a smooth rotation amount corresponding to each video frame in the multiple video frame sequences;
rotating and rendering the corresponding video frame using the smooth rotation amount corresponding to each video frame in the multiple video frame sequences, and outputting a smooth video frame corresponding to each video frame in the multiple video frame sequences;
registering the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences;
fusing the registered smooth video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or video.
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus comprising:
a synchronization module configured to acquire multiple video frame sequences and synchronize the multiple video frame sequences, the multiple video frame sequences being captured by multiple cameras respectively;
a motion estimation module configured to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to a reference coordinate system;
a smoothing module configured to smooth the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system to obtain a smooth rotation amount corresponding to each video frame in the multiple video frame sequences;
a rendering module configured to rotate and render the corresponding video frame using the smooth rotation amount corresponding to each video frame in the multiple video frame sequences, and to output a smooth video frame corresponding to each video frame in the multiple video frame sequences;
a registration module configured to register the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences;
a fusion module configured to fuse the registered smooth video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or video.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method described above.
In a fourth aspect, an embodiment of the present application provides a computer device, comprising:
one or more processors;
a memory; and
one or more computer programs, the processor and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processor implements the steps of the image processing method described above when executing the computer program.
In a fifth aspect, an embodiment of the present application provides a camera, comprising:
one or more processors;
a memory; and
one or more computer programs, the processor and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processor implements the steps of the image processing method described above when executing the computer program.
In a sixth aspect, an embodiment of the present application provides a terminal, comprising:
one or more processors;
a memory; and
one or more computer programs, the processor and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processor implements the steps of the image processing method described above when executing the computer program.
Beneficial Effects
In the embodiments of the present application, multiple video frame sequences captured by multiple cameras are synchronized, the motion rotation amount of each video frame in the multiple video frame sequences relative to a reference coordinate system is estimated, the motion rotation amount is smoothed to obtain a smooth rotation amount, and the video frames are rotated and rendered with the smooth rotation amount to output smooth video frames; high-definition, stable video frames and/or videos can therefore be generated. Furthermore, since the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences are registered, and the registered smooth video frames corresponding to each group of synchronized video frames are fused to generate a fused video frame and/or video, video frames and/or videos with a wider field of view can be generated. In addition, the image processing method of the present application is fast, consumes little power, and is highly robust.
Brief Description of the Drawings
FIG. 1 , FIG. 2 and FIG. 3 are schematic diagrams of application scenarios of the image processing method provided by an embodiment of the present application.
FIG. 4 is a flowchart of the image processing method provided by an embodiment of the present application.
FIG. 5 is a schematic diagram of the image processing apparatus provided by an embodiment of the present application.
FIG. 6 is a structural block diagram of the computer device provided by an embodiment of the present application.
FIG. 7 is a structural block diagram of the terminal provided by an embodiment of the present application.
FIG. 8 is a structural block diagram of the camera provided by an embodiment of the present application.
Embodiments of the Invention
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
The specific implementation of the present invention is described in detail below with reference to specific embodiments:
An application scenario of the image processing method provided by an embodiment of the present application may be a terminal including multiple cameras or a camera including multiple cameras. The terminal or camera including multiple cameras executes the image processing method provided by an embodiment of the present application to process the multiple images captured by the multiple cameras. The application scenario may also include a connected computer device 100 and a camera 200 including multiple cameras (as shown in FIG. 1 ); or a connected computer device 100 and multiple cameras 300 each including one or more cameras (as shown in FIG. 2 ); or a connected computer device 100 and multiple terminals 400 each including one or more cameras (as shown in FIG. 3 ); or a computer device together with multiple terminals each including one or more cameras and multiple cameras each including one or more cameras, each connected to the computer device (not shown). At least one application program may run on the computer device 100, the camera 200 including multiple cameras, the cameras 300 each including one or more cameras, and the terminals 400 each including one or more cameras. The computer device 100 may be a server, a desktop computer, a tablet computer, a laptop computer, a personal digital assistant, or the like. The computer device 100 executes the image processing method provided by an embodiment of the present application to process multiple images captured by one camera 200 including multiple cameras, or by multiple cameras 300 each including one or more cameras, or by terminals 400 each including one or more cameras.
Referring to FIG. 4 , which is a flowchart of the image processing method provided by an embodiment of the present application, this embodiment mainly takes the application of the image processing method to a computer device, a terminal, or a camera as an example. The image processing method provided by an embodiment of the present application includes the following steps; it should be noted that, provided substantially the same results are obtained, the method is not limited to the order of the flow shown in FIG. 4 .
S101: Acquire multiple video frame sequences and synchronize the multiple video frame sequences, the multiple video frame sequences being captured by multiple cameras respectively.
In an embodiment of the present application, the synchronization of the multiple video frame sequences only needs to be performed before step S105; it may be performed before any of the four steps S102, S103, S104, and S105.
The number of the multiple cameras is n, where n is an integer greater than or equal to 2. The multiple cameras may be located in one terminal or camera, or in multiple terminals and/or cameras, which is not specifically limited in the present application.
A camera is used to obtain images or videos and may include components such as a lens and an image sensor. The lens of the camera may be a standard lens, a wide-angle lens, an ultra-wide-angle lens, or another lens. If the multiple cameras are located in multiple cameras or terminals, the distance between the lenses of two adjacent cameras may be, but is not limited to, within 5 cm, and the motion states of the multiple cameras may be, but are not limited to being, kept consistent. If the multiple cameras are located in one terminal or camera, the positional relationship of the multiple cameras is fixed, and the distance between the lenses of two adjacent cameras may be, but is not limited to, within 5 cm.
When the multiple cameras are located in one terminal or camera, synchronizing the multiple video frame sequences is specifically:
extracting the timestamps of the multiple video frame sequences respectively, and synchronizing the multiple video frame sequences by the timestamps of the multiple video frame sequences.
When the multiple cameras are located in multiple terminals and/or cameras, synchronizing the multiple video frame sequences is specifically:
extracting the gyroscope signals corresponding to the multiple video frame sequences respectively, and synchronizing the multiple video frame sequences by the gyroscope signals corresponding to the multiple video frame sequences.
Synchronizing the multiple video frame sequences by their timestamps is specifically:
keeping the timestamps of the multiple video frame sequences synchronized with a reference time, where the reference time may include, but is not limited to, the system time of the terminal or camera in which the multiple cameras are located, or the timestamp of any one of the video frame sequences.
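As an editorial illustration of the timestamp-based synchronization described above, the following sketch pairs frames across sequences by nearest timestamp against a reference sequence. The 20 ms tolerance and the choice of the first sequence as the reference clock are assumptions for illustration, not taken from the patent text.

```python
import numpy as np

def synchronize(seq_timestamps, tolerance=0.02):
    """Group frames from several sequences into synchronized sets.

    seq_timestamps: list of 1-D arrays, one per camera, of frame times in
    seconds (already converted to a common reference time). The first
    sequence is used as the reference clock; for every reference frame, the
    nearest frame of each other sequence is accepted if it lies within
    `tolerance` seconds. Returns a list of per-group frame-index tuples.
    """
    ref = seq_timestamps[0]
    groups = []
    for i, t in enumerate(ref):
        group = [int(i)]
        ok = True
        for other in seq_timestamps[1:]:
            j = int(np.argmin(np.abs(other - t)))
            if abs(other[j] - t) > tolerance:
                ok = False  # no frame of this sequence is close enough
                break
            group.append(j)
        if ok:
            groups.append(tuple(group))
    return groups
```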
S102: Estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to a reference coordinate system.
For each video frame, the reference coordinate system may be the reference frame of the first video frame captured by the camera that captured the video frame, the reference frame of the IMU (inertial measurement unit) state when the first video frame was captured, or the Earth coordinate system.
When the reference coordinate system is the reference frame of the first video frame captured by the camera that captured the video frame, a visual motion estimation algorithm (for example, a structure-from-motion (SfM) algorithm or a simultaneous localization and mapping (SLAM) algorithm) may be used to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to that reference frame. In an embodiment of the present application, for each video frame, S102 may specifically include:
S1021: Update the key frame K in real time or offline to obtain all key frames K, and compute for each key frame K its rotation amount q_{K,0} relative to the first video frame captured by the camera that captured the video frame. Updating the key frame in real time may specifically be: setting the first video frame as the key frame; judging in real time the degree of overlap between the fields of view of the current video frame and the key frame, as well as the number of associated feature points; when the degree of overlap and the number of associated feature points are greater than or equal to preset values, keeping the first video frame as the key frame; and when they are smaller than the preset values, updating the key frame by setting the current video frame as the key frame;
S1022: Compute the relative rotation amount q_{N,K} between video frame N and the key frame K that shares the most matched points of the same name with video frame N;
S1023: Obtain the first rotation amount q_{N,0} of video frame N relative to the first video frame, where q_{N,0} = q_{N,K} · q_{K,0};
S1024: Optimize the first rotation amount using bundle adjustment to obtain a second rotation amount, and take the second rotation amount as the motion rotation amount of video frame N relative to the first video frame.
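The chaining in S1023, q_{N,0} = q_{N,K} · q_{K,0}, is ordinary quaternion composition. A minimal sketch follows, using the Hamilton product in (w, x, y, z) order; this convention is an illustrative choice, as the patent does not fix one.

```python
import numpy as np

def q_mul(a, b):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def frame_rotation(q_N_K, q_K_0):
    # q_{N,0} = q_{N,K} * q_{K,0}: chain the frame-to-keyframe rotation
    # with the keyframe-to-first-frame rotation.
    return q_mul(q_N_K, q_K_0)
```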
The bundle adjustment method takes the pose of the camera or terminal and the three-dimensional coordinates of the measurement points as unknown parameters, and uses the coordinates of the feature points detected on the images for forward intersection as observation data, performing an adjustment to obtain the optimal camera parameters and world point coordinates.
When the reference coordinate system is the reference frame of the IMU state when the first video frame was captured, or the Earth coordinate system, in an embodiment of the present application S102 may specifically be: using an IMU method to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system. The IMU method may specifically adopt the following motion estimation approach: obtain in real time the current state timestamp, accelerometer values, and angular velocity values of the gyroscope in the terminal or camera; and use an extended Kalman filter to combine the accelerometer values and angular velocity values to estimate the motion rotation amount of each fused video frame relative to the reference coordinate system.
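The gyroscope propagation underlying the IMU method can be sketched as follows. This shows only the single-step attitude integration from angular velocity; a full pipeline would fuse it with the accelerometer in the extended Kalman filter mentioned above, which is omitted here.

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """One step of quaternion attitude propagation from angular velocity.

    q:     current orientation quaternion (w, x, y, z), body to reference.
    omega: gyroscope angular velocity (rad/s), 3-vector in the body frame.
    dt:    sample interval in seconds.
    Returns the normalized orientation after rotating by omega*dt.
    """
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    # Incremental rotation as an axis-angle quaternion.
    dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    out = np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])
    return out / np.linalg.norm(out)  # renormalize against drift
```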
In an embodiment of the present application, S102 may also specifically be: combining the visual motion estimation algorithm and the IMU method to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system.
S103: Smooth the motion rotation amount q'_{N,0} of each video frame in the multiple video frame sequences relative to the reference coordinate system to obtain a smooth rotation amount, denoted q̃_{N,0}, corresponding to each video frame in the multiple video frame sequences.
In an embodiment of the present application, S103 may specifically be:
smoothing the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system by controlling the cropping margin, so as to obtain the smooth rotation amount corresponding to each video frame in the multiple video frame sequences.
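The patent does not spell out the smoothing algorithm. As one hedged illustration, a moving average in rotation-vector space with the per-frame correction clamped to an angular bound standing in for the cropping margin:

```python
import numpy as np

def smooth_rotations(rotvecs, window=15, max_correction=np.deg2rad(5.0)):
    """Smooth per-frame rotations with a clamped moving average.

    rotvecs: (N, 3) array of axis-angle rotations relative to the reference
             coordinate system.
    window:  moving-average half-width in frames.
    max_correction: largest allowed angular correction; this stands in for
             the cropping margin and is an assumed control, not a value
             from the patent text.
    Returns the (N, 3) smoothed rotation vectors.
    """
    n = len(rotvecs)
    out = np.empty_like(rotvecs)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        target = rotvecs[lo:hi].mean(axis=0)
        corr = target - rotvecs[i]
        mag = np.linalg.norm(corr)
        if mag > max_correction:      # stay inside the crop margin
            corr *= max_correction / mag
        out[i] = rotvecs[i] + corr
    return out
```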
S104: Rotate and render the corresponding video frame using the smooth rotation amount corresponding to each video frame in the multiple video frame sequences, and output a smooth video frame corresponding to each video frame in the multiple video frame sequences.
In an embodiment of the present application, S104 may specifically be:
performing a 3D rotation on the corresponding video frame using the smooth rotation amount corresponding to each video frame in the multiple video frame sequences, and rendering and outputting the smooth video frame corresponding to each video frame in the multiple video frame sequences, where the rotation amount Δq of the 3D rotation is computed as:
Δq = q̃_{N,0} · (q'_{N,0})^{-1}
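One common way to realize the 3D rotation and rendering of S104 for a pinhole camera is a homography built from the delta rotation between the raw and smoothed orientations. The intrinsics matrix K and the homography formulation below are assumptions for illustration, not taken from the patent text.

```python
import numpy as np

def q_mul(a, b):
    # Hamilton product, (w, x, y, z) order.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    # Conjugate equals the inverse for a unit quaternion.
    return np.array([q[0], -q[1], -q[2], -q[3]])

def q_to_mat(q):
    # Rotation matrix of a unit quaternion.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def stabilizing_homography(q_raw, q_smooth, K):
    """Planar warp re-rendering a frame from its estimated orientation
    q_raw to the smoothed orientation q_smooth, for pinhole intrinsics K:
    dq = q_smooth * conj(q_raw), H = K R(dq) K^-1."""
    dq = q_mul(q_smooth, q_conj(q_raw))
    return K @ q_to_mat(dq) @ np.linalg.inv(K)
```

Each output pixel of the smooth frame would then be sampled from the raw frame through H, e.g. with a standard image-warping routine.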
S105: Register the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences.
In an embodiment of the present application, S105 may specifically be: performing pairwise registration on the two smooth video frames that have an overlapping region among the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences.
The pairwise registration may be implemented by methods including but not limited to the following:
aligning each pair of images that have an overlapping region, for example by performing operations such as distortion correction, scale transformation, and epipolar rectification on each pair of images according to the calibrated camera parameters, so that the points of the same name in each pair of images lie in the same row or the same column;
or, registering each pair of images that have an overlapping region, for example by detecting and matching feature points for each pair of images and registering them with an affine transformation model.
The feature point detection may use algorithms such as Oriented FAST and Rotated BRIEF (ORB), the scale-invariant feature transform (SIFT), or Speeded-Up Robust Features (SURF); the matching may be computed from the feature point descriptors with the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm, and incorrect matches may be eliminated according to the affine transformation model using RANSAC (random sample consensus).
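A self-contained sketch of this matching-and-registration step follows, with plain nearest-neighbour descriptor matching standing in for FLANN and a small RANSAC loop fitting the affine model; the iteration count, ratio-test value, and inlier threshold are illustrative assumptions.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test (FLANN would be
    the fast equivalent for real descriptors)."""
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((int(i), int(j)))
    return matches

def _fit_affine(src, dst):
    # Least-squares 2x3 affine transform from point correspondences.
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2], M[0::2, 2] = src, 1.0
    M[1::2, 3:5], M[1::2, 5] = src, 1.0
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return p.reshape(2, 3)

def ransac_affine(src, dst, iters=300, thresh=3.0, seed=0):
    """Estimate an affine transform from matched points, rejecting
    outliers with RANSAC; returns (2x3 transform, inlier mask)."""
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = _fit_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_count:
            best_count, best = inliers.sum(), inliers
    return _fit_affine(src[best], dst[best]), best
```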
S106: Fuse the registered smooth video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or video.
In an embodiment of the present application, S106 may specifically use a traditional image stitching and fusion algorithm, or may use the following image fusion method:
acquire several aligned images; compute the gradient information of each image; set a mask for each image and generate a target gradient image; perform a gradient operation on the target gradient image to obtain a target Laplacian image; and apply a deconvolution transform to the Laplacian image to generate the fused panoramic image.
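The gradient-domain fusion sketched below follows the steps just listed for grayscale images, with the final "deconvolution" realized as an FFT-based Poisson solve under an assumed periodic boundary; the mask convention and solver choice are illustrative, not from the patent text.

```python
import numpy as np

def _poisson_solve(lap):
    # Invert the periodic discrete Laplacian in the Fourier domain.
    h, w = lap.shape
    wx = 2 * np.cos(2 * np.pi * np.arange(w) / w) - 2
    wy = 2 * np.cos(2 * np.pi * np.arange(h) / h) - 2
    denom = wy[:, None] + wx[None, :]
    denom[0, 0] = 1.0                  # avoid dividing the DC term by zero
    F = np.fft.fft2(lap) / denom
    F[0, 0] = 0.0                      # fix the free constant: zero mean
    return np.real(np.fft.ifft2(F))

def fuse_gradient_domain(images, masks):
    """Gradient-domain fusion of pre-aligned grayscale images.

    images: list of (H, W) float arrays, already registered on one canvas.
    masks:  list of (H, W) arrays choosing, per pixel, which image supplies
            the gradient (they should partition the canvas).
    Builds the target gradient field, takes its divergence (the target
    Laplacian image), and inverts the Laplacian to recover the fused
    result, defined up to a constant (here: zero mean).
    """
    gx = sum(m * (np.roll(im, -1, axis=1) - im) for im, m in zip(images, masks))
    gy = sum(m * (np.roll(im, -1, axis=0) - im) for im, m in zip(images, masks))
    div = gx - np.roll(gx, 1, axis=1) + gy - np.roll(gy, 1, axis=0)
    return _poisson_solve(div)
```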
Referring to FIG. 5 , the image processing apparatus provided by an embodiment of the present application may be a computer program or a piece of program code running in a computer device, a terminal, or a camera; for example, the image processing apparatus may be application software. The image processing apparatus may be used to execute the corresponding steps of the image processing method provided by the embodiments of the present application. The image processing apparatus provided by an embodiment of the present application includes:
a synchronization module 11 configured to acquire multiple video frame sequences and synchronize the multiple video frame sequences, the multiple video frame sequences being captured by multiple cameras respectively;
a motion estimation module 12 configured to estimate the motion rotation amount of each video frame in the multiple video frame sequences relative to a reference coordinate system;
a smoothing module 13 configured to smooth the motion rotation amount of each video frame in the multiple video frame sequences relative to the reference coordinate system to obtain a smooth rotation amount corresponding to each video frame in the multiple video frame sequences;
a rendering module 14 configured to rotate and render the corresponding video frame using the smooth rotation amount corresponding to each video frame in the multiple video frame sequences, and to output a smooth video frame corresponding to each video frame in the multiple video frame sequences;
a registration module 15 configured to register the smooth video frames corresponding to each group of synchronized video frames in the multiple video frame sequences;
a fusion module 16 configured to fuse the registered smooth video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or video.
The image processing apparatus provided by an embodiment of the present application and the image processing method provided by an embodiment of the present application belong to the same concept; for the specific implementation process, see the full specification, which is not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method provided by an embodiment of the present application.
FIG. 6 shows a structural block diagram of the computer device provided by an embodiment of the present application. The computer device may be the computer device shown in FIG. 1 , FIG. 2 and FIG. 3 . A computer device 100 includes: one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processors 101 implement the steps of the image processing method provided by an embodiment of the present application when executing the computer programs.
The computer device 100 may be a desktop computer, a tablet computer, a laptop computer, a personal digital assistant, or the like.
FIG. 7 shows a structural block diagram of the terminal provided by an embodiment of the present application. A terminal 500 includes: one or more processors 201, a memory 202, and one or more computer programs, wherein the processors 201 and the memory 202 are connected by a bus, the one or more computer programs are stored in the memory 202 and configured to be executed by the one or more processors 201, and the processors 201 implement the steps of the image processing method provided by an embodiment of the present application when executing the computer programs.
FIG. 8 shows a structural block diagram of the camera provided by an embodiment of the present application. A camera 600 includes: one or more processors 301, a memory 302, and one or more computer programs, wherein the processors 301 and the memory 302 are connected by a bus, the one or more computer programs are stored in the memory 302 and configured to be executed by the one or more processors 301, and the processors 301 implement the steps of the image processing method provided by an embodiment of the present application when executing the computer programs.
在本申请中,由于对多个由多个摄像头拍摄的视频帧序列进行同步,估算多个视频帧序列中的每个视频帧相对参考坐标系的运动旋转量;对所述运动旋转量进行平滑,得到平滑的旋转量;采用平滑的旋转量对视频帧进行旋转和渲染,输出平滑的视频帧,因此可以生成高清、稳定的视频帧和/或视频。又由于对与所述多个视频帧序列中的每组同步的视频帧对应的平滑的视频帧进行配准,将配准后的每组同步的视频帧对应的平滑的视频帧进行融合,生成融 合后的视频帧和/或视频。因此可以生成视角更广的视频帧和/或视频。此外,本申请的图像处理方法处理速度快、功耗低,具有很强的鲁棒性。
It should be understood that the steps in the embodiments of the present application are not necessarily performed in the order indicated by the step numbers. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in the embodiments may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Those of ordinary skill in the art will appreciate that all or part of the processes of the methods of the above embodiments can be accomplished by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (16)

  1. An image processing method, characterized in that the method comprises:
    acquiring a plurality of video frame sequences and synchronizing the plurality of video frame sequences, the plurality of video frame sequences being captured by a plurality of cameras respectively;
    estimating a motion rotation amount of each video frame in the plurality of video frame sequences relative to a reference coordinate system;
    smoothing the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system to obtain a smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences;
    rotating and rendering the corresponding video frame using the smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences, and outputting a smoothed video frame corresponding to each video frame in the plurality of video frame sequences;
    registering the smoothed video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences; and
    fusing the registered smoothed video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or a fused video.
  2. The image processing method according to claim 1, characterized in that the plurality of cameras are located in one terminal or camera, or in a plurality of terminals and/or cameras.
  3. The image processing method according to claim 2, characterized in that, when the plurality of cameras are located in one terminal or camera, synchronizing the plurality of video frame sequences specifically comprises:
    extracting the timestamps of the plurality of video frame sequences respectively, and synchronizing the plurality of video frame sequences by their timestamps.
  4. The image processing method according to claim 2, characterized in that, when the plurality of cameras are located in a plurality of terminals and/or cameras, synchronizing the plurality of video frame sequences specifically comprises:
    extracting the gyroscope signals corresponding to the plurality of video frame sequences respectively, and synchronizing the plurality of video frame sequences by their corresponding gyroscope signals.
  5. The image processing method according to claim 3, characterized in that synchronizing the plurality of video frame sequences by their timestamps specifically comprises:
    keeping the timestamps of the plurality of video frame sequences synchronized using a reference time.
  6. The image processing method according to claim 1, characterized in that registering the smoothed video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences specifically comprises: pairwise registering, among the smoothed video frames corresponding to each group of synchronized video frames, every two smoothed video frames having an overlapping region.
  7. The image processing method according to claim 1, characterized in that, for each video frame, the reference coordinate system is the reference frame of the first video frame captured by the camera that captured the video frame, the reference frame of the inertial measurement unit state at the time the first video frame was captured, or the Earth coordinate system;
    when the reference coordinate system is the reference frame of the first video frame captured by the camera that captured the video frame, estimating the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system specifically comprises: estimating, with a visual motion estimation algorithm, the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference frame of the first video frame captured by the camera that captured the video frame;
    when the reference coordinate system is the reference frame of the inertial measurement unit state at the time the first video frame was captured, or the Earth coordinate system, estimating the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system specifically comprises: estimating, with an inertial measurement unit method, the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system;
    or,
    estimating the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system specifically comprises: estimating the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system by combining the visual motion estimation algorithm and the inertial measurement unit method.
  8. The image processing method according to claim 7, characterized in that estimating, with the visual motion estimation algorithm, the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference frame of the first video frame captured by the camera that captured the video frame specifically comprises:
    updating keyframes K in real time or offline to obtain all keyframes K, and computing, for each keyframe K, its rotation amount q_{K_0} relative to the first video frame captured by the camera that captured the video frame;
    computing the relative rotation amount q_{N_K} between a video frame N and the keyframe K that shares the largest number of matched corresponding points with the video frame N;
    obtaining a first rotation amount q_{N_0} of the video frame N relative to the first video frame, where q_{N_0} = q_{N_K} · q_{K_0}; and
    optimizing the first rotation amount q_{N_0} by bundle adjustment to obtain a second rotation amount q′_{N_0}, and taking the second rotation amount q′_{N_0} as the motion rotation amount of the video frame N relative to the first video frame.
  9. The image processing method according to claim 8, characterized in that updating keyframes in real time or offline specifically comprises:
    setting the first video frame as the keyframe, and determining in real time the degree of overlap between the fields of view of the current video frame and the keyframe and the number of associated feature points; when the degree of overlap and the number of associated feature points are greater than or equal to preset values, keeping the first video frame as the keyframe; when the degree of overlap and the number of associated feature points are less than the preset values, updating the keyframe by setting the current video frame as the keyframe.
  10. The image processing method according to any one of claims 1 to 9, characterized in that smoothing the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system to obtain the smoothed rotation amount corresponding to each video frame specifically comprises:
    smoothing the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system in a manner that controls the cropping margin, to obtain the smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences.
  11. The image processing method according to claim 8, characterized in that rotating and rendering the corresponding video frame using the smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences, and outputting the smoothed video frame corresponding to each video frame in the plurality of video frame sequences, specifically comprises:
    performing a 3D rotation on the corresponding video frame using the smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences, and rendering to obtain and output the smoothed video frame corresponding to each video frame in the plurality of video frame sequences, wherein the rotation amount Δq of the 3D rotation is computed by the formula shown in Figure PCTCN2021111264-appb-100001, in which the quantity shown in Figure PCTCN2021111264-appb-100002 is the smoothed rotation amount.
  12. An image processing apparatus, characterized in that the apparatus comprises:
    a synchronization module, configured to acquire a plurality of video frame sequences and synchronize the plurality of video frame sequences, the plurality of video frame sequences being captured by a plurality of cameras respectively;
    a motion estimation module, configured to estimate a motion rotation amount of each video frame in the plurality of video frame sequences relative to a reference coordinate system;
    a smoothing module, configured to smooth the motion rotation amount of each video frame in the plurality of video frame sequences relative to the reference coordinate system to obtain a smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences;
    a rendering module, configured to rotate and render the corresponding video frame using the smoothed rotation amount corresponding to each video frame in the plurality of video frame sequences, and output a smoothed video frame corresponding to each video frame in the plurality of video frame sequences;
    a registration module, configured to register the smoothed video frames corresponding to each group of synchronized video frames in the plurality of video frame sequences; and
    a fusion module, configured to fuse the registered smoothed video frames corresponding to each group of synchronized video frames to generate a fused video frame and/or a fused video.
  13. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 11 are implemented.
  14. A computer device, comprising:
    one or more processors;
    a memory; and
    one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processor, when executing the computer programs, implements the steps of the image processing method according to any one of claims 1 to 11.
  15. A camera, comprising:
    one or more processors;
    a memory; and
    one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processor, when executing the computer programs, implements the steps of the image processing method according to any one of claims 1 to 11.
  16. A terminal, comprising:
    one or more processors;
    a memory; and
    one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processor, when executing the computer programs, implements the steps of the image processing method according to any one of claims 1 to 11.
PCT/CN2021/111264 2020-08-06 2021-08-06 Image processing method and apparatus, computer-readable storage medium, and computer device WO2022028594A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010783966.2A CN112017215B (zh) 2020-08-06 2020-08-06 Image processing method and apparatus, computer-readable storage medium, and computer device
CN202010783966.2 2020-08-06

Publications (1)

Publication Number Publication Date
WO2022028594A1 (zh)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146231A (zh) * 2007-07-03 2008-03-19 浙江大学 Method for generating a panoramic video from multi-view video streams
CN104063867A (zh) * 2014-06-27 2014-09-24 浙江宇视科技有限公司 Multi-camera video synchronization method and apparatus
EP2851870A1 (en) * 2013-09-20 2015-03-25 Application Solutions (Electronics and Vision) Limited Method for estimating ego motion of an object
CN108564617A (zh) * 2018-03-22 2018-09-21 深圳岚锋创视网络科技有限公司 Three-dimensional reconstruction method and apparatus for a multi-lens camera, VR camera, and panoramic camera
CN109040575A (zh) * 2017-06-09 2018-12-18 株式会社理光 Panoramic video processing method, apparatus, device, and computer-readable storage medium
US20200160539A1 (en) * 2018-11-16 2020-05-21 National Applied Research Laboratories Moving object detection system and method
CN112017215A (zh) * 2020-08-06 2020-12-01 影石创新科技股份有限公司 Image processing method and apparatus, computer-readable storage medium, and computer device



Also Published As

Publication number Publication date
CN112017215B (zh) 2023-08-25
CN112017215A (zh) 2020-12-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21853963; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.07.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 21853963; Country of ref document: EP; Kind code of ref document: A1)