WO2020238897A1 - Panoramic image and video stitching method, computer-readable storage medium and panoramic camera - Google Patents

Panoramic image and video stitching method, computer-readable storage medium and panoramic camera Download PDF

Info

Publication number
WO2020238897A1
WO2020238897A1 · PCT/CN2020/092344 · CN2020092344W
Authority
WO
WIPO (PCT)
Prior art keywords
matching
block
template
row
final
Prior art date
Application number
PCT/CN2020/092344
Other languages
English (en)
French (fr)
Inventor
王果
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Priority to JP2021570386A priority Critical patent/JP7350893B2/ja
Priority to US17/615,571 priority patent/US20220237736A1/en
Priority to EP20814063.2A priority patent/EP3982322A4/en
Publication of WO2020238897A1 publication Critical patent/WO2020238897A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06T3/047
    • G06T3/08
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing

Definitions

  • The invention belongs to the field of panoramic images and videos, and in particular relates to a panoramic image and video stitching method, a computer-readable storage medium, and a panoramic camera.
  • Current panoramic image stitching algorithms mostly use stitching algorithms based on feature point matching.
  • This type of algorithm generally uses relatively fast feature point detection, specifically: detecting the feature points of the two images with ORB, SURF, or SIFT, and then matching and filtering the feature points with a nearest-neighbor matching algorithm and the RANSAC algorithm.
  • However, stitching algorithms based on feature point matching have the following disadvantages: (1) they are prone to mismatches, and some mismatches cannot be filtered out effectively, which degrades the final stitching result; (2) feature point detection and RANSAC-based match filtering are inefficient and cannot meet a panoramic camera's need to stitch panoramic images in real time.
  • The purpose of the present invention is to provide a panoramic image and video stitching method, a computer-readable storage medium, and a panoramic camera, aiming to solve the problems that stitching algorithms based on feature point matching are prone to mismatches that cannot be effectively filtered out and thus degrade the final stitching result, and that feature point detection and RANSAC-based match filtering are too inefficient to meet a panoramic camera's need for real-time panoramic stitching.
  • the present invention provides a panoramic image stitching method. For fish-eye photos taken by a panoramic camera composed of multiple cameras, the following steps are performed for every fish-eye photo taken by two adjacent cameras:
  • S104: Update the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panoramic image.
  • the present invention provides a panoramic image stitching method, characterized in that the following steps are performed on two fisheye photos with overlapping image regions:
  • S203 Use a matching filtering algorithm based on region expansion to perform matching filtering on the initial template matching result to obtain a final matching result
  • S204: Update the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panoramic image.
  • The present invention provides a panoramic video stitching method, characterized in that the panoramic video stitching method stitches the first frame of a panoramic video by any one of the panoramic image stitching methods of the second aspect.
  • The present invention provides a panoramic video stitching method, characterized in that the panoramic video stitching method stitches intermediate frames of a panoramic video by any one of the panoramic image stitching methods of the second aspect, and before S2022 further includes the following steps:
  • Detecting the areas of the template strip image where the matching state is stable is specifically:
  • analyzing the state queues of the template blocks, and marking the rows of template blocks whose number of successful verifications is greater than a set threshold and whose NCC value variation is smaller than a set threshold as static areas.
  • The present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the panoramic image stitching method of the first or second aspect.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • The present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the panoramic video stitching method of the third or fourth aspect.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • The present invention provides a panoramic camera, including: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that, when the processors execute the computer programs, the steps of the panoramic image stitching method of any one of the first or second aspects are implemented.
  • The present invention provides a panoramic camera, including: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and when the processors execute the computer programs, the steps of the panoramic video stitching method of any one of the third or fourth aspects are implemented.
  • The fisheye photos taken by two adjacent cameras are mapped to the corresponding seam areas of the sphere model, forming two strip images with an overlapping area; block template matching is performed on the two strip images to obtain an initial template matching result; a matching filtering algorithm based on region expansion filters the initial template matching result to obtain the final matching result; the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model is updated according to the final matching result, and panoramic stitching is performed according to the updated mapping relationship to obtain a seamless panoramic image.
  • The method of the present invention is highly efficient and can meet the requirement of stitching panoramic images in real time on mobile devices; the feature matching results are accurate and stable, achieving good seamless stitching; when applied to video, the matching is stable and robust, and is well suited to scenes alternating between dynamic and static content and between distant and close views.
  • FIG. 1 is a flowchart of a panoramic image stitching method provided in Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart of a panoramic image stitching method provided in Embodiment 2 of the present invention.
  • FIG. 3 is a flowchart of S102 in the panoramic image stitching method provided in Embodiment 1 of the present invention.
  • FIG. 4 is a flowchart of S202 in the panoramic image stitching method provided in the second embodiment of the present invention.
  • FIG. 5 is a flowchart of S103 in the panoramic image stitching method provided in the first embodiment of the present invention or S203 in the panoramic image stitching method provided in the second embodiment of the present invention.
  • Fig. 6 is a schematic diagram of the process of S102 and S1031 in the panoramic image stitching method provided in the first embodiment of the present invention or S202 and S2031 in the panoramic image stitching method provided in the second embodiment of the present invention.
  • FIG. 7 is a specific structural block diagram of a panoramic camera provided by Embodiment 4 of the present invention.
  • S102 Perform block template matching on the two strip images to obtain an initial template matching result.
  • Gaussian blurring is performed on the two strip images to reduce photo noise and improve matching accuracy; and/or Canny edge detection is performed on the two strip images to obtain image gradient information, which provides a data basis for the subsequent elimination of textureless areas.
  • S102 may specifically include the following steps:
  • Select either one of the two strip images as the template strip image and the other as the strip image to be matched. Divide the template strip image into a block matrix of M rows and N columns and treat each block of the matrix as a template block; adjacent rows of the block matrix overlap, and the matrix covers the entire template strip image. Divide the strip image to be matched into M rows of block areas, where adjacent rows overlap; M and N are positive integers greater than 1;
  • The maximum in S1023 may be required to be greater than a set NCC threshold (for example, 0.8).
  • the template block in S1021 is an effective template block, and the effective template block is determined in the following manner:
  • S103 Use a matching filtering algorithm based on region expansion to perform matching filtering on the initial template matching result to obtain a final matching result.
  • S103 may specifically include the following steps:
  • The expansion criterion is: treating each template block as a matching block, when the disparity of one matching block is used as the disparity of another matching block and the NCC value of the other matching block is still greater than the set threshold (0.8), the two matching blocks are merged into one matching block.
  • According to the disparity consistency of the trusted matching blocks, cluster the trusted matching blocks to obtain multiple regions, such that the difference between the x components of the disparities of adjacent rows within the same region does not exceed a set threshold (in experiments, the width of the smallest trusted matching block).
  • Filter the regions by size (for example, the number of rows they contain), delete regions smaller than a preset number of rows (for example, 3 or 4 rows) together with the trusted matching blocks in those rows, set all rows that do not form a region as failed rows, and cluster the trusted matching blocks again according to their disparity consistency to update the region information.
  • S10331: Determine the expansion start rows: for each row, construct a row credibility value with preset weights from the consistency between the disparity of its trusted matching block and the regional average and from the credibility of the trusted matching block; sort the rows by credibility value and select a preset number of top-ranked rows (for example, the top 10) as expansion start rows;
  • S104: Update the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panoramic image.
  • S104 is specifically: update the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model according to the final trusted matching block of each row and its corresponding disparity, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panorama.
  • The panoramic image stitching method provided in the first embodiment of the present invention is applicable to the first frame of a panoramic video, that is, the fisheye photos are those of the first frame of the panoramic video.
  • For intermediate frames of the panoramic video, the following steps are also included before S1022:
  • Detecting the areas of the template strip image where the matching state is stable is specifically:
  • analyzing the state queues of the template blocks, and marking the rows of template blocks whose number of successful verifications is greater than a set threshold (for example, 8 or 9) and whose NCC value variation is smaller than a set threshold (for example, 0.03 or 0.05) as static areas.
  • The final trusted matching block has four possible states: verification succeeded, verification failed, rematch succeeded, and rematch failed.
  • The NCC matrices are used to expand bidirectionally, to the left and to the right, within the same row.
  • The trusted matching blocks are clustered by row to obtain multiple regions, the regions are filtered by size, each region is expanded upward and downward, and the row-wise clustering and size filtering are then executed again according to the disparity consistency of the trusted matching blocks, which greatly improves both the accuracy of match filtering and the efficiency of the algorithm.
  • The dynamic video-frame matching mechanism based on matching verification: under this mechanism, the entire strip image of the first frame of the video undergoes block template matching and match filtering, while for intermediate frames the rematch rows are updated dynamically through matching verification and state queues; block template matching and match filtering are performed only on the rematch rows, and static area detection and failed-row marking are applied. This mechanism reduces matching fluctuation between adjacent frames, improves matching stability, and improves the running efficiency of the algorithm.
  • the second embodiment of the present invention provides a panoramic image stitching method, which performs the following steps on two fisheye photos with overlapping image regions:
  • S202 Perform block template matching on the two strip images to obtain an initial template matching result.
  • Gaussian blurring is performed on the two strip images to reduce photo noise and improve matching accuracy.
  • S202 may specifically include the following steps:
  • The maximum in S2023 may be required to be greater than a set NCC threshold (for example, 0.8).
  • the template block in S2021 is an effective template block, and the effective template block is determined in the following manner:
  • S203 Use a matching filtering algorithm based on region expansion to perform matching filtering on the initial template matching result to obtain a final matching result.
  • S203 may specifically include the following steps:
  • For each template block, use the NCC matrix to expand to the left and right within the same row to form a candidate matching block.
  • Construct the matching credibility M from the disparity consistency, the width of the candidate matching block, and the NCC value with preset weights; sort the candidate matching blocks of each row by matching credibility M and select the candidate matching block with the highest matching credibility as the trusted matching block of the row.
  • The expansion criterion is: treating each template block as a matching block, when the disparity of one matching block is used as the disparity of another matching block and the NCC value of the other matching block is still greater than the set threshold (0.8), the two matching blocks are merged into one matching block.
  • S2032: According to the disparity consistency of the trusted matching blocks, cluster the trusted matching blocks to obtain multiple regions, such that the difference between the x components of the disparities of adjacent rows within the same region does not exceed a set threshold (in experiments, the width of the smallest trusted matching block).
  • Filter the regions by size (for example, the number of rows they contain), delete regions smaller than a preset number of rows (for example, 3 or 4 rows) together with the trusted matching blocks in those rows, set all rows that do not form a region as failed rows, and cluster the trusted matching blocks again according to their disparity consistency to update the region information.
  • S20331: Determine the expansion start rows: for each row, construct a row credibility value with preset weights from the consistency between the disparity of its trusted matching block and the regional average and from the credibility of the trusted matching block; sort the rows by credibility value and select a preset number of top-ranked rows (for example, the top 10) as expansion start rows;
  • The matching credibility of a region: for each candidate expansion region, construct the region's matching credibility from the average matching credibility of the candidate matching blocks it contains and the region size, assign the region's matching credibility to the matching credibility M of all candidate matching blocks in the region, and mark all candidate matching blocks in the region as regional trusted matching blocks; for the multiple regional trusted matching blocks in each row, select the regional trusted matching block with the largest matching credibility M as the final trusted matching block of the row, the disparity corresponding to which is the final disparity of the row.
  • S204: Update the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panoramic image.
  • S204 is specifically: update the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model according to the final trusted matching block of each row and its corresponding disparity, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panorama.
  • The third embodiment of the present invention provides a panoramic video stitching method, characterized in that the panoramic video stitching method stitches the first frame of a panoramic video by any one of the panoramic image stitching methods of the second embodiment.
  • The fourth embodiment of the present invention provides a panoramic video stitching method, characterized in that the panoramic video stitching method stitches intermediate frames of a panoramic video by any one of the panoramic image stitching methods of the second embodiment, and before S2022 further includes the following steps:
  • The two fisheye photos are mapped to the corresponding seam areas of the sphere model, forming two strip images with an overlapping area; block template matching is performed on the two strip images to obtain an initial template matching result; a matching filtering algorithm based on region expansion filters the initial template matching result to obtain the final matching result; the mapping relationship between the fisheye photos and the corresponding seam areas of the sphere model is updated according to the final matching result, and panoramic stitching is performed according to the updated mapping relationship to obtain a seamless panoramic image.
  • The fifth embodiment of the present invention provides a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the panoramic image stitching method of the first or second embodiment are implemented.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • Embodiment 6
  • The sixth embodiment of the present invention provides a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the panoramic video stitching method of the third or fourth embodiment are implemented.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • FIG. 7 shows a specific structural block diagram of a portable terminal provided by Embodiment 5 of the present invention.
  • A portable terminal 100 includes: one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and when the processors 101 execute the computer programs, the steps of the panoramic image stitching method provided in Embodiment 1 or Embodiment 2 of the present invention are implemented.
  • Embodiment 8
  • FIG. 7 shows a specific structural block diagram of a portable terminal provided in Embodiment 6 of the present invention.
  • A portable terminal 100 includes: one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and when the processors 101 execute the computer programs, the steps of the panoramic video stitching method provided in the third or fourth embodiment of the present invention are implemented.
  • The program can be stored in a computer-readable storage medium, and the storage medium can include read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention is applicable to the field of panoramic images and videos, and provides a panoramic image and video stitching method and a panoramic camera. The invention maps the fisheye photos taken by two adjacent cameras to the corresponding seam areas of a sphere model, forming two strip images with an overlapping area; performs block template matching on the two strip images to obtain an initial template matching result; filters the initial template matching result with a matching filtering algorithm based on region expansion to obtain the final matching result; and updates the mapping relationship from the fisheye photos to the corresponding seam areas of the sphere model, performing panoramic stitching according to the updated mapping relationship to obtain a seamless panorama. The method of the invention is highly efficient and can meet the requirement of stitching panoramic images in real time on mobile devices; the feature matching results are accurate and stable, achieving good seamless stitching; when applied to video, the matching is stable and robust, and is well suited to scenes alternating between dynamic and static content and between distant and close views.

Description

Panoramic image and video stitching method, computer-readable storage medium, and panoramic camera
Technical Field
The present invention belongs to the field of panoramic images and videos, and in particular relates to a panoramic image and video stitching method, a computer-readable storage medium, and a panoramic camera.
Background Art
Most current panoramic image stitching algorithms are based on feature point matching. Such algorithms generally use relatively fast feature point detection, specifically: the feature points of the two images are detected with ORB, SURF, or SIFT, and are then matched and filtered with a nearest-neighbor matching algorithm and the RANSAC algorithm. However, stitching algorithms based on feature point matching have the following disadvantages: (1) they are prone to mismatches, and some mismatches cannot be filtered out effectively, which degrades the final stitching result; (2) feature point detection and RANSAC-based match filtering are inefficient and cannot meet a panoramic camera's need to stitch panoramic images in real time.
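The baseline pipeline described above (nearest-neighbor matching plus RANSAC filtering) can be sketched as follows. This is a minimal illustration of the prior art, not the patent's method: it assumes precomputed keypoint coordinates and descriptors, uses a pure-translation motion model for brevity, and the function names are hypothetical.

```python
import numpy as np

def nearest_neighbor_matches(desc_a, desc_b):
    """Brute-force nearest-neighbour matching: for every descriptor in
    desc_a, return the index of its closest descriptor in desc_b."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def ransac_translation(pts_a, pts_b, iters=200, tol=2.0, seed=0):
    """RANSAC match filtering under a pure-translation model: repeatedly
    hypothesize the offset implied by one random match and keep the
    hypothesis that explains the most matches."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))
        offset = pts_b[i] - pts_a[i]          # hypothesized translation
        errors = np.linalg.norm(pts_a + offset - pts_b, axis=1)
        inliers = errors < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A real implementation would add a ratio test and estimate a homography rather than a translation; the repeated random sampling is exactly the per-frame cost that motivates the template-matching approach of the invention.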
Technical Problem
The purpose of the present invention is to provide a panoramic image and video stitching method, a computer-readable storage medium, and a panoramic camera, aiming to solve the problems that stitching algorithms based on feature point matching are prone to mismatches that cannot be effectively filtered out and thus degrade the final stitching result, and that feature point detection and RANSAC-based match filtering are too inefficient to meet a panoramic camera's need for real-time panoramic stitching.
Technical Solution
In a first aspect, the present invention provides a panoramic image stitching method. For the fisheye photos taken by a panoramic camera composed of multiple cameras, the following steps are performed on the fisheye photos taken by every two adjacent cameras:
S101. Map the fisheye photos taken by the two adjacent cameras to the corresponding seam areas of a sphere model, forming two strip images with an overlapping area;
S102. Perform block template matching on the two strip images to obtain an initial template matching result;
S103. Use a matching filtering algorithm based on region expansion to filter the initial template matching result and obtain the final matching result;
S104. Update the mapping relationship from the fisheye photos to the corresponding seam areas of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panorama.
In a second aspect, the present invention provides a panoramic image stitching method, characterized in that the following steps are performed on two fisheye photos with overlapping image regions:
S201. Map the two fisheye photos to the corresponding seam areas of a sphere model, forming two strip images with an overlapping area;
S202. Perform block template matching on the two strip images to obtain an initial template matching result;
S203. Use a matching filtering algorithm based on region expansion to filter the initial template matching result and obtain the final matching result;
S204. Update the mapping relationship from the fisheye photos to the corresponding seam areas of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping relationship to obtain a seamless panorama.
In a third aspect, the present invention provides a panoramic video stitching method, characterized in that the panoramic video stitching method stitches the first frame of a panoramic video by any one of the panoramic image stitching methods of the second aspect.
In a fourth aspect, the present invention provides a panoramic video stitching method, characterized in that the panoramic video stitching method stitches intermediate frames of a panoramic video by any one of the panoramic image stitching methods of the second aspect, and before S2022 further includes the following steps:
S2051. Detect static areas in the template strip image, a static area being an area where the image content is still or the matching state is stable;
S2052. Analyze the state queue of every final trusted matching block of the previous frame, and mark the rows of the final trusted matching blocks whose number of consecutive verification failures or rematch failures is greater than a set threshold as failed rows;
S2053. For every final trusted matching block of the previous frame, find its corresponding block area in the strip image to be matched according to its disparity and compute the NCC value of the two equally sized areas; if the NCC value is greater than a set threshold, mark the final trusted matching block as verification succeeded and update its state queue; otherwise, mark it as verification failed and update its state queue;
S2054. Analyze the state queue of the final trusted matching block of each row; for non-node frames, set the rows whose final trusted matching blocks have more consecutive verification failures than a set threshold as rematch rows; for node frames, set all rows outside the static areas as rematch rows;
Perform S2022, S2023, S203, and S204 on all rematch rows and update the state queues of the final trusted matching blocks, marking the final trusted matching blocks contained in successfully rematched rows as rematch succeeded and those in rows whose rematch failed as rematch failed.
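The per-block verification of S2053 can be sketched as follows, assuming grayscale NumPy strip images; the function name, the tuple layout of the state-queue entries, and the 0.8 threshold are illustrative assumptions, not the patent's exact implementation.

```python
import numpy as np

def verify_block(block, strip, x, disparity, state_queue, ncc_threshold=0.8):
    """Re-check last frame's trusted block: compute the NCC between the
    block and the equally sized area at x + disparity in the new strip
    image, then record the outcome in the block's state queue."""
    _, w = block.shape
    window = strip[:, x + disparity:x + disparity + w]
    a = block - block.mean()
    b = window - window.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    ncc = float((a * b).sum() / denom) if denom > 0 else 0.0
    ok = ncc > ncc_threshold
    state_queue.append(("verified" if ok else "failed", ncc))
    return ok
```

In this sketch the state queue is any append-able container (e.g. a `collections.deque`); S2052 and S2054 would then scan it for runs of failures.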
Further, in the method of the fourth aspect, detecting the areas of the template strip image where the matching state is stable is specifically:
analyzing the state queues of the template blocks, and marking the rows of template blocks whose number of successful verifications is greater than a set threshold and whose NCC value variation is smaller than a set threshold as static areas.
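The static-area test can be sketched as follows; the queue layout (a boolean success flag plus an NCC value per frame) is an illustrative assumption, and the default thresholds follow the example values given elsewhere in the text (more than 8 successes, NCC variation below 0.05).

```python
def is_static_row(state_queue, success_threshold=8, ncc_var_threshold=0.05):
    """A row is static when its template block verified successfully more
    than success_threshold times and its NCC values barely changed."""
    successes = sum(1 for ok, _ in state_queue if ok)
    nccs = [ncc for _, ncc in state_queue]
    return successes > success_threshold and (max(nccs) - min(nccs)) < ncc_var_threshold
```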
In a fifth aspect, the present invention provides a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the panoramic image stitching method of any one of the first or second aspects are implemented; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
In a sixth aspect, the present invention provides a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the panoramic video stitching method of any one of the third or fourth aspects are implemented; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
In a seventh aspect, the present invention provides a panoramic camera, including: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that, when the processors execute the computer programs, the steps of the panoramic image stitching method of any one of the first or second aspects are implemented.
In an eighth aspect, the present invention provides a panoramic camera, including: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that, when the processors execute the computer programs, the steps of the panoramic video stitching method of any one of the third or fourth aspects are implemented.
Beneficial Effects
In the present invention, the fisheye photos taken by two adjacent cameras are mapped to the corresponding seam areas of a sphere model, forming two strip images with an overlapping area; block template matching is performed on the two strip images to obtain an initial template matching result; a matching filtering algorithm based on region expansion filters the initial template matching result to obtain the final matching result; the mapping relationship from the fisheye photos to the corresponding seam areas of the sphere model is updated according to the final matching result, and panoramic stitching is performed according to the updated mapping relationship to obtain a seamless panorama. The method of the invention is therefore highly efficient and can meet the requirement of stitching panoramic images in real time on mobile devices; the feature matching results are accurate and stable, achieving good seamless stitching; when applied to video, the matching is stable and robust, and is well suited to scenes alternating between dynamic and static content and between distant and close views.
Brief Description of the Drawings
Fig. 1 is a flowchart of the panoramic image stitching method provided by Embodiment 1 of the present invention.
Fig. 2 is a flowchart of the panoramic image stitching method provided by Embodiment 2 of the present invention.
Fig. 3 is a flowchart of S102 in the panoramic image stitching method provided by Embodiment 1 of the present invention.
Fig. 4 is a flowchart of S202 in the panoramic image stitching method provided by Embodiment 2 of the present invention.
Fig. 5 is a flowchart of S103 in the panoramic image stitching method provided by Embodiment 1 or of S203 in the panoramic image stitching method provided by Embodiment 2 of the present invention.
Fig. 6 is a schematic diagram of the processes of S102 and S1031 in the panoramic image stitching method provided by Embodiment 1, or of S202 and S2031 in the panoramic image stitching method provided by Embodiment 2, of the present invention.
Fig. 7 is a structural block diagram of the panoramic camera provided by Embodiment 4 of the present invention.
Detailed Description of the Invention
To make the objects, technical solutions and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention and not to limit it.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment 1:
Referring to Fig. 1, the panoramic image stitching method provided by Embodiment 1 of the present invention performs, for fisheye photos captured by a panoramic camera composed of multiple cameras, the following steps on the fisheye photos captured by every two adjacent cameras:
S101. Map the fisheye photos captured by the two adjacent cameras to the corresponding seam region of a sphere model to form two strip images with an overlapping region.
S102. Perform block-based template matching on the two strip images to obtain an initial template matching result.
In Embodiment 1 of the present invention, the following steps may further be performed before S102:
Apply Gaussian blur to the two strip images to reduce photo noise and improve matching accuracy; and/or apply Canny edge detection to the two strip images to obtain image gradient information, providing the data basis for the subsequent removal of textureless regions.
Referring to Figs. 3 and 6, in Embodiment 1 of the present invention, S102 may specifically include the following steps:
S1021. Select either of the two strip images as the template strip image and the other as the strip image to be matched; divide the template strip image into a block matrix of M rows and N columns, each block of the matrix serving as a template block, with adjacent rows of the matrix overlapping and the matrix covering the entire template strip image; divide the strip image to be matched into M rows of block regions, with adjacent rows overlapping; M and N are integers greater than 1.
S1022. Perform template matching for each template block in the strip image to be matched, the matching region being the entire row of the strip image to be matched corresponding to the row of the template block; each template block yields an NCC (normalized cross-correlation) matrix after template matching, giving M*N NCC matrices in total.
S1023. Find the maximum value in each NCC matrix; from the position of the maximum in the NCC matrix, compute the center position of the region in the strip image to be matched corresponding to the template block; then, from the known center position of the template block in the template strip image, compute the disparity of the template block; the disparity of every template block is thus obtained as the initial template matching result.
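The per-block NCC computation of S1022 and the disparity extraction of S1023 can be sketched as follows. This is an illustrative NumPy implementation; the function name, array shapes and per-row indexing are assumptions for illustration, not part of the claimed method:

```python
import numpy as np

def ncc_row_match(template, row_strip):
    """Slide `template` (h x w) across `row_strip` (h x W) and return the
    NCC curve over all x-offsets plus the offset of the best match; the
    disparity of S1023 is this offset minus the template block's own x
    position in the template strip image."""
    h, w = template.shape
    H, W = row_strip.shape
    assert H == h and W >= w
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    ncc = np.empty(W - w + 1)
    for x in range(W - w + 1):
        win = row_strip[:, x:x + w]
        v = win - win.mean()
        denom = t_norm * np.sqrt((v * v).sum())
        ncc[x] = (t * v).sum() / denom if denom > 1e-12 else 0.0
    best = int(np.argmax(ncc))  # in practice accept only if ncc[best] > 0.8
    return ncc, best
```

In a production implementation this loop would typically be replaced by an optimized primitive such as OpenCV's normalized template matching.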
In Embodiment 1 of the present invention, the maximum value in S1023 may be the maximum value that exceeds a set NCC threshold (e.g. 0.8).
In Embodiment 1 of the present invention, the template blocks in S1021 are valid template blocks, determined as follows:
Compute the texture richness of each matching block from the image gradient information obtained by Canny edge detection, and mark a block as a valid template block when its texture richness exceeds a set threshold.
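A minimal sketch of the valid-template-block test, assuming a simple finite-difference gradient in place of the Canny gradient information and a mean-magnitude texture score; the function names and the scoring formula are illustrative, not prescribed by the method:

```python
import numpy as np

def texture_richness(block):
    """Mean gradient magnitude of a block, used as a texture score; a
    finite-difference gradient stands in for the Canny gradient data."""
    gy, gx = np.gradient(block.astype(float))
    return float(np.hypot(gx, gy).mean())

def valid_template_blocks(blocks, threshold):
    """Keep only blocks whose texture score exceeds the set threshold."""
    return [texture_richness(b) > threshold for b in blocks]
```

A flat (textureless) block scores 0 and is rejected, which is exactly the "removal of textureless regions" the preprocessing step prepares for.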
S103. Filter the initial template matching result using a region-expansion-based match filtering algorithm to obtain the final matching result.
Referring to Figs. 5 and 6, in Embodiment 1 of the present invention, S103 may specifically include the following steps:
S1031. For each template block, expand bidirectionally leftward and rightward within the same row using the NCC matrices to form a candidate matching block; for each candidate matching block, construct a matching confidence M from the disparity consistency, the width of the candidate matching block and the NCC value with preset weights; sort the candidate matching blocks of each row by matching confidence M, and select the candidate matching block with the highest matching confidence as the trusted matching block of that row.
The expansion criterion is: treating each template block as a matching block, if, when the disparity of one matching block is applied to another matching block, the NCC value of the other block remains above a set threshold (0.8), the two matching blocks are merged into one.
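The horizontal expansion criterion of S1031 can be sketched as follows, assuming each block's NCC curve is indexed by disparity so that applying one block's disparity to a neighbour is a plain lookup; this is a sketch of the merge rule only, and the confidence weighting of M is omitted:

```python
import numpy as np

def expand_block(row_ncc, seed, threshold=0.8):
    """Greedily expand the template block at index `seed` left and right
    within one row. `row_ncc[j][d]` is block j's NCC score at disparity d.
    A neighbour is merged while it still scores above `threshold` at the
    seed block's disparity; block geometry is ignored in this sketch."""
    disp = int(np.argmax(row_ncc[seed]))   # disparity of the seed block
    left = right = seed
    while left - 1 >= 0 and row_ncc[left - 1][disp] > threshold:
        left -= 1
    while right + 1 < len(row_ncc) and row_ncc[right + 1][disp] > threshold:
        right += 1
    return left, right, disp               # candidate matching block span
```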
S1032. Cluster the trusted matching blocks according to their disparity consistency to obtain multiple regions, such that the x-components of the disparities of adjacent rows within a region differ by no more than a set threshold (the width of the smallest trusted matching block in our experiments); filter the regions by region size (e.g. the number of rows they contain), deleting regions smaller than a preset number of rows (e.g. 3 or 4 rows, including the trusted matching blocks in those rows); set all rows that form no region as failed rows; then cluster the trusted matching blocks according to their disparity consistency once more and update the region information.
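The row clustering and size filtering of S1032 reduce to a single pass over the rows; a minimal sketch with a hypothetical helper, where rows without a trusted matching block carry None:

```python
def cluster_rows(row_disp_x, max_dx, min_rows):
    """Group consecutive rows whose disparity x-components differ by at
    most `max_dx` into regions, then drop regions smaller than
    `min_rows`. `row_disp_x[i]` is the x disparity of row i's trusted
    matching block, or None for rows without one (failed rows)."""
    regions, current = [], []
    for i, dx in enumerate(row_disp_x):
        if dx is not None and current and abs(dx - row_disp_x[current[-1]]) <= max_dx:
            current.append(i)
        else:
            if current:
                regions.append(current)
            current = [i] if dx is not None else []
    if current:
        regions.append(current)
    return [r for r in regions if len(r) >= min_rows]
```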
S1033. Expand each region upward and downward, specifically comprising the following steps:
S10331. Determine the expansion starting rows: for each row, construct a row confidence value with preset weights from the consistency of the trusted matching block's disparity with the region average and from the trusted matching block's confidence; sort the rows by confidence value, and select a preset number of top-ranked rows (e.g. the top 10) as expansion starting rows.
S10332. Expand each expansion starting row bidirectionally, upward and downward. For each candidate match in the row to be expanded into, compute the disparity consistency measure C between the best match of the current expansion row and that candidate match; if C exceeds a set disparity-consistency threshold, update the candidate match's matching confidence M with C using a preset weight; sort all candidate matches of the row to be expanded by matching confidence M, and admit the candidate matching block with the largest M into the current expansion region. If the disparity consistency measure C between the best match of the current expansion row and every candidate match of the row to be expanded is below the set disparity-consistency threshold, stop expanding the current region. Each region can thus yield multiple candidate expansion regions.
S10333. For each candidate expansion region, construct the region's matching confidence from the average matching confidence of the candidate matching blocks it contains and the region size; assign this region matching confidence to the matching confidence M of every candidate matching block in the region, and mark all candidate matching blocks in the region as region-trusted matching blocks. For the multiple region-trusted matching blocks of each row, select the region-trusted matching block with the largest matching confidence M as the row's final trusted matching block; the disparity of that block is the row's final disparity.
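Two of the selection steps above, picking expansion starting rows (S10331) and keeping the best region-trusted block per row (S10333), reduce to simple sorts; a minimal sketch over hypothetical data structures, with the confidence weighting itself left abstract:

```python
def expansion_start_rows(row_conf, k):
    """Pick the k rows with the highest row confidence as expansion
    starting rows (S10331). `row_conf[i]` is row i's confidence value."""
    order = sorted(range(len(row_conf)), key=lambda i: row_conf[i], reverse=True)
    return order[:k]

def final_blocks(candidates):
    """For each row, keep the region-trusted block with the highest
    confidence M (S10333). `candidates[i]` is a list of (M, disparity)
    pairs; the chosen disparity is the row's final disparity."""
    return [max(c, key=lambda t: t[0])[1] if c else None for c in candidates]
```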
S1034. Execute S1032 once more.
S104. Update the mapping from the fisheye photos to the corresponding seam region of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping to obtain a seamless panorama.
Specifically, S104 updates the mapping from the fisheye photos to the corresponding seam region of the sphere model according to each row's final trusted matching block and its corresponding disparity, and performs panoramic stitching according to the updated mapping to obtain a seamless panorama.
When the panoramic image stitching method provided by Embodiment 1 of the present invention is applied to panoramic video stitching, it applies to the first frame of the panoramic video, i.e. the fisheye photos are the fisheye photos corresponding to the first frame. For intermediate frames of the panoramic video, the following steps are further performed before S1022:
S1051. Detect static regions in the template strip image, a static region being a region where the image content is still or the matching state is stable.
Detecting regions of stable matching state in the template strip image specifically comprises:
Analyzing the state queues of the template blocks, and marking as static regions the rows of template blocks whose verification success count exceeds a set threshold (e.g. 8 or 9) and whose NCC variation is below a set threshold (e.g. 0.03 or 0.05).
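The stable-matching-state test can be sketched as follows, using the example thresholds above; the representation of a state queue as (verified_ok, ncc) pairs is an assumption for illustration:

```python
def static_rows(block_states, succ_thresh=8, ncc_var_thresh=0.05):
    """Mark rows whose template block verified successfully more than
    `succ_thresh` times with NCC varying by less than `ncc_var_thresh`
    as static. `block_states[i]` is row i's state queue: a list of
    (verified_ok, ncc) pairs."""
    rows = []
    for i, queue in enumerate(block_states):
        oks = [ncc for ok, ncc in queue if ok]
        if len(oks) > succ_thresh and (max(oks) - min(oks)) < ncc_var_thresh:
            rows.append(i)
    return rows
```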
S1052. Analyze the state queue of each final trusted matching block of the previous frame, and mark as failed rows the rows containing final trusted matching blocks whose consecutive verification failures or re-matching failures exceed a set threshold (e.g. 3 or 5). A failed row does not become a re-matching row again until the next node frame arrives.
The final trusted matching block has 4 states: verification success, verification failure, re-matching success and re-matching failure.
S1053. For each final trusted matching block of the previous frame, locate its corresponding block region in the strip image to be matched according to its disparity, and compute the NCC value of the two equal-sized regions; if the NCC exceeds a set threshold (e.g. 0.8), mark the final trusted matching block as verification-successful and update its state queue; otherwise, mark it as verification-failed and update its state queue.
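The verification check of S1053 is a single NCC between two equal-sized regions; a minimal NumPy sketch (the function name is assumed):

```python
import numpy as np

def verify_block(tmpl_block, matched_block, threshold=0.8):
    """NCC between a final trusted matching block and the equal-sized
    region located by its disparity; True means verification success."""
    a = tmpl_block - tmpl_block.mean()
    b = matched_block - matched_block.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom < 1e-12:          # flat regions cannot be verified
        return False
    return float((a * b).sum() / denom) > threshold
```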
S1054. Analyze the state queue of the final trusted matching block of each row. For non-node frames, set as re-matching rows the rows whose final trusted matching block has consecutive verification failures exceeding a set threshold (e.g. 1 for non-static regions, 3 for static regions); for node frames, set all rows in non-static regions as re-matching rows. A node frame is a frame placed every n frames (e.g. every 20 or 30 frames) starting from the first frame.
Perform S1022, S1023, S103 and S104 on all re-matching rows, update the state queues of the final trusted matching blocks, mark the final trusted matching blocks contained in successfully re-matched rows as re-matching-successful, and mark the final trusted matching blocks in failed re-matching rows as re-matching-failed.
In the present invention, the fisheye photos captured by two adjacent cameras are mapped to the corresponding seam region of a sphere model to form two strip images with an overlapping region; block-based template matching is performed on the two strip images to obtain an initial template matching result; the initial template matching result is filtered by a region-expansion-based match filtering algorithm to obtain the final matching result; and the mapping from the fisheye photos to the corresponding seam region of the sphere model is updated according to the final matching result, with panoramic stitching performed under the updated mapping to obtain a seamless panorama. The method is therefore efficient enough to meet the needs of real-time panoramic stitching on mobile devices; its feature matching results are accurate and stable, achieving good seamless stitching; and, when applied to video, the matching is stable and robust, adapting well to scenes that alternate between dynamic and static content and between distant and close views.
Moreover, the region-expansion-based match filtering algorithm expands each template block bidirectionally leftward and rightward within its row using the NCC matrices, clusters the trusted matching blocks row-wise by disparity consistency into multiple regions, filters the regions by size, expands each region upward and downward, and then performs the row-wise clustering and size filtering once more, greatly improving the accuracy and efficiency of match filtering.
In addition, a verification-based dynamic video-frame matching mechanism is used: for the first frame of a video, block-based template matching and match filtering are applied to the whole strip image; for intermediate frames, the re-matching rows are updated dynamically through match verification and state queues, block-based template matching and match filtering are applied only to the re-matching rows, and static-region detection and failed-row marking are performed. This mechanism reduces matching fluctuation between adjacent frames, improves matching stability and increases the running efficiency of the algorithm.
Embodiment 2:
Referring to Fig. 2, Embodiment 2 of the present invention provides a panoramic image stitching method that performs the following steps on two fisheye photos having an overlapping image region:
S201. Map the two fisheye photos to the corresponding seam region of a sphere model to form two strip images with an overlapping region.
S202. Perform block-based template matching on the two strip images to obtain an initial template matching result.
In Embodiment 2 of the present invention, the following step may further be performed before S202:
Apply Gaussian blur to the two strip images to reduce photo noise and improve matching accuracy.
Referring to Figs. 4 and 6, in Embodiment 2 of the present invention, S202 may specifically include the following steps:
S2021. Select either of the two strip images as the template strip image and the other as the strip image to be matched; divide the template strip image into a block matrix of M rows and N columns, each block of the matrix serving as a template block, with adjacent rows of the matrix overlapping and the matrix covering the entire template strip image; divide the strip image to be matched into M rows of block regions, with adjacent rows overlapping; M and N are integers greater than 1.
S2022. Perform template matching for each template block in the strip image to be matched, the matching region being the entire row of the strip image to be matched corresponding to the row of the template block; each template block yields an NCC (normalized cross-correlation) matrix after template matching, giving M*N NCC matrices in total.
S2023. Find the maximum value in each NCC matrix; from the position of the maximum in the NCC matrix, compute the center position of the region in the strip image to be matched corresponding to the template block; then, from the known center position of the template block in the template strip image, compute the disparity of the template block; the disparity of every template block is thus obtained as the initial template matching result.
In Embodiment 2 of the present invention, the maximum value in S2023 may be the maximum value that exceeds a set NCC threshold (e.g. 0.8).
In Embodiment 2 of the present invention, the template blocks in S2021 are valid template blocks, determined as follows:
Compute the texture richness of each matching block from the image gradient information obtained by Canny edge detection, and mark a block as a valid template block when its texture richness exceeds a set threshold.
S203. Filter the initial template matching result using a region-expansion-based match filtering algorithm to obtain the final matching result.
Referring to Figs. 5 and 6, in Embodiment 2 of the present invention, S203 may specifically include the following steps:
S2031. For each template block, expand bidirectionally leftward and rightward within the same row using the NCC matrices to form a candidate matching block; for each candidate matching block, construct a matching confidence M from the disparity consistency, the width of the candidate matching block and the NCC value with preset weights; sort the candidate matching blocks of each row by matching confidence M, and select the candidate matching block with the highest matching confidence as the trusted matching block of that row.
The expansion criterion is: treating each template block as a matching block, if, when the disparity of one matching block is applied to another matching block, the NCC value of the other block remains above a set threshold (0.8), the two matching blocks are merged into one.
S2032. Cluster the trusted matching blocks according to their disparity consistency to obtain multiple regions, such that the x-components of the disparities of adjacent rows within a region differ by no more than a set threshold (the width of the smallest trusted matching block in our experiments); filter the regions by region size (e.g. the number of rows they contain), deleting regions smaller than a preset number of rows (e.g. 3 or 4 rows, including the trusted matching blocks in those rows); set all rows that form no region as failed rows; then cluster the trusted matching blocks according to their disparity consistency once more and update the region information.
S2033. Expand each region upward and downward, specifically comprising the following steps:
S20331. Determine the expansion starting rows: for each row, construct a row confidence value with preset weights from the consistency of the trusted matching block's disparity with the region average and from the trusted matching block's confidence; sort the rows by confidence value, and select a preset number of top-ranked rows (e.g. the top 10) as expansion starting rows.
S20332. Expand each expansion starting row bidirectionally, upward and downward. For each candidate match in the row to be expanded into, compute the disparity consistency measure C between the best match of the current expansion row and that candidate match; if C exceeds a set disparity-consistency threshold, update the candidate match's matching confidence M with C using a preset weight; sort all candidate matches of the row to be expanded by matching confidence M, and admit the candidate matching block with the largest M into the current expansion region. If the disparity consistency measure C between the best match of the current expansion row and every candidate match of the row to be expanded is below the set disparity-consistency threshold, stop expanding the current region. Each region can thus yield multiple candidate expansion regions.
S20333. For each candidate expansion region, construct the region's matching confidence from the average matching confidence of the candidate matching blocks it contains and the region size; assign this region matching confidence to the matching confidence M of every candidate matching block in the region, and mark all candidate matching blocks in the region as region-trusted matching blocks. For the multiple region-trusted matching blocks of each row, select the region-trusted matching block with the largest matching confidence M as the row's final trusted matching block; the disparity of that block is the row's final disparity.
S2034. Execute S2032 once more.
S204. Update the mapping from the fisheye photos to the corresponding seam region of the sphere model according to the final matching result, and perform panoramic stitching according to the updated mapping to obtain a seamless panorama.
Specifically, S204 updates the mapping from the fisheye photos to the corresponding seam region of the sphere model according to each row's final trusted matching block and its corresponding disparity, and performs panoramic stitching according to the updated mapping to obtain a seamless panorama.
Embodiment 3:
Embodiment 3 of the present invention provides a panoramic video stitching method, wherein the first frame of a panoramic video is stitched by any one of the panoramic image stitching methods of Embodiment 2.
Embodiment 4:
Embodiment 4 of the present invention provides a panoramic video stitching method, wherein intermediate frames of a panoramic video are stitched by any one of the panoramic image stitching methods of Embodiment 2, and the following steps are further performed before S2022:
S2051. Detect static regions in the template strip image, a static region being a region where the image content is still or the matching state is stable.
Detecting regions of stable matching state in the template strip image specifically comprises:
Analyzing the state queues of the template blocks, and marking as static regions the rows of template blocks whose verification success count exceeds a set threshold (e.g. 8 or 9) and whose NCC variation is below a set threshold (e.g. 0.03 or 0.05).
S2052. Analyze the state queue of each final trusted matching block of the previous frame, and mark as failed rows the rows containing final trusted matching blocks whose consecutive verification failures or re-matching failures exceed a set threshold (e.g. 3 or 5). A failed row does not become a re-matching row again until the next node frame arrives.
The final trusted matching block has 4 states: verification success, verification failure, re-matching success and re-matching failure.
S2053. For each final trusted matching block of the previous frame, locate its corresponding block region in the strip image to be matched according to its disparity, and compute the NCC value of the two equal-sized regions; if the NCC exceeds a set threshold (e.g. 0.8), mark the final trusted matching block as verification-successful and update its state queue; otherwise, mark it as verification-failed and update its state queue.
S2054. Analyze the state queue of the final trusted matching block of each row. For non-node frames, set as re-matching rows the rows whose final trusted matching block has consecutive verification failures exceeding a set threshold (e.g. 1 for non-static regions, 3 for static regions); for node frames, set all rows in non-static regions as re-matching rows. A node frame is a frame placed every n frames (e.g. every 20 or 30 frames) starting from the first frame.
Perform S2022, S2023, S203 and S204 on all re-matching rows, update the state queues of the final trusted matching blocks, mark the final trusted matching blocks contained in successfully re-matched rows as re-matching-successful, and mark the final trusted matching blocks in failed re-matching rows as re-matching-failed.
In the present invention, the two fisheye photos are mapped to the corresponding seam region of a sphere model to form two strip images with an overlapping region; block-based template matching is performed on the two strip images to obtain an initial template matching result; the initial template matching result is filtered by a region-expansion-based match filtering algorithm to obtain the final matching result; and the mapping from the fisheye photos to the corresponding seam region of the sphere model is updated according to the final matching result, with panoramic stitching performed under the updated mapping to obtain a seamless panorama. The method is therefore efficient enough to meet the needs of real-time panoramic stitching on mobile devices; its feature matching results are accurate and stable, achieving good seamless stitching; and, when applied to video, the matching is stable and robust, adapting well to scenes that alternate between dynamic and static content and between distant and close views.
Moreover, the region-expansion-based match filtering algorithm expands each template block bidirectionally leftward and rightward within its row using the NCC matrices, clusters the trusted matching blocks row-wise by disparity consistency into multiple regions, filters the regions by size, expands each region upward and downward, and then performs the row-wise clustering and size filtering once more, greatly improving the accuracy and efficiency of match filtering.
In addition, a verification-based dynamic video-frame matching mechanism is used: for the first frame of a video, block-based template matching and match filtering are applied to the whole strip image; for intermediate frames, the re-matching rows are updated dynamically through match verification and state queues, block-based template matching and match filtering are applied only to the re-matching rows, and static-region detection and failed-row marking are performed. This mechanism reduces matching fluctuation between adjacent frames, improves matching stability and increases the running efficiency of the algorithm.
Embodiment 5:
Embodiment 5 of the present invention provides a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the panoramic image stitching method of either Embodiment 1 or Embodiment 2; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
Embodiment 6:
Embodiment 6 of the present invention provides a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the panoramic video stitching method of either Embodiment 3 or Embodiment 4; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
Embodiment 7:
Fig. 7 shows a structural block diagram of the portable terminal provided by Embodiment 5 of the present invention. A portable terminal 100 comprises: one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processors 101, when executing the computer programs, implement the steps of the panoramic image stitching method provided by Embodiment 1 or Embodiment 2 of the present invention.
Embodiment 8:
Fig. 7 shows a structural block diagram of the portable terminal provided by Embodiment 6 of the present invention. A portable terminal 100 comprises: one or more processors 101, a memory 102, and one or more computer programs, wherein the processors 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processors 101, when executing the computer programs, implement the steps of the panoramic video stitching method provided by Embodiment 3 or Embodiment 4 of the present invention.
Those of ordinary skill in the art will understand that all or part of the steps of the methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (25)

  1. A panoramic image stitching method, characterized in that, for fisheye photos captured by a panoramic camera composed of multiple cameras, the following steps are performed on the fisheye photos captured by every two adjacent cameras:
    S101. mapping the fisheye photos captured by the two adjacent cameras to the corresponding seam region of a sphere model to form two strip images with an overlapping region;
    S102. performing block-based template matching on the two strip images to obtain an initial template matching result;
    S103. filtering the initial template matching result using a region-expansion-based match filtering algorithm to obtain a final matching result;
    S104. updating the mapping from the fisheye photos to the corresponding seam region of the sphere model according to the final matching result, and performing panoramic stitching according to the updated mapping to obtain a seamless panorama.
  2. The method of claim 1, characterized in that, before S102, the method further comprises:
    applying Gaussian blur to the two strip images.
  3. The method of claim 1 or 2, characterized in that S102 specifically comprises the following steps:
    S1021. selecting either of the two strip images as the template strip image and the other as the strip image to be matched; dividing the template strip image into a block matrix of M rows and N columns, each block of the matrix serving as a template block, adjacent rows of the matrix overlapping and the matrix covering the entire template strip image; dividing the strip image to be matched into M rows of block regions, adjacent rows of the block regions overlapping; M and N being integers greater than 1;
    S1022. performing template matching for each template block in the strip image to be matched, the matching region being the entire row of the strip image to be matched corresponding to the row of the template block, each template block yielding an NCC matrix after template matching, giving M*N NCC matrices in total;
    S1023. finding the maximum value in each NCC matrix; computing, from the position of the maximum in the NCC matrix, the center position of the region in the strip image to be matched corresponding to the template block; then computing the disparity of the template block from the known center position of the template block in the template strip image; the disparity of every template block thus being obtained as the initial template matching result.
  4. The method of claim 3, characterized in that the template blocks in S1021 are valid template blocks, determined as follows:
    computing the texture richness of each matching block from the image gradient information obtained by Canny edge detection, and marking a block as a valid template block when its texture richness exceeds a set threshold.
  5. The method of claim 3, characterized in that S103 specifically comprises the following steps:
    S1031. for each template block, expanding bidirectionally leftward and rightward within the same row using the NCC matrices to form a candidate matching block; for each candidate matching block, constructing a matching confidence M from the disparity consistency, the width of the candidate matching block and the NCC value with preset weights; sorting the candidate matching blocks of each row by matching confidence M, and selecting the candidate matching block with the highest matching confidence as the trusted matching block of that row;
    S1032. clustering the trusted matching blocks according to their disparity consistency to obtain multiple regions, such that the x-components of the disparities of adjacent rows within a region differ by no more than a set threshold; filtering the regions by region size and deleting regions smaller than a preset number of rows; setting all rows that form no region as failed rows; then clustering the trusted matching blocks according to their disparity consistency once more to update the region information;
    S1033. expanding each region upward and downward;
    S1034. executing S1032 once more.
  6. The method of claim 5, characterized in that S1033 specifically comprises the following steps:
    S10331. determining expansion starting rows: for each row, constructing a row confidence value with preset weights from the consistency of the trusted matching block's disparity with the region average and from the trusted matching block's confidence; sorting the rows by confidence value, and selecting a preset number of top-ranked rows as expansion starting rows;
    S10332. expanding each expansion starting row bidirectionally upward and downward; for each candidate match in the row to be expanded into, computing the disparity consistency measure between the best match of the current expansion row and that candidate match; if the disparity consistency measure exceeds a set disparity-consistency threshold, updating the candidate match's matching confidence with the disparity consistency measure using a preset weight; sorting all candidate matches of the row to be expanded by matching confidence, and admitting the candidate matching block with the largest matching confidence into the current expansion region; if the disparity consistency measure between the best match of the current expansion row and every candidate match of the row to be expanded is below the set disparity-consistency threshold, stopping the expansion of the current region, whereby each region can yield multiple candidate expansion regions;
    S10333. for each candidate expansion region, constructing the region's matching confidence from the average matching confidence of the candidate matching blocks it contains and the region size; assigning this region matching confidence to the matching confidence of every candidate matching block in the region, and marking all candidate matching blocks in the region as region-trusted matching blocks; for the multiple region-trusted matching blocks of each row, selecting the region-trusted matching block with the largest matching confidence as the row's final trusted matching block, the disparity of which is the row's final disparity.
  7. The method of claim 6, characterized in that S104 specifically comprises:
    updating the mapping from the fisheye photos to the corresponding seam region of the sphere model according to each row's final trusted matching block and its corresponding disparity, and performing panoramic stitching according to the updated mapping to obtain a seamless panorama.
  8. The method of claim 7, characterized in that, when the panoramic image stitching method is applied to panoramic video stitching, the method applies to the first frame of the panoramic video, i.e. the fisheye photos are the fisheye photos corresponding to the first frame of the panoramic video; for intermediate frames of the panoramic video, before S1022, the method further comprises:
    S1051. detecting static regions in the template strip image, a static region being a region where the image content is still or the matching state is stable;
    S1052. analyzing the state queue of each final trusted matching block of the previous frame, and marking as failed rows the rows containing final trusted matching blocks whose consecutive verification failures or re-matching failures exceed a set threshold;
    S1053. for each final trusted matching block of the previous frame, locating its corresponding block region in the strip image to be matched according to its disparity, and computing the NCC value of the two equal-sized regions; if the NCC exceeds a set threshold, marking the final trusted matching block as verification-successful and updating its state queue; otherwise, marking it as verification-failed and updating its state queue;
    S1054. analyzing the state queue of the final trusted matching block of each row; for non-node frames, setting as re-matching rows the rows whose final trusted matching block has consecutive verification failures exceeding a set threshold; for node frames, setting all rows in non-static regions as re-matching rows;
    performing S1022, S1023, S103 and S104 on all re-matching rows, updating the state queues of the final trusted matching blocks, marking the final trusted matching blocks contained in successfully re-matched rows as re-matching-successful, and marking the final trusted matching blocks in failed re-matching rows as re-matching-failed.
  9. The method of claim 8, characterized in that detecting regions of stable matching state in the template strip image specifically comprises:
    analyzing the state queues of the template blocks, and marking as static regions the rows of template blocks whose verification success count exceeds a set threshold and whose NCC variation is below a set threshold.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the panoramic image stitching method of any one of claims 1 to 9; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  11. A panoramic camera, comprising: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processors, when executing the computer programs, implement the steps of the panoramic image stitching method of any one of claims 1 to 9.
  12. A panoramic image stitching method, characterized in that the following steps are performed on two fisheye photos having an overlapping image region:
    S201. mapping the two fisheye photos to the corresponding seam region of a sphere model to form two strip images with an overlapping region;
    S202. performing block-based template matching on the two strip images to obtain an initial template matching result;
    S203. filtering the initial template matching result using a region-expansion-based match filtering algorithm to obtain a final matching result;
    S204. updating the mapping from the fisheye photos to the corresponding seam region of the sphere model according to the final matching result, and performing panoramic stitching according to the updated mapping to obtain a seamless panorama.
  13. The method of claim 12, characterized in that, before step S202, the method further comprises:
    applying Gaussian blur to the two strip images.
  14. The method of claim 12, characterized in that S202 specifically comprises the following steps:
    S2021. selecting either of the two strip images as the template strip image and the other as the strip image to be matched; dividing the template strip image into a block matrix of M rows and N columns, each block of the matrix serving as a template block, adjacent rows of the matrix overlapping and the matrix covering the entire template strip image; dividing the strip image to be matched into M rows of block regions, adjacent rows of the block regions overlapping; M and N being integers greater than 1;
    S2022. performing template matching for each template block in the strip image to be matched, the matching region being the entire row of the strip image to be matched corresponding to the row of the template block, each template block yielding an NCC matrix after template matching, giving M*N NCC matrices in total;
    S2023. finding the maximum value in each NCC matrix; computing, from the position of the maximum in the NCC matrix, the center position of the region in the strip image to be matched corresponding to the template block; then computing the disparity of the template block from the known center position of the template block in the template strip image; the disparity of every template block thus being obtained as the initial template matching result.
  15. The method of claim 14, characterized in that the template blocks in S2021 are valid template blocks, determined as follows:
    computing the texture richness of each matching block from the image gradient information obtained by Canny edge detection, and marking a block as a valid template block when its texture richness exceeds a set threshold.
  16. The method of claim 14, characterized in that S203 specifically comprises the following steps:
    S2031. for each template block, expanding bidirectionally leftward and rightward within the same row using the NCC matrices to form a candidate matching block; for each candidate matching block, constructing a matching confidence M from the disparity consistency, the width of the candidate matching block and the NCC value with preset weights; sorting the candidate matching blocks of each row by matching confidence M, and selecting the candidate matching block with the highest matching confidence as the trusted matching block of that row;
    S2032. clustering the trusted matching blocks according to their disparity consistency to obtain multiple regions, such that the x-components of the disparities of adjacent rows within a region differ by no more than a set threshold; filtering the regions by region size and deleting regions smaller than a preset number of rows; setting all rows that form no region as failed rows; then clustering the trusted matching blocks according to their disparity consistency once more to update the region information;
    S2033. expanding each region upward and downward;
    S2034. executing S2032 once more.
  17. The method of claim 16, characterized in that S2033 specifically comprises the following steps:
    S20331. determining expansion starting rows: for each row, constructing a row confidence value with preset weights from the consistency of the trusted matching block's disparity with the region average and from the trusted matching block's confidence; sorting the rows by confidence value, and selecting a preset number of top-ranked rows as expansion starting rows;
    S20332. expanding each expansion starting row bidirectionally upward and downward; for each candidate match in the row to be expanded into, computing the disparity consistency measure between the best match of the current expansion row and that candidate match; if the disparity consistency measure exceeds a set disparity-consistency threshold, updating the candidate match's matching confidence with the disparity consistency measure using a preset weight; sorting all candidate matches of the row to be expanded by matching confidence, and admitting the candidate matching block with the largest matching confidence into the current expansion region; if the disparity consistency measure between the best match of the current expansion row and every candidate match of the row to be expanded is below the set disparity-consistency threshold, stopping the expansion of the current region, whereby each region can yield multiple candidate expansion regions;
    S20333. for each candidate expansion region, constructing the region's matching confidence from the average matching confidence of the candidate matching blocks it contains and the region size; assigning this region matching confidence to the matching confidence of every candidate matching block in the region, and marking all candidate matching blocks in the region as region-trusted matching blocks; for the multiple region-trusted matching blocks of each row, selecting the region-trusted matching block with the largest matching confidence as the row's final trusted matching block, the disparity of which is the row's final disparity.
  18. The method of claim 17, characterized in that S204 specifically comprises:
    updating the mapping from the fisheye photos to the corresponding seam region of the sphere model according to each row's final trusted matching block and its corresponding disparity, and performing panoramic stitching according to the updated mapping to obtain a seamless panorama.
  19. A panoramic video stitching method, characterized in that the first frame of a panoramic video is stitched by the panoramic image stitching method of any one of claims 12 to 18.
  20. A panoramic video stitching method, characterized in that intermediate frames of a panoramic video are stitched by the panoramic image stitching method of any one of claims 12 to 18, and that, before S2022, the method further comprises the following steps:
    S2051. detecting static regions in the template strip image, a static region being a region where the image content is still or the matching state is stable;
    S2052. analyzing the state queue of each final trusted matching block of the previous frame, and marking as failed rows the rows containing final trusted matching blocks whose consecutive verification failures or re-matching failures exceed a set threshold;
    S2053. for each final trusted matching block of the previous frame, locating its corresponding block region in the strip image to be matched according to its disparity, and computing the NCC value of the two equal-sized regions; if the NCC exceeds a set threshold, marking the final trusted matching block as verification-successful and updating its state queue; otherwise, marking it as verification-failed and updating its state queue;
    S2054. analyzing the state queue of the final trusted matching block of each row; for non-node frames, setting as re-matching rows the rows whose final trusted matching block has consecutive verification failures exceeding a set threshold; for node frames, setting all rows in non-static regions as re-matching rows;
    performing S2022, S2023, S203 and S204 on all re-matching rows, updating the state queues of the final trusted matching blocks, marking the final trusted matching blocks contained in successfully re-matched rows as re-matching-successful, and marking the final trusted matching blocks in failed re-matching rows as re-matching-failed.
  21. The method of claim 20, characterized in that detecting regions of stable matching state in the template strip image specifically comprises:
    analyzing the state queues of the template blocks, and marking as static regions the rows of template blocks whose verification success count exceeds a set threshold and whose NCC variation is below a set threshold.
  22. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the panoramic image stitching method of any one of claims 12 to 18; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  23. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the panoramic video stitching method of any one of claims 19 to 21; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  24. A panoramic camera, comprising: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processors, when executing the computer programs, implement the steps of the panoramic image stitching method of any one of claims 12 to 18.
  25. A panoramic camera, comprising: one or more processors; a memory; and one or more computer programs, the processors and the memory being connected by a bus, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processors, when executing the computer programs, implement the steps of the panoramic video stitching method of any one of claims 19 to 21.
PCT/CN2020/092344 2019-05-30 2020-05-26 Panoramic image and video stitching method, computer-readable storage medium and panoramic camera WO2020238897A1 (zh)

Priority Applications (3)

- JP2021570386A (priority 2019-05-30, filed 2020-05-26): Panoramic image and video synthesis method, computer-readable recording medium, and panoramic camera
- US17/615,571 (priority 2019-05-30, filed 2020-05-26): Panoramic image and video splicing method, computer-readable storage medium, and panoramic camera
- EP20814063.2A (priority 2019-05-30, filed 2020-05-26): Panoramic image and video splicing method, computer readable storage media and panoramic camera

Applications Claiming Priority (1)

- CN201910464435.4A (priority 2019-05-30, filed 2019-05-30): Panoramic image stitching method, computer-readable storage medium and panoramic camera






Also Published As

- US20220237736A1 (2022-07-28)
- CN110189256B (2023-05-02)
- JP2022534262A (2022-07-28)
- JP7350893B2 (2023-09-26)
- CN110189256A (2019-08-30)
- EP3982322A1 (2022-04-13)
- EP3982322A4 (2023-03-22)

