WO2018214505A1 - Stereo matching method and system - Google Patents

Stereo matching method and system

Info

Publication number
WO2018214505A1
WO2018214505A1 (PCT/CN2017/120340, CN2017120340W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
parallax
cost
pixel
Prior art date
Application number
PCT/CN2017/120340
Other languages
English (en)
Chinese (zh)
Inventor
龙学军
周剑
Original Assignee
成都通甲优博科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都通甲优博科技有限责任公司 filed Critical 成都通甲优博科技有限责任公司
Publication of WO2018214505A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20228 Disparity calculation for image-based rendering

Definitions

  • the present invention relates to the field of computer vision and image processing, and more particularly to a stereo matching method and system.
  • Stereo matching is an important part of the field of computer vision and a core part of many 3D applications.
  • stereo matching methods fall into three main types: local matching, global matching, and semi-global matching.
  • the local matching method has low complexity and a small computational load, but its matching quality is poor;
  • the global matching method can achieve very good matching quality, but its complexity is too high for real-time processing;
  • the matching quality of the semi-global matching method lies between the local and global algorithms, but its computational load is still large, making it difficult to apply where real-time requirements are high. Therefore, weighing matching quality against real-time performance, many applications choose a compromise algorithm.
  • the object of the present invention is to provide a stereo matching method and system that can effectively solve the mismatching problem in weak-texture regions, achieve good matching results in outdoor scenes, and at the same time require little computation and offer good real-time performance.
  • the present invention provides a stereo matching method, the method comprising:
  • Step S10 acquiring a reference image and a target image, and determining a parallax area of the aggregation area;
  • Step S11 dividing the reference image and the target image into N*N regions, and obtaining a reference tile image and a target tile image;
  • Step S12 Convert the reference image into an aggregated reference image by using a block transform algorithm according to the reference block image and the target block image, and transform the target image into an aggregate target image;
  • Step S13 respectively calculating the CENSUS feature of the aggregated reference image and the aggregated target image, and performing stereo matching, and calculating a matching value of the CENSUS feature of the aggregated reference image and the CENSUS feature of the aggregated target image;
  • Step S14 respectively calculating the regional cost aggregation of each reference block image and the target block image at the corresponding same block position;
  • Step S15 comparing the matching cost value with each regional cost aggregation; if the matching cost value is greater than any regional cost aggregation, re-segmenting the reference block image and target block image corresponding to each regional cost aggregation that is smaller than the matching cost value, and recalculating the regional cost aggregation of the re-segmented blocks, until the matching cost value is smaller than every regional cost aggregation;
  • Step S16 when the matching cost value is smaller than every regional cost aggregation, calculating the cost aggregation of each pixel in each reference block image and the corresponding target block image, and obtaining the first parallax image by the WTA method.
  • step S12 includes:
  • each pixel point in the first empty image corresponds one-to-one with a pixel point of the reference image, and each pixel point in the second empty image corresponds one-to-one with a pixel point of the target image;
  • re-segmenting the reference block image and the target block image corresponding to each regional cost aggregation that is smaller than the matching cost value includes:
  • determining, for each reference block image and each target block image corresponding to a regional cost aggregation smaller than the matching cost value, the pixel average value and the intermediate pixel value;
  • when the pixel average value of the reference block image or the target block image is smaller than the intermediate pixel value, expanding the region according to the estimated parallax; when the pixel average value is greater than the intermediate pixel value, reducing the region according to the estimated parallax.
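As a hedged sketch, the average-versus-median decision described above might look like the following; the function name and the returned labels are illustrative assumptions, since the patent gives no code:

```python
import numpy as np

def resegment_decision(block):
    """Decide how to adjust a block's aggregation region.

    Per the rule above: if the block's pixel average is below its
    intermediate (median) pixel value, the region is expanded by the
    estimated parallax; if it is above, the region is reduced.
    """
    avg = block.mean()
    median = np.median(block)
    if avg < median:
        return "expand"
    elif avg > median:
        return "reduce"
    return "keep"  # equal values: leave the region unchanged (assumption)

# Example: a block whose single bright pixel pulls the mean above the median
bright_corner = np.array([[0, 0], [0, 255]], dtype=np.float64)
decision = resegment_decision(bright_corner)
```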
  • calculating, when the matching cost value is smaller than every regional cost aggregation, the cost aggregation of each pixel in each reference block image and the corresponding target block image, includes:
  • the undirected graph is processed by the Boruvka algorithm to obtain a minimum spanning tree
  • the method also includes:
  • the parallax refinement is performed using the first parallax image and the second parallax image to obtain a final parallax image.
  • performing parallax refinement by using the first parallax image and the second parallax image to obtain a final parallax image including:
  • Step S12 is performed to obtain a final parallax image.
  • the invention also provides a stereo matching system comprising:
  • An obtaining module configured to acquire a reference image and a target image, and determine a parallax area of the aggregation area
  • a dividing module configured to divide the reference image and the target image into N*N regions to obtain a reference blocking image and a target blocking image
  • An aggregation module configured to convert the reference image into an aggregated reference image by using a block transform algorithm according to the reference block image and the target block image, and convert the target image into an aggregate target image;
  • a matching cost calculation module configured to respectively calculate the CENSUS features of the aggregated reference image and the aggregated target image, perform stereo matching, and calculate a matching cost value of the CENSUS feature of the aggregated reference image and the CENSUS feature of the aggregated target image;
  • An area cost aggregation module configured to separately calculate an area cost aggregation of each of the reference block images and the target block image of the corresponding same block position
  • a matching module configured to compare the matching cost value with each regional cost aggregation and, if the matching cost value is greater than any regional cost aggregation, re-segment the reference block image and target block image corresponding to each regional cost aggregation smaller than the matching cost value and recalculate the regional cost aggregation, until the matching cost value is smaller than every regional cost aggregation;
  • a parallax image obtaining module configured to calculate, when the matching cost value is smaller than every regional cost aggregation, the cost aggregation of each pixel in each reference block image and the corresponding target block image, and obtain a first parallax image by the WTA method.
  • the aggregation module includes:
  • An empty image creating unit configured to create a first empty image and a second empty image, wherein each pixel in the first empty image has a one-to-one correspondence with pixel points of the reference image, and the second empty image Each pixel point has a one-to-one correspondence with pixel points of the target image;
  • an aggregated reference image unit configured to calculate a sum of pixel values of each of the reference block images, and assign the value to a corresponding pixel in the first empty image to obtain an aggregated reference image
  • the aggregation target image unit is configured to calculate a sum of pixel values of each of the target tile images, and assign the values to the corresponding pixel points in the second null image to obtain an aggregation target image.
  • the matching module includes:
  • a determining unit configured to determine, for each reference block image and each target block image corresponding to a regional cost aggregation smaller than the matching cost value, the pixel average value and the intermediate pixel value;
  • a re-dividing unit configured to expand the region according to the estimated parallax when the pixel average of the reference block image or the target block image is smaller than the intermediate pixel value, and to reduce the region according to the estimated parallax when the pixel average is greater than the intermediate pixel value.
  • the system also includes:
  • a parallax refinement module configured to use the reference image as a new target image, use the target image as a new reference image, and calculate a second parallax image according to the new target image and the new reference image And performing parallax refinement using the first parallax image and the second parallax image to obtain a final parallax image.
  • the method matches on the aggregated images, which speeds up matching; on that basis, the pixels of the matched blocks are matched precisely, so the matching precision is high, the computation is small, and mismatching in weak-texture regions is resolved.
  • the present invention also provides a stereo matching system, which has the above-mentioned beneficial effects, and details are not described herein again.
  • FIG. 1 is a flowchart of a stereo matching method according to an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a stereo matching system according to an embodiment of the present invention.
  • the core of the invention is to provide a stereo matching method and system, which can effectively solve the problem of mismatching of weak texture regions, and can also achieve better matching effects in outdoor scenes, and at the same time, the calculation amount is small and the real-time performance is good.
  • FIG. 1 is a flowchart of a stereo matching method according to an embodiment of the present invention; the method may include:
  • Step S10 acquiring a reference image and a target image, and determining a parallax area of the aggregation area;
  • the reference image and the target image are distortion-corrected images.
  • the dense stereo matching algorithm mainly focuses on the matching problem. Therefore, the pixels in the two images need to be rectified to the same horizontal line before matching, that is, the reference image and the target image are distortion-corrected images, thereby ensuring matching accuracy and reliability.
  • the left image may serve as the reference image with the right image as the target image, or the right image may serve as the reference image with the left image as the target image; that is, this embodiment does not limit the specific choice of reference and target images. Usually, the left image is taken as the reference image and the right image as the target image.
  • the determined parallax area of the aggregation area can be expressed accordingly.
  • Step S11 dividing the reference image and the target image into N*N regions, and obtaining the reference tile image and the target tile image;
  • Step S12 Convert the reference image into an aggregated reference image by using a block transform algorithm according to the reference block image and the target block image, and convert the target image into an aggregate target image;
  • the reference image is transformed into an aggregated reference image based on the block
  • the target image is transformed into the aggregated target image based on the block.
  • the sum of the pixel values of each target block image is calculated and assigned to the corresponding pixel in the second empty image to obtain an aggregated target image.
  • two empty images are created: a first empty image and a second empty image, whose sizes are consistent with the reference image and the target image, for example N*N; this embodiment does not limit the specific size of the images.
  • Each pixel of the first empty image has a one-to-one correspondence with the pixel points in the reference image
  • each pixel of the second empty image has a one-to-one correspondence with the pixel points in the target image.
  • the above way of obtaining the aggregated reference image and the aggregated target image, that is, using the sum of the pixels as the image aggregation index, is simple and convenient, requires little computation, and can increase the speed of stereo matching.
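The block-sum aggregation described above (sum each block's pixel values and assign that sum to every corresponding pixel of an empty image) can be sketched as follows; the function name and the assumption that the image dimensions divide evenly by the block size are mine, not the patent's:

```python
import numpy as np

def aggregate_by_block_sum(image, n):
    """Replace every pixel with the sum of the pixel values of its
    n*n block, mirroring the empty-image assignment described above.
    Image dimensions are assumed divisible by n for brevity."""
    h, w = image.shape
    out = np.empty_like(image, dtype=np.int64)  # the "empty image"
    for by in range(0, h, n):
        for bx in range(0, w, n):
            block = image[by:by + n, bx:bx + n]
            out[by:by + n, bx:bx + n] = block.sum()
    return out

# 4x4 image split into 2x2 blocks: each block collapses to its pixel sum
img = np.arange(16).reshape(4, 4)
agg = aggregate_by_block_sum(img, 2)
```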
  • Step S13 respectively calculating the CENSUS feature of the aggregated reference image and the aggregated target image, and performing stereo matching, and calculating a matching value of the CENSUS feature of the aggregated reference image and the CENSUS feature of the aggregated target image;
  • this embodiment does not limit the manner in which the matching cost value is obtained. Since the weighted CENSUS feature is more efficient than a general colour-gradient feature, this embodiment can use the CENSUS feature to compute the features of the aggregated images (the aggregated reference image and the aggregated target image).
  • the step may specifically be to calculate the CENSUS features of the aggregated reference image and the aggregated target image, perform stereo matching on the aggregated reference image's CENSUS feature and the aggregated target image's CENSUS feature, and calculate the matching cost C_C of the two features.
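A minimal sketch of a CENSUS transform with a Hamming-distance matching cost, assuming the common 3x3 window (the patent does not specify the window size, the bit layout, or the weighting, so these are illustrative choices):

```python
import numpy as np

def census_3x3(image):
    """3x3 CENSUS transform: for each interior pixel, build an 8-bit
    code whose bits record whether each neighbour is darker than the
    window centre."""
    h, w = image.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for dy, dx in offsets:
                code = (code << 1) | int(image[y + dy, x + dx] < image[y, x])
            codes[y, x] = code
    return codes

def hamming_cost(code_a, code_b):
    """Matching cost between two CENSUS codes: number of differing bits."""
    return bin(int(code_a) ^ int(code_b)).count("1")

# Identical patches should produce a zero matching cost
left = np.array([[1, 2, 3], [4, 9, 6], [7, 8, 5]])
right = left.copy()
cost = hamming_cost(census_3x3(left)[1, 1], census_3x3(right)[1, 1])
```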
  • Step S14 respectively calculating the regional cost aggregation of each reference block image and the corresponding target block image at the same block position;
  • Step S15 comparing the matching cost value with each regional cost aggregation; if the matching cost value is greater than any regional cost aggregation, re-segmenting the corresponding reference block image and target block image and recalculating the regional cost aggregation of the re-segmented blocks, until the matching cost value is smaller than every regional cost aggregation;
  • when the matching cost value is smaller than a regional cost aggregation, the aggregation-cost matching of that region is considered complete (i.e., the match between that reference block image and the corresponding target block image ends).
  • each reference block image and the corresponding target block image at the same block position are compared; if the matching cost C_C is greater than the regional cost aggregation corresponding to the reference block image and the target block image,
  • the reference block image and the target block image are re-segmented until the regional cost aggregation corresponding to them is greater than the matching cost C_C.
  • the re-segmentation rule can be as follows:
  • when the pixel average value of the reference block image or the target block image that needs re-segmentation is smaller than the intermediate pixel value, the region is expanded according to the estimated parallax;
  • when the pixel average value is greater than the intermediate pixel value, the region is reduced according to the estimated parallax.
  • Step S16 when the matching cost value is smaller than every regional cost aggregation, calculating the cost aggregation of each pixel in each reference block image and the corresponding target block image, and obtaining the first parallax image by the WTA method.
  • this embodiment does not limit the specific manner in which the cost aggregation of each pixel is computed.
  • the details can be as follows:
  • an estimated dense match is performed between the matching blocks of the reference image and the target image, that is, a one-to-one pixel correspondence between the two blocks is estimated.
  • Specifically, taking the estimated parallax value d as a reference, with d ∈ [d_min, d_max], for each pixel point p(x, y) of the reference image, the corresponding pixel point p(x - d, y) is found on the target image.
  • the arrangement of the image pixels may be regarded as an 8-connected or 4-connected adjacency graph; this embodiment allows either and is not limited thereto. Further, in order to reduce the amount of calculation, a 4-connected graph is preferred.
  • the image is regarded as a four-connected undirected graph.
  • the nodes in the graph are the individual pixels, and the weight between adjacent nodes r and s is the maximum difference of their colour values over the channels.
  • the calculation formula is: w(r, s) = max_c |I_c(r) - I_c(s)|,
  • where I_c(r) is the colour value of channel c at node r,
  • and I_c(s) is the colour value of channel c at node s.
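Under the edge-weight formula above, building the 4-connected graph might look like this sketch (the helper names and the edge-list representation are illustrative assumptions):

```python
import numpy as np

def edge_weight(img, r, s):
    """Weight between adjacent pixels r and s: the maximum absolute
    colour-value difference over the channels, per the formula above."""
    return int(np.max(np.abs(img[r].astype(int) - img[s].astype(int))))

def four_connected_edges(img):
    """Enumerate the edges of the 4-connected undirected pixel graph
    as ((y, x), (y', x'), weight) triples."""
    h, w = img.shape[:2]
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # right neighbour
                edges.append(((y, x), (y, x + 1),
                              edge_weight(img, (y, x), (y, x + 1))))
            if y + 1 < h:  # bottom neighbour
                edges.append(((y, x), (y + 1, x),
                              edge_weight(img, (y, x), (y + 1, x))))
    return edges

# 2x2 RGB image where one pixel differs from its neighbours
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 1] = [10, 3, 0]
edges = four_connected_edges(rgb)
```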
  • the Boruvka algorithm is used to process the undirected graph to obtain the minimum spanning tree
  • the obtained undirected graph is processed with the Boruvka algorithm (a minimum-spanning-tree algorithm) to generate a minimum spanning tree, where the distance Dis(p, q) between any two nodes in the tree is the sum of the weights of the edges on the path connecting them, and the similarity of the two nodes is S(p, q) = exp(-Dis(p, q)/σ), where σ represents a normalization constant.
  • the Boruvka algorithm generates a minimum spanning tree in a greedy way, and the amount of operations is smaller than other algorithms. This embodiment does not limit the algorithm for generating the minimum spanning tree.
  • the Boruvka algorithm can be chosen here because of its small amount of computation.
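A compact sketch of Boruvka's algorithm on an edge list; the node indexing and the union-find details are implementation assumptions, but the structure (each round attaches every component's cheapest outgoing edge) is the algorithm the text names:

```python
def boruvka_mst(n, edges):
    """Boruvka's minimum spanning tree over nodes 0..n-1.

    Each round finds the cheapest edge leaving every component and
    merges along it, roughly halving the component count per round."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        cheapest = [None] * n
        for u, v, w in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for root in (ru, rv):
                if cheapest[root] is None or w < cheapest[root][2]:
                    cheapest[root] = (u, v, w)
        for e in cheapest:
            if e is None:
                continue
            u, v, w = e
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append(e)
                components -= 1
    return mst

# 4 pixels in a cycle: the MST drops the heaviest edge (weight 5)
tree = boruvka_mst(4, [(0, 1, 1), (1, 2, 5), (2, 3, 1), (3, 0, 4)])
total = sum(w for _, _, w in tree)
```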
  • the tree structure is filtered from the root node to the leaf node to obtain a cost aggregation of each reference block image and each pixel in the corresponding target block image.
  • the specific process may be: converting the minimum spanning tree into a tree structure with root nodes and leaf nodes, and filtering the tree structure from the leaf nodes to the root nodes to obtain an upward cost. Aggregation; according to the upward cost aggregation, the tree structure is filtered from the root node to the leaf node to obtain a pixel cost aggregation of the reference image.
  • the minimum spanning tree is converted into a tree structure having a parent node (i.e., a root node) and child nodes (i.e., leaf nodes). The tree structure is filtered twice: first from the leaf nodes to the root node, resulting in the upward cost aggregation, and then from the root node to the leaf nodes, resulting in the final cost aggregation.
  • the two filterings ensure that each pixel can use the pixels of the full image as the supporting region, and the aggregation cost only needs to be calculated once, which significantly reduces the computational complexity of the algorithm.
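The two-pass filtering can be sketched on a toy tree as follows. The exact update rule shown (an upward pass, then a downward pass weighted by the similarity S between adjacent nodes) follows the standard non-local cost aggregation scheme and is an assumption, since this translation does not give the formulas:

```python
import math

def tree_aggregate(cost, children, sim, root):
    """Two-pass cost aggregation on a tree (leaf-to-root, then
    root-to-leaf). cost maps node -> raw matching cost; children maps
    node -> child list; sim[(u, v)] is the similarity S between
    adjacent nodes u (parent) and v (child)."""
    up = dict(cost)

    def leaf_to_root(v):  # pass 1: accumulate children into parents
        for c in children.get(v, []):
            leaf_to_root(c)
            up[v] += sim[(v, c)] * up[c]

    leaf_to_root(root)

    agg = {root: up[root]}

    def root_to_leaf(v):  # pass 2: push the root's view back down
        for c in children.get(v, []):
            s = sim[(v, c)]
            agg[c] = s * agg[v] + (1 - s * s) * up[c]
            root_to_leaf(c)

    root_to_leaf(root)
    return agg

# 3-node chain 0 -- 1 -- 2 with unit raw costs; S = exp(-weight/sigma)
sigma = 10.0
weights = {(0, 1): 2.0, (1, 2): 4.0}
sim = {e: math.exp(-w / sigma) for e, w in weights.items()}
agg = tree_aggregate({0: 1.0, 1: 1.0, 2: 1.0}, {0: [1], 1: [2]}, sim, 0)
```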
  • the above method for calculating the cost aggregation of each pixel performs accurate matching of the pixels of the matched blocks on the basis of aggregated-image matching; the matching precision is high, the computation is small, and the mismatching problem in weak-texture regions is solved.
  • the first parallax image is obtained by the WTA method. Specifically, according to the WTA (Winner Take All) principle, the disparity value with the smallest final cost is selected as the final disparity value for each pixel to obtain the first disparity image. Further, fast median filtering can be applied to obtain a filtered first parallax image, where the first parallax image can be denoted D(p).
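The WTA selection reduces to an argmin over the disparity axis of the cost volume; the (D, H, W) layout of the volume is an assumption for illustration:

```python
import numpy as np

def wta_disparity(cost_volume):
    """Winner-Take-All: for each pixel, pick the disparity index with
    the smallest aggregated cost. cost_volume has shape (D, H, W)."""
    return np.argmin(cost_volume, axis=0)

# 2 disparities over a 1x2 image: pixel 0 prefers d=1, pixel 1 prefers d=0
costs = np.array([[[5.0, 1.0]],
                  [[2.0, 3.0]]])
disp = wta_disparity(costs)
```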
  • the stereo matching method in this embodiment matches on the aggregated images, which speeds up matching; on that basis, the pixels of the matched blocks are matched precisely, so the matching precision is high, the computation is small, and mismatching in weak-texture regions is resolved.
  • the method may further include:
  • taking the reference image as a new target image and the target image as a new reference image, and performing steps S11 to S16 according to the new target image and the new reference image to obtain a second parallax image;
  • the parallax refinement is performed using the first parallax image and the second parallax image to obtain a final parallax image.
  • the process of parallax refinement may further include: taking the reference image as a new target image, using the target image as a new reference image, and performing step S11 to step S16 to obtain a second parallax image according to the new target image and the new reference image; The first parallax image and the second parallax image are subjected to parallax refinement to obtain a final parallax image.
  • the reference image and the target image are exchanged, that is, the reference image and the target image in the above embodiment are exchanged.
  • the reference image is taken as a new target image
  • the target image is taken as a new reference image, and based on the new target image and the new reference image.
  • as a result of the exchange, the right image serves as the reference image and the left image as the target image.
  • the second parallax image here can be represented by D r (p).
  • the first parallax image and the second parallax image are used for parallax refinement, and obtaining the final parallax image may include:
  • Step S12 is performed to obtain a final parallax image.
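The refinement step is garbled in this translation; a common way to combine a first and a second (left-right swapped) parallax image is a left-right consistency check, sketched here purely as an assumption about the intent, not as the patent's stated rule:

```python
import numpy as np

def left_right_check(disp_left, disp_right, tol=1):
    """Left-right consistency check: a pixel's disparity d is kept only
    if the second parallax image agrees at the matching pixel (x - d)
    within `tol`; otherwise the pixel is marked invalid (-1) so it can
    be filled in later."""
    h, w = disp_left.shape
    out = disp_left.astype(int).copy()
    for y in range(h):
        for x in range(w):
            d = int(disp_left[y, x])
            xr = x - d
            if xr < 0 or abs(int(disp_right[y, xr]) - d) > tol:
                out[y, x] = -1  # inconsistent: likely occlusion/mismatch
    return out

# Toy 1x3 disparity maps: only the middle pixel survives a strict check
dl = np.array([[0, 1, 2]])
dr = np.array([[1, 9, 9]])
refined = left_right_check(dl, dr, tol=0)
```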
  • the stereo matching method in this embodiment matches on the aggregated images, which speeds up matching; on that basis, the pixels of the matched blocks are matched precisely, so the matching precision is high and the computation is small.
  • at the same time, mismatching in weak-texture regions is resolved, and a good matching effect can also be obtained in outdoor scenes without a significant increase in computation.
  • the stereo matching system provided by the embodiment of the present invention is described below.
  • the stereo matching system described below and the stereo matching method described above can refer to each other.
  • FIG. 2 is a structural block diagram of a stereo matching system according to an embodiment of the present invention.
  • the system may include:
  • the obtaining module 100 is configured to acquire a reference image and a target image, and determine a parallax area of the aggregation area;
  • a dividing module 200 configured to equally divide the reference image and the target image into N*N regions, to obtain a reference blocking image and a target blocking image;
  • the aggregation module 300 is configured to convert the reference image into an aggregated reference image by using a block transform algorithm according to the reference block image and the target block image, and convert the target image into an aggregate target image;
  • the matching cost calculation module 400 is configured to respectively calculate the CENSUS features of the aggregated reference image and the aggregated target image, and perform stereo matching, and calculate a matching value of the CENSUS feature of the aggregated reference image and the CENSUS feature of the aggregated target image;
  • the area cost aggregation module 500 is configured to separately calculate an area cost aggregation of each reference block image and a corresponding target block image of the same block position;
  • the matching module 600 is configured to compare the matching cost value with each regional cost aggregation and, if the matching cost value is greater than any regional cost aggregation, re-segment the corresponding reference block image and target block image and recalculate the regional cost aggregation, until the matching cost value is smaller than every regional cost aggregation;
  • the parallax image obtaining module 700 is configured to calculate, when the matching cost value is smaller than every regional cost aggregation, the cost aggregation of each pixel in each reference block image and the corresponding target block image, and obtain the first parallax image by the WTA method.
  • the aggregation module 300 may include:
  • an empty image creating unit configured to create a first empty image and a second empty image, wherein each pixel in the first empty image corresponds one-to-one with the pixels of the reference image, and each pixel in the second empty image corresponds one-to-one with the pixels of the target image;
  • Aggregating a reference image unit configured to calculate a sum of pixel values of each reference block image, and assign the value to a corresponding pixel in the first empty image to obtain an aggregated reference image
  • the aggregation target image unit is configured to calculate a sum of pixel values of each target tile image and assign the value to a corresponding pixel point in the second null image to obtain an aggregation target image.
  • the matching module 600 can include:
  • a judging unit configured to respectively determine, for each reference block image and each target block image corresponding to a regional cost aggregation smaller than the matching cost value, the pixel average value and the intermediate pixel value;
  • a re-dividing unit configured to expand the region according to the estimated parallax when the pixel average of the reference block image or the target block image is smaller than the intermediate pixel value, and to reduce the region according to the estimated parallax when the pixel average is greater than the intermediate pixel value.
  • system may further include:
  • a parallax refinement module for using the reference image as a new target image, using the target image as a new reference image, and calculating a second parallax image according to the new target image and the new reference image; using the first parallax image and the first The two-parallax image is subjected to parallax refinement to obtain a final parallax image.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
  • the software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a stereo matching method and system. The method comprises: acquiring a reference image and a target image, and determining a parallax region of the aggregation region (S10); transforming the reference image into an aggregated reference image and the target image into an aggregated target image using a block transform algorithm; calculating CENSUS features for the images, respectively, and obtaining a matching cost value of the aggregated reference image and the aggregated target image; calculating the regional cost aggregation of the reference image blocks and the target image blocks at the same corresponding block positions, respectively; if the matching cost value is greater than a regional cost aggregation, re-dividing the reference image blocks and target image blocks corresponding to that region, and calculating the regional cost aggregation after the division, until the matching cost value is smaller than all the regional cost aggregations; and calculating a cost aggregation for each pixel in the reference image blocks and the corresponding target image blocks, thereby obtaining a first parallax image. The method of the present invention can effectively solve the mismatching problem in weak-texture regions, requires little computation, and has good real-time performance.
PCT/CN2017/120340 2017-05-22 2017-12-29 Stereo matching method and system WO2018214505A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710362286.1A CN107220997B (zh) 2017-05-22 2017-05-22 一种立体匹配方法及系统
CN201710362286.1 2017-05-22

Publications (1)

Publication Number Publication Date
WO2018214505A1 true WO2018214505A1 (fr) 2018-11-29

Family

ID=59944587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120340 WO2018214505A1 (fr) 2017-12-29 Stereo matching method and system

Country Status (2)

Country Link
CN (1) CN107220997B (fr)
WO (1) WO2018214505A1 (fr)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220997B (zh) * 2017-05-22 2020-12-25 成都通甲优博科技有限责任公司 Stereo matching method and system
CN109919991A (zh) * 2017-12-12 2019-06-21 杭州海康威视数字技术股份有限公司 Depth information determination method and apparatus, electronic device, and storage medium
CN108109148A (zh) * 2017-12-12 2018-06-01 上海兴芯微电子科技有限公司 Image stereo allocation method and mobile terminal
CN107967349B (zh) * 2017-12-13 2020-06-16 湖南省国土资源规划院 Ore body reserve block matching method
CN108133493B (zh) * 2018-01-10 2021-10-22 电子科技大学 Heterologous image registration optimization method based on region division and gradient mapping
CN109978934B (zh) * 2019-03-04 2023-01-10 北京大学深圳研究生院 Binocular vision stereo matching method and system based on matching cost weighting
CN110046236B (zh) * 2019-03-20 2022-12-20 腾讯科技(深圳)有限公司 Unstructured data retrieval method and apparatus
CN110427968B (zh) * 2019-06-28 2021-11-02 武汉大学 Binocular stereo matching method based on detail enhancement
CN110473217B (zh) * 2019-07-25 2022-12-06 沈阳工业大学 Binocular stereo matching method based on the Census transform
CN111462195B (zh) * 2020-04-09 2022-06-07 武汉大学 Method for determining irregular angular-direction cost aggregation paths based on main-line constraints
CN111768437B (zh) * 2020-06-30 2023-09-05 中国矿业大学 Image stereo matching method and apparatus for a mine inspection robot
WO2022000456A1 (fr) * 2020-07-03 2022-01-06 深圳市大疆创新科技有限公司 Image processing method and apparatus, integrated circuit, and device
CN114144765A (zh) * 2020-07-03 2022-03-04 深圳市大疆创新科技有限公司 Image processing method, integrated circuit, apparatus, movable platform, and storage medium
CN111951310A (zh) * 2020-07-17 2020-11-17 深圳市帝普森微电子有限公司 Binocular stereo matching method, disparity map acquisition apparatus, and computer storage medium
CN112070821B (zh) * 2020-07-31 2023-07-25 南方科技大学 Low-power stereo matching system and method for obtaining depth information
CN112435283A (zh) * 2020-11-04 2021-03-02 浙江大华技术股份有限公司 Image registration method, electronic device, and computer-readable storage medium
CN113344989B (zh) * 2021-04-26 2023-05-16 贵州电网有限责任公司 NCC- and Census-based minimum spanning tree binocular stereo matching method for aerial images
CN113345001A (zh) * 2021-05-19 2021-09-03 智车优行科技(北京)有限公司 Disparity map determination method and apparatus, computer-readable storage medium, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254348A (zh) * 2011-07-25 2011-11-23 北京航空航天大学 Intermediate view synthesis method based on block-matching disparity estimation
CN102611904A (zh) * 2012-02-15 2012-07-25 山东大学 Stereo matching method based on image segmentation in a 3D television system
CN102930530A (zh) * 2012-09-26 2013-02-13 苏州工业职业技术学院 Stereo matching method for dual-viewpoint images
US20130156339A1 (en) * 2011-04-08 2013-06-20 Panasonic Corporation Image processing apparatus and image processing method
CN105513064A (zh) * 2015-12-03 2016-04-20 浙江万里学院 Stereo matching method based on image segmentation and adaptive weights
CN107220997A (zh) * 2017-05-22 2017-09-29 成都通甲优博科技有限责任公司 Stereo matching method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867133B (zh) * 2015-04-30 2017-10-20 燕山大学 Fast stepwise stereo matching method
CN106504276B (zh) * 2016-10-25 2019-02-19 桂林电子科技大学 Non-local stereo matching method


Also Published As

Publication number Publication date
CN107220997B (zh) 2020-12-25
CN107220997A (zh) 2017-09-29

Similar Documents

Publication Publication Date Title
WO2018214505A1 (fr) Stereo matching method and system
WO2018127007A1 (fr) Depth image acquisition method and system
WO2018098891A1 (fr) Stereo matching method and system
CN111833393A (zh) Binocular stereo matching method based on edge information
WO2015135323A1 (fr) Camera tracking method and device
US10957062B2 (en) Structure depth-aware weighting in bundle adjustment
CN110458772B (zh) Point cloud filtering method and apparatus based on image processing, and storage medium
CN108460792B (zh) Efficient focusing stereo matching method based on image segmentation
WO2018214086A1 (fr) Method and apparatus for three-dimensional reconstruction of a scene, and terminal device
CN113129352B (zh) Sparse light field reconstruction method and apparatus
JP2023021087A (ja) Binocular image matching method, apparatus, device, and storage medium
CN108109148A (zh) Image stereo allocation method and mobile terminal
CN107274448B (zh) Variable-weight cost aggregation stereo matching algorithm based on a horizontal tree structure
CN107155100A (zh) Image-based stereo matching method and apparatus
WO2024060981A1 (fr) Three-dimensional mesh optimization method, device, and storage medium
WO2022120988A1 (fr) Stereo matching method based on hybrid 2D convolution and pseudo-3D convolution
CN117132737B (zh) Three-dimensional building model construction method, system, and device
CN112270748B (zh) Image-based three-dimensional reconstruction method and apparatus
CN107730543B (zh) Fast iterative computation method for semi-dense stereo matching
WO2024082602A1 (fr) End-to-end visual odometry method and apparatus
Li et al. Graph-based saliency fusion with superpixel-level belief propagation for 3D fixation prediction
CN113344989B (zh) NCC- and Census-based minimum spanning tree binocular stereo matching method for aerial images
Kim et al. Real-time stereo matching using extended binary weighted aggregation
CN111105453A (zh) Method for obtaining a disparity map
KR101178015B1 (ko) Disparity map generation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910999

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910999

Country of ref document: EP

Kind code of ref document: A1