WO2018209969A1 - Depth map creation method and system and image blurring method and system - Google Patents

Depth map creation method and system and image blurring method and system Download PDF

Info

Publication number
WO2018209969A1
WO2018209969A1 (PCT/CN2017/120331)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
depth map
point
points
Prior art date
Application number
PCT/CN2017/120331
Other languages
French (fr)
Chinese (zh)
Inventor
龙学军
刘勇
周剑
Original Assignee
成都通甲优博科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都通甲优博科技有限责任公司 filed Critical 成都通甲优博科技有限责任公司
Publication of WO2018209969A1 publication Critical patent/WO2018209969A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a depth map creation method and system, and an image blurring method and system.
  • the object of the present invention is to provide a depth map creation method and system and an image blurring method and system that can obtain a high-quality depth map, thereby helping to improve the image blurring effect.
  • the specific solution is as follows:
  • a depth map creation method including:
  • using the first set of feature points and the second set of feature points respectively, correspondingly determining support points of the first image and the second image, to obtain a first set of support points and a second set of support points;
  • a depth map corresponding to the target scene is determined using the parallax.
  • the process of determining feature points of any image includes:
  • the process of counting the total number of pixel points satisfying the preset condition around the candidate point includes:
  • the total number of pixels around the candidate point that satisfy the preset condition is counted using a preset pixel-count formula; the preset pixel-count formula is:
  • N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ], where [·] equals 1 when its condition holds and 0 otherwise
  • N represents the total number
  • p represents the candidate point
  • circle(p) represents a circle centered on the candidate point p with a preset value as its radius
  • x represents any pixel on the circle circle(p)
  • I(x) denotes the gray value of pixel x
  • I(p) denotes the gray value of the candidate point p
  • ε_d denotes a preset gray-level difference threshold.
  • the process of performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points includes:
  • the process of calculating the disparity corresponding to a pixel point in the Delaunay triangular mesh includes:
  • d_p represents the disparity corresponding to the pixel point p in the Delaunay triangular mesh
  • (u_p, v_p) represents the coordinates of the pixel point p
  • a, b and c are the coefficients obtained by fitting a plane to the support points of the Delaunay triangular region in which the pixel point p is located
  • h represents the minimum support distance between the pixel point p and its three adjacent support points, and a random number is taken from a preset interval.
  • the invention further discloses an image blurring method, comprising:
  • the image is blurred by the depth map to obtain a blurred image.
  • the process of performing image blurring processing by using the depth map includes:
  • C_i represents the blur coefficient of the i-th pixel point
  • z_i represents the depth value of the i-th pixel point
  • f represents the focal length, Z̄ represents the average depth value over the focus area, Z_far represents the maximum depth value over the focus area, Z_near represents the minimum depth value over the focus area, and w represents an adjustment factor;
  • using the preset blurring formula, image blurring processing is performed on the target pixel point set to obtain the blurred image; the target pixel point set is the set of pixel points whose depth values lie in the range [Z_near, Z_far]
  • in the preset blurring formula, m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel being any pixel of the target pixel point set; Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
  • the invention also correspondingly discloses a depth map creation system, comprising:
  • An image acquisition module configured to acquire a first image and a second image obtained by the binocular shooting system after capturing the target scene
  • a feature point determining module configured to respectively determine feature points corresponding to the first image and the second image, to obtain a first set of feature points and a second set of feature points;
  • a support point determining module configured to respectively use the first set of feature points and the second set of feature points to correspondingly determine the support points of the first image and the second image, to obtain a first set of support points and a second set of support points;
  • a matching module configured to perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine a disparity between the first image and the second image;
  • a depth map determining module configured to determine a depth map corresponding to the target scene by using the parallax.
  • the invention also correspondingly discloses an image blurring system, comprising:
  • a depth map obtaining module configured to acquire a depth map created by the foregoing depth map creation system
  • An image blurring module is configured to perform image blurring processing by using the depth map to obtain a blurred image.
  • the image blurring module includes:
  • a focus area determining unit configured to determine a focus area on the depth map
  • a blur coefficient calculation unit configured to use the depth information on the depth map in combination with a preset function to obtain the blur coefficient of each pixel in the depth map, wherein in the preset function:
  • C_i represents the blur coefficient of the i-th pixel point
  • z_i represents the depth value of the i-th pixel point
  • f represents the focal length, Z̄ represents the average depth value over the focus area, Z_far represents the maximum depth value over the focus area, Z_near represents the minimum depth value over the focus area, and w represents an adjustment factor;
  • a blurring processing unit configured to perform image blurring processing on the target pixel point set using a preset blurring formula, to obtain the blurred image; the target pixel point set is the set of pixel points whose depth values lie in the range [Z_near, Z_far], and in the preset blurring formula m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel being any pixel of the target pixel point set; Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
  • the depth map creation method includes: acquiring a first image and a second image obtained by a binocular shooting system shooting a target scene; determining feature points corresponding to the first image and the second image respectively, to obtain a first set of feature points and a second set of feature points; using the first set of feature points and the second set of feature points respectively, correspondingly determining support points of the first image and the second image, to obtain a first set of support points and a second set of support points; performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine the disparity between the first image and the second image; and determining a depth map corresponding to the target scene using the disparity.
  • after acquiring the first image and the second image, the present invention first determines the feature points corresponding to each image and then uses those feature points to determine the corresponding support points, so that dense stereo matching can subsequently be performed based on the support points of the first image and the second image. This greatly improves the stereo matching precision between the two images and yields a more accurate disparity between them; the depth map obtained from this disparity therefore has higher quality, which helps to improve the subsequent image blurring effect.
  • FIG. 1 is a flowchart of a method for creating a depth map according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a depth map creation system according to an embodiment of the present invention.
  • the embodiment of the invention discloses a depth map creation method. Referring to FIG. 1 , the method includes:
  • Step S11 Acquire a first image and a second image obtained after the binocular shooting system captures the target scene.
  • Step S12 determining feature points corresponding to the first image and the second image, respectively, to obtain a first set of feature points and a second set of feature points.
  • Step S13 Using the first set of feature points and the second set of feature points respectively, respectively determining the support points corresponding to the first image and the second image to obtain the first set of support points and the second set of support points.
  • the epipolar constraint and feature point descriptors can be used to quickly perform matching between feature points.
  • specifically, a WTA (Winner Takes All) strategy can be used to select, in the disparity space, the point with the smallest matching cost as a successfully matched feature point; the successfully matched feature points are then determined to be support points, and the unmatched feature points are discarded, thereby obtaining the first set of support points and the second set of support points respectively.
  • Step S14 Perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points to determine a disparity between the first image and the second image.
  • Step S15 Determine a depth map corresponding to the target scene by using the parallax.
  • the embodiment of the present invention first determines the feature points corresponding to each image and then uses those feature points to determine the corresponding support points, so that dense stereo matching can subsequently be performed based on the support points of the first image and the second image. This greatly improves the stereo matching precision between the two images and yields a more accurate disparity between them; the depth map obtained from this disparity therefore has higher quality, which helps to improve the subsequent image blurring effect.
  • the embodiment of the present invention discloses a specific depth map creation method; compared with the previous embodiment, this embodiment further describes and optimizes the technical solution. Specifically:
  • in step S12 of the previous embodiment, the feature points corresponding to the first image and the second image need to be determined separately.
  • the determining process of the feature points of any image may specifically include the following steps S121 to S123:
  • Step S121 determining a candidate point from the image
  • Step S122 Count the total number of pixel points satisfying the preset condition around the candidate point
  • Step S123 determining whether the total number is greater than a preset number threshold, and if so, determining that the candidate point is a feature point of the image, and if not, determining that the candidate point is not a feature point of the image.
  • the process of counting the total number of the pixel points that meet the preset condition around the candidate point may specifically include:
  • the total number of pixels around the candidate point that satisfy the preset condition is counted using a preset pixel-count formula; the preset pixel-count formula is:
  • N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ], where [·] equals 1 when its condition holds and 0 otherwise
  • N represents the total number
  • p represents the candidate point
  • circle(p) represents a circle centered on the candidate point p with a preset value as its radius
  • x represents any pixel on the circle circle(p)
  • I(x) denotes the gray value of pixel x
  • I(p) denotes the gray value of the candidate point p
  • ε_d denotes a preset gray-level difference threshold.
  • the preset value may be specifically set according to actual needs, and is not specifically limited herein.
  • the process of performing dense stereo matching on the first image and the second image based on the first group of support points and the second group of support points may specifically include the following steps S141 to S143:
  • Step S141 Construct a corresponding Delaunay triangular mesh on the first image according to the first set of support points;
  • Step S142 Calculating the disparity corresponding to the pixel points located in the Delaunay triangular mesh, and obtaining corresponding disparity data;
  • Step S143 Perform dense stereo matching on the first image and the second image by using the disparity data and the disparity probability model to find support points matching the first set of support points from the second set of support points.
  • the process of calculating the disparity corresponding to the pixel points in the Delaunay triangular mesh in the above step S142 may specifically include:
  • the disparity corresponding to the pixels inside the Delaunay triangular mesh is calculated using the preset disparity calculation formula, wherein:
  • d_p represents the disparity corresponding to the pixel point p in the Delaunay triangular mesh
  • (u_p, v_p) represents the coordinates of the pixel point p
  • a, b and c are the coefficients obtained by fitting a plane to the support points of the Delaunay triangular region in which the pixel point p is located
  • h represents the minimum support distance between the pixel point p and its three adjacent support points, and a random number is taken from a preset interval.
  • an embodiment of the present invention further discloses an image blurring method, including the following steps S21 and S22:
  • Step S21 acquiring a depth map obtained by the depth map creation method disclosed in the foregoing embodiment
  • Step S22 performing image blurring processing using the depth map to obtain a blurred image.
  • the process of performing image blurring processing by using the depth map may specifically include the following steps S221 to S223:
  • Step S221 determining a focus area on the depth map.
  • Step S222: use the depth information on the depth map in combination with a preset function to obtain the blur coefficient of each pixel in the depth map, wherein in the preset function:
  • C_i represents the blur coefficient of the i-th pixel point
  • z_i represents the depth value of the i-th pixel point
  • f represents the focal length and Z̄ represents the average depth value over the focus area
  • Z_far represents the maximum depth value over the focus area
  • Z_near represents the minimum depth value over the focus area
  • w represents the adjustment factor.
  • Step S223: using a preset blurring formula, perform image blurring processing on the target pixel point set to obtain a blurred image; the target pixel point set is the set of pixel points whose depth values lie in the range [Z_near, Z_far]
  • in the preset blurring formula, m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel being any pixel of the target pixel point set; Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
  • the embodiment of the present invention can better improve the overall blurring effect of the blurred image by suppressing leakage from the focused layer into the defocused layer and by determining the blur coefficient pixel by pixel.
  • an embodiment of the present invention further discloses a depth map creation system.
  • the system includes:
  • the image obtaining module 11 is configured to obtain a first image and a second image obtained by the binocular shooting system after capturing the target scene;
  • the feature point determining module 12 is configured to respectively determine feature points corresponding to the first image and the second image, to obtain a first set of feature points and a second set of feature points;
  • the support point determining module 13 is configured to respectively use the first set of feature points and the second set of feature points to correspondingly determine the support points of the first image and the second image, to obtain the first set of support points and the second set of support points;
  • the matching module 14 is configured to perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points to determine a disparity between the first image and the second image;
  • the depth map determining module 15 is configured to determine a depth map corresponding to the target scene by using the parallax.
  • the embodiment of the present invention first determines the feature points corresponding to each image and then uses those feature points to determine the corresponding support points, so that dense stereo matching can subsequently be performed based on the support points of the first image and the second image. This greatly improves the stereo matching precision between the two images and yields a more accurate disparity between them; the depth map obtained from this disparity therefore has higher quality, which helps to improve the subsequent image blurring effect.
  • the present invention also discloses an image blurring system, including a depth map acquiring module and an image blurring module; wherein
  • a depth map obtaining module configured to acquire a depth map created by the depth map creation system disclosed in the foregoing embodiment
  • the image blurring module is configured to perform image blurring processing by using a depth map to obtain a blurred image.
  • the image blurring module may include a focus area determining unit, a blurring coefficient calculating unit, and a blurring processing unit;
  • a focus area determining unit for determining a focus area on the depth map
  • the blur coefficient calculation unit is configured to use the depth information on the depth map in combination with a preset function to obtain the blur coefficient of each pixel in the depth map, wherein in the preset function:
  • C i represents a blurring coefficient of the i-th pixel point
  • z i represents a depth value of the i-th pixel point
  • f represents the focal length and Z̄ represents the average depth value over the focus area
  • Z far represents the maximum depth value on the focus area
  • Z near represents the minimum depth value on the focus area
  • w represents the adjustment factor
  • the blurring processing unit is configured to perform image blurring processing on the target pixel point set using a preset blurring formula, to obtain a blurred image; the target pixel point set is the set of pixel points whose depth values lie in the range [Z_near, Z_far]
  • in the preset blurring formula, m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel being any pixel of the target pixel point set; Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses a depth map creation method and system and an image blurring method and system. The depth map creation method comprises: obtaining a first image and a second image by a binocular shooting system photographing a target scene; determining respective feature points corresponding to the images to obtain a first set of feature points and a second set of feature points; using the first set of feature points and the second set of feature points to correspondingly determine support points corresponding to the images, to obtain a first set of support points and a second set of support points; performing, based on the first set of support points and the second set of support points, dense stereo matching on the first image and the second image, so as to determine parallax between the first image and the second image; and using the parallax to determine a depth map corresponding to the target scene. The present application can greatly improve accuracy of stereo matching between images, obtaining more accurate parallax between the images. A depth map obtained based on the parallax has higher quality, improving the effect of subsequent image blurring.

Description

Depth map creation method and system and image blurring method and system
This application claims priority to Chinese Patent Application No. 201710361218.3, entitled "Depth Map Creation Method and System, and Image Blurring Method and System", filed on May 19, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a depth map creation method and system and an image blurring method and system.
Background
At present, with the rapid development of image processing technology, more and more devices such as smartphones and tablets offer the ability to blur images using scene depth information, which brings users many enjoyable photography experiences.
In the existing image blurring process, the quality of the depth map directly affects the subsequent image blurring effect. How to create a high-quality depth map is a problem that still needs to be solved.
Summary of the Invention
In view of this, the object of the present invention is to provide a depth map creation method and system and an image blurring method and system that can obtain a high-quality depth map, thereby helping to improve the image blurring effect. The specific solution is as follows:
A depth map creation method, including:
acquiring a first image and a second image obtained by a binocular shooting system shooting a target scene;
determining feature points corresponding to the first image and the second image respectively, to obtain a first set of feature points and a second set of feature points;
using the first set of feature points and the second set of feature points respectively, correspondingly determining support points of the first image and the second image, to obtain a first set of support points and a second set of support points;
performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine the disparity between the first image and the second image;
determining a depth map corresponding to the target scene using the disparity.
Optionally, the process of determining the feature points of either image includes:
determining a candidate point from the image;
counting the total number of pixels around the candidate point that satisfy a preset condition;
judging whether the total number is greater than a preset number threshold; if so, determining that the candidate point is a feature point of the image, and if not, determining that the candidate point is not a feature point of the image.
Optionally, the process of counting the total number of pixels around the candidate point that satisfy the preset condition includes:
counting, using a preset pixel-count formula, the total number of pixels around the candidate point that satisfy the preset condition; the preset pixel-count formula is:
N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ]
where N represents the total number, p represents the candidate point, circle(p) represents a circle centered on the candidate point p with a preset value as its radius, x represents any pixel on the circle circle(p), I(x) denotes the gray value of pixel x, I(p) denotes the gray value of the candidate point p, ε_d denotes a preset gray-level difference threshold, and [·] equals 1 when its condition holds and 0 otherwise.
Optionally, the process of performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points includes:
constructing a corresponding Delaunay triangular mesh on the first image according to the first set of support points;
calculating the disparity corresponding to the pixels located inside the Delaunay triangular mesh, to obtain corresponding disparity data;
performing dense stereo matching on the first image and the second image using the disparity data and a disparity probability model, to find, from the second set of support points, the support points that match the first set of support points.
Optionally, the process of calculating the disparity corresponding to the pixels inside the Delaunay triangular mesh includes:
calculating, using a preset disparity calculation formula, the disparity corresponding to the pixels inside the Delaunay triangular mesh; the preset disparity calculation formula is:
d_p = a·u_p + b·v_p + c + δ
where d_p represents the disparity corresponding to pixel p inside the Delaunay triangular mesh, (u_p, v_p) represents the coordinates of pixel p, a, b and c are the coefficients obtained by fitting a plane to the support points of the Delaunay triangle in which pixel p is located, h represents the minimum support distance between pixel p and its three adjacent support points, and δ represents a random number taken from a preset interval.
The invention further discloses an image blurring method, including:
acquiring a depth map obtained by the foregoing method;
performing image blurring processing using the depth map, to obtain a blurred image.
Optionally, the process of performing image blurring processing using the depth map includes:
determining a focus area on the depth map;
using the depth information on the depth map in combination with a preset function, obtaining the blur coefficient of each pixel in the depth map; the preset function is:
[equation image PCTCN2017120331-appb-000005]
where C_i represents the blur coefficient of the i-th pixel, z_i represents the depth value of the i-th pixel, f represents the focal length, Z̄ represents the average depth value over the focus area, Z_far represents the maximum depth value over the focus area, Z_near represents the minimum depth value over the focus area, and w represents an adjustment coefficient;
using a preset blurring formula, performing image blurring processing on the target pixel set to obtain the blurred image; the target pixel set is the set of pixels whose depth values lie in the range [Z_near, Z_far], and the preset blurring formula is:
Ī_{x,y} = (1 / (m × n)) · Σ_{(i,j) ∈ circle(x,y)} I_{i,j}
where m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel is any pixel of the target pixel set, Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
The invention also correspondingly discloses a depth map creation system, including:
an image acquisition module, configured to acquire a first image and a second image obtained by a binocular shooting system shooting a target scene;
a feature point determining module, configured to determine feature points corresponding to the first image and the second image respectively, to obtain a first set of feature points and a second set of feature points;
a support point determining module, configured to respectively use the first set of feature points and the second set of feature points to correspondingly determine the support points of the first image and the second image, to obtain a first set of support points and a second set of support points;
a matching module, configured to perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine the disparity between the first image and the second image;
a depth map determining module, configured to determine a depth map corresponding to the target scene using the disparity.
The invention also correspondingly discloses an image blurring system, including:
a depth map acquisition module, configured to acquire a depth map created by the foregoing depth map creation system;
an image blurring module, configured to perform image blurring processing using the depth map, to obtain a blurred image.
Optionally, the image blurring module includes:
a focus area determining unit, configured to determine a focus area on the depth map;
a blur coefficient calculation unit, configured to use the depth information on the depth map in combination with a preset function to obtain the blur coefficient of each pixel in the depth map; the preset function is:
[equation image PCTCN2017120331-appb-000009]
where C_i represents the blur coefficient of the i-th pixel, z_i represents the depth value of the i-th pixel, f represents the focal length, Z̄ represents the average depth value over the focus area, Z_far represents the maximum depth value over the focus area, Z_near represents the minimum depth value over the focus area, and w represents an adjustment coefficient;
a blurring processing unit, configured to perform image blurring processing on the target pixel set using a preset blurring formula, to obtain the blurred image; the target pixel set is the set of pixels whose depth values lie in the range [Z_near, Z_far], and the preset blurring formula is:
Ī_{x,y} = (1 / (m × n)) · Σ_{(i,j) ∈ circle(x,y)} I_{i,j}
where m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel is any pixel of the target pixel set, Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
In the present invention, the depth map creation method includes: acquiring a first image and a second image obtained by a binocular shooting system shooting a target scene; determining feature points corresponding to the first image and the second image respectively, to obtain a first set of feature points and a second set of feature points; using the first set of feature points and the second set of feature points respectively, correspondingly determining support points of the first image and the second image, to obtain a first set of support points and a second set of support points; performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine the disparity between the first image and the second image; and determining a depth map corresponding to the target scene using the disparity.
It can be seen that, after acquiring the first image and the second image, the present invention first determines the feature points corresponding to each image and then uses those feature points to determine the corresponding support points, so that dense stereo matching can subsequently be performed based on the support points of the first image and the second image. This greatly improves the stereo matching precision between the two images and yields a more accurate disparity between them; the depth map obtained from this disparity therefore has higher quality, which helps to improve the subsequent image blurring effect.
Brief Description of the Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present invention, and other drawings can be obtained by those of ordinary skill in the art from the provided drawings without creative effort.
FIG. 1 is a flowchart of a depth map creation method disclosed in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a depth map creation system disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiment of the invention discloses a depth map creation method. Referring to FIG. 1, the method includes:
Step S11: acquire a first image and a second image obtained after a binocular shooting system shoots a target scene.
Step S12: determine feature points corresponding to the first image and the second image respectively, to obtain a first set of feature points and a second set of feature points.
Step S13: using the first set of feature points and the second set of feature points respectively, correspondingly determine the support points of the first image and the second image, to obtain a first set of support points and a second set of support points.
In the embodiment of the present invention, the epipolar constraint and feature point descriptors can be used to quickly match feature points. Specifically, a WTA (Winner Takes All) strategy can be used to select, in the disparity space, the point with the smallest matching cost as a successfully matched feature point; the successfully matched feature points are then determined to be support points and the unmatched feature points are discarded, thereby obtaining the first set of support points and the second set of support points respectively.
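To make the winner-takes-all step concrete, the following Python sketch shows one way such matching could be implemented for rectified image pairs. It is a minimal illustration, not the patent's implementation: the descriptor, the SAD cost, and the maximum disparity of 64 pixels are all assumptions.

    import numpy as np

    def wta_match(feats_left, feats_right, desc_left, desc_right, max_disp=64):
        """Winner-takes-all matching of feature points on rectified images.

        feats_*: lists of (row, col) feature coordinates.
        desc_*:  (N, D) float arrays of descriptors (assumed given).
        Returns (left_index, right_index) pairs of matched support points.
        """
        matches = []
        for i, (r, c) in enumerate(feats_left):
            # Epipolar constraint on rectified images: candidates lie on the
            # same row, shifted left by at most max_disp pixels.
            cand = [j for j, (r2, c2) in enumerate(feats_right)
                    if r2 == r and 0 <= c - c2 <= max_disp]
            if not cand:
                continue  # unmatched feature points are discarded
            # WTA: keep the candidate with the smallest matching cost (SAD).
            costs = [np.abs(desc_left[i] - desc_right[j]).sum() for j in cand]
            matches.append((i, cand[int(np.argmin(costs))]))
        return matches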
Step S14: perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine the disparity between the first image and the second image.
Step S15: determine a depth map corresponding to the target scene using the disparity.
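The patent does not spell out how disparity is converted into depth; for a calibrated, rectified binocular rig the conventional relation is Z = f·B/d, as in the sketch below (the focal length in pixels, the baseline, and the handling of invalid disparities are assumptions):

    import numpy as np

    def disparity_to_depth(disp, focal_px, baseline_m):
        """Standard stereo relation Z = f * B / d for a rectified rig.

        disp: disparity map in pixels; focal_px: focal length in pixels;
        baseline_m: distance between the two cameras in meters.
        """
        depth = np.zeros_like(disp, dtype=np.float64)
        valid = disp > 0  # zero or negative disparity carries no depth
        depth[valid] = focal_px * baseline_m / disp[valid]
        return depth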
It can be seen that, after acquiring the first image and the second image, the embodiment of the present invention first determines the feature points corresponding to each image and then uses those feature points to determine the corresponding support points, so that dense stereo matching can subsequently be performed based on the support points of the first image and the second image. This greatly improves the stereo matching precision between the two images and yields a more accurate disparity between them; the depth map obtained from this disparity therefore has higher quality, which helps to improve the subsequent image blurring effect.
The embodiment of the present invention discloses a specific depth map creation method. Compared with the previous embodiment, this embodiment further describes and optimizes the technical solution. Specifically:
In step S12 of the previous embodiment, the feature points corresponding to the first image and the second image need to be determined separately. In this embodiment, the process of determining the feature points of either image may specifically include the following steps S121 to S123:
Step S121: determine a candidate point from the image;
Step S122: count the total number of pixels around the candidate point that satisfy a preset condition;
Step S123: judge whether the total number is greater than a preset number threshold; if so, determine that the candidate point is a feature point of the image, and if not, determine that the candidate point is not a feature point of the image.
In step S122 above, the process of counting the total number of pixels around the candidate point that satisfy the preset condition may specifically include:
counting, using a preset pixel-count formula, the total number of pixels around the candidate point that satisfy the preset condition; the preset pixel-count formula is:
N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ]
where N represents the total number, p represents the candidate point, circle(p) represents a circle centered on the candidate point p with a preset value as its radius, x represents any pixel on the circle circle(p), I(x) denotes the gray value of pixel x, I(p) denotes the gray value of the candidate point p, ε_d denotes a preset gray-level difference threshold, and [·] equals 1 when its condition holds and 0 otherwise. It should be noted that the preset value may be set according to actual needs and is not specifically limited here.
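A minimal Python sketch of this candidate test is given below. The radius-3 Bresenham circle with 16 sample points (as in the FAST detector) and the uint8 grayscale input are assumptions; the patent only requires a circle with a preset radius.

    import numpy as np

    # Offsets of the 16-point Bresenham circle of radius 3 (an assumption;
    # the patent leaves the preset radius open).
    CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2),
              (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0),
              (-3, 1), (-2, 2), (-1, 3)]

    def is_feature_point(img, r, c, eps_d=20, n_thresh=9):
        """True if more than n_thresh circle pixels differ from the
        candidate's gray value by more than eps_d (the formula's N)."""
        center = int(img[r, c])
        n = sum(1 for dr, dc in CIRCLE
                if abs(int(img[r + dr, c + dc]) - center) > eps_d)
        return n > n_thresh

    def detect_features(img):
        """Scan every pixel far enough from the border as a candidate."""
        h, w = img.shape
        return [(r, c) for r in range(3, h - 3) for c in range(3, w - 3)
                if is_feature_point(img, r, c)]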
Further, in step S14 of the previous embodiment, the process of performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points may specifically include the following steps S141 to S143:
Step S141: construct a corresponding Delaunay triangular mesh on the first image according to the first set of support points;
Step S142: calculate the disparity corresponding to the pixels located inside the Delaunay triangular mesh, to obtain corresponding disparity data;
Step S143: perform dense stereo matching on the first image and the second image using the disparity data and a disparity probability model, to find, from the second set of support points, the support points that match the first set of support points.
In step S142 above, the process of calculating the disparity corresponding to the pixels inside the Delaunay triangular mesh may specifically include:
calculating, using a preset disparity calculation formula, the disparity corresponding to the pixels inside the Delaunay triangular mesh; the preset disparity calculation formula is:
d_p = a·u_p + b·v_p + c + δ
where d_p represents the disparity corresponding to pixel p inside the Delaunay triangular mesh, (u_p, v_p) represents the coordinates of pixel p, a, b and c are the coefficients obtained by fitting a plane to the support points of the Delaunay triangle in which pixel p is located, h represents the minimum support distance between pixel p and its three adjacent support points, and δ represents a random number taken from a preset interval.
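As an illustration of steps S141 and S142, the sketch below triangulates the support points with scipy and evaluates the fitted plane d = a·u + b·v + c for each interior pixel. The (u, v, d) support-point layout is an assumption, and the random perturbation term of the patent's formula is deliberately omitted.

    import numpy as np
    from scipy.spatial import Delaunay

    def interpolate_disparity(support, width, height):
        """Plane-fit disparity inside each Delaunay triangle of the support
        points.  support: (N, 3) array of (u, v, d) support points; pixels
        outside the triangulation are left as NaN."""
        tri = Delaunay(support[:, :2])
        us, vs = np.meshgrid(np.arange(width), np.arange(height))
        pix = np.column_stack([us.ravel(), vs.ravel()]).astype(np.float64)
        simplex = tri.find_simplex(pix)  # triangle index per pixel, -1 outside
        disp = np.full(pix.shape[0], np.nan)
        for t in range(len(tri.simplices)):
            sel = simplex == t
            if not sel.any():
                continue
            p0, p1, p2 = support[tri.simplices[t]]
            # Solve [u v 1] @ (a, b, c) = d for the triangle's three vertices.
            A = np.array([[p0[0], p0[1], 1.0],
                          [p1[0], p1[1], 1.0],
                          [p2[0], p2[1], 1.0]])
            a, b, c = np.linalg.solve(A, np.array([p0[2], p1[2], p2[2]]))
            disp[sel] = a * pix[sel, 0] + b * pix[sel, 1] + c
        return disp.reshape(height, width)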
Further, an embodiment of the present invention also discloses an image blurring method, including the following steps S21 and S22:
Step S21: acquire a depth map obtained by the depth map creation method disclosed in the foregoing embodiments;
Step S22: perform image blurring processing using the depth map, to obtain a blurred image.
Specifically, in step S22 above, the process of performing image blurring processing using the depth map may specifically include the following steps S221 to S223:
Step S221: determine a focus area on the depth map.
Step S222: using the depth information on the depth map in combination with a preset function, obtain the blur coefficient of each pixel in the depth map; the preset function is:
[equation image PCTCN2017120331-appb-000017]
where C_i represents the blur coefficient of the i-th pixel, z_i represents the depth value of the i-th pixel, f represents the focal length, Z̄ represents the average depth value over the focus area, Z_far represents the maximum depth value over the focus area, Z_near represents the minimum depth value over the focus area, and w represents the adjustment factor.
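Since the preset function itself survives only as an equation image, the sketch below uses an illustrative stand-in built from the same quantities (z_i, f, the focus-area statistics Z̄, Z_near, Z_far, and the factor w): blur grows with a pixel's distance from the mean focus depth. It demonstrates the per-pixel bookkeeping, not the patent's exact function.

    import numpy as np

    def blur_coefficients(depth, focus_mask, f=50.0, w=1.0):
        """Per-pixel blur coefficients from a depth map and a focus mask.

        The formula below is a stand-in (the patent's preset function is
        not reproduced); it only reuses the stated ingredients.  f is a
        focal length in pixels and w an adjustment factor, both assumed.
        """
        z_mean = depth[focus_mask].mean()  # average depth on the focus area
        z_near = depth[focus_mask].min()   # minimum depth on the focus area
        z_far = depth[focus_mask].max()    # maximum depth on the focus area
        band = max(z_far - z_near, 1e-6)   # avoid division by zero
        # Stand-in: distance from the mean focus depth, normalized by the
        # focus band width and scaled by f and w.
        coeff = w * f * np.abs(depth - z_mean) / band
        return coeff, z_near, z_far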
Step S223: using a preset blurring formula, perform image blurring processing on the target pixel set to obtain the blurred image; the target pixel set is the set of pixels whose depth values lie in the range [Z_near, Z_far], and the preset blurring formula is:
Ī_{x,y} = (1 / (m × n)) · Σ_{(i,j) ∈ circle(x,y)} I_{i,j}
where m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel is any pixel of the target pixel set, Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
Based on the above technical solution, the embodiment of the present invention can better improve the overall blurring effect of the blurred image by suppressing leakage from the focused layer into the defocused layer and by determining the blur coefficient pixel by pixel.
Correspondingly, an embodiment of the present invention also discloses a depth map creation system. Referring to FIG. 2, the system includes:
an image acquisition module 11, configured to acquire a first image and a second image obtained by a binocular shooting system shooting a target scene;
a feature point determining module 12, configured to determine feature points corresponding to the first image and the second image respectively, to obtain a first set of feature points and a second set of feature points;
a support point determining module 13, configured to respectively use the first set of feature points and the second set of feature points to correspondingly determine the support points of the first image and the second image, to obtain a first set of support points and a second set of support points;
a matching module 14, configured to perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine the disparity between the first image and the second image;
a depth map determining module 15, configured to determine a depth map corresponding to the target scene using the disparity.
For a more specific working process of each of the above modules, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
It can be seen that, after acquiring the first image and the second image, the embodiment of the present invention first determines the feature points corresponding to each image and then uses those feature points to determine the corresponding support points, so that dense stereo matching can subsequently be performed based on the support points of the first image and the second image. This greatly improves the stereo matching precision between the two images and yields a more accurate disparity between them; the depth map obtained from this disparity therefore has higher quality, which helps to improve the subsequent image blurring effect.
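Mapped onto code, the module decomposition could look like the thin pipeline below, which chains the earlier sketches; the module boundaries follow the patent, while the toy patch descriptor and all parameters are assumptions.

    import numpy as np

    def describe(img, feats, half=3):
        """Toy descriptor: flattened 7x7 gray patch around each feature
        (an assumption; the patent does not fix a descriptor)."""
        return np.array([img[r - half:r + half + 1, c - half:c + half + 1]
                         .astype(np.float64).ravel() for r, c in feats])

    class DepthMapCreationSystem:
        """Thin pipeline mirroring modules 11-15, reusing the sketches above."""

        def __init__(self, focal_px, baseline_m):
            self.focal_px = focal_px
            self.baseline_m = baseline_m

        def create(self, img_left, img_right):
            # Feature point determining module (12)
            feats_l = detect_features(img_left)
            feats_r = detect_features(img_right)
            # Support point determining module (13): WTA matching
            desc_l = describe(img_left, feats_l)
            desc_r = describe(img_right, feats_r)
            matches = wta_match(feats_l, feats_r, desc_l, desc_r)
            support = np.array([[feats_l[i][1], feats_l[i][0],
                                 feats_l[i][1] - feats_r[j][1]]
                                for i, j in matches])
            # Matching module (14): dense disparity from the support mesh
            h, w = img_left.shape
            disp = interpolate_disparity(support, w, h)
            # Depth map determining module (15)
            return disparity_to_depth(disp, self.focal_px, self.baseline_m)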
Further, the present invention also discloses an image blurring system, including a depth map acquisition module and an image blurring module, wherein:
the depth map acquisition module is configured to acquire a depth map created by the depth map creation system disclosed in the foregoing embodiments;
the image blurring module is configured to perform image blurring processing using the depth map, to obtain a blurred image.
Specifically, the image blurring module may include a focus area determining unit, a blur coefficient calculation unit and a blurring processing unit, wherein:
the focus area determining unit is configured to determine a focus area on the depth map;
the blur coefficient calculation unit is configured to use the depth information on the depth map in combination with a preset function to obtain the blur coefficient of each pixel in the depth map; the preset function is:
[equation image PCTCN2017120331-appb-000021]
where C_i represents the blur coefficient of the i-th pixel, z_i represents the depth value of the i-th pixel, f represents the focal length, Z̄ represents the average depth value over the focus area, Z_far represents the maximum depth value over the focus area, Z_near represents the minimum depth value over the focus area, and w represents the adjustment factor;
the blurring processing unit is configured to perform image blurring processing on the target pixel set using a preset blurring formula, to obtain the blurred image; the target pixel set is the set of pixels whose depth values lie in the range [Z_near, Z_far], and the preset blurring formula is:
Ī_{x,y} = (1 / (m × n)) · Σ_{(i,j) ∈ circle(x,y)} I_{i,j}
where m × n represents the number of pixels in the circle centered at the coordinates (x, y) of the target pixel with the blur coefficient of the target pixel as its radius, the target pixel is any pixel of the target pixel set, Ī_{x,y} represents the pixel value of the target pixel after blurring, and I_{i,j} represents the pixel value of pixel (i, j) in the circle before blurring.
Finally, it should also be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises that element.
The depth map creation method and system and the image blurring method and system provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementations and the application scope in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. A depth map creation method, comprising:
    acquiring a first image and a second image obtained by a binocular shooting system photographing a target scene;
    determining feature points of the first image and of the second image, respectively, to obtain a first set of feature points and a second set of feature points;
    using the first set of feature points and the second set of feature points, respectively, to determine support points of the first image and of the second image accordingly, obtaining a first set of support points and a second set of support points;
    performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine a disparity between the first image and the second image; and
    determining a depth map corresponding to the target scene by using the disparity.
  2. The depth map creation method according to claim 1, wherein the process of determining the feature points of either image comprises:
    determining candidate points from the image;
    counting the total number of pixels around a candidate point that satisfy a preset condition; and
    determining whether the total number is greater than a preset number threshold; if so, determining that the candidate point is a feature point of the image, and if not, determining that the candidate point is not a feature point of the image.
  3. The depth map creation method according to claim 2, wherein the process of counting the total number of pixels around the candidate point that satisfy the preset condition comprises:
    counting, by using a preset pixel-count formula, the total number of pixels around the candidate point that satisfy the preset condition, the preset pixel-count formula being

        N = Σ_{x ∈ circle(p)} [ |I(x) − I(p)| > ε_d ]

    where N denotes the total number, p denotes the candidate point, circle(p) denotes a circle centered on the candidate point p with a preset radius, x denotes any pixel on the circle circle(p), I(x) denotes the gray value of pixel x, I(p) denotes the gray value of the candidate point p, ε_d denotes a preset gray-difference threshold, and [·] evaluates to 1 when the enclosed condition holds and to 0 otherwise.
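For illustration, the counting step of claim 3 can be sketched as below, taking the preset radius to be 3 pixels (a common FAST-style ring; the claim itself leaves the radius as a preset value):

    import numpy as np

    # Pixel offsets of a radius-3 ring around the candidate point p.
    CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

    def count_differing_pixels(gray, py, px, eps_d):
        # N = number of pixels x on circle(p) with |I(x) - I(p)| > eps_d.
        i_p = int(gray[py, px])
        return sum(1 for dy, dx in CIRCLE
                   if abs(int(gray[py + dy, px + dx]) - i_p) > eps_d)

    def is_feature_point(gray, py, px, eps_d, n_threshold):
        # The candidate is a feature point when N exceeds the preset threshold.
        return count_differing_pixels(gray, py, px, eps_d) > n_threshold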
  4. The depth map creation method according to any one of claims 1 to 3, wherein the process of performing dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points comprises:
    constructing a corresponding Delaunay triangular mesh on the first image according to the first set of support points;
    calculating disparities of the pixels located inside the Delaunay triangular mesh to obtain corresponding disparity data; and
    performing dense stereo matching on the first image and the second image by using the disparity data and a disparity probability model, so as to find, from the second set of support points, the support points that match the first set of support points.
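The mesh-construction step of claim 4 can be sketched with SciPy's Delaunay triangulation over the support points' image coordinates (SciPy is an assumed tool here; the patent names no library):

    import numpy as np
    from scipy.spatial import Delaunay

    # Support points of the first image as (u, v) image coordinates.
    support_uv = np.array([[10, 10], [40, 12], [22, 45], [60, 40]], dtype=np.float64)
    mesh = Delaunay(support_uv)
    print(mesh.simplices)               # vertex indices of each triangle
    # mesh.find_simplex((u, v)) locates the triangle containing a pixel,
    # whose three support points then drive the plane fit of claim 5.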
  5. The depth map creation method according to claim 4, wherein the process of calculating the disparities of the pixels located inside the Delaunay triangular mesh comprises:
    calculating, by using a preset disparity formula, the disparity of each pixel located inside the Delaunay triangular mesh, the preset disparity formula being

        [equation image PCTCN2017120331-appb-100002: d_p obtained from the fitted plane value a·u_p + b·v_p + c together with a random number (equation images PCTCN2017120331-appb-100003 and appb-100004) drawn from an interval determined by h]

    where d_p denotes the disparity of a pixel p inside the Delaunay triangular mesh, (u_p, v_p) denotes the coordinates of pixel p, a, b, and c denote the coefficients obtained by fitting a plane to the support points of the Delaunay triangle in which pixel p is located, and h denotes the minimum support distance between pixel p and its three adjacent support points.
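The plane-fitting part of claim 5's formula can be sketched as follows: solving d = a·u + b·v + c through the triangle's three support points yields the coefficients a, b, c, after which the disparity of an interior pixel follows by evaluation; the random-number term is omitted because its interval is given only as an image in the source.

    import numpy as np

    def fit_disparity_plane(support_pts):
        # support_pts: the triangle's three support points as (u, v, d) rows.
        # Solves d = a*u + b*v + c for the plane coefficients (a, b, c).
        pts = np.asarray(support_pts, dtype=np.float64)
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(3)])
        return np.linalg.solve(A, pts[:, 2])

    a, b, c = fit_disparity_plane([(10, 10, 5.0), (40, 12, 6.5), (22, 45, 7.2)])
    u_p, v_p = 25, 20
    d_p = a * u_p + b * v_p + c        # plane value; random term omitted
    print(round(d_p, 3))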
  6. An image blurring method, comprising:
    acquiring a depth map obtained by the method according to any one of claims 1 to 5; and
    performing image blurring processing by using the depth map to obtain a blurred image.
  7. The image blurring method according to claim 6, wherein the process of performing image blurring processing by using the depth map comprises:
    determining a focus area on the depth map;
    obtaining a blurring coefficient of each pixel in the depth map by using the depth information of the depth map in combination with a preset function:

        [equation image PCTCN2017120331-appb-100005: the preset function giving the blurring coefficient C_i in terms of z_i, f, the average focus depth, Z_near, Z_far, and w]

    where C_i denotes the blurring coefficient of the i-th pixel, z_i denotes the depth value of the i-th pixel, f denotes the focal length, z̄ (equation image PCTCN2017120331-appb-100006) denotes the average depth value over the focus area, Z_far denotes the maximum depth value over the focus area, Z_near denotes the minimum depth value over the focus area, and w denotes an adjustment coefficient; and
    performing image blurring processing on a target pixel point set by using a preset blurring formula to obtain the blurred image, where the target pixel point set is the set of pixels whose depth values lie in the range [Z_near, Z_far], and the preset blurring formula is the disc average

        Ī(x, y) = (1 / (m × n)) · Σ_{(i,j) ∈ circle(x,y)} I_{i,j}

    where m × n denotes the number of pixels inside the circle centered at the coordinates (x, y) of a target pixel whose radius is that pixel's blurring coefficient, the target pixel is any pixel of the target pixel point set, Ī(x, y) denotes the pixel value of the target pixel after the blurring process, and I_{i,j} denotes the pixel value of pixel (i, j) inside the circle before the blurring process.
  8. A depth map creation system, comprising:
    an image acquisition module, configured to acquire a first image and a second image obtained by a binocular shooting system photographing a target scene;
    a feature point determining module, configured to determine feature points of the first image and of the second image, respectively, to obtain a first set of feature points and a second set of feature points;
    a support point determining module, configured to use the first set of feature points and the second set of feature points, respectively, to determine support points of the first image and of the second image accordingly, obtaining a first set of support points and a second set of support points;
    a matching module, configured to perform dense stereo matching on the first image and the second image based on the first set of support points and the second set of support points, to determine a disparity between the first image and the second image; and
    a depth map determining module, configured to determine a depth map corresponding to the target scene by using the disparity.
  9. An image blurring system, comprising:
    a depth map acquisition module, configured to acquire a depth map created by the depth map creation system according to claim 8; and
    an image blurring module, configured to perform image blurring processing by using the depth map to obtain a blurred image.
  10. The image blurring system according to claim 9, wherein the image blurring module comprises:
    a focus area determining unit, configured to determine a focus area on the depth map;
    a blurring coefficient calculating unit, configured to obtain a blurring coefficient of each pixel in the depth map by using the depth information of the depth map in combination with a preset function:

        [equation image PCTCN2017120331-appb-100009: the preset function giving the blurring coefficient C_i in terms of z_i, f, the average focus depth, Z_near, Z_far, and w]

    where C_i denotes the blurring coefficient of the i-th pixel, z_i denotes the depth value of the i-th pixel, f denotes the focal length, z̄ (equation image PCTCN2017120331-appb-100010) denotes the average depth value over the focus area, Z_far denotes the maximum depth value over the focus area, Z_near denotes the minimum depth value over the focus area, and w denotes an adjustment coefficient; and
    a blurring processing unit, configured to perform image blurring processing on a target pixel point set by using a preset blurring formula to obtain the blurred image, where the target pixel point set is the set of pixels whose depth values lie in the range [Z_near, Z_far], and the preset blurring formula is the disc average

        Ī(x, y) = (1 / (m × n)) · Σ_{(i,j) ∈ circle(x,y)} I_{i,j}

    where m × n denotes the number of pixels inside the circle centered at the coordinates (x, y) of a target pixel whose radius is that pixel's blurring coefficient, the target pixel is any pixel of the target pixel point set, Ī(x, y) denotes the pixel value of the target pixel after the blurring process, and I_{i,j} denotes the pixel value of pixel (i, j) inside the circle before the blurring process.
PCT/CN2017/120331 2017-05-19 2017-12-29 Depth map creation method and system and image blurring method and system WO2018209969A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710361218.3 2017-05-19
CN201710361218.3A CN107170008B (en) 2017-05-19 2017-05-19 Depth map creating method and system and image blurring method and system

Publications (1)

Publication Number Publication Date
WO2018209969A1 true WO2018209969A1 (en) 2018-11-22

Family

ID=59816214

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120331 WO2018209969A1 (en) 2017-05-19 2017-12-29 Depth map creation method and system and image blurring method and system

Country Status (2)

Country Link
CN (1) CN107170008B (en)
WO (1) WO2018209969A1 (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107170008B (en) * 2017-05-19 2019-12-24 成都通甲优博科技有限责任公司 Depth map creating method and system and image blurring method and system
CN107682639B (en) * 2017-11-16 2019-09-27 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
WO2019178717A1 (en) * 2018-03-19 2019-09-26 深圳配天智能技术研究院有限公司 Binocular matching method, visual imaging device and device with storage function
CN109600552B (en) * 2019-01-14 2024-06-18 广东省航空航天装备技术研究所 Image refocusing control method and system
CN109889724B (en) * 2019-01-30 2020-11-06 北京达佳互联信息技术有限公司 Image blurring method and device, electronic equipment and readable storage medium
CN113141495B (en) * 2020-01-16 2023-03-24 纳恩博(北京)科技有限公司 Image processing method and device, storage medium and electronic device
CN113077481B (en) * 2021-03-29 2022-12-09 上海闻泰信息技术有限公司 Image processing method and device, computer equipment and storage medium


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110070A1 (en) * 2008-11-06 2010-05-06 Samsung Electronics Co., Ltd. 3d image generation apparatus and method
CN101582171A (en) * 2009-06-10 2009-11-18 清华大学 Method and device for creating depth maps
CN101996399A (en) * 2009-08-18 2011-03-30 三星电子株式会社 Device and method for estimating parallax between left image and right image
WO2014199127A1 (en) * 2013-06-10 2014-12-18 The University Of Durham Stereoscopic image generation with asymmetric level of sharpness
CN106412421A (en) * 2016-08-30 2017-02-15 成都丘钛微电子科技有限公司 System and method for rapidly generating large-size multi-focused image
CN106447661A (en) * 2016-09-28 2017-02-22 深圳市优象计算技术有限公司 Rapid depth image generating method
CN107170008A (en) * 2017-05-19 2017-09-15 成都通甲优博科技有限责任公司 A kind of depth map creation method, system and image weakening method, system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109813334B (en) * 2019-03-14 2023-04-07 西安工业大学 Binocular vision-based real-time high-precision vehicle mileage calculation method
CN111815709A (en) * 2019-04-10 2020-10-23 四川大学 Unit attitude multi-image-plane three-dimensional reconstruction method based on common digital camera
CN111815709B (en) * 2019-04-10 2023-04-21 四川大学 Single-pose multi-image-plane three-dimensional reconstruction method based on common digital camera
GB2583774A (en) * 2019-05-10 2020-11-11 Robok Ltd Stereo image processing
GB2583774B (en) * 2019-05-10 2022-05-11 Robok Ltd Stereo image processing
CN112686937A (en) * 2020-12-25 2021-04-20 杭州海康威视数字技术股份有限公司 Depth image generation method, device and equipment
CN112686937B (en) * 2020-12-25 2024-05-31 杭州海康威视数字技术股份有限公司 Depth image generation method, device and equipment

Also Published As

Publication number Publication date
CN107170008B (en) 2019-12-24
CN107170008A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
WO2018209969A1 (en) Depth map creation method and system and image blurring method and system
CN110807809B (en) Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN104966270B (en) A kind of more image split-joint methods
WO2018127007A1 (en) Depth image acquisition method and system
WO2018120038A1 (en) Method and device for target detection
CN111368717B (en) Line-of-sight determination method, line-of-sight determination device, electronic apparatus, and computer-readable storage medium
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
WO2020119467A1 (en) High-precision dense depth image generation method and device
WO2022135588A1 (en) Image correction method, apparatus and system, and electronic device
CN111107337B (en) Depth information complementing method and device, monitoring system and storage medium
CN107274483A (en) A kind of object dimensional model building method
CN103440653A (en) Binocular vision stereo matching method
CN106296811A (en) A kind of object three-dimensional reconstruction method based on single light-field camera
CN106952247B (en) Double-camera terminal and image processing method and system thereof
CN104240229B (en) A kind of adaptive method for correcting polar line of infrared binocular camera
CN106846249A (en) A kind of panoramic video joining method
CN102075785A (en) Method for correcting wide-angle camera lens distortion of automatic teller machine (ATM)
EP3026631A1 (en) Method and apparatus for estimating depth of focused plenoptic data
CN106558038B (en) A kind of detection of sea-level and device
WO2021142843A1 (en) Image scanning method and device, apparatus, and storage medium
CN106888344A (en) Camera module and its inclined acquisition methods of image planes and method of adjustment
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
WO2018133027A1 (en) Grayscale constraint-based method and apparatus for integer-pixel search for three-dimensional digital speckle pattern
CN104104911B (en) Timestamp in panoramic picture generating process is eliminated and remapping method and system
CN107610216B (en) Particle swarm optimization-based multi-view three-dimensional point cloud generation method and applied camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910471

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910471

Country of ref document: EP

Kind code of ref document: A1