WO2020181509A1 - Image processing method, device and system - Google Patents

Image processing method, device and system

Info

Publication number
WO2020181509A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
feature point
target
feature points
Prior art date
Application number
PCT/CN2019/077898
Other languages
English (en)
French (fr)
Inventor
邓凯强
梁家斌
宋孟肖
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201980005051.4A priority Critical patent/CN111247563A/zh
Priority to PCT/CN2019/077898 priority patent/WO2020181509A1/zh
Publication of WO2020181509A1 publication Critical patent/WO2020181509A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the invention relates to the field of image processing, and in particular to an image processing method, device and system.
  • the 3D reconstruction method based on UAV image sequences can replace traditional airborne surveys, ground surveys and other inefficient 3D spatial information acquisition methods.
  • Three-dimensional reconstruction based on UAV images can use the Structure from Motion (SFM) method. This method detects and matches the feature points in the images to reconstruct three-dimensional spatial information.
  • the current 3D reconstruction process has problems such as large memory usage and low operating efficiency.
  • the embodiment of the present invention provides an image processing method, which can reduce the memory occupancy rate and improve the operating efficiency.
  • an embodiment of the present invention provides an image processing method, including:
  • the vertex degree of each feature point of each image is determined according to the correspondence between the feature points of multiple images; the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images.
  • each image is divided into a grid to obtain the number of grid cells of each image.
  • a target feature point set is determined from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
  • an embodiment of the present invention provides an image processing device including a memory and a processor
  • the memory is used to store program codes
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • the vertex degree of each feature point of each image is determined according to the correspondence between the feature points of multiple images; the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images.
  • each image is divided into a grid to obtain the number of grid cells of each image.
  • a target feature point set is determined from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
  • an embodiment of the present invention provides an image processing system, including:
  • a movable platform, used to capture multiple images through a shooting camera;
  • an image processing device, used to perform the following operations based on the multiple images:
  • the vertex degree of each feature point of each image is determined according to the correspondence between the feature points of the multiple images; the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images.
  • each image is divided into a grid to obtain the number of grid cells of each image.
  • a target feature point set is determined from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
  • in the embodiments of the present invention, the vertex degree of each feature point of each image is determined according to the correspondence between the feature points of multiple images; each image is divided into a grid to obtain the number of grid cells of each image; and the target feature point set is determined according to the number of grid cells of each image and the vertex degree of each feature point of each image.
  • the implementation of the embodiments of the present invention can reduce the number of feature points participating in the SFM algorithm, thereby reducing the calculation scale of the SFM algorithm and improving the operating efficiency.
  • FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present invention
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the correspondence between feature points of multiple images according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for determining a target feature point set provided by an embodiment of the present invention
  • FIG. 5 is a flowchart of another method for determining a target feature point set according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of another method for determining a target feature point set according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of another image processing method provided by an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of an image processing device provided by an embodiment of the present invention.
  • an embodiment of the present invention proposes an image processing method that can be applied to an image processing system and can determine the target feature point set based on the number of grid cells of each image and the vertex degree of each feature point of each image.
  • in other words, the image processing method described in the embodiments of the present invention can reduce the number of feature points, thereby reducing the memory occupation of SFM and improving the operating efficiency of SFM.
  • FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present invention.
  • the system includes an image processing device 11 and a movable platform 12; the movable platform 12 may include, but is not limited to, a drone, an unmanned vehicle, or a mobile robot, and a shooting camera 13 can be mounted on the movable platform to capture images.
  • Figure 1 uses a drone as an example.
  • the movable platform 12 can acquire a plurality of images through the shooting camera 13, and the acquired multiple images are processed by the image processing device 11 to reconstruct three-dimensional spatial information.
  • the reconstruction of the three-dimensional spatial information can use the Structure from Motion (SFM) algorithm.
  • the principle of the SFM algorithm is to use the feature points of multiple images and the correspondence between the feature points of multiple images to estimate the position and posture of the shooting camera and three-dimensional space information, as shown in Figure 2.
  • the main steps of the SFM algorithm include:
  • S201: Acquire the feature points of multiple images and the correspondence between the feature points of the multiple images;
  • S202: Estimate the position and attitude of the shooting camera and the three-dimensional spatial information according to the feature points of the multiple images and the correspondence between them;
  • S203: Optimize the position and attitude of the shooting camera and the three-dimensional spatial information using bundle adjustment.
  • bundle adjustment is the core of SFM; it is essentially a nonlinear least-squares optimization problem.
  • by optimally fitting the initial position and initial attitude of the shooting camera together with the three-dimensional points corresponding to the feature points, the three-dimensional spatial information and the position and attitude of the camera are optimized to be closer to the real values.
  • feature points are points in the image that have distinct characteristics, can effectively reflect the essential features of the image, and can identify the target object in the image.
  • the feature points in the image can be obtained through different feature point detection methods.
  • common feature point detection methods include Features from Accelerated Segment Test (FAST), Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF).
  • there may be a correspondence between the feature points of multiple images: if image 1 and image 2 capture the same three-dimensional point X from different angles, feature point x1 in image 1 and feature point x2 in image 2 are the projections of X and therefore correspond to each other. Using the feature points and this correspondence, the position and attitude of the shooting camera and the three-dimensional spatial information can be estimated. It can be understood that the optical center C1 of the shooting camera shown in FIG. 2 represents the position of the shooting camera when image 1 was taken, and the optical center C2 represents its position when image 2 was taken.
  • when current 3D reconstruction algorithms use multiple images for 3D reconstruction, they usually extract far more feature points than the SFM algorithm actually needs and then add all of the feature points to bundle adjustment for optimization.
  • this approach makes the computation scale of the SFM algorithm excessively large and seriously reduces operating efficiency; therefore, how to reasonably reduce the computation scale of the SFM algorithm and improve operating efficiency has become an urgent problem.
  • an embodiment of the present invention provides an image processing method, which can be applied to the image processing system shown in FIG. 1, and the image processing method may include the following steps:
  • S401: Determine the vertex degree of each feature point of each image according to the correspondence between the feature points of the multiple images, where the vertex degree indicates the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images.
  • the correspondence between the feature points of the multiple images may be obtained based on the feature descriptors of the feature points.
  • first, the image processing device can use a feature point detection method to obtain the feature points in each image and the feature descriptor of each feature point; then, the correspondence between two feature points is determined based on the distance between their feature descriptors.
  • the distance between feature descriptors can be a Euclidean distance or a Hamming distance.
  • for example, a float-type descriptor can be regarded as a high-dimensional vector, and the Euclidean distance between two such vectors is calculated to determine whether the two feature points correspond to each other.
  • as another example, for bit-type descriptors, the Hamming distance between two vectors can be calculated to determine whether two feature points correspond to each other.
  • taking FIG. 3 as an example, 128-bit descriptors are used to describe the feature points in image 1 and image 2.
  • the feature descriptor of feature point x1 in image 1 is (11111…111), and the feature descriptor of feature point x2 in image 2 is (11111…000).
  • the distance between the two descriptors can be calculated as (00000…111); if this distance is less than a preset threshold, feature point x1 and feature point x2 correspond to the same scene three-dimensional point X.
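The bit-descriptor comparison above can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation; the function names, the 128-bit values, and the threshold of 16 are assumptions made for the example.

```python
# Toy sketch of bit-type descriptor matching by Hamming distance,
# with descriptors stored as Python ints. Threshold is illustrative.

def hamming_distance(d1: int, d2: int) -> int:
    """Number of differing bits between two bit-type descriptors."""
    return bin(d1 ^ d2).count("1")

def descriptors_match(d1: int, d2: int, max_distance: int = 16) -> bool:
    """Two feature points correspond when their descriptor distance
    is below a preset threshold."""
    return hamming_distance(d1, d2) < max_distance

# 128-bit descriptors, mirroring the (11111...111) / (11111...000) example:
x1 = (1 << 128) - 1            # all 128 bits set
x2 = ((1 << 128) - 1) ^ 0b111  # same, except the lowest 3 bits cleared

print(hamming_distance(x1, x2))   # differing bits: 3
print(descriptors_match(x1, x2))  # 3 < 16 -> True
```

With a Euclidean metric the structure is identical; only the distance function changes.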
  • step S401 determining the vertex degree of each feature point of each image according to the correspondence between the feature points of the multiple images may include the following steps:
  • S4011 Acquire feature descriptors of feature points of multiple images
  • S4012 Determine the vertex degree of each feature point of each image according to the distance between feature descriptors of the feature points of the multiple images.
  • determining the vertex degree of each feature point of each image according to the distances between the feature descriptors may include: when the distance between the descriptor of any feature point in the multiple images and the descriptors of N other feature points is less than a preset distance threshold, the vertex degree of that feature point is determined to be N+1. Taking FIG. 3 as an example, if the distance between the feature descriptor of feature point x1 in image 1 and that of feature point x2 in image 2 is less than the preset threshold, the vertex degrees of feature points x1 and x2 are both 2.
  • the N+1 feature points may be feature points with the same name.
  • the vertex degree can be used to indicate the number of times that the three-dimensional points corresponding to the feature points are extracted as feature points in multiple images. Therefore, all images to be processed can be traversed to determine the vertex degree of each feature point. Specifically, when the distance between any feature point in the plurality of images and the feature descriptor of the N feature points is less than a preset distance threshold, it is determined that the vertex degree of any feature point is N+1.
  • for example, for feature point x1 in image 1, the distance between the feature descriptor of x1 and the feature descriptors of all feature points in all images to be processed other than image 1 is calculated; if the distances between the descriptor of x1 and the descriptors of N feature points are all less than the preset distance threshold, it can be determined that the number of same-name feature points of x1 is N+1, and accordingly the vertex degree of x1 is N+1.
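The N+1 rule above can be sketched as follows, assuming descriptors are stored per image as integer bit-descriptors and compared by Hamming distance. The data layout, threshold, and descriptor values are invented for illustration only.

```python
# Sketch of the vertex-degree rule: a feature point whose descriptor lies
# within the distance threshold of N feature points in the other images
# gets vertex degree N + 1 (the point itself counts once).

def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two bit-type descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def vertex_degree(image_idx, feat_idx, images, threshold):
    """images: list of per-image descriptor lists."""
    desc = images[image_idx][feat_idx]
    n = 0
    for j, feats in enumerate(images):
        if j == image_idx:
            continue  # compare only against feature points of other images
        n += sum(1 for other in feats if hamming(desc, other) < threshold)
    return n + 1

images = [
    [0b1111, 0b10100000],  # image 1
    [0b1110, 0b01010000],  # image 2
    [0b0111, 0b10101010],  # image 3
]
# Descriptor 0b1111 is within distance 3 of 0b1110 and 0b0111 -> N = 2:
print(vertex_degree(0, 0, images, threshold=3))  # 3
```

A real pipeline would match descriptors with an index structure rather than this brute-force double loop, but the counting rule is the same.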
  • S402 Perform grid division on each image to obtain the number of grids in each image.
  • optionally, each image can be divided with a uniform grid or a non-uniform grid.
  • that is, the numbers of grid cells of the different images may be equal or unequal.
  • for example, if image 1 is evenly divided into 100*100 grid cells, the number of grid cells of image 1 is 10,000; if image 2 is evenly divided into 80*80 grid cells, the number of grid cells of image 2 is 6,400.
  • as another example, the number of grid cells in image 1 may be 10,000 while the number of grid cells in image 2 is 6,400.
  • each image is divided into a grid so that the feature points in each image are distributed among the grid cells; this makes it convenient to filter the feature points within each cell in the subsequent screening step.
  • this embodiment takes into account that the number of feature points in each image is very large and that the feature points may be unevenly distributed: if a region of the image contains weak or repeated textures, that region has fewer feature points, whereas a richly textured region has more feature points. Determining the target feature point set on a per-grid basis therefore helps ensure that the selected feature points effectively reflect the essential characteristics of the image.
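Uniform grid division as in S402 can be sketched as follows. The image dimensions, cell counts, and point coordinates are illustrative assumptions, not values from the patent.

```python
# Sketch of uniform grid division: map each feature point's pixel
# coordinates to a grid-cell index (a 100*100 grid gives 10,000 cells,
# as in the text).

def grid_cell(x, y, width, height, cols, rows):
    """Map pixel coordinates (x, y) to the (col, row) index of a uniform grid."""
    col = min(int(x * cols / width), cols - 1)
    row = min(int(y * rows / height), rows - 1)
    return col, row

def bucket_features(points, width, height, cols, rows):
    """Group feature points by grid cell: {(col, row): [points...]}."""
    cells = {}
    for pt in points:
        key = grid_cell(pt[0], pt[1], width, height, cols, rows)
        cells.setdefault(key, []).append(pt)
    return cells

points = [(10, 10), (15, 12), (3990, 2990)]
cells = bucket_features(points, width=4000, height=3000, cols=100, rows=100)
print(len(cells))         # (10,10) and (15,12) share a cell -> 2 occupied cells
print((99, 99) in cells)  # True
```

A non-uniform grid would only change `grid_cell`; the per-cell bucketing that the later screening step relies on stays the same.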
  • S403 Determine a target feature point set from the feature points of the multiple images according to the number of grids of each image and the vertex degree of each feature point of each image.
  • when the number of feature points of each image is greater than the number of grid cells of each image, S403 may include the following steps:
  • S4031: For each grid cell in each image, retain the feature point with the largest vertex degree in that cell.
  • specifically, for an image, determine whether its number of feature points is greater than its number of grid cells; if so, among the feature points in each grid cell of the image, retain the one with the largest vertex degree, and the feature points retained from each image constitute a feature point set. For example, if image 1 is evenly divided into a 2*2 grid, the number of grid cells of image 1 is 4.
  • suppose image 1 includes 6 feature points P11, P12, …, P16, where grid cell 1 contains feature point P11, cell 2 contains P12 and P13, cell 3 contains P14, and cell 4 contains P15 and P16.
  • since the number of feature points in image 1 is greater than the number of grid cells, for each cell of image 1 the point with the largest vertex degree is retained.
  • since grid cells 1 and 3 each contain only one feature point, all feature points of cells 1 and 3 are retained;
  • cell 2 contains two feature points, P12 and P13; comparing their vertex degrees and assuming the vertex degree of P12 is greater than that of P13, cell 2 retains feature point P12;
  • cell 4 contains two feature points, P15 and P16; comparing their vertex degrees and assuming the vertex degree of P16 is greater than that of P15, cell 4 retains feature point P16.
  • in summary, the set of feature points retained by image 1 is {P11, P12, P14, P16}.
  • it can be understood that the larger a feature point's vertex degree, the more often its corresponding spatial three-dimensional point is extracted as a feature point across the multiple images; such feature points are therefore more reliable, and recovering three-dimensional points from a set of feature points with large vertex degrees is more accurate.
  • in addition, retaining the feature point with the largest vertex degree in each grid cell makes the determined target feature point set evenly distributed over the image, avoiding the situation where some images have very many feature points while others have very few; this keeps the overall solution from falling into a local optimum and improves the overall accuracy of the SFM solution.
  • for multiple images to be processed, if the number of feature points in an image is greater than the number of grid cells of the image, the method in S4031 can be used to retain the feature point with the largest vertex degree in each grid cell of each image.
  • the feature points retained from each image constitute a feature point set, so multiple images yield multiple feature point sets; the union of these feature point sets is taken to obtain the target feature point set.
  • for example, if the set of feature points retained by image 1 is {P11, P12, P14, P15} and the set retained by image 2 is {P21, P22, P24, P25}, then the target feature point set is {P11, P12, P14, P15, P21, P22, P24, P25}.
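Steps S4031/S4032 can be sketched as below, reusing the P11…P16 example from the text; the cell assignments and vertex-degree values are invented for illustration, and image 2 is shortened to two points for brevity.

```python
# Sketch of S4031 (keep the max-vertex-degree point per grid cell) and
# S4032 (union of the per-image retained sets). All values illustrative.

def select_per_image(features):
    """features: list of (name, cell_id, vertex_degree) tuples for one image."""
    best = {}
    for name, cell, degree in features:
        if cell not in best or degree > best[cell][1]:
            best[cell] = (name, degree)  # keep the highest degree per cell
    return {name for name, _ in best.values()}

image1 = [("P11", 1, 3), ("P12", 2, 5), ("P13", 2, 2),
          ("P14", 3, 4), ("P15", 4, 2), ("P16", 4, 6)]
image2 = [("P21", 1, 2), ("P22", 2, 3)]

# Step S4032: union of the per-image retained sets.
target = select_per_image(image1) | select_per_image(image2)
print(sorted(target))  # ['P11', 'P12', 'P14', 'P16', 'P21', 'P22']
```

Note that a cell with a single feature point keeps it unconditionally, matching the behavior described for cells 1 and 3 above.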
  • when the number of feature points of each image is smaller than the number of grid cells of each image, S403 may include the following steps:
  • S4033: For each grid cell in each image, retain all feature points in that cell.
  • S4034: Take the union of the feature points retained from the multiple images as the target feature point set.
  • specifically, when the number of feature points of an image is smaller than its number of grid cells, all feature points included in the image are retained.
  • FIG. 7 is a schematic flowchart of another image processing method provided by an embodiment of the present invention.
  • based on the target feature point set obtained in the embodiments shown in FIG. 4 to FIG. 6, the method explains how to recover the position and attitude of the shooting camera at the moment of shooting and how to obtain more accurate three-dimensional spatial information.
  • the image processing method may further include the following steps S701-S703:
  • S701 Acquire the initial position and initial posture of the shooting camera.
  • the initial position and initial attitude of the shooting camera can be obtained from the Global Positioning System (GPS) positioning information recorded with the images, and/or from the relative position information of the shooting camera obtained by matching same-name feature points. It can be understood that the initial position and attitude obtained in this way usually contain errors relative to the camera's actual position and attitude.
  • S702 Determine an initial three-dimensional point set according to the initial position and initial posture of the shooting camera and the target feature point set.
  • a group of same-name feature points in the target feature point set are the projections of one spatial three-dimensional point into different images, so one three-dimensional point can be determined in space from the initial position and attitude of the shooting camera and a group of same-name feature points. For a target feature point set containing multiple groups of same-name feature points, multiple three-dimensional points can be determined in space, and these points constitute the initial three-dimensional point set. Since the initial position and attitude of the shooting camera contain errors, the three-dimensional points in the initial set also deviate from the actual three-dimensional points in space.
  • S703 Fit the initial position and initial posture of the photographing camera, the target feature point set and the initial three-dimensional point set to obtain the target position and posture of the photographing camera and the target three-dimensional point set.
  • the above step is the core step of bundle adjustment.
  • bundle adjustment is essentially a nonlinear least-squares optimization problem: by optimally fitting the initial position and initial attitude of the shooting camera, the target feature point set, and the initial three-dimensional point set, the target three-dimensional point set and the target position and target attitude of the shooting camera are obtained. It can be understood that the target position and target attitude approximate the actual position and attitude of the shooting camera in space.
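As a toy illustration of the iterative nonlinear least-squares machinery that bundle adjustment relies on, the one-parameter Gauss-Newton loop below fits y = exp(a*x) to data. It shows only the residual/Jacobian/update structure of such solvers, not the actual multi-view reprojection cost; the model and all values are assumptions for the example.

```python
import math

# One-parameter Gauss-Newton: residual r_i = exp(a*x_i) - y_i,
# Jacobian J_i = x_i * exp(a*x_i), update a -= sum(J*r) / sum(J*J).

def gauss_newton(xs, ys, a=0.0, iters=20):
    for _ in range(iters):
        residuals = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        jac = [x * math.exp(a * x) for x in xs]  # d(residual)/da
        num = sum(j * r for j, r in zip(jac, residuals))
        den = sum(j * j for j in jac)
        a -= num / den  # normal-equation step for a single parameter
    return a

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.5 * x) for x in xs]  # noise-free data with a = 0.5
print(round(gauss_newton(xs, ys), 6))  # converges to 0.5
```

Real bundle adjustment solves the same kind of problem over camera poses and 3D points jointly, which is why shrinking the target feature point set shrinks the optimization so directly.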
  • Table 1 compares the memory usage, running time and number of iterations of the above two schemes.
  • data set 1 is an orthographic data set in which all images were taken pointing vertically downward (90 degrees), 137 images in total;
  • data set 2 is an oblique data set, containing the vertically downward images and images taken obliquely in four directions, 269 images in total.
  • with the image processing method of the present application, the peak memory for data set 1 is reduced from 14.78 GB to 7.59 GB compared with the existing scheme that does not determine a target feature point set, a reduction of nearly one half; for data set 2, the peak memory is reduced from 30.06 GB to 10.80 GB, a reduction of nearly two thirds. It can be seen that the image processing method of the present application can largely resolve the memory bottleneck of the SFM algorithm.
  • the performance improvement in other aspects of the image processing method described in this application is also obvious.
  • the running time for data set 1 is reduced from 199 seconds to 32 seconds;
  • the running time for data set 2 is reduced from 418 seconds to 75 seconds. Both data sets thus see a nearly five-fold improvement in operating efficiency.
  • since bundle adjustment is essentially a nonlinear least-squares optimization problem, it is generally solved iteratively; therefore, the fewer the iterations, the better the data convergence and the more accurate the adjustment result.
  • the number of iterations for data set 1 is reduced from 90 to 38, and for data set 2 from 130 to 64; in both cases the iteration count is roughly halved or better.
  • in summary, the image processing method provided by the embodiments of the present invention determines the target position and target attitude of the shooting camera and the target three-dimensional point set by bundle adjustment based on the target feature point set;
  • using the filtered target feature point set reduces the computation scale of bundle adjustment, lowers the memory usage of SFM, and improves the operating efficiency of the algorithm.
  • the embodiment of the present invention also provides an image processing device, which can execute the corresponding steps in the above image processing method.
  • the image processing device includes a memory 801 and a processor 802; the memory 801 is used to store program codes; the processor 802 calls the program codes, and when the program codes are executed, they are used to perform the following operations:
  • the vertex degree of each feature point of each image is determined according to the correspondence between the feature points of multiple images; the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images.
  • each image is divided into a grid to obtain the number of grid cells of each image.
  • a target feature point set is determined from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
  • the processor 802 is further configured to perform the remaining steps of the image processing methods described above, for example: determining the vertex degree of each feature point of each image according to the distances between feature descriptors; retaining, for each grid cell, the feature point with the largest vertex degree; and determining the target position and target attitude of the shooting camera and the target three-dimensional point set by bundle adjustment.
  • the embodiment of the present invention provides an image processing device that can determine a target feature point set according to the number of grid cells of each image and the vertex degree of each feature point of each image, which can reduce the number of feature points, thereby reducing the memory footprint of SFM and improving the operating efficiency of SFM.
  • an embodiment of the present invention further provides a computer-readable storage medium that stores a computer program.
  • when the computer program is executed by a processor, the image processing methods of the embodiments corresponding to FIG. 4 to FIG. 7 are implemented; the program can also realize the functions of the image processing device described in FIG. 8, which will not be repeated here.
  • the computer-readable storage medium may be an internal storage unit of the device described in any of the foregoing embodiments, such as a hard disk or memory of the device.
  • the computer-readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device.
  • the computer-readable storage medium may also include both an internal storage unit of the device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the terminal.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
  • all or part of the processes of the above method embodiments can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention provides an image processing method, including: determining, according to the correspondence between feature points of multiple images, the vertex degree of each feature point of each image, where the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images; dividing each image into a grid to obtain the number of grid cells of each image; and determining a target feature point set from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image. The method provided by the embodiments of the present invention can reduce the number of feature points participating in the SFM algorithm, thereby reducing the computation scale of the SFM algorithm and improving operating efficiency.

Description

Image processing method, device and system
Technical Field
The present invention relates to the field of image processing, and in particular to an image processing method, device and system.
Background
Since unmanned aerial vehicles (UAVs) offer flexible data collection and strong timeliness, 3D reconstruction methods based on UAV image sequences can replace traditional, less timely ways of acquiring 3D spatial information such as airborne surveys and ground surveys. 3D reconstruction based on UAV images can use the Structure from Motion (SFM) method, which detects and matches feature points in the images to reconstruct 3D spatial information. However, the current 3D reconstruction process suffers from problems such as high memory usage and low operating efficiency.
Summary of the Invention
Embodiments of the present invention provide an image processing method that can reduce memory occupancy and improve operating efficiency.
In one aspect, an embodiment of the present invention provides an image processing method, including:
determining, according to the correspondence between feature points of multiple images, the vertex degree of each feature point of each image, where the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images;
dividing each image into a grid to obtain the number of grid cells of each image;
determining a target feature point set from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
In another aspect, an embodiment of the present invention provides an image processing device, including a memory and a processor;
the memory is used to store program code;
the processor calls the program code and, when the program code is executed, performs the following operations:
determining, according to the correspondence between feature points of multiple images, the vertex degree of each feature point of each image, where the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images;
dividing each image into a grid to obtain the number of grid cells of each image;
determining a target feature point set from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
In another aspect, an embodiment of the present invention provides an image processing system, including:
a movable platform, used to acquire multiple images through a shooting camera;
an image processing device, used to perform the following operations based on the multiple images:
determining, according to the correspondence between feature points of multiple images, the vertex degree of each feature point of each image, where the vertex degree is used to indicate the number of times the spatial three-dimensional point corresponding to the feature point is extracted as a feature point in the multiple images;
dividing each image into a grid to obtain the number of grid cells of each image;
determining a target feature point set from the feature points of the multiple images according to the number of grid cells of each image and the vertex degree of each feature point of each image.
In the embodiments of the present invention, the vertex degree of each feature point of each image is determined according to the correspondence between the feature points of multiple images; each image is divided into a grid to obtain the number of grid cells of each image; and the target feature point set is determined according to the number of grid cells of each image and the vertex degree of each feature point of each image. Implementing the embodiments of the present invention can reduce the number of feature points participating in the SFM algorithm, thereby reducing the computation scale of the SFM algorithm and improving operating efficiency.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the correspondence between feature points of multiple images provided by an embodiment of the present invention;
FIG. 4 is a flowchart of a method for determining a target feature point set provided by an embodiment of the present invention;
FIG. 5 is a flowchart of another method for determining a target feature point set provided by an embodiment of the present invention;
FIG. 6 is a flowchart of another method for determining a target feature point set provided by an embodiment of the present invention;
FIG. 7 is a flowchart of another image processing method provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of an image processing device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Some implementations of the present invention are described in detail below with reference to the drawings. Where no conflict arises, the following embodiments and the features within them may be combined with one another.
To solve the prior-art problems that the SFM algorithm extracts a large number of feature points and runs inefficiently, an embodiment of the present invention proposes an image processing method that can be applied to an image processing system and can determine a target feature point set according to the number of grid cells of each image and the vertex degree of each feature point of each image. In other words, the image processing method described in the embodiments of the present invention can reduce the number of feature points, thereby reducing the memory occupation of SFM and improving its operating efficiency. The relevant content of the embodiments of the present invention is described below with reference to the drawings.
The image processing method of the embodiments of the present invention is described in relative detail below in combination with the image processing system shown above.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present invention. The system includes an image processing device 11 and a movable platform 12. The movable platform 12 may include, but is not limited to, a drone, an unmanned vehicle, or a mobile robot; a shooting camera 13 can be mounted on the movable platform to capture images. FIG. 1 uses a drone as an example. The movable platform 12 can acquire multiple images through the shooting camera 13, and the acquired images are processed by the image processing device 11 to reconstruct three-dimensional spatial information.
The three-dimensional spatial information can be reconstructed with the Structure from Motion (SFM) algorithm. The principle of the SFM algorithm is to use the feature points of multiple images and the correspondence between them to estimate the position and attitude of the shooting camera and the three-dimensional spatial information. As shown in FIG. 2, the main steps of the SFM algorithm include:
S201: Acquire the feature points of multiple images and the correspondence between the feature points of the multiple images;
S202: Estimate the position and attitude of the shooting camera and the three-dimensional spatial information according to the feature points of the multiple images and the correspondence between them;
S203: Optimize the position and attitude of the shooting camera and the three-dimensional spatial information using bundle adjustment.
Bundle adjustment is the core of SFM; it is essentially a nonlinear least-squares optimization problem, which optimally fits the initial position and initial attitude of the shooting camera and the three-dimensional points corresponding to the feature points, so as to optimize the three-dimensional spatial information and the camera's position and attitude toward the real values.
It can be understood that feature points are points in an image that have distinct characteristics, effectively reflect the essential features of the image, and can identify target objects in the image. Feature points can be obtained with different detection methods; common ones include Features from Accelerated Segment Test (FAST), Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF).
There may be a correspondence between the feature points of multiple images. As shown in FIG. 3, image 1 and image 2 capture a three-dimensional point X from different angles; feature point x1 in image 1 is the projection of X in image 1, feature point x2 in image 2 is the projection of X in image 2, so x1 and x2 correspond to each other. Accordingly, if multiple images capture the same three-dimensional point X, the feature points x1, x2, …, xn of X in those images all correspond to one another. Using the feature points of multiple images and the correspondence between them, the position and attitude of the shooting camera and the three-dimensional spatial information can then be estimated. It can be understood that the optical center C1 of the shooting camera shown in FIG. 2 represents the camera's position when image 1 was taken, and the optical center C2 represents its position when image 2 was taken.
At present, when 3D reconstruction algorithms use multiple images for 3D reconstruction, they usually extract far more feature points than the SFM algorithm actually needs and add all of them to bundle adjustment for optimization. This makes the computation scale of the SFM algorithm excessively large and seriously reduces operating efficiency; how to reasonably reduce the computation scale of the SFM algorithm and improve operating efficiency has therefore become an urgent problem.
为了解决上述问题,本发明实施例提供一种影像处理方法,该影像处理方 法可以应用于图1所示的影像处理系统中,该影像处理方法可包括以下步骤:
S401,根据多个影像的特征点之间的对应关系,确定每个影像的每个特征点的顶点度数,所述顶点度数用于指示所述特征点对应的空间三维点在所述多个影像中被提取为特征点的次数。
本发明实施例中,多个影像的特征点之间的对应关系可以基于每个特征点之间的特征描述子来获得。首先,影像处理设备可以采用特征点检测方法,获取每个影像中的特征点,并且获取每个特征点的特征描述子;其次,基于各特征点的特征描述子之间的距离来确定两个特征点之间的对应关系。
其中,基于各特征点的特征描述子之间的距离可以为欧式距离或汉明距离。例如,对于float类型的特征描述子,可以将描述子视为一个高维度的向量,通过计算两个向量之间的欧式距离,来确定两个特征点是否相互对应。又例如,对于bit类型的特征描述子,可以通过计算两个向量之间的汉明距离来确定两个特征点是否相互对应。
以图3为例,采用128位的描述子对影像1和影像2中的特征点进行描述,影像1中的特征点x 1的特征描述子为(11111…111),影像2中的特征点x 2的特征描述子为(11111…000)。可以计算特征点x 1的特征描述子与特征点x 2的特征描述子之间的距离为(00000…111),若该距离小于预设的阈值,则特征点x 1和特征点x 2对应相同的场景三维点X。
在一种实施例中,步骤S401中,根据多个影像的特征点之间的对应关系,确定每个影像的每个特征点的顶点度数,可包括以下步骤:
S4011,获取多个影像的特征点的特征描述子;
S4012,根据多个影像的特征点的特征描述子之间的距离,确定所述每个影像的每个特征点的顶点度数。
其中,S4012中,根据所述多个影像的特征点的特征描述子之间的距离,确定每个影像的每个特征点的顶点度数,可包括:当所述多个影像中任一特征点与N个特征点的特征描述子的距离小于预设距离阈值时,则确定所述任一特征点的所述顶点度数为N+1。以图3为例,若影像1中的特征点x 1的特征描述子与影像2中的特征点x 2的特征描述子之间的距离小于预设的阈值,则特征点x 1和特征点x 2的顶点度数均为2。
可选地,当多个影像中任一特征点与N个特征点的特征描述子的距离小于预设距离阈值时,该N+1个特征点可以为同名特征点。相应地,顶点度数可用于指示特征点对应的空间三维点在多个影像中被提取为特征点的次数。因此,可遍历所有待处理的影像,进而确定每个特征点的顶点度数。具体的,当所述多个影像中任一特征点与N个特征点的特征描述子的距离小于预设距离阈值时,则确定所述任一特征点的所述顶点度数为N+1。例如,对于影像1中的特征点x 1,通过计算特征点x 1的特征描述子与除影像1之外的所有待处理影像中的所有特征点的特征描述子的距离,得到特征点x 1的特征描述子与N个特征点的特征描述子的距离均小于预设距离阈值,则可以确定该特征点x 1的同名特征点的数量为N+1,相应的,该特征点x 1的顶点度数为N+1。
S402,对所述每个影像进行格网划分,获得所述每个影像的格网数。
可选的,每个影像可以采用均匀格网划分,或非均匀格网划分。也就是说,每个影像的格网数相等或不等。
例如,影像1均匀划分为100*100个格网,则影像1的格网数为10000;又例如,影像2均匀划分为80*80个格网,则影像2的格网数为6400。又例如,影像1的格网数可为10000,影像2的格网数可为6400。
本发明实施例对每个影像进行格网划分,使每个影像中的特征点都分布于各格网中;在后续对特征点进行筛选时,有利于对每个格网中的特征点进行筛选。可见,该发明实施方式考虑到每个影像中的特征点数量非常大,并且各特征点在影像中可能是非均匀分布的,例如,若影像中的某一区域有较多的弱纹理或者重复纹理,则该区域的特征点较少;若影像中的某一区域的纹理比较丰富,则该区域的特征点较多,因此,采用基于格网的方式确定目标特征点集合,有利于使得筛选的特征点能够有效反映影像本质特征。
S403: Determine a target feature point set from the feature points of the multiple images according to the grid count of each image and the vertex degree of each feature point of each image.
Step S403 of the embodiment in Fig. 4 is further described below. Referring to Fig. 5, when the number of feature points of each image is greater than its grid count, S403 can include the following steps:
S4031: When the number of feature points of each image is greater than the grid count of that image, for each grid cell of each image, retain the feature point with the largest vertex degree in that cell.
Specifically, for an image, determine whether its feature point count exceeds its grid count; if so, for each grid cell of the image, retain the feature point with the largest vertex degree in that cell, and the points retained from each image form a feature point set. For example, suppose image 1 is uniformly divided into a 2*2 grid, so its grid count is 4, and image 1 contains six feature points P11, P12, ..., P16: cell 1 contains P11, cell 2 contains P12 and P13, cell 3 contains P14, and cell 4 contains P15 and P16. Since the feature point count of image 1 exceeds its grid count, the point with the largest vertex degree is retained in each cell. Cells 1 and 3 each contain only one feature point, so all their points are retained. Cell 2 contains P12 and P13; assuming the vertex degree of P12 is greater than that of P13, cell 2 retains P12. Cell 4 contains P15 and P16; assuming the vertex degree of P16 is greater than that of P15, cell 4 retains P16. In summary, the feature point set retained for image 1 is {P11, P12, P14, P16}. It should be understood that a feature point with a larger vertex degree corresponds to a 3D point that is extracted as a feature point in more images, so such a point is more reliable, and 3D points recovered from a set of high-degree feature points are more accurate. Moreover, retaining the highest-degree point in each cell makes the resulting target feature point set uniformly distributed over the image, avoiding the situation where some image regions have very many feature points while others have very few; this keeps the overall solution from falling into a local optimum and thus improves the overall accuracy of the SFM solution.
S4032: Take the union of the feature points retained from the multiple images as the target feature point set.
For multiple images to be processed, if an image's feature point count exceeds its grid count, the method of S4031 above can be applied to retain the highest-degree feature point in each grid cell of each image. The points retained from each image form one feature point set, so the multiple images yield multiple feature point sets; the union of these sets is the target feature point set. For example, if the set retained for image 1 is {P11, P12, P14, P16} and the set retained for image 2 is {P21, P22, P24, P25}, the target feature point set is {P11, P12, P14, P16, P21, P22, P24, P25}.
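Steps S4031-S4034 together can be sketched as follows. The data layout is an assumption of my own: each image is a pair of a `{point_id: (cell_index, vertex_degree)}` dict and the image's cell count:

```python
def select_target_points(images):
    """Keep the highest-degree point per cell when an image has more
    points than cells (otherwise keep all its points), then take the
    union of the per-image selections as the target feature point set."""
    target = set()
    for points, n_cells in images:
        if len(points) <= n_cells:
            target.update(points)          # fewer points than cells: keep all
            continue
        best = {}                          # cell_index -> (degree, point_id)
        for pid, (cell, degree) in points.items():
            if cell not in best or degree > best[cell][0]:
                best[cell] = (degree, pid)
        target.update(pid for _, pid in best.values())
    return target
```

Running it on the image-1 example above (cells 0..3, with P12 and P16 having the larger degrees in their cells) yields {P11, P12, P14, P16}.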
Step S403 of the embodiment in Fig. 4 is further described below. Referring to Fig. 6, when the number of feature points of each image is smaller than its grid count, S403 can include the following steps:
S4033: When the number of feature points of each image is smaller than the grid count of that image, for each grid cell of each image, retain all feature points in that cell.
S4034: Take the union of the feature points retained from the multiple images as the target feature point set.
Specifically, when an image's feature point count is smaller than its grid count, all feature points of that image are retained. It should be understood that the specific implementations of S4033 and S4034 can refer to the way each image retains feature points and the target feature point set is determined in S4031 and S4032 of the above embodiment, and are not repeated here.
Referring to Fig. 7, a schematic flowchart of another image processing method provided by an embodiment of the present invention: based on the target feature point set obtained in the embodiments of Figs. 4 to 6, it describes how to recover the position and attitude of the shooting camera at the moment of capture and obtain more accurate 3D spatial information. Specifically, the image processing method can further include the following steps S701-S703:
S701: Obtain the initial position and initial attitude of the shooting camera.
The initial position and attitude of the shooting camera can be obtained from Global Positioning System (GPS) information recorded with the images, and/or from the relative camera positions derived by matching homonymous feature points. It should be understood that the initial position and attitude obtained in this way usually deviate from the camera's true position and attitude.
S702: Determine an initial 3D point set according to the camera's initial position and initial attitude and the target feature point set.
A group of homonymous feature points in the target feature point set are the projections of one 3D point in space onto different images, so a 3D point can be determined in space from the camera's initial position and attitude together with one group of homonymous feature points. Accordingly, when the target feature point set contains multiple groups of homonymous points, multiple 3D points can be determined, and these constitute the initial 3D point set. Since the camera's initial position and attitude contain errors, the 3D points in the initial set also deviate from the actual 3D points in space.
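Determining a 3D point from one group of homonymous points and the camera poses is a triangulation. A minimal two-view linear (DLT) sketch, assuming ideal 3x4 projection matrices in normalized coordinates (this is a standard technique, not the patent's specific implementation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: the 3D point whose projections through
    the 3x4 camera matrices P1 and P2 are the image points x1 and x2."""
    # Each observed coordinate gives one linear constraint on the
    # homogeneous point X: u * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # the null space of A is the point
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize
```

With noisy observations and imprecise initial poses, the recovered points inherit those errors, which is exactly why the subsequent bundle adjustment step is needed.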
S703: Fit the camera's initial position and initial attitude, the target feature point set, and the initial 3D point set to obtain the camera's target position and target attitude as well as the target 3D point set.
The above step is the core step of bundle adjustment. Bundle adjustment is essentially a nonlinear least-squares optimization problem: by optimally fitting the target feature point set and the initial 3D point set, the target 3D point set and the camera's target position and target attitude are obtained. It should be understood that the camera's target position and attitude are its actual position and attitude in space.
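In standard notation (my own formulation of the textbook objective, not quoted from the patent), the nonlinear least-squares problem described above minimizes the total reprojection error over all camera poses and 3D points:

```latex
\min_{\{C_i\},\,\{X_j\}} \; \sum_{i}\sum_{j} v_{ij}\,
  \bigl\lVert x_{ij} - \pi(C_i, X_j) \bigr\rVert^{2}
```

where $C_i$ is the pose (position and attitude) of the camera for image $i$, $X_j$ is a 3D point, $x_{ij}$ is the feature point observed for $X_j$ in image $i$, $\pi$ is the camera projection function, and $v_{ij}$ is 1 if point $j$ appears in image $i$ and 0 otherwise. The minimizers are the target position/attitude and the target 3D point set.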
Referring to Table 1, which compares the memory usage, runtime, and iteration count of the two approaches. Dataset 1 is an orthographic dataset of 137 images, all shot 90 degrees vertically downward; dataset 2 is an oblique dataset of 269 images, including nadir imagery and images shot obliquely in four directions. With the image processing method of this application, the peak memory for dataset 1 drops from 14.78 GB to 7.59 GB compared with the existing approach that does not determine a target feature point set, a reduction of nearly half; for dataset 2 it drops from 30.06 GB to 10.80 GB, a reduction of nearly two thirds. The image processing method of this application can thus greatly alleviate the memory bottleneck of the SFM algorithm.
The performance gains in other respects are also significant. In terms of runtime, compared with the existing approach, dataset 1 drops from 199 seconds to 32 seconds and dataset 2 from 418 seconds to 75 seconds, roughly a fivefold efficiency improvement on both datasets. Furthermore, since bundle adjustment is essentially a nonlinear least-squares problem that is generally solved iteratively, fewer iterations indicate better convergence and a more accurate adjustment result. In terms of adjustment iterations, dataset 1 drops from 90 to 38 iterations, a reduction of nearly two thirds, and dataset 2 from 130 to 64, a reduction of nearly half.
Table 1
[Table 1 appears as image PCTCN2019077898-appb-000001 in the original publication; its memory, runtime, and iteration figures are summarized in the two preceding paragraphs.]
In summary, the image processing method provided by the embodiments of the present invention uses bundle adjustment on the target feature point set to determine the camera's target position and target attitude and the target 3D point set. Because it operates on a target feature point set obtained by screening the feature points, the method reduces the computational scale of bundle adjustment, lowers the memory footprint of SFM, and improves the efficiency of the algorithm.
An embodiment of the present invention further provides an image processing apparatus that can perform the corresponding steps of the above image processing method. Referring to Fig. 8, the apparatus includes a memory 801 and a processor 802; the memory 801 stores program code, and the processor 802 invokes the program code and, when the code is executed, performs the following operations:
determining, according to the correspondences between feature points of multiple images, the vertex degree of each feature point of each image, where the vertex degree indicates the number of times the spatial 3D point corresponding to the feature point is extracted as a feature point across the multiple images;
performing grid division on each image to obtain the grid count of each image;
determining a target feature point set from the feature points of the multiple images according to the grid count of each image and the vertex degree of each feature point of each image.
In one embodiment, the processor 802 is further configured to:
obtain the feature descriptors of the feature points of the multiple images;
determine the vertex degree of each feature point of each image according to the distances between the feature descriptors of the feature points of the multiple images.
In one embodiment, the processor 802 is further configured to:
when the distances between the feature descriptor of any feature point in the multiple images and the feature descriptors of N feature points are smaller than a preset distance threshold, determine the vertex degree of that feature point to be N+1.
In one embodiment, the processor 802 is further configured to:
when the number of feature points of each image is greater than the grid count of that image, retain, for each grid cell of each image, the feature point with the largest vertex degree in that cell;
take the union of the feature points retained from the multiple images as the target feature point set.
In one embodiment, the processor 802 is further configured to:
when the number of feature points of each image is smaller than the grid count of that image, retain, for each grid cell of each image, all feature points in that cell;
take the union of the feature points retained from the multiple images as the target feature point set.
In one embodiment, the processor 802 is further configured to:
determine, based on the target feature point set, the target position and target attitude of the shooting camera, as well as a target 3D point set.
In one embodiment, the processor 802 is further configured to:
obtain the initial position and initial attitude of the shooting camera;
determine an initial 3D point set according to the camera's initial position and initial attitude and the target feature point set;
fit the camera's initial position and initial attitude, the target feature point set, and the initial 3D point set to obtain the camera's target position and target attitude as well as the target 3D point set.
An embodiment of the present invention provides an image processing apparatus that can determine a target feature point set according to the grid count of each image and the vertex degree of each feature point of each image, thereby reducing the number of feature points, lowering the memory usage of SFM, and improving the efficiency of SFM.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the relevant functions described in the embodiments corresponding to Figs. 4 to 7, and can also implement the functions of the image processing apparatus of Fig. 8, which are not repeated here.
The computer-readable storage medium may be an internal storage unit of the device described in any of the foregoing embodiments, such as a hard disk or memory of the device. It may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the device. Further, the computer-readable storage medium may include both the internal storage unit of the device and an external storage device. It is used to store the computer program and other programs and data required by the terminal, and may also temporarily store data that has been output or is to be output.
Those of ordinary skill in the art can understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot limit the scope of the rights of the present invention; equivalent changes made according to the claims of the present invention therefore still fall within the scope covered by the present invention.

Claims (20)

  1. An image processing method, characterized by comprising:
    determining, according to correspondences between feature points of multiple images, a vertex degree of each feature point of each image, the vertex degree indicating the number of times the spatial 3D point corresponding to the feature point is extracted as a feature point in the multiple images;
    performing grid division on each image to obtain a grid count of each image;
    determining a target feature point set from the feature points of the multiple images according to the grid count of each image and the vertex degree of each feature point of each image.
  2. The method according to claim 1, characterized in that determining the vertex degree of each feature point of each image according to the correspondences between feature points of multiple images comprises:
    obtaining feature descriptors of the feature points of the multiple images;
    determining the vertex degree of each feature point of each image according to distances between the feature descriptors of the feature points of the multiple images.
  3. The method according to claim 2, characterized in that determining the vertex degree of each feature point of each image according to the distances between the feature descriptors of the feature points of the multiple images comprises:
    when the distances between the feature descriptor of any feature point in the multiple images and the feature descriptors of N feature points are smaller than a preset distance threshold, determining the vertex degree of said feature point to be N+1.
  4. The method according to claim 3, characterized in that the distance comprises a Euclidean distance or a Hamming distance.
  5. The method according to claim 1, characterized in that determining the target feature point set from the feature points of the multiple images according to the grid count of each image and the vertex degree of each feature point of each image comprises:
    when the number of feature points of each image is greater than the grid count of that image, retaining, for each grid cell of each image, the feature point with the largest vertex degree in that cell;
    taking the union of the feature points retained from the multiple images as the target feature point set.
  6. The method according to claim 5, characterized in that the method further comprises:
    when the number of feature points of each image is smaller than the grid count of that image, retaining, for each grid cell of each image, all feature points in that cell;
    taking the union of the feature points retained from the multiple images as the target feature point set.
  7. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    determining, based on the target feature point set, a target position and a target attitude of a shooting camera, and a target 3D point set.
  8. The method according to claim 7, characterized in that determining, based on the target feature point set, the target position and target attitude of the shooting camera and the target 3D point set comprises:
    obtaining an initial position and an initial attitude of the shooting camera;
    determining an initial 3D point set according to the initial position and initial attitude of the shooting camera and the target feature point set;
    fitting the initial position and initial attitude of the shooting camera, the target feature point set, and the initial 3D point set to obtain the target position and target attitude of the shooting camera and the target 3D point set.
  9. The method according to claim 1, characterized in that the grid counts of the images are equal or unequal.
  10. An image processing apparatus, characterized by comprising a memory and a processor;
    the memory is configured to store program code;
    the processor invokes the program code and, when the program code is executed, performs the following operations:
    determining, according to correspondences between feature points of multiple images, a vertex degree of each feature point of each image, the vertex degree indicating the number of times the spatial 3D point corresponding to the feature point is extracted as a feature point in the multiple images;
    performing grid division on each image to obtain a grid count of each image;
    determining a target feature point set from the feature points of the multiple images according to the grid count of each image and the vertex degree of each feature point of each image.
  11. The image processing apparatus according to claim 10, characterized in that, when determining the vertex degree of each feature point of each image according to the correspondences between feature points of multiple images, the processor performs the following operations:
    obtaining feature descriptors of the feature points of the multiple images;
    determining the vertex degree of each feature point of each image according to distances between the feature descriptors of the feature points of the multiple images.
  12. The image processing apparatus according to claim 11, characterized in that, when determining the vertex degree of each feature point of each image according to the distances between the feature descriptors of the feature points of the multiple images, the processor performs the following operation:
    when the distances between the feature descriptor of any feature point in the multiple images and the feature descriptors of N feature points are smaller than a preset distance threshold, determining the vertex degree of said feature point to be N+1.
  13. The image processing apparatus according to claim 12, characterized in that the distance comprises a Euclidean distance or a Hamming distance.
  14. The image processing apparatus according to claim 10, characterized in that, when determining the target feature point set from the feature points of the multiple images according to the grid count of each image and the vertex degree of each feature point of each image, the processor performs the following operations:
    when the number of feature points of each image is greater than the grid count of that image, retaining, for each grid cell of each image, the feature point with the largest vertex degree in that cell;
    taking the union of the feature points retained from the multiple images as the target feature point set.
  15. The image processing apparatus according to claim 14, characterized in that, when invoking the program code, the processor further performs the following operations:
    when the number of feature points of each image is smaller than the grid count of that image, retaining, for each grid cell of each image, all feature points in that cell;
    taking the union of the feature points retained from the multiple images as the target feature point set.
  16. The image processing apparatus according to any one of claims 10 to 13, characterized in that, when invoking the program code, the processor further performs the following operation:
    determining, based on the target feature point set, a target position and a target attitude of a shooting camera, and a target 3D point set.
  17. The image processing apparatus according to claim 16, characterized in that, when determining, based on the target feature point set, the target position and target attitude of the shooting camera and the target 3D point set, the processor performs the following operations:
    obtaining an initial position and an initial attitude of the shooting camera;
    determining an initial 3D point set according to the initial position and initial attitude of the shooting camera and the target feature point set;
    fitting the initial position and initial attitude of the shooting camera, the target feature point set, and the initial 3D point set to obtain the target position and target attitude of the shooting camera and the target 3D point set.
  18. The image processing apparatus according to claim 10, characterized in that the grid counts of the images are equal or unequal.
  19. An image processing system, characterized by comprising:
    a movable platform configured to acquire multiple images via a shooting camera;
    an image processing device configured to perform the image processing method according to any one of claims 1 to 9.
  20. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the image processing method according to any one of claims 1 to 9.