WO2022028554A1 - Active camera relocation method robust to illumination - Google Patents
Active camera relocation method robust to illumination
- Publication number
- WO2022028554A1 (PCT/CN2021/111064)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- plane area
- area
- plane
- matching
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20072—Graph-based image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- The invention belongs to the fields of artificial intelligence and computer vision, relates to active vision technology, and in particular to an active camera relocation method robust to illumination.
- Active camera relocation aims to physically restore the camera's six-degree-of-freedom pose to the pose at which the reference image was captured. It plays an important role in environmental monitoring, preventive protection of historical and cultural heritage, and fine-grained change detection, and is an important application of active vision technology [1].
- The active camera relocation process comprises estimating the camera's relative pose and dynamically adjusting the camera; the adjustment is performed by a robotic platform.
- The present invention provides two active camera relocation methods sharing the same inventive concept. All effective planes in the observed scene are used to jointly estimate the camera's motion information, which effectively reduces the dependence of existing relocation methods on the illumination consistency and scene-structure consistency of previous observations, and at the same time effectively reduces the time spent in the relocation process, thereby supporting reliable and efficient outdoor operation.
- The technical solution is as follows:
- An active camera relocation method robust to illumination, comprising the following steps:
- Step 1: Extract the effective plane area image sets of the scene from the current observation image T and the reference observation image R;
- Step 2: Establish a matching relationship between the two effective plane area image sets;
- Step 3: Obtain the camera relative pose P_i guided by each pair of matched planes;
- Step 4: Obtain the camera-motion guidance information by fusing all camera relative poses P_i.
- Step 1 specifically includes:
- Sub-step 1: Determine the area ratio: calculate the ratio of the image area of each detected plane region to the area of the original scene image;
- Sub-step 2: Set a threshold and, in each set separately, select the plane area images whose area ratio to the scene image exceeds the threshold; these form the effective plane area image sets;
- Sub-step 3: If all plane area images in a set have area ratios below the threshold, the effective plane area image set is formed from the plane area images with the largest area ratios in that set.
- The threshold set in sub-step 2 may be 10%.
- Step 3 specifically comprises:
- Sub-step 1: For each pair of matched planes, calculate the homography matrix H_i between them;
- Sub-step 2: For each homography matrix H_i, perform singular value decomposition to obtain the corresponding rotation matrix and translation vector, i.e., the camera relative pose P_i guided by that pair of matched planes.
- Step 4 specifically comprises:
- Sub-step 1: Numerical weight: determined by the ratio of the number of matched feature point pairs involved in computing P_i to the total number of matched feature point pairs involved in computing all L camera relative poses;
- Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into grid cells; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
- The present invention also provides an active camera relocation method robust to illumination, comprising the following steps:
- Step 1: Extract the effective plane area image sets of the scene from the current observation image T and the reference observation image R (N denotes the total number of planes, n is the plane index, p is the plane identifier) and (M denotes the total number of planes, m is the plane index, p is the plane identifier);
- Step 2: Establish a matching relationship between the two effective plane area image sets;
- Step 3: Obtain the camera relative pose P_i guided by each pair of matched planes;
- Step 4: Obtain the camera-motion guidance information by fusing all camera relative poses P_i;
- Step 5: Judge whether the relocation process is complete. If complete, terminate; if not, repeat steps 1 to 5 iteratively.
- Step 1 specifically includes:
- Sub-step 1: Determine the area ratio: calculate the ratio of the image area of each detected plane region to the area of the original scene image;
- Sub-step 2: Set a threshold and, in each set separately, select the plane area images whose area ratio to the scene image exceeds the threshold; these form the effective plane area image sets;
- Sub-step 3: If all plane area images in a set have area ratios below the threshold, the effective plane area image set is formed from the plane area images with the largest area ratios in that set.
- The threshold set in sub-step 2 may be 10%.
- Step 3 specifically comprises:
- Sub-step 1: For each pair of matched planes, calculate the homography matrix H_i between them;
- Sub-step 2: For each homography matrix H_i, perform singular value decomposition to obtain the corresponding rotation matrix and translation vector, i.e., the camera relative pose P_i guided by that pair of matched planes.
- Step 4 specifically comprises:
- Sub-step 1: Numerical weight: determined by the ratio of the number of matched feature point pairs involved in computing P_i to the total number of matched feature point pairs involved in computing all L camera relative poses;
- Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into grid cells; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
- Step 5 specifically comprises: judging whether relocation should terminate according to the scale of the translation component of the camera motion information; when the motion step size is smaller than the step threshold ξ, the relocation process is judged complete and terminates; otherwise steps 1 to 5 are repeated.
- Because the estimation of the camera motion information (i.e., the camera relative pose) is plane-based, the method effectively reduces the influence of appearance differences of the observed 3D scene caused by different lighting conditions (direction, intensity).
- The selection and matching of effective planes effectively reduce the influence of structural changes of the observed 3D scene on the estimation of the camera relative pose.
- As a result, existing relocation equipment can operate reliably outdoors, and the restrictions on active camera relocation scenes imposed by illumination differences and structural changes are largely removed.
- FIG. 1 is a flowchart of an active camera relocation method robust to illumination according to Embodiment 1 of the present invention
- FIG. 2 is a flowchart of an active camera relocation method robust to illumination according to Embodiment 2 of the present invention
- Fig. 3 is the relocation software and hardware system schematic diagram of Embodiment 2;
- FIG. 4 is a time and precision comparison between the method of the present invention and the existing optimal relocation method.
- Sub-step 1: Determine the area ratio. Calculate, for each detected plane region, the ratio of its image area to the area of the original scene image.
- Sub-step 2: In each set separately, select the plane area images whose area accounts for more than 10% of the scene image area; these form the effective plane area image sets.
- Sub-step 3: If all area ratios in a set are below 10%, select the plane area images with the top five area ratios in that set to form the effective plane area image set (or fewer, if fewer than five planes were detected).
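For illustration, a minimal sketch of this selection rule is given below. It assumes the detected plane regions are available as binary masks (how the planes are detected is outside this sketch); the 10% threshold and top-five fallback follow the description above.

```python
import numpy as np

def select_effective_planes(plane_masks, image_shape, ratio_thresh=0.10, top_k=5):
    """Keep plane regions whose area ratio to the scene image exceeds the threshold.

    plane_masks: list of HxW boolean arrays, one per detected plane region.
    Returns the indices of the effective plane regions.
    """
    h, w = image_shape[:2]
    scene_area = float(h * w)
    ratios = [mask.sum() / scene_area for mask in plane_masks]

    # Planes whose area ratio exceeds the threshold form the effective set.
    effective = [i for i, r in enumerate(ratios) if r > ratio_thresh]

    # Fallback: if no plane passes the threshold, take the top-k largest ones.
    if not effective:
        order = np.argsort(ratios)[::-1]
        effective = list(order[:min(top_k, len(order))])
    return effective
```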
- Sub-step 1: Establish the node similarity measure and the edge similarity measure of the graphs.
- The node similarity between the two constructed graphs is measured by a node weight matrix, based on the number of SIFT feature points that establish matches between the plane images represented by corresponding nodes of the two graphs. The edge similarity is measured by an edge weight matrix: an edge within one graph represents the minimum Euclidean distance between the plane images represented by the two nodes it connects, and the edge similarity between the two graphs is measured by the absolute difference of the minimum Euclidean distances represented by the corresponding edges.
- Sub-step 2: Establish the objective function. Combine the node-similarity weight matrix and the edge-similarity weight matrix from sub-step 1 into a matrix W, whose diagonal elements are the node similarities between the two graphs and whose off-diagonal elements are the edge similarities between the two graphs. Establish the objective function and solve for the optimal assignment matrix X*.
- The obtained optimal assignment matrix encodes the node matches between the two graphs, from which the matching relationship between the two effective plane area image sets is obtained.
- X_c is the column-expanded (vectorized) form of X.
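The patent does not name a particular solver for this assignment objective; the sketch below uses ordinary spectral matching (power iteration on W followed by greedy discretization) as one hedged way such an objective over W can be solved approximately.

```python
import numpy as np

def spectral_graph_match(W, n_src, n_dst):
    """Approximately maximize x^T W x over binary one-to-one assignments x,
    where x is the vectorized assignment matrix X (n_src x n_dst) and W follows
    the construction above: diagonal entries hold node similarities, off-diagonal
    entries hold edge similarities. The solver is generic spectral matching,
    not a method specified by the patent.
    """
    # Leading eigenvector of W (power iteration) as a soft assignment score.
    x = np.ones(n_src * n_dst)
    for _ in range(100):
        x = W @ x
        x /= (np.linalg.norm(x) + 1e-12)

    # Reshape to an n_src x n_dst score matrix (row-major vectorization assumed).
    scores = x.reshape(n_src, n_dst)

    # Greedy discretization into a one-to-one matching.
    matches, used_r, used_c = [], set(), set()
    for idx in np.argsort(scores, axis=None)[::-1]:
        r, c = np.unravel_index(idx, scores.shape)
        if r not in used_r and c not in used_c and scores[r, c] > 0:
            matches.append((int(r), int(c)))
            used_r.add(r)
            used_c.add(c)
    return matches
```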
- Sub-step 2: Calculate the homography matrix.
- Four pairs of matched points are randomly selected from X and Y, the data are normalized, the transformation matrix H is solved and recorded as model M; the projection error of all points in the data set with respect to model M is computed, and the number of inliers is recorded. After the iterations are complete, the transformation matrix of the best model is selected as the homography matrix H_i between the matched plane pair.
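The four-point RANSAC loop described above is what OpenCV's findHomography implements; a minimal sketch, with illustrative variable names, is:

```python
import cv2
import numpy as np

def plane_homography(pts_src, pts_dst, reproj_thresh=3.0):
    """Estimate H_i between a matched plane pair with 4-point RANSAC.

    pts_src, pts_dst: Nx2 float arrays of matched SIFT keypoint coordinates.
    Returns the homography and the inlier mask reported by RANSAC.
    """
    H, inlier_mask = cv2.findHomography(
        np.asarray(pts_src, dtype=np.float64),
        np.asarray(pts_dst, dtype=np.float64),
        method=cv2.RANSAC,
        ransacReprojThreshold=reproj_thresh,
    )
    return H, inlier_mask
```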
- Sub-step 1: Compute the candidate camera relative poses.
- For each homography matrix H and the camera intrinsic matrix K, compute A = K^{-1}HK and perform the singular value decomposition A = UΛV^T with Λ = diag(λ_1, λ_2, λ_3), λ_1 ≥ λ_2 ≥ λ_3. From the physical meaning of the homography matrix, the rotation matrix r, translation vector t, plane normal vector n, and plane-to-camera distance d can be obtained as follows:
- Sub-step 2: Select the camera relative pose. Each candidate arithmetic solution (r_i, t_i) in the candidate sequence of camera relative poses is used to triangulate the matched feature points involved in the computation, recovering the 3D space point coordinates corresponding to the image feature points. For each candidate solution, the number of recovered 3D points lying in front of the camera model is counted, and the reprojection error is also computed. The candidate solution with the most space points in front of the camera and a small reprojection error is taken as the camera relative pose P_i guided by that pair of matched planes.
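A hedged sketch of the candidate-selection step follows. It relies on OpenCV's homography decomposition rather than reproducing the SVD of A = K^-1 H K from the description, and the tie-breaking rule (most points in front of both cameras, then smallest reprojection error) is one concrete reading of the selection criterion above.

```python
import cv2
import numpy as np

def select_pose_from_homography(H, K, pts_src, pts_dst):
    """Pick (r, t) from the candidate decompositions of H by cheirality and
    reprojection error. pts_src, pts_dst: Nx2 arrays of the matched points
    used to estimate H.
    """
    _, Rs, ts, _ = cv2.decomposeHomographyMat(H, K)
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    best_pose, best_front, best_err = None, -1, np.inf

    for R, t in zip(Rs, ts):
        t = t.reshape(3, 1)
        P1 = K @ np.hstack([R, t])
        X_h = cv2.triangulatePoints(P0, P1,
                                    pts_src.T.astype(np.float64),
                                    pts_dst.T.astype(np.float64))
        X = X_h[:3] / X_h[3]                       # 3xN points in the first camera frame
        n_front = int(np.sum((X[2] > 0) & ((R @ X + t)[2] > 0)))

        proj = P1 @ np.vstack([X, np.ones(X.shape[1])])
        err = float(np.mean(np.linalg.norm(proj[:2] / proj[2] - pts_dst.T, axis=0)))

        if n_front > best_front or (n_front == best_front and err < best_err):
            best_pose, best_front, best_err = (R, t), n_front, err
    return best_pose
```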
- Sub-step 1: Numerical weight: determined by the ratio of the number of matched feature point pairs involved in computing P_i to the total number of matched feature point pairs involved in computing all L camera relative poses.
- Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into a 10×10 grid; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
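The two fusion weights can be sketched as follows. The exact mapping from grid-count variance to a normalized weight is not specified above, so the inverse-variance normalization used here is an assumption.

```python
import numpy as np

def fusion_weights(match_counts, match_points, crop_shapes, grid=10):
    """Compute the numerical weight and distribution weight for each P_i.

    match_counts: list of the number of matched point pairs used for each P_i
    match_points: list of Nx2 arrays of those points in the cropped plane image
    crop_shapes:  list of (h, w) sizes of the cropped plane images
    """
    counts = np.asarray(match_counts, dtype=float)
    w_num = counts / counts.sum()                      # numerical weights

    variances = []
    for pts, (h, w) in zip(match_points, crop_shapes):
        rows = np.clip((pts[:, 1] / h * grid).astype(int), 0, grid - 1)
        cols = np.clip((pts[:, 0] / w * grid).astype(int), 0, grid - 1)
        hist = np.zeros((grid, grid))
        np.add.at(hist, (rows, cols), 1)               # per-cell match counts
        variances.append(hist.var())

    inv = 1.0 / (np.asarray(variances) + 1e-6)         # smaller variance -> larger weight
    w_dist = inv / inv.sum()
    return w_num, w_dist
```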
- Sub-step 1: Convert the rotation matrix in each P_i to the Euler angle representation (r_X, r_Y, r_Z), where r_X, r_Y, and r_Z denote the Euler rotation angles about the three coordinate axes of 3D space.
- Sub-step 2: Use the computed numerical weights and distribution weights η_i to fuse all P_i:
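A sketch of the fusion step is given below. It assumes the per-pose weights from the previous sub-steps and the 0.8/0.2 split between numerical and distribution weights stated in the claims; the exact fusion formula of the description is not reproduced here, so this is illustrative only.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def fuse_poses(rotations, translations, w_num, w_dist, alpha=0.8, beta=0.2):
    """Fuse all P_i = (r_i, t_i) into a single motion command by converting each
    rotation to Euler angles (r_X, r_Y, r_Z) and taking weighted averages of the
    angles and translations with the combined per-plane weights.
    """
    w = alpha * np.asarray(w_num) + beta * np.asarray(w_dist)
    w = w / w.sum()

    eulers = np.array([Rotation.from_matrix(R).as_euler('xyz') for R in rotations])
    ts = np.array([np.asarray(t).reshape(3) for t in translations])

    fused_euler = (w[:, None] * eulers).sum(axis=0)    # fused (r_X, r_Y, r_Z)
    fused_t = (w[:, None] * ts).sum(axis=0)            # fused translation direction
    return fused_euler, fused_t
```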
- Since the computed translation vector inevitably lacks the true physical scale, the "bisection halving" strategy used by existing relocation methods is adopted to guide the camera motion according to the translation direction provided by the obtained translation vector.
- When the motion step size is smaller than the threshold ξ, the relocation process is judged complete and relocation terminates; otherwise steps (1) to (5) are repeated, and in iterations after the first, the plane regions of the scene in the reference image need not be re-extracted in Step 1; the information from the first iteration is reused.
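The outer loop with the "bisection halving" step strategy might look like the following sketch; the halving trigger (a reversal of the commanded translation direction) and the helper callables are assumptions, not details fixed by the description.

```python
import numpy as np

def bisection_translation_loop(capture, estimate_direction, move, step0=1.0, xi=1e-3):
    """Guide the translation using only the direction of the fused translation
    vector (its scale is unknown), halving the step whenever the direction
    reverses, and stopping once the step falls below the threshold xi.

    capture / estimate_direction / move are placeholder callables for the
    camera, the pose-estimation pipeline, and the gimbal, respectively.
    """
    step, prev_dir = step0, None
    while step >= xi:                                    # step < xi -> relocation complete
        direction = estimate_direction(capture())        # unit vector from the fused pose
        if prev_dir is not None and float(np.dot(prev_dir, direction)) < 0.0:
            step *= 0.5                                   # overshoot detected: halve the step
        move(direction, step)
        prev_dir = direction
```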
- Embodiment 2 of this patent application is a further refinement of Embodiment 1.
- Parameters that are not clearly defined in Embodiment 1 are given supplementary definitions in Embodiment 2.
- The device components of Embodiment 2 also include certain improvements, and these components are described in detail.
- N represents the total number of planes, n is the plane index, and p is the plane identifier;
- M represents the total number of planes, m is the plane index, and p is the plane identifier.
- Sub-step 1: Determine the area ratio. Calculate, for each detected plane region, the ratio of its image area to the area of the original scene image.
- Sub-step 2: In each set separately, select the plane area images whose area accounts for more than 10% of the scene image area; these form the effective plane area image sets.
- Sub-step 3: If all area ratios in a set are below 10%, select the plane area images with the top five area ratios in that set to form the effective plane area image set (or fewer, if fewer than five planes were detected).
- Sub-step 1: Establish the node similarity measure and the edge similarity measure of the graphs.
- The node similarity between the two constructed graphs is measured by a node weight matrix, based on the number of SIFT feature points that establish matches between the plane images represented by corresponding nodes of the two graphs. The edge similarity is measured by an edge weight matrix: an edge within one graph represents the minimum Euclidean distance between the plane images represented by the two nodes it connects, and the edge similarity between the two graphs is measured by the absolute difference of the minimum Euclidean distances represented by the corresponding edges.
- Sub-step 2: Establish the objective function. Combine the node-similarity weight matrix and the edge-similarity weight matrix from sub-step 1 into a matrix W, whose diagonal elements are the node similarities between the two graphs and whose off-diagonal elements are the edge similarities between the two graphs. Establish the objective function and solve for the optimal assignment matrix X*.
- The obtained optimal assignment matrix encodes the node matches between the two graphs, from which the matching relationship between the two effective plane area image sets is obtained.
- X_c is the column-expanded (vectorized) form of X.
- Sub-step 2: Calculate the homography matrix.
- Four pairs of matched points are randomly selected from X and Y, the data are normalized, the transformation matrix H is solved and recorded as model M; the projection error of all points in the data set with respect to model M is computed, and the number of inliers is recorded. After the iterations are complete, the transformation matrix of the best model is selected as the homography matrix H_i between the matched plane pair.
- Sub-step 1: Compute the candidate camera relative poses.
- For each homography matrix H and the camera intrinsic matrix K, compute A = K^{-1}HK and perform the singular value decomposition A = UΛV^T with Λ = diag(λ_1, λ_2, λ_3), λ_1 ≥ λ_2 ≥ λ_3. From the physical meaning of the homography matrix, the rotation matrix r, translation vector t, plane normal vector n, and plane-to-camera distance d can be obtained as follows:
- Sub-step 2: Select the camera relative pose. Each candidate arithmetic solution (r_i, t_i) in the candidate sequence of camera relative poses is used to triangulate the matched feature points involved in the computation, recovering the 3D space point coordinates corresponding to the image feature points. For each candidate solution, the number of recovered 3D points lying in front of the camera model is counted, and the reprojection error is also computed. The candidate solution with the most space points in front of the camera and a small reprojection error is taken as the camera relative pose P_i guided by that pair of matched planes.
- Sub-step 1: Numerical weight: determined by the ratio of the number of matched feature point pairs involved in computing P_i to the total number of matched feature point pairs involved in computing all L camera relative poses.
- Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into a 10×10 grid; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
- Sub-step 1: Convert the rotation matrix in each P_i to the Euler angle representation (r_X, r_Y, r_Z), where r_X, r_Y, and r_Z denote the Euler rotation angles about the three coordinate axes of 3D space.
- Sub-step 2: Use the computed numerical weights and distribution weights η_i to fuse all P_i:
- Since the computed translation vector inevitably lacks the true physical scale, the "bisection halving" strategy used by existing relocation methods is adopted to guide the camera motion according to the translation direction provided by the obtained translation vector.
- When the motion step size is smaller than the threshold ξ, the relocation process is judged complete and relocation terminates; otherwise steps (1) to (5) are repeated, and in iterations after the first, the plane regions of the scene in the reference image need not be re-extracted in Step 1; the information from the first iteration is reused.
- Accurate camera relocation is achieved with the relocation software and hardware system shown in Fig. 3.
- The system consists of four modules: a six-degree-of-freedom fine-motion gimbal, an intelligent gimbal (PTZ) control system, the relocation software system, and the illumination-robust active camera relocation method.
- The method described above is integrated into the relocation software system. In practice, the operator selects a historical observation image following the prompts of the software system's UI, and the system then automatically executes the method, i.e., steps (1) to (5).
- In step (4), the method obtains the information that finally guides the camera motion; the software system sends motion commands to the gimbal control system, which drives the six-degree-of-freedom fine-motion gimbal to execute the motion.
- After execution, the gimbal control system returns a motion-complete signal to the relocation software system, and the next iteration of the method begins.
- When the judgment condition of step (5) is satisfied, the system captures the current image and the camera relocation is complete.
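The interaction between the relocation software system and the gimbal control system described above amounts to a simple command/acknowledge loop. The sketch below uses hypothetical interface names for the UI, gimbal controller, camera, and method, none of which are defined by the patent.

```python
def relocation_session(software_ui, gimbal_ctrl, camera, method):
    """Sketch of one relocation session: pick a reference image, iterate the
    estimate -> move -> acknowledge cycle, and capture the final image once the
    step-size condition of step (5) is met.
    """
    reference = software_ui.select_reference_image()        # operator picks the historical image
    while True:
        current = camera.capture()
        motion, done = method.estimate_motion(reference, current)  # steps (1)-(4)
        if done:                                             # step (5): step size below xi
            break
        gimbal_ctrl.execute(motion)                          # drive the 6-DOF fine-motion gimbal
        gimbal_ctrl.wait_until_complete()                    # motion-complete signal
    return camera.capture()                                  # relocated view
```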
- The experiments use a mechanical monitoring platform equipped with a Canon 5D Mark III camera to carry out relocation experiments.
- For the same monitored target, the equipment performs relocation operations with the method of the present invention and with the existing state-of-the-art relocation method, respectively.
- The experiments were conducted in three types of monitoring scenes, both indoors and outdoors: ordinary scenes (no obvious illumination or scene-structure changes), illumination-change scenes (outdoor relocation under different weather conditions and at different times of day; indoor relocation under LED lights with controllable direction and intensity), and structural-change scenes (outdoor relocation in scenes with abundant vegetation across different seasons; indoor relocation in scenes with movable objects).
- The results show that: for ordinary monitoring scenes without obvious illumination or structural changes, the two relocation methods show no significant difference in relocation accuracy, but the present method has better time performance; for scenes with changes in illumination intensity and direction, the present method is superior not only in time performance but also markedly in relocation accuracy, and in particular, where the existing best relocation method fails, the relocation results of the present method still meet the relevant requirements under the same conditions; for scenes with obvious structural changes, the present method shows significant advantages in both time performance and relocation accuracy. This demonstrates the feasibility and superiority of the method of the invention.
Abstract
An active camera relocation method robust to illumination, comprising the following steps: extracting the effective plane area image sets of the scene from the current observation image T and the reference observation image R; establishing a matching relationship between the effective plane area image sets; obtaining the camera relative pose guided by each pair of matched planes; obtaining camera-motion guidance information by fusing all the camera relative poses; and judging from the motion step size whether the relocation process is complete.
Description
The invention belongs to the fields of artificial intelligence and computer vision, relates to active vision technology, and in particular to an active camera relocation method robust to illumination.
Active camera relocation aims to physically restore the camera's six-degree-of-freedom pose to the pose at which the reference image was captured. It plays an important role in environmental monitoring, preventive protection of historical and cultural heritage, and fine-grained change detection, and is an important application of active vision technology [1]. The active camera relocation process comprises estimating the camera's relative pose and dynamically adjusting the camera; the adjustment is performed by a robotic platform.
The current state-of-the-art active camera relocation methods have achieved great success in fine-grained change detection of cultural heritage in a large number of open-air environments [2]. Note, however, that those monitoring tasks were carried out under stable and controllable environmental conditions, in which the feature matching results of the captured images can support accurate camera pose estimation.
However, when the illumination conditions (direction and intensity) differ across observations, the monitoring results are unsatisfactory. Significant illumination differences change the appearance of the scene (especially the ubiquitous scenes with 3D structure), so the feature point descriptors used for pose estimation change and camera pose estimation fails. In addition, if the background of the observed scene (the regions outside the monitored object) changes substantially, for example, vegetation near a monitored ancient building may change drastically (even structurally) across seasons, the number of mismatched feature points in the images increases significantly, severely degrading the accuracy of the relocation result. These two situations, common in practical tasks, seriously impair the accuracy of active camera relocation and make the monitoring results unreliable, so relocation operations under uncontrolled outdoor conditions cannot be supported.
References:
[1] Feng W, Tian F P, Zhang Q, et al. Fine-Grained Change Detection of Misaligned Scenes with Varied Illuminations[C]. ICCV, IEEE, 2015.
[2] Tian F P, Feng W, Zhang Q, et al. Active camera relocalization from a single reference image without hand-eye calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(12): 2791-2806.
[3] Feng Wei, Sun Jizhou, Zhang Qian, Tian Feipeng, Han Ruize. Patent: A method for accurate six-degree-of-freedom camera pose relocation without hand-eye calibration. Application No. CN201611140264.2.
Summary of the Invention
The present invention provides two active camera relocation methods sharing the same inventive concept. All effective planes in the observed scene are used to jointly estimate the camera's motion information, which effectively reduces the dependence of existing relocation methods on the illumination consistency and scene-structure consistency of previous observations, and at the same time effectively reduces the time spent in the relocation process, thereby supporting reliable and efficient outdoor operation. The technical solution is as follows:
An active camera relocation method robust to illumination, comprising the following steps:
Step 1: Extract the effective plane area image sets of the scene from the current observation image T and the reference observation image R;
Step 2: Establish a matching relationship between the effective plane area image sets;
Step 3: Obtain the camera relative pose P_i guided by each pair of matched planes;
Step 4: Obtain camera-motion guidance information by fusing all camera relative poses P_i.
Step 1 specifically includes:
Sub-step 1: Determine the area ratio: calculate the ratio of the image area of each detected plane region to the area of the original scene image;
Sub-step 2: Set a threshold and, in each set separately, select the plane area images whose area ratio exceeds the threshold to form the effective plane area image sets; the threshold set in sub-step 2 may be 10%.
Step 3 specifically comprises:
Sub-step 1: For each pair of matched planes, calculate the homography matrix H_i between them;
Sub-step 2: For each homography matrix H_i, perform singular value decomposition to obtain the corresponding rotation matrix and translation vector, i.e., the camera relative pose P_i guided by that pair of matched planes.
Step 4 specifically comprises:
(1) Determine the number of SIFT feature points involved in computing each P_i and their distribution on the plane;
(2) Perform weighted fusion of all P_i according to the numerical weight (the number of SIFT feature points corresponding to each P_i) and the distribution weight (the distribution of those SIFT feature points), weighting each P_i by its influence on the camera relative pose, to obtain the information finally used to guide the camera motion. The two weights are established as follows:
Sub-step 1: Numerical weight: determined by the ratio of the number of matched feature point pairs involved in computing P_i to the total number of matched feature point pairs involved in computing all L camera relative poses;
Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into grid cells; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
The present invention also provides an active camera relocation method robust to illumination, comprising the following steps:
(2) For the effective plane area image set of the current observation, build an undirected fully connected graph with each plane area image as a node (V denotes the node set, each node corresponding to one plane; E denotes the edge set, each edge representing the Euclidean distance between the planes represented by its two end nodes); perform the same operation on the effective plane area image set of the reference observation to obtain another undirected fully connected graph;
(3) Using the number of SIFT feature points of each plane image as the node weights of the graphs and the Euclidean distances between plane images as the edge weights, solve the graph matching problem between the two graphs and establish the matching relationship between the two effective plane area image sets (L denotes the number of images with matching relationships between the current and reference observation images, i is the index).
Step 3: Obtain the camera relative pose P_i guided by each pair of matched planes;
Step 4: Obtain camera-motion guidance information by fusing all camera relative poses P_i;
Step 5: Judge whether the relocation process is complete. If complete, terminate; if not, repeat steps 1 to 5 iteratively.
Preferably, Step 1 specifically includes:
Sub-step 1: Determine the area ratio: calculate the ratio of the image area of each detected plane region to the area of the original scene image;
Sub-step 2: Set a threshold and, in each set separately, select the plane area images whose area ratio exceeds the threshold to form the effective plane area image sets; the threshold set in sub-step 2 may be 10%.
Step 3 specifically comprises:
Sub-step 1: For each pair of matched planes, calculate the homography matrix H_i between them;
Sub-step 2: For each homography matrix H_i, perform singular value decomposition to obtain the corresponding rotation matrix and translation vector, i.e., the camera relative pose P_i guided by that pair of matched planes.
Step 4 specifically comprises:
(1) Determine the number of SIFT feature points involved in computing each P_i and their distribution on the plane;
(2) Perform weighted fusion of all P_i according to the numerical weight (the number of SIFT feature points corresponding to each P_i) and the distribution weight (the distribution of those SIFT feature points), weighting each P_i by its influence on the camera relative pose, to obtain the information finally used to guide the camera motion, wherein the two weights are established as follows:
Sub-step 1: Numerical weight: determined by the ratio of the number of matched feature point pairs involved in computing P_i to the total number of matched feature point pairs involved in computing all L camera relative poses;
Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into grid cells; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
Step 5 specifically comprises:
Judging whether relocation should terminate according to the scale of the translation component of the camera motion information: when the motion step size is smaller than the step threshold ξ, the relocation process is judged complete and relocation terminates; otherwise steps 1 to 5 are repeated.
The beneficial effects of the technical solution provided by the invention are:
1. During relocation, since the estimation of the camera motion information (i.e., the camera relative pose) is plane-based, the method effectively reduces the influence on camera relative pose estimation of appearance differences of the observed 3D scene caused by different lighting conditions (direction, intensity); at the same time, the selection and matching of effective planes effectively reduce the influence of structural changes of the observed 3D scene on the estimation of the camera relative pose. Existing relocation equipment can thus operate reliably outdoors, and the restrictions on active camera relocation scenes imposed by illumination differences and structural changes are largely removed.
2. During relocation, the mathematical method used to compute the camera motion information (i.e., the camera relative pose) differs from that of existing relocation methods, which effectively reduces the time spent in the relocation process and enables existing relocation equipment to operate more efficiently.
Fig. 1 is a flowchart of the active camera relocation method robust to illumination according to Embodiment 1 of the invention;
Fig. 2 is a flowchart of the active camera relocation method robust to illumination according to Embodiment 2 of the invention;
Fig. 3 is a schematic diagram of the relocation software and hardware system of Embodiment 2;
Fig. 4 is a comparison of time and accuracy between the method of the invention and the existing best relocation method.
The technical solution of the invention is described clearly and completely below with reference to the accompanying drawings. All other embodiments obtained by a person of ordinary skill in the art based on the technical solution of the invention without creative effort fall within the scope of protection of the invention.
Embodiment 1
(1) Determination of effective planes
Note 1: Detect the plane regions in the image.
Note 2: Select the effective plane regions.
Sub-step 1: Determine the area ratio. Calculate, for each detected plane region, the ratio of its image area to the area of the original scene image.
(2) Establishment of plane matching relationships
Note 3: Construction and solution of the graph matching problem.
Sub-step 1: Establish the node similarity measure and the edge similarity measure of the graphs. The node similarity between the two constructed graphs is measured by a node weight matrix, based on the number of SIFT feature points that establish matches between the plane images represented by corresponding nodes of the two graphs. The edge similarity is measured by an edge weight matrix: an edge within one graph represents the minimum Euclidean distance between the plane images represented by the two nodes it connects, and the edge similarity between the two graphs is measured by the absolute difference of the minimum Euclidean distances represented by the corresponding edges.
Sub-step 2: Establish the objective function. Combine the node-similarity weight matrix and the edge-similarity weight matrix from sub-step 1 into a matrix W, whose diagonal elements are the node similarities between the two graphs and whose off-diagonal elements are the edge similarities between the two graphs. Establish the objective function and solve for the optimal assignment matrix X*:
(3) Estimation of the camera relative pose guided by each matched plane pair
The specific method to obtain the camera relative pose P_i guided by each pair of matched planes is:
Note 4: Computation of the homography matrix between a matched plane pair.
Sub-step 1: Feature matching. Using the SIFT feature points extracted when establishing the matched planes, for every feature point in one plane image, find the feature point in the corresponding plane image whose descriptor is closest and take it as the matched feature point. The final matched point sets are the feature point set X = [x_1, x_2, ..., x_N]_{3×N} in one plane image, corresponding in order to the feature point set Y = [y_1, y_2, ..., y_N]_{3×N} in the other, where x_i and y_i are homogeneous coordinates.
Sub-step 2: Compute the homography matrix. Randomly select four pairs of matched points from X and Y, normalize the data, solve for the transformation matrix H, and record it as model M; compute the projection error of all points in the data set with respect to model M and record the number of inliers. After the iterations are complete, select the transformation matrix of the best model as the homography matrix H_i between the matched plane pair.
(2) For each homography matrix H_i, perform singular value decomposition to obtain the corresponding rotation matrix and translation vector, i.e., the camera relative pose P_i guided by that pair of matched planes.
Note 5: Estimation of the camera relative pose guided by a matched plane pair.
Sub-step 1: Compute the candidate camera relative poses. For each homography matrix H and the camera intrinsic matrix K, compute A = K^{-1}HK. Perform the singular value decomposition A = UΛV^T, with Λ = diag(λ_1, λ_2, λ_3) and λ_1 ≥ λ_2 ≥ λ_3. From the physical meaning of the homography matrix, the rotation matrix r, translation vector t, plane normal vector n, and plane-to-camera distance d satisfy the following relationship:
Sub-step 2: Select the camera relative pose. Each candidate arithmetic solution (r_i, t_i) in the candidate sequence of camera relative poses is used to triangulate the matched feature points involved in the computation, recovering the 3D space point coordinates corresponding to the image feature points. For each candidate solution, the number of recovered 3D points lying in front of the camera model is counted, and the reprojection error is also computed. The candidate solution with the most space points in front of the camera and a small reprojection error is taken as the camera relative pose P_i guided by that pair of matched planes.
(4) Obtaining the information that guides the camera motion
The specific method to obtain camera-motion guidance information by fusing all camera relative poses P_i is:
(1) Determine the number of SIFT feature points involved in computing each P_i and their distribution on the plane.
Note 6: Establishment of the fusion weights.
Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into a 10×10 grid; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
Note 7: Determination of the camera-motion guidance information.
(5) Judging whether relocation is complete
The specific method for judging whether the relocation process is complete is:
Since the computed translation vector inevitably lacks the true physical scale, the "bisection halving" strategy used by existing relocation methods is adopted to guide the camera motion according to the translation direction provided by the obtained translation vector. When the motion step size is smaller than the threshold ξ, the relocation process is judged complete and relocation terminates; otherwise steps (1) to (5) are repeated, and in iterations after the first, the plane regions of the scene in the reference image need not be re-extracted in step 1; the information from the first iteration is reused.
Embodiment 2
Embodiment 2 of this patent application is a further refinement of Embodiment 1: parameters that are not clearly defined in Embodiment 1 are given supplementary definitions in Embodiment 2. In addition, the device components of Embodiment 2 include certain improvements, and these components are described in detail.
(1) Determination of effective planes
Before the relocation process starts, extract all effective plane area image sets of the scene from the current observation image T and the reference observation image R (N denotes the total number of planes, n is the plane index, p is the plane identifier) and (M denotes the total number of planes, m is the plane index, p is the plane identifier). The specific steps are as follows:
Note 1: Detect the plane regions in the image.
Note 2: Select the effective plane regions.
Sub-step 1: Determine the area ratio. Calculate, for each detected plane region, the ratio of its image area to the area of the original scene image.
(2) Establishment of plane matching relationships
Note 3: Construction and solution of the graph matching problem.
Sub-step 1: Establish the node similarity measure and the edge similarity measure of the graphs. The node similarity between the two constructed graphs is measured by a node weight matrix, based on the number of SIFT feature points that establish matches between the plane images represented by corresponding nodes of the two graphs. The edge similarity is measured by an edge weight matrix: an edge within one graph represents the minimum Euclidean distance between the plane images represented by the two nodes it connects, and the edge similarity between the two graphs is measured by the absolute difference of the minimum Euclidean distances represented by the corresponding edges.
Sub-step 2: Establish the objective function. Combine the node-similarity weight matrix and the edge-similarity weight matrix from sub-step 1 into a matrix W, whose diagonal elements are the node similarities between the two graphs and whose off-diagonal elements are the edge similarities between the two graphs. Establish the objective function and solve for the optimal assignment matrix X*:
(3) Estimation of the camera relative pose guided by each matched plane pair
The specific method to obtain the camera relative pose P_i guided by each pair of matched planes is:
Note 4: Computation of the homography matrix between a matched plane pair.
Sub-step 1: Feature matching. Using the SIFT feature points extracted when establishing the matched planes, for every feature point in one plane image, find the feature point in the corresponding plane image whose descriptor is closest and take it as the matched feature point. The final matched point sets are the feature point set X = [x_1, x_2, ..., x_N]_{3×N} in one plane image, corresponding in order to the feature point set Y = [y_1, y_2, ..., y_N]_{3×N} in the other, where x_i and y_i are homogeneous coordinates.
Sub-step 2: Compute the homography matrix. Randomly select four pairs of matched points from X and Y, normalize the data, solve for the transformation matrix H, and record it as model M; compute the projection error of all points in the data set with respect to model M and record the number of inliers. After the iterations are complete, select the transformation matrix of the best model as the homography matrix H_i between the matched plane pair.
(2) For each homography matrix H_i, perform singular value decomposition to obtain the corresponding rotation matrix and translation vector, i.e., the camera relative pose P_i guided by that pair of matched planes.
Note 5: Estimation of the camera relative pose guided by a matched plane pair.
Sub-step 1: Compute the candidate camera relative poses. For each homography matrix H and the camera intrinsic matrix K, compute A = K^{-1}HK. Perform the singular value decomposition A = UΛV^T, with Λ = diag(λ_1, λ_2, λ_3) and λ_1 ≥ λ_2 ≥ λ_3. From the physical meaning of the homography matrix, the rotation matrix r, translation vector t, plane normal vector n, and plane-to-camera distance d satisfy the following relationship:
Sub-step 2: Select the camera relative pose. Each candidate arithmetic solution (r_i, t_i) in the candidate sequence of camera relative poses is used to triangulate the matched feature points involved in the computation, recovering the 3D space point coordinates corresponding to the image feature points. For each candidate solution, the number of recovered 3D points lying in front of the camera model is counted, and the reprojection error is also computed. The candidate solution with the most space points in front of the camera and a small reprojection error is taken as the camera relative pose P_i guided by that pair of matched planes.
(4) Obtaining the information that guides the camera motion
The specific method to obtain camera-motion guidance information by fusing all camera relative poses P_i is:
(1) Determine the number of SIFT feature points involved in computing each P_i and their distribution on the plane.
Note 6: Establishment of the fusion weights.
Sub-step 2: Distribution weight η_i: the effective plane area image involved in computing P_i is cropped by the bounding rectangle of the plane region's shape; the cropped image is divided evenly into a 10×10 grid; the number of matched feature points involved in computing P_i is counted in each cell; the variance of these per-cell counts is computed, and the distribution weight is derived from it: the smaller the variance, the higher the weight.
Note 7: Determination of the camera-motion guidance information.
(5) Judging whether relocation is complete
The specific method for judging whether the relocation process is complete is:
Since the computed translation vector inevitably lacks the true physical scale, the "bisection halving" strategy used by existing relocation methods is adopted to guide the camera motion according to the translation direction provided by the obtained translation vector. When the motion step size is smaller than the threshold ξ, the relocation process is judged complete and relocation terminates; otherwise steps (1) to (5) are repeated, and in iterations after the first, the plane regions of the scene in the reference image need not be re-extracted in step 1; the information from the first iteration is reused.
(6) Achieving accurate camera relocation with the relocation software and hardware system
Accurate camera relocation is achieved with the relocation software and hardware system shown in Fig. 3. The system consists of four modules: a six-degree-of-freedom fine-motion gimbal, an intelligent gimbal control system, the relocation software system, and the illumination-robust active camera relocation method. The method described above is integrated into the relocation software system. In practice, the operator selects a historical observation image following the prompts of the software system's UI, and the system then automatically executes the above method, i.e., steps (1) to (5). In step (4), the method obtains the information that finally guides the camera motion.
The software system sends motion commands to the intelligent gimbal control system, which drives the six-degree-of-freedom fine-motion gimbal to execute the motion.
After execution, the intelligent gimbal control system returns a motion-complete signal to the relocation software system, and the next iteration of the method begins. When the judgment condition of step (5) is satisfied, the system captures the current image, and the camera relocation is complete.
The feasibility of the method of the invention is verified below with a concrete example:
The experiments use a mechanical monitoring platform equipped with a Canon 5D Mark III camera to carry out relocation experiments. For the same monitored target, the equipment performs relocation operations with the method of the invention and with the existing state-of-the-art relocation method, respectively. The experiments were conducted in three types of monitoring scenes, both indoors and outdoors: ordinary scenes (no obvious illumination or scene-structure changes), illumination-change scenes (outdoor relocation under different weather conditions and at different times of day; indoor relocation under a set of LED lights with controllable direction and intensity), and structural-change scenes (outdoor relocation in scenes with abundant vegetation across different seasons; indoor relocation in scenes with movable objects).
The analysis uses the time spent on the relocation process and the average feature distance (AFD) between the image captured after relocation and the reference image as the metrics for evaluating the relocation methods. AFD is the mean Euclidean distance between all matched feature points of the two images, which provides an intuitive evaluation of the relocation accuracy.
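For reference, the AFD metric defined above reduces to a one-line computation over matched keypoints; the sketch below assumes the matches are already established.

```python
import numpy as np

def average_feature_distance(pts_ref, pts_cur):
    """Average Feature Distance (AFD): mean Euclidean distance between matched
    feature points of the reference image and the image captured after
    relocation. pts_ref and pts_cur are Nx2 arrays of matched keypoints.
    """
    pts_ref = np.asarray(pts_ref, dtype=float)
    pts_cur = np.asarray(pts_cur, dtype=float)
    return float(np.mean(np.linalg.norm(pts_ref - pts_cur, axis=1)))
```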
According to the results, shown in Fig. 2, of relocation operations performed in different scenes by the present method and by the existing best relocation method [3]: for ordinary monitoring scenes without obvious illumination or scene-structure changes, the two relocation methods show no significant difference in relocation accuracy, but the present method has better time performance; for scenes with changes in illumination intensity and direction, the present method is superior not only in time performance but also markedly in relocation accuracy, and in particular, for outdoor scenes where the existing best relocation method fails, the relocation results of the present method still meet the relevant requirements under the same conditions; for scenes with obvious scene-structure changes, the present method shows significant advantages in both time performance and relocation accuracy. This demonstrates the feasibility and superiority of the method of the invention.
Claims (13)
- An active camera relocation method robust to illumination, comprising the following steps: Step 3: obtaining the camera relative pose P_i guided by each pair of matched planes; Step 4: obtaining camera-motion guidance information by fusing all camera relative poses P_i.
- The active camera relocation method according to claim 1, wherein Step 1 specifically comprises: Sub-step 1: determining the area ratio: calculating the ratio of the image area of each detected plane region to the area of the original scene image;
- The active camera relocation method according to claim 2, wherein the threshold set in sub-step 2 is 10%.
- The active camera relocation method according to claim 1, wherein Step 4 specifically comprises: (1) determining the number of SIFT feature points involved in computing each P_i and their distribution on the plane; (2) performing weighted fusion of all P_i according to the numerical weight (the number of SIFT feature points corresponding to each P_i) and the distribution weight (the distribution of those SIFT feature points), weighting each P_i by its influence on the camera relative pose, to obtain the information finally used to guide the camera motion, the two weights being established as follows:
- The active camera relocation method according to claim 5, wherein the numerical weight accounts for 0.8 and the distribution weight for 0.2.
- An active camera relocation method robust to illumination, comprising the following steps: (2) for the effective plane area image set, building an undirected fully connected graph with each plane area image as a node, where V denotes the node set, each node corresponding to one plane, and E denotes the edge set, each edge representing the Euclidean distance between the planes represented by its two end nodes; performing the same operation on the other effective plane area image set to obtain another undirected fully connected graph; (3) using the number of SIFT feature points of each plane image as the node weights and the Euclidean distances between plane images as the edge weights, solving the graph matching problem between the two graphs and establishing the matching relationship between the two effective plane area image sets, where L denotes the number of images with matching relationships between the current and reference observation images and i is the index; Step 3: obtaining the camera relative pose P_i guided by each pair of matched planes; Step 4: obtaining camera-motion guidance information by fusing all camera relative poses P_i; Step 5: judging whether the relocation process is complete; if complete, terminating; if not, repeating steps 1 to 5 iteratively.
- The active camera relocation method according to claim 7, wherein Step 1 specifically comprises: Sub-step 1: determining the area ratio: calculating the ratio of the image area of each detected plane region to the area of the original scene image;
- The active camera relocation method according to claim 8, wherein the threshold set in sub-step 2 is 10%.
- The active camera relocation method according to claim 7, wherein Step 4 specifically comprises: (1) determining the number of SIFT feature points involved in computing each P_i and their distribution on the plane; (2) performing weighted fusion of all P_i according to the numerical weight (the number of SIFT feature points corresponding to each P_i) and the distribution weight (the distribution of those SIFT feature points), weighting each P_i by its influence on the camera relative pose, to obtain the information finally used to guide the camera motion, the two weights being established as follows:
- The active camera relocation method according to claim 7, wherein Step 5 specifically comprises: judging whether relocation should terminate according to the scale of the translation component of the camera motion information; when the motion step size is smaller than a preset step threshold ξ, the relocation process is judged complete and relocation terminates; otherwise steps 1 to 5 are repeated.
- The active camera relocation method according to claim 7, wherein the numerical weight accounts for 0.8 and the distribution weight for 0.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/018,820 US20230300455A1 (en) | 2020-08-06 | 2021-08-06 | Active camera relocation method having robustness to illumination |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010783333.1A CN112070831B (zh) | 2020-08-06 | 2020-08-06 | 一种基于多平面联合位姿估计的主动相机重定位方法 |
CN202010783333.1 | 2020-08-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022028554A1 true WO2022028554A1 (zh) | 2022-02-10 |
Family
ID=73657629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/111064 WO2022028554A1 (zh) | 2020-08-06 | 2021-08-06 | 对光照鲁棒的主动相机重定位方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230300455A1 (zh) |
CN (1) | CN112070831B (zh) |
WO (1) | WO2022028554A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115355822A (zh) * | 2022-10-19 | 2022-11-18 | 成都新西旺自动化科技有限公司 | 一种异形对位计算方法及系统 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070831B (zh) * | 2020-08-06 | 2022-09-06 | 天津大学 | 一种基于多平面联合位姿估计的主动相机重定位方法 |
US20210118182A1 (en) * | 2020-12-22 | 2021-04-22 | Intel Corporation | Methods and apparatus to perform multiple-camera calibration |
US20220414899A1 (en) * | 2021-06-29 | 2022-12-29 | 7-Eleven, Inc. | Item location detection using homographies |
CN114742869B (zh) * | 2022-06-15 | 2022-08-16 | 西安交通大学医学院第一附属医院 | 基于图形识别的脑部神经外科配准方法及电子设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140369557A1 (en) * | 2013-06-14 | 2014-12-18 | Qualcomm Incorporated | Systems and Methods for Feature-Based Tracking |
CN106595601A (zh) * | 2016-12-12 | 2017-04-26 | 天津大学 | 一种无需手眼标定的相机六自由度位姿精确重定位方法 |
CN108648240A (zh) * | 2018-05-11 | 2018-10-12 | 东南大学 | 基于点云特征地图配准的无重叠视场相机姿态标定方法 |
CN111402331A (zh) * | 2020-02-25 | 2020-07-10 | 华南理工大学 | 基于视觉词袋和激光匹配的机器人重定位方法 |
CN112070831A (zh) * | 2020-08-06 | 2020-12-11 | 天津大学 | 一种基于多平面联合位姿估计的主动相机重定位方法 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780297B (zh) * | 2016-11-30 | 2019-10-25 | 天津大学 | 场景和光照变化条件下的图像高精度配准方法 |
CN107067423A (zh) * | 2016-12-16 | 2017-08-18 | 天津大学 | 一种适用于开放赋存环境的文物本体微变监测的方法 |
US10984508B2 (en) * | 2017-10-31 | 2021-04-20 | Eyedaptic, Inc. | Demonstration devices and methods for enhancement for low vision users and systems improvements |
CN109387204B (zh) * | 2018-09-26 | 2020-08-28 | 东北大学 | 面向室内动态环境的移动机器人同步定位与构图方法 |
-
2020
- 2020-08-06 CN CN202010783333.1A patent/CN112070831B/zh active Active
-
2021
- 2021-08-06 US US18/018,820 patent/US20230300455A1/en active Pending
- 2021-08-06 WO PCT/CN2021/111064 patent/WO2022028554A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140369557A1 (en) * | 2013-06-14 | 2014-12-18 | Qualcomm Incorporated | Systems and Methods for Feature-Based Tracking |
CN106595601A (zh) * | 2016-12-12 | 2017-04-26 | 天津大学 | 一种无需手眼标定的相机六自由度位姿精确重定位方法 |
CN108648240A (zh) * | 2018-05-11 | 2018-10-12 | 东南大学 | 基于点云特征地图配准的无重叠视场相机姿态标定方法 |
CN111402331A (zh) * | 2020-02-25 | 2020-07-10 | 华南理工大学 | 基于视觉词袋和激光匹配的机器人重定位方法 |
CN112070831A (zh) * | 2020-08-06 | 2020-12-11 | 天津大学 | 一种基于多平面联合位姿估计的主动相机重定位方法 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115355822A (zh) * | 2022-10-19 | 2022-11-18 | 成都新西旺自动化科技有限公司 | 一种异形对位计算方法及系统 |
CN115355822B (zh) * | 2022-10-19 | 2023-01-17 | 成都新西旺自动化科技有限公司 | 一种异形对位计算方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN112070831B (zh) | 2022-09-06 |
US20230300455A1 (en) | 2023-09-21 |
CN112070831A (zh) | 2020-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022028554A1 (zh) | 对光照鲁棒的主动相机重定位方法 | |
Chen et al. | Sports camera calibration via synthetic data | |
CN104601964B (zh) | 非重叠视域跨摄像机室内行人目标跟踪方法及系统 | |
CN111126304A (zh) | 一种基于室内自然场景图像深度学习的增强现实导航方法 | |
CN109559320A (zh) | 基于空洞卷积深度神经网络实现视觉slam语义建图功能的方法及系统 | |
CN111161334B (zh) | 一种基于深度学习的语义地图构建方法 | |
Feng et al. | Fine-grained change detection of misaligned scenes with varied illuminations | |
CN109752855A (zh) | 一种光斑发射装置和检测几何光斑的方法 | |
CN111079518A (zh) | 一种基于执法办案区场景下的倒地异常行为识别方法 | |
CN114119732B (zh) | 基于目标检测和K-means聚类的联合优化动态SLAM方法 | |
CN109443200A (zh) | 一种全局视觉坐标系和机械臂坐标系的映射方法及装置 | |
CN103886324B (zh) | 一种基于对数似然图像的尺度自适应目标跟踪方法 | |
CN111709997B (zh) | 一种基于点与平面特征的slam实现方法及系统 | |
CN112347974A (zh) | 人体头部姿态估计算法及操作员工作状态识别系统 | |
CN108596947A (zh) | 一种适用于rgb-d相机的快速目标跟踪方法 | |
CN103533332B (zh) | 一种2d视频转3d视频的图像处理方法 | |
CN107644203A (zh) | 一种形状自适应分类的特征点检测方法 | |
CN114235815A (zh) | 基于场景过滤的换流站户外电气设备表面缺陷检测方法 | |
CN112365537B (zh) | 一种基于三维点云对齐的主动相机重定位方法 | |
CN115862074A (zh) | 人体指向确定、屏幕控制方法、装置及相关设备 | |
Putra et al. | Camera-based object detection and identification using yolo method for indonesian search and rescue robot competition | |
US20230360262A1 (en) | Object pose recognition method based on triangulation and probability weighted ransac algorithm | |
Suzui et al. | Toward 6 dof object pose estimation with minimum dataset | |
Manawadu et al. | Object Recognition and Pose Estimation from RGB-D Data Using Active Sensing | |
CN115376073B (zh) | 一种基于特征点的异物检测方法和系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21853819 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21853819 Country of ref document: EP Kind code of ref document: A1 |