WO2019084726A1 - Marker-based camera image processing method, and augmented reality device - Google Patents


Info

Publication number
WO2019084726A1
WO2019084726A1 (PCT/CN2017/108404)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sequence
camera
marker
feature point
Prior art date
Application number
PCT/CN2017/108404
Other languages
French (fr)
Chinese (zh)
Inventor
谢俊
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司 filed Critical 深圳市柔宇科技有限公司
Priority to CN201780096283.6A priority Critical patent/CN111344740A/en
Priority to PCT/CN2017/108404 priority patent/WO2019084726A1/en
Publication of WO2019084726A1 publication Critical patent/WO2019084726A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Abstract

The present invention relates to a marker-based camera image processing method and an augmented reality device. The method comprises the steps of: selecting or extracting a marker image and performing perspective transformation on it to obtain a marker image sequence; extracting sequence feature points of each sequence image in the marker image sequence; acquiring a current camera image, extracting its image feature points, matching them with the sequence feature points of the sequence images, and obtaining successfully matched feature point pairs; and calculating the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera. The present invention can effectively recognize a marker, increases the recognition rate of the marker at large viewing angles, improves the accuracy of feature point matching between the marker image and the camera image, and requires little computation, making it applicable to mobile devices.

Description

Marker-based camera image processing method, and augmented reality device

Technical Field
[0001] The present invention relates to the field of augmented reality technology, and more particularly, to a marker-based camera image processing method and an augmented reality device.
Background Art
[0002] Existing augmented reality (AR) technology generally uses computer vision to determine the relative positional relationship between the real captured scene and marker symbols. A real-time captured image is taken as input and compared with the marker image, specifically: the captured image is searched for connected regions that may correspond to the marker image; each connected region is taken as a candidate and its contour is extracted; if four intersecting straight edges can be extracted from a contour, the region is treated as a possible marker; the corner features found from the four straight edges are then used to correct the deformation, thereby obtaining the correspondence between the marker image and the captured image.
[0003] However, the marker image selected in this method is a single, fixed image. When the camera changes its viewing angle and moves during shooting, the captured image differs greatly from the marker image; feature comparison then requires more data and runs more slowly, and both the recognition rate and the accuracy for the marker degrade.

Technical Problem
[0004] The technical problem to be solved by the present invention is to provide a marker-based camera image processing method and apparatus, suitable for mobile devices, that effectively recognizes a marker and effectively improves the accuracy of the feature point matching process, as well as a method and device for implementing augmented reality that include the method, and a computer-readable storage medium.

Solution to Problem
Technical Solution
[0005] The technical solution adopted by the present invention to solve the above technical problem is to construct a marker-based camera image processing method comprising the following steps:
[0006] A: selecting or extracting a marker image, and performing perspective transformation on the marker image to obtain a marker image sequence;
[0007] B: extracting sequence feature points of each sequence image in the marker image sequence;
[0008] C: acquiring a current camera image;
[0009] D: extracting image feature points of the current camera image, pairing the image feature points of the current camera image with the sequence feature points of the sequence images, and obtaining successfully matched feature point pairs;
[0010] E: calculating an extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, the extrinsic parameter matrix of the current frame describing the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
[0011] The present invention also provides a marker-based camera image processing apparatus, comprising:
[0012] a marker image sequence acquisition module, configured to select or extract a marker image and perform perspective transformation on the marker image to obtain a marker image sequence;
[0013] a first feature point extraction module, configured to extract sequence feature points of each sequence image in the marker image sequence;
[0014] a current camera image acquisition module, configured to acquire a current camera image;
[0015] a feature point pairing module, configured to extract image feature points of the current camera image, pair the image feature points of the current camera image with the sequence feature points of the sequence images, and obtain successfully matched feature point pairs;
[0016] an extrinsic parameter matrix calculation module, configured to calculate an extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, the extrinsic parameter matrix of the current frame describing the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
[0017] The present invention also provides a method for implementing augmented reality, in which the extrinsic parameter matrix of the camera is obtained by the above marker-based camera image processing method.
[0018] The present invention also provides a device for implementing augmented reality, comprising a processor configured to execute a computer program stored in a memory to implement the steps of the method described above.
[0019] The present invention also provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method described above.
Advantageous Effects of the Invention
[0020] The present invention can effectively recognize a marker, increase the recognition rate of the marker at large viewing angles, and improve the accuracy of feature point matching between the marker image and the camera image, while requiring little computation, making it suitable for mobile devices.
Brief Description of the Drawings
[0021] The present invention will be further described below with reference to the accompanying drawings and embodiments, in which:
[0022] FIG. 1 is a schematic flowchart of Embodiment 1 of the marker-based camera image processing method of the present invention;
[0023] FIG. 2 is a schematic flowchart of Embodiment 2 of the marker-based camera image processing method of the present invention;
[0024] FIG. 3-1 is a schematic diagram of a marker image sequence;
[0025] FIG. 3-2 is a schematic diagram of an original marker image;
[0026] FIG. 3-3 is a schematic diagram of images generated by performing two perspective transformations of a marker image in each direction;
[0027] FIG. 4 is a schematic diagram of the self-matching result of the feature points of a marker image;
[0028] FIG. 5 is a schematic diagram of a region of interest and a region of non-interest;
[0029] FIG. 6 is a schematic diagram of the processing of the typical image;
[0030] FIG. 7 is a schematic diagram of error analysis of the camera extrinsic parameter matrix;
[0031] FIG. 8 is a schematic diagram of the functional modules of the marker-based camera image processing apparatus of the present invention.
Best Mode for Carrying Out the Invention
[0032] In order to more clearly understand the technical features, objects, and effects of the present invention, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
[0033] Referring to FIG. 1, FIG. 1 is a schematic flowchart of Embodiment 1 of the marker-based camera image processing method of the present invention. The marker-based camera image processing method of this embodiment can be applied to augmented reality technology.
[0034] As shown in FIG. 1, the marker-based camera image processing method of this embodiment includes the following steps:
[0035] Step A: selecting or extracting a marker image, and performing perspective transformation on the marker image to obtain a marker image sequence.
[0036] The marker image sequence can be obtained by applying a preset transformation matrix to the selected or extracted marker image to perform a pose transformation. The pose transformation applied to the selected or extracted marker image includes translation and rotation.
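As an illustrative sketch (not part of the claimed embodiment), the perspective transformation of step A can be modeled as applying a 3×3 homography matrix to the marker image; the matrix `H_tilt` below is a hypothetical example simulating one tilted viewing pose, and a real implementation would warp the full image rather than only its corner points:

```python
import numpy as np

def perspective_transform_points(H, pts):
    """Apply a 3x3 perspective (homography) matrix H to Nx2 points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Hypothetical marker corners (a unit square) and a small tilt homography.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
H_tilt = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.2, 0.0, 1.0]])  # perspective term simulates a tilt
warped = perspective_transform_points(H_tilt, corners)
```

Generating the marker image sequence then amounts to repeating this with several preset matrices, one per simulated camera pose.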
[0037] The selected or extracted marker image is a marker image stored in advance in a memory; the marker image may be an image called directly from an image library, or a real photograph captured on site and stored in the memory. The present invention places no specific requirement on the source of the marker image.
[0038] The preset transformation matrix can be calculated from the preset marker image and its distance from the camera in a typical usage scenario; the transformation used may be a perspective transformation.
[0039] Further, before step A, the marker-based camera image processing method of the present invention also includes:
[0040] Step A1: acquiring the intrinsic parameter matrix of the camera, the intrinsic parameter matrix including parameter information of the camera.
[0041] The parameter information of the camera comprises various parameters of the camera itself, for example, the numbers of horizontal and vertical pixels of the camera and the normalized horizontal and vertical focal lengths of the camera. These parameters can be obtained by pre-calibrating the camera, or calculated directly by reading the camera's parameter information (pixels, focal length, etc.); this embodiment places no specific requirement on how they are obtained.
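As a hedged sketch of the relationship described above, the intrinsic parameter matrix can be assembled directly from the focal lengths and pixel counts; the assumption that the principal point lies at the image center, and the numeric values used, are illustrative only:

```python
import numpy as np

def intrinsic_matrix(fx, fy, width, height):
    """Pinhole intrinsic matrix; the principal point is assumed to lie at
    the image center (an illustrative simplification)."""
    return np.array([[fx, 0.0, width / 2.0],
                     [0.0, fy, height / 2.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical values: 800 px focal lengths, 640x480 sensor.
K = intrinsic_matrix(fx=800.0, fy=800.0, width=640, height=480)
```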
[0042] Step A2: initializing the system environment and configuring system parameters. This step mainly includes building the system hardware platform, setting up a drawing environment capable of supporting two-dimensional and three-dimensional graphics, allocating image buffer space, recognizing the camera, and so on.
[0043] Step B: extracting sequence feature points of each sequence image in the marker image sequence.
[0044] Further, before step C, this embodiment also includes:
[0045] B11: performing feature point extraction on all sequence images in the marker image sequence using a feature point extraction algorithm, for example, the SURF, SIFT, or ORB feature point extraction algorithm.
[0046] In this embodiment, the ORB feature point extraction algorithm is preferably used to extract the sequence feature points of each sequence image.
[0047] Compared with SURF and SIFT features, using the ORB algorithm to extract the feature points of the marker image provides rotation invariance and faster extraction, which makes this scheme suitable for mobile devices.
[0048] B12: performing self-matching on the sequence feature points of each sequence image extracted in step B11.
[0049] The extracted sequence feature points of each sequence image are self-matched, i.e., the sequence feature points of each sequence image are matched against the feature points of the same image. By self-matching the sequence feature points of each sequence image, feature points with high mutual similarity within each sequence image can be identified.
[0050] Understandably, a threshold method may be used for self-matching in this embodiment: during matching, the values at corresponding positions of the ORB descriptor arrays of any two sequence feature points in a sequence image are subtracted, the absolute values of the differences are accumulated, and the accumulated value serves as the pairing value of the two feature points. If the accumulated value is greater than the threshold, the match is judged to have failed; if it is less than the threshold, the match is judged successful, i.e., the two sequence feature points match each other.
[0051] B13: deleting the sequence feature points for which self-matching succeeded, and retaining the sequence feature points for which self-matching failed.
[0052] The mutually similar sequence feature points found by self-matching in step B12 would cause confusion in subsequent matching. Therefore, in this step, the sequence feature points of each sequence image for which self-matching succeeded are deleted, to reduce their influence on subsequent operations, further speed up processing, reduce the amount of matching and computation, and improve the accuracy of feature point matching; meanwhile, the sequence feature points for which self-matching failed are retained for pairing in the subsequent steps.
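The self-matching of steps B11–B13 can be sketched as follows; the toy 4-byte descriptors and the threshold value of 16 are illustrative assumptions (real ORB descriptors are 256-bit binary strings), but the pairing rule — sum of absolute differences at corresponding descriptor positions compared against a threshold — follows the description above:

```python
import numpy as np

def self_match_filter(descriptors, threshold):
    """Drop feature points whose descriptor self-matches another descriptor
    of the same image (pairing value = sum of absolute differences of
    corresponding positions, match succeeds when below the threshold)."""
    n = len(descriptors)
    ambiguous = set()
    for i in range(n):
        for j in range(i + 1, n):
            pairing_value = np.abs(descriptors[i].astype(int) -
                                   descriptors[j].astype(int)).sum()
            if pairing_value < threshold:      # self-match succeeded
                ambiguous.update((i, j))
    return [i for i in range(n) if i not in ambiguous]  # retained indices

# Toy 4-byte "descriptors": points 0 and 2 are near-duplicates.
desc = np.array([[10, 20, 30, 40],
                 [200, 5, 90, 7],
                 [10, 21, 30, 40],
                 [90, 90, 90, 90]], dtype=np.uint8)
kept = self_match_filter(desc, threshold=16)
```

In the toy data, points 0 and 2 self-match and are both deleted, so only the unambiguous points 1 and 3 are retained for later pairing.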
[0053] Step C: acquiring a current camera image.
[0054] Understandably, the current camera image is a frame of the real environment captured by the camera, i.e., an image acquired in real time.
[0055] Step D: extracting the image feature points of the acquired current camera image, pairing the image feature points of the current camera image with the sequence feature points of the sequence images, and obtaining successfully matched feature point pairs.
[0056] Further, step D specifically includes:
[0057] D11: identifying a region of interest in the current camera image based on a preset extrinsic parameter matrix, and removing the region of non-interest.
[0058] In step C, the current camera image, i.e., the current frame captured by the camera, is acquired. According to the preset extrinsic parameter matrix, the acquired current camera image is searched to identify the region in the camera image occupied by the marker image matched in the previous frame; this region is the region of interest in the current camera image, and the region of non-interest is removed based on it. In practice, the area outside the region of interest is replaced with a solid color block (e.g., the entire region of non-interest is filled with black); once the region of non-interest is filled with black, no feature points can be extracted outside the region of interest, which speeds up the computation.
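A minimal sketch of the masking described above, assuming for simplicity that the region of interest is an axis-aligned rectangle (the region projected from the previous frame's extrinsic matrix is in general a quadrilateral, which would be kept with a polygon mask instead):

```python
import numpy as np

def mask_outside_roi(image, x0, y0, x1, y1):
    """Fill everything outside the rectangular ROI with black so that no
    feature points can be extracted there."""
    masked = np.zeros_like(image)               # black everywhere
    masked[y0:y1, x0:x1] = image[y0:y1, x0:x1]  # copy back only the ROI
    return masked

frame = np.arange(36, dtype=np.uint8).reshape(6, 6)  # toy grayscale frame
roi_only = mask_outside_roi(frame, 1, 1, 4, 4)
```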
[0059] It should be noted here that if the preset extrinsic parameter matrix is not obtained from processing the first camera frame, it is the camera extrinsic parameter matrix obtained for the previous frame by the marker-based camera image processing method of the present invention. After the camera extrinsic parameter matrix is obtained for one frame, it is stored in the memory to serve as the preset extrinsic parameter matrix for processing the next frame. If the preset extrinsic parameter matrix is obtained from processing the first camera frame, the extrinsic parameter matrix for the first frame can be obtained using an existing image processing scheme and stored in the memory to serve as the preset extrinsic parameter matrix for processing the second frame.
[0060] Understandably, if no camera extrinsic parameter matrix was obtained for the previous frame, this step is not performed, i.e., step D11 need not be executed; that is, no region-of-interest identification is performed on the acquired current camera image.
[0061] D12: extracting the feature points of interest of the region of interest in the current camera image.
[0062] The feature points of interest of the region of interest in the current camera image can be extracted using a feature point extraction algorithm; in this embodiment, the ORB feature point extraction algorithm is preferred.
[0063] D13: pairing the feature points of interest of the region of interest with the sequence feature points retained in step B13.
[0064] Further, step D13 specifically includes:
[0065] D131: obtaining a typical image from the marker image sequence according to the preset extrinsic parameter matrix, and obtaining the typical feature points of the typical image.
[0066] The typical image is the marker image closest to the coordinate correspondence of matched feature points described by the extrinsic parameter matrix; that is, among all sequence images in the marker image sequence, the typical image is the one closest in angle, position, and state to the current image captured by the camera. Understandably, if no camera extrinsic parameter matrix was obtained for the previous frame, no typical image needs to be selected.
[0067] Specifically, the typical image can be obtained through the following steps:
[0068] D1311: obtaining the sequence vertex coordinates corresponding to each sequence image in the marker image sequence;
[0069] D1312: calculating the length of each edge of each sequence image from the sequence vertex coordinates, and saving them in order to obtain a first edge-length sequence for each sequence image;
[0070] D1313: normalizing the first edge-length sequence obtained for each sequence image;
[0071] D1314: obtaining the vertex coordinates of the region of interest according to the preset extrinsic parameter matrix;
[0072] D1315: calculating a second edge-length sequence of the region of interest from the obtained vertex coordinates of the region of interest, and normalizing the calculated second edge-length sequence;
[0073] D1316: calculating the Euclidean distance or Manhattan distance between the normalized first edge-length sequence of each sequence image from step D1313 and the normalized second edge-length sequence of the region of interest from step D1315;
[0074] D1317: selecting the typical image according to the obtained Euclidean or Manhattan distances.
[0075] By computing the Euclidean or Manhattan distance between the second edge-length sequence of the current region of interest and the first edge-length sequence of each marker image, and selecting the smallest such distance, the typical image is determined: the sequence image with the smallest Euclidean or Manhattan distance is the typical image.
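Steps D1311–D1317 can be sketched as follows; the quadrilateral vertex values are hypothetical, and Manhattan distance is used for the comparison (Euclidean would serve equally):

```python
import numpy as np

def edge_length_sequence(vertices):
    """Edge lengths of a quadrilateral given its 4 vertices in order,
    normalized so the comparison is scale-invariant (D1312/D1313)."""
    edges = np.linalg.norm(np.roll(vertices, -1, axis=0) - vertices, axis=1)
    return edges / edges.sum()

def pick_typical_image(sequence_vertices, roi_vertices):
    """Index of the sequence image whose normalized edge-length sequence is
    closest (Manhattan distance) to that of the region of interest."""
    roi_seq = edge_length_sequence(roi_vertices)
    dists = [np.abs(edge_length_sequence(v) - roi_seq).sum()
             for v in sequence_vertices]
    return int(np.argmin(dists))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)     # frontal view
flat = np.array([[0, 0], [2, 0], [2, 0.5], [0, 0.5]], float)   # squashed view
roi = np.array([[0, 0], [4, 0], [4, 1.1], [0, 1.1]], float)    # ROI ~ flat
best = pick_typical_image([square, flat], roi)
```

Here the squashed view is correctly selected as the typical image because its normalized edge-length sequence is closest to that of the region of interest, even though the ROI is at a different scale.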
[0076] Understandably, after the sequence feature points of each sequence image in the marker image sequence are obtained in step B, the sequence feature points of each sequence image are saved correspondingly. Therefore, once the typical image among the sequence images is obtained in step D1317, its corresponding sequence feature points can be retrieved; these serve as the typical feature points of the typical image.
[0077] D132: pairing the feature points of interest of the region of interest with the typical feature points of the typical image.
[0078] By selecting the typical image from the marker image sequence and then pairing the extracted feature points of interest of the region of interest with the typical feature points of the typical image, the number of pairings and the pairing time can be greatly reduced, the computation accelerated, and the pairing accuracy improved.
[0079] Further, step D132 specifically includes the steps of:
[0080] D1321: pairing the typical feature points of the typical image with the feature points of interest of the region of interest using the threshold method.
[0081] D1322: judging whether the pairing value between a typical feature point of the typical image and a feature point of interest of the region of interest is greater than the threshold, and if so, extracting the typical feature point of the typical image whose pairing value is greater than the threshold.
[0082] For example, when one typical feature point of the typical image can be matched to multiple feature points of interest in the region of interest, it cannot be determined which of those pairings is correct, which easily causes confusion. To avoid such confusion and interference with correct pairing, the typical feature points of the typical image that easily cause confusion need to be extracted.
[0083] D1323: removing, from the matching result, the typical feature points of the typical image whose pairing value is greater than the threshold, to obtain the successfully matched feature point pairs.
[0084] In step D1322, after the typical feature points whose pairing value is greater than the threshold are extracted, these typical feature points are removed from the matching result; meanwhile, the typical feature points whose pairing value is greater than the threshold are saved, as difference feature points, into the difference feature point table obtained for the previous frame.
[0085] In other words, in step D132, from the typical feature points of the typical image (i.e., the set of typical feature points) and the feature points of interest of the region of interest, the pairing value between each typical feature point and each feature point of interest is calculated and compared with the threshold. If it is greater than the threshold, the pairing fails; if it is less than the threshold, the pairing succeeds, i.e., two feature points whose pairing value is less than the threshold form a successfully matched feature point pair. For example, let a typical feature point of the typical image be X, a feature point of interest of the region of interest be Y, and the pairing value of X and Y be M. If M is greater than the threshold, the pairing of X and Y fails, and X is saved into the difference feature point table as a difference feature point; if M is less than the threshold, X and Y are paired successfully, and X and Y form a successfully matched feature point pair.
[0086] Understandably, if the current frame is the first frame, the difference feature point table is empty.
[0087] Step E: calculating the extrinsic parameter matrix of the camera for the current frame from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, the extrinsic parameter matrix of the current frame describing the coordinate correspondence between the successfully matched feature points of the marker image and the camera image.
[0088] Specifically, the RPP (Robust Planar Pose) algorithm is used to calculate the camera extrinsic parameter matrix for the current frame from the successfully matched feature point pairs obtained in step D, in combination with the intrinsic parameter matrix of the camera.
[0089] The extrinsic parameter matrix of the camera describes how the camera photographing the marker must translate and rotate in space so as to capture the currently acquired state of the marker image. That is, it represents the correspondence between the point coordinates of the marker image and those of the camera image captured by the camera; this correspondence is expressed as a function, and the function is described as a matrix, namely the extrinsic parameter matrix. In other words, the camera extrinsic parameter matrix represents the correspondence between the point coordinates of the marker image and those of the camera image.
[0090] Further, after step E, the marker-based camera image processing method of this embodiment further includes:
[0091] F: Perform error calculation on the extrinsic parameter matrix of the current-frame camera obtained in step E, and obtain an error result.
[0092] Step F specifically includes the steps of:
[0093] F1: Based on the extrinsic parameter matrix of the current-frame camera obtained in step E, combined with the coordinates of all successfully matched feature points of the sequence image and the current camera image, calculate the computed coordinates, in the camera coordinate system, of the sequence feature point coordinates of that sequence image;
[0094] F2: Calculate the error distance between the computed coordinates obtained in step F1 and the matched coordinates of the successfully matched feature points in the current camera image;
[0095] F3: Calculate the average error distance from all the obtained error distances.
[0096] G: According to the error result, verify whether the extrinsic parameter matrix of the camera for the current frame is correct.
[0097] G1: Determine whether the average error distance is greater than an average error distance threshold;
[0098] G2: If the average error distance is greater than the average error distance threshold, the extrinsic parameter matrix of the camera obtained in step E is incorrect; otherwise, the extrinsic parameter matrix of the current-frame camera obtained in step E is determined to be correct.
[0099] Preferably, if it is determined in step G2 that the extrinsic parameter matrix of the current-frame camera obtained in step E is correct, the following steps are further performed:
[0100] Update the region of interest and the difference feature point table saved for the previous frame;
[0101] Save the extrinsic parameter matrix of the current-frame camera. The saved extrinsic parameter matrix can be used as the preset extrinsic parameter matrix for the camera image of the next frame.
[0102] It can be understood that if step G2 determines that the extrinsic parameter matrix of the current-frame camera obtained in step E is incorrect, the current image processing fails: the obtained extrinsic parameter matrix is not saved, and the saved region of interest and difference feature point table are cleared at the same time.
[0103] Referring to FIG. 2, FIG. 2 is a schematic flowchart of Embodiment 2 of the marker-based camera image processing method of the present invention. The marker-based camera image processing method of this embodiment can be used to implement augmented reality technology.
[0104] As shown in FIG. 2, the marker-based camera image processing method of this embodiment includes steps 201 to 209. Specifically:
[0105] Step 201: Acquire the intrinsic parameter matrix of the camera; the intrinsic parameter matrix includes parameter information of the camera.
[0106] The parameter information of the camera comprises various parameters of the camera itself, for example, the number of horizontal pixels, the number of vertical pixels, and the horizontal and vertical normalized focal lengths of the camera. These parameters can be obtained by pre-calibrating the camera, or calculated directly by reading the camera's parameter information (pixels, focal length, etc.); this embodiment imposes no specific requirement.
[0107] Step 202: Select or extract a marker image and perform perspective transformation on it to obtain a marker image sequence.
[0108] It can be understood that the essence of step 202 is to simulate the marker images that would be captured when the camera is not perpendicular to the selected or extracted marker image (the original marker image).
[0109] Specifically, the simulation can be obtained by perspective transformation; the preset transformation matrices used can be derived from the preset marker image and the distance matrix to the camera in typical usage scenarios.
[0110] FIG. 3-1 shows the marker image sequence obtained by applying perspective transformation, via the transformation matrices, to FIG. 3-2. The perspective transformation matrices can be computed by virtually constructing extrinsic parameter matrices that simulate pose changes of the camera, combined with the intrinsic parameter matrix of the camera.
[0111] It can be understood that, in this embodiment, FIG. 3-1 generates only four directions (one direction every 90 degrees), each with one tilt angle. If a better effect is required, more directions and more angles per direction can be added (as shown in FIG. 3-3); in FIG. 3-3, two perspective transformations are performed in each direction, generating two images.
[0112] The transformation matrices are preset; whenever a new marker image is adopted, the same preset transformation matrices can be used to generate its marker image sequence.
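For illustration only, applying one preset 3×3 perspective (homography) matrix to marker-image coordinates amounts to homogeneous multiplication followed by division by the third component; the matrices and points below are illustrative, and a full implementation would warp every pixel of the image this way per preset matrix:

```python
def apply_perspective(H, point):
    # H is a 3x3 perspective (homography) matrix; point is (x, y).
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

def transform_corners(H, corners):
    # Transforming the four marker corners gives the outline of one
    # simulated image of the marker image sequence.
    return [apply_perspective(H, c) for c in corners]
```

An identity matrix leaves points unchanged; a matrix with a nonzero bottom-row entry produces the tilted (perspective) views that make up the sequence.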
[0113] Step 203: Extract the sequence feature points of each sequence image in the marker image sequence, self-match the sequence feature points of each sequence image, and remove the sequence feature points that self-match successfully.
[0114] In this embodiment, the sequence feature points of each sequence image in the marker image sequence can be extracted with the ORB algorithm. ORB features extracted by the ORB algorithm are rotation-invariant and fast to extract, making them suitable for mobile devices running on the move.
[0115] An ORB feature is an integer sequence of length 64. Matching two feature points consists of subtracting the values at corresponding positions of their ORB feature sequences, taking the absolute values, and accumulating them; the accumulated absolute value is the pairing value of the feature point match. A threshold method is used for the decision: if the accumulated value is greater than the threshold, the match fails; if it is smaller than the threshold, the match succeeds.
[0116] In this embodiment, the purpose of self-matching all sequence feature points of each sequence image is to remove feature points with high mutual similarity from the current marker image. As shown in FIG. 4, the feature points represented by points a, b, and c are the ones to be removed: these three feature points are highly similar to one another and would cause matching confusion.
[0117] Further, for the ORB feature points computed on a transformed marker image, the coordinates of each feature point are described by its coordinate position before the transformation, while the value of the ORB feature sequence is unchanged. For example, a point with coordinates (1, 1) in the original marker image moves to coordinates (10, 10) after the perspective transformation. If that coordinate point is detected as an ORB feature point in the transformed image, the ORB feature sequence is computed on the transformed image (because the feature point was detected after the transformation), but the coordinate position of the point is recorded using the original-image coordinates (1, 1). In this way, the extrinsic parameter matrix of the camera, which is computed from the correspondence between marker-image coordinates and the marker's coordinates in the camera image (in effect, from the correspondence of a series of feature point coordinates), is solved using the coordinates of the original marker image, which ensures a correct result.
[0118] Step 204: Acquire the current camera image.
[0119] Step 205: Based on the preset extrinsic parameter matrix, identify the region of interest in the current camera image and remove the non-interest region.
[0120] It should be noted that if the preset extrinsic parameter matrix was not obtained from processing the first camera frame, it is the camera extrinsic parameter matrix obtained for the previous frame by the marker-based camera image processing method of the present invention. After the extrinsic parameter matrix of the camera is obtained for the previous frame, it is saved in memory to serve as the preset extrinsic parameter matrix for processing the next camera frame. If the preset extrinsic parameter matrix is obtained from processing the first camera frame, the first-frame camera extrinsic parameter matrix can be obtained with an existing image processing scheme and saved in memory as the preset extrinsic parameter matrix for processing the second camera frame.
[0121] The region of interest is obtained from the processing result of the previous frame; it is a polygon described by several vertices (e.g., a quadrilateral described by four vertices). From the preset extrinsic parameter matrix, the corresponding region of interest in the camera image of the current frame is obtained.
[0122] As shown in FIG. 5, the white area is the region of interest, and everything outside it is the non-interest region. The image outside the quadrilateral (the non-interest region) is entirely replaced with black; since no feature points can be extracted from the black area, this greatly increases the computation speed.
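A minimal sketch of blacking out the non-interest region, using a single-channel image as nested lists and a standard ray-casting point-in-polygon test; a real implementation would instead use an image library's polygon fill, and the image/polygon shapes here are illustrative:

```python
def point_in_polygon(pt, poly):
    # Ray-casting test; poly is a list of (x, y) vertices.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mask_non_roi(image, roi_polygon):
    # Replace every pixel outside the ROI polygon with black (0) so
    # that no feature points can be extracted there.
    h, w = len(image), len(image[0])
    return [[image[r][c] if point_in_polygon((c, r), roi_polygon) else 0
             for c in range(w)] for r in range(h)]
```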
[0123] Step 206: Extract the feature points of interest of the region of interest in the current camera image.
[0124] The feature points of interest of the region of interest in the current camera image are extracted with a feature point extraction algorithm; this embodiment preferably uses the ORB feature point extraction algorithm.
[0125] Step 207: According to the preset extrinsic parameter matrix, obtain a typical image from the marker image sequence, and obtain the typical feature points of the typical image.
[0126] The essence of step 207 is to select, from the marker image sequence, the marker image closest to the current camera image state, i.e., the aforementioned typical image, which is used for matching against the current camera image.
[0127] It can be understood that if the extrinsic parameter matrix calculation for the previous frame did not succeed (i.e., the extrinsic parameter matrix of the camera obtained for the previous frame is incorrect), the feature points of the original marker image, i.e., those of FIG. 3-2, are used. Otherwise, a typical image is selected from the whole marker image sequence, and its typical feature points are used for matching against the feature points of the camera image acquired for the current frame. The specific selection method is as follows:
[0128] Calculate the lengths of the four edges of each marker image in the marker image sequence and save them in order; as shown in FIG. 6, the four edge lengths are saved, in the order 1, 2, 3, 4, as five length sequences. For each sequence, every edge length is divided by the length of edge 1 of that sequence (edge 1 included), for normalization. Then, from the extrinsic parameter matrix of the camera calculated and saved for the previous frame, compute the coordinate positions of points A, B, C, and D on the previous camera frame, compute the lengths of line segments AC, AB, BD, and DC on that frame to form a length sequence, and normalize it in the same way. Match it against all the length sequences of the marker image sequence (the five length sequences above).
[0129] The specific matching method is to compute the Euclidean distance or Manhattan distance between the length sequence formed from the camera image and the length sequence formed from each marker image, and select the minimum among the computed Euclidean or Manhattan distances; the sequence image with the smallest Euclidean or Manhattan distance is the typical image.
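For illustration only, the normalize-then-compare selection might be sketched as follows; the function names and the four-element sequences are illustrative assumptions:

```python
import math

def normalize(lengths):
    # Divide every edge length by the length of edge 1 of the sequence.
    base = lengths[0]
    return [v / base for v in lengths]

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_typical_index(camera_lengths, marker_length_sequences, dist=manhattan):
    # The sequence image with the smallest distance is the typical image.
    cam = normalize(camera_lengths)
    dists = [dist(cam, normalize(s)) for s in marker_length_sequences]
    return dists.index(min(dists))
```

Either distance may be plugged in via the `dist` parameter, matching the specification's "Euclidean or Manhattan" alternative.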
[0130] Step 208: Pair the typical feature points of the typical image obtained in step 207 with the feature points of interest of the region of interest, obtain the difference feature points, remove them from the matching result, and obtain the successfully matched feature point pairs.
[0131] The threshold method is used to pair the typical feature points of the typical image with the feature points of interest of the region of interest: determine whether the pairing value of a typical feature point matched against a feature point of interest is greater than the threshold, and if so, extract the typical-image feature points whose pairing value is greater than the threshold.
[0132] For example, when a feature point of a sequence image can be matched to multiple feature points of the region of interest, it cannot be determined which pairing between that sequence-image feature point and a region-of-interest feature point is correct, which easily causes confusion. To avoid confusion and interference with correct pairing, the typical feature points whose pairing value is greater than the threshold are extracted and taken as the difference feature points obtained for the current frame.
[0133] Further, the typical feature points whose pairing value is greater than the threshold are saved in the difference feature point table obtained for the previous frame. It can be understood that if the current frame is the first frame, the difference feature point table is empty.
[0134] Step 209: From the successfully matched feature point pairs obtained in step 208, combined with the intrinsic parameter matrix of the camera, calculate the extrinsic parameter matrix of the current-frame camera, and verify whether the calculated extrinsic parameter matrix of the current-frame camera is correct.
[0135] In this step, the Robust Planar Pose (RPP) algorithm is used to calculate the extrinsic parameter matrix of the camera. It can be understood that calculating the extrinsic parameter matrix of the camera with the RPP algorithm generally requires the intrinsic parameter matrix of the camera and the correspondences between at least four pairs of marker-image feature point coordinates and camera-image feature point coordinates; that is, the number of matched feature point pairs must be greater than or equal to four.
[0136] To achieve higher matching precision, the present invention can process with a set threshold. Specifically, a threshold N (N > 4) can be set; before the extrinsic parameter matrix is calculated, first determine whether the number of matched feature point pairs is greater than N. If so, perform the subsequent operations; if not, the current calculation fails.
[0137] Further, in this embodiment, error analysis is used to verify whether the calculated extrinsic parameter matrix of the camera for the current frame is correct.
[0138] As shown in the mathematical model below (a camera perspective projection model), M1 represents the intrinsic parameter matrix of the camera and M2 represents the calculated extrinsic parameter matrix of the camera for the current frame. Let OXYZ be the world coordinate system and uv the image coordinate system in pixels. If the coordinates of an object point P in the world coordinate system are (X, Y, Z), the coordinates of the corresponding point in the image coordinate system are (u, v).
[0139] The method of the present invention directly takes the pixel coordinates of the original marker as the X-axis and Y-axis coordinates of the world coordinate system, with Z = 0 (the position of the original marker image is taken as the Z-axis zero point), and the pixel coordinates captured by the current camera as the pixel coordinate system. In this case, the obtained intrinsic and extrinsic parameter matrices together represent the functional relationship from the image coordinates of each point of the original marker image to the camera image coordinates extracted afterwards. Using this functional relationship and the coordinates of each successfully matched marker-image feature point, an image coordinate is computed, and the distance between this computed image coordinate and the successfully matched camera-image feature point is obtained; this distance is taken as the error distance of the marker-image feature point.
[0140] For the calculation result of each frame, the error distances of all (successfully matched) feature points of the currently calculated marker image are summed and divided by the total number of successful matches to obtain the average error distance. A threshold is set: if the average error distance is greater than this threshold, the calculation fails, i.e., the extrinsic parameter matrix of the camera calculated for the current frame is incorrect; if the average error distance is smaller than this threshold, the calculation succeeds, i.e., the extrinsic parameter matrix of the camera calculated for the current frame is correct.
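An illustrative sketch of this verification, taking M1 as a 3×3 intrinsic matrix `K` and M2 as a 3×4 extrinsic matrix `Rt`; the matrix layouts and the threshold value are assumptions for the sketch:

```python
import math

def project(K, Rt, marker_pt):
    # Map a marker-image point (X, Y), with Z = 0, through the 3x4
    # extrinsic matrix Rt and the 3x3 intrinsic matrix K to pixel (u, v).
    X, Y = marker_pt
    p = [X, Y, 0.0, 1.0]
    cam = [sum(Rt[i][j] * p[j] for j in range(4)) for i in range(3)]
    pix = [sum(K[i][j] * cam[j] for j in range(3)) for i in range(3)]
    return (pix[0] / pix[2], pix[1] / pix[2])

def mean_error_distance(K, Rt, matched_pairs):
    # matched_pairs: list of ((marker_x, marker_y), (cam_u, cam_v)).
    dists = [math.dist(project(K, Rt, m), c) for m, c in matched_pairs]
    return sum(dists) / len(dists)

def extrinsic_is_correct(K, Rt, matched_pairs, threshold):
    # The frame's extrinsic matrix is accepted only if the average
    # reprojection error distance stays below the threshold.
    return mean_error_distance(K, Rt, matched_pairs) < threshold
```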
[0141] Step 210: Further, if it is determined in step 209 that the calculated extrinsic parameter matrix of the current-frame camera is correct, update the region of interest and the difference feature point table saved for the previous frame, and save the extrinsic parameter matrix calculated for the current frame. The saved extrinsic parameter matrix of the current-frame camera serves as the preset extrinsic parameter matrix for the next camera frame.
[0142] Specifically: for each marker-image feature point, if its error distance is greater than the error distance threshold, that feature point is a difference feature point, and its index is recorded and added to the difference feature point table saved for the previous frame. Alternatively, the sum of the error distances of that marker-image feature point over the most recent consecutive frames can be accumulated and an error-distance-sum threshold set; if the sum is greater than this threshold, the feature point is determined to be a difference feature point and is added to the difference feature point table saved for the previous frame.
[0143] As shown in FIG. 7, the dashed lines represent feature point pairs in which marker-image feature points were successfully matched to camera-image feature points, and the solid lines represent the coordinate point correspondences derived, after running the RPP algorithm, from the calculated extrinsic parameter matrix of the camera. In the left figure, 1, 2, 3, and 4 represent the positions of feature points in the marker image, and the right figure represents the positions of the corresponding feature points in the camera image, where 1', 2', 3', and 4' are the feature points in the camera image successfully paired with 1, 2, 3, and 4 of the marker image, and 1'', 2'', 3'', and 4'' are the feature points of 1, 2, 3, and 4 in the camera image obtained by reverse-verification calculation using the extrinsic parameter matrix obtained with the RPP algorithm. Correctness can be judged from this verification result. For feature point 4 in FIG. 7, the error distance between 4' and 4'' is large, so feature point 4 can be judged to be a difference feature point and is added to the difference feature point table.
[0144] Further, if the calculated extrinsic parameter matrix of the current-frame camera is correct, meaning this calculation succeeded, the coordinates of the four points A, B, C, and D of FIG. 6 in the current camera image are then calculated from the intrinsic and extrinsic parameter matrices. The quadrilateral whose vertices are these four coordinates is the region of interest. The coordinates of these four points are saved for excluding the non-interest region in the next frame, and for selecting, from the marker image sequence, the sequence image closest to the variation described by the camera extrinsic parameter matrix obtained for the current frame (i.e., the aforementioned typical image), together with the typical feature points of that typical image.

[0145] It can be understood that if the extrinsic parameter matrix obtained for the current frame or any subsequent frame is incorrect, the difference feature point table of the region of interest is cleared.
[0146] The present invention also provides a method for implementing augmented reality, which uses the above marker-based camera image processing method to obtain the extrinsic parameter matrix of the camera.
[0147] Further, this method for implementing augmented reality also includes:
[0148] drawing, according to the intrinsic and extrinsic parameter matrices of the camera, the virtual graphic for the current camera position in a preset model;
[0149] compositing the obtained virtual graphic with the current camera image to obtain a composite image. It can be understood that the composite image is the AR image.
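A minimal sketch of the compositing step, using single-channel nested-list images; the convention that pixel value 0 marks transparency in the rendered virtual layer is an assumption for the sketch:

```python
def composite(camera_img, virtual_img, transparent=0):
    # Wherever the rendered virtual layer is not transparent, it
    # replaces the camera pixel, producing the AR image.
    return [[v if v != transparent else c
             for c, v in zip(cam_row, virt_row)]
            for cam_row, virt_row in zip(camera_img, virtual_img)]
```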
[0150] The present invention also provides a device for implementing augmented reality; the device includes a processor configured to execute a computer program stored in a memory to implement the steps of the method described above.
[0151] The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method described above are implemented.
[0152] The present invention also provides a marker-based camera image processing apparatus, including:
[0153] a marker image sequence acquisition module 801, configured to select or extract a marker image and perform perspective transformation on it to obtain a marker image sequence;
[0154] a first feature point extraction module 802, configured to extract the sequence feature points of each sequence image in the marker image sequence;
[0155] a current camera image acquisition module 803, configured to acquire the current camera image;
[0156] a feature point pairing module 804, configured to extract the image feature points of the current camera image and pair them with the sequence feature points of the sequence images to obtain successfully matched feature point pairs;
[0157] an extrinsic parameter matrix calculation module 805, configured to calculate, from the successfully matched feature point pairs combined with the intrinsic parameter matrix of the camera, the extrinsic parameter matrix of the current-frame camera, the current-frame camera extrinsic parameter matrix being the coordinate correspondence between successfully matched feature points of the marker image and the camera image.
[0158] Further, the marker-based camera image processing apparatus of this embodiment also includes:

[0159] an error verification module 806, configured to perform error calculation on the acquired extrinsic parameter matrix of the current-frame camera, obtain an error result, and, according to the error result, verify whether the extrinsic parameter matrix of the camera for the current frame is correct.

The above are only embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims

[Claim 1] A marker-based camera image processing method, characterized in that the method comprises the following steps:
A: selecting or extracting a marker image, and performing perspective transformation on the marker image to obtain a marker image sequence;
B: extracting sequence feature points of each sequence image in the marker image sequence;
C: acquiring a current camera image;
D: extracting image feature points of the current camera image, and pairing the image feature points of the current camera image with the sequence feature points of the sequence images to obtain successfully matched feature point pairs;
E: calculating, from the successfully matched feature point pairs combined with an intrinsic parameter matrix of a camera, an extrinsic parameter matrix of a current-frame camera, the extrinsic parameter matrix of the current-frame camera being the coordinate correspondence between successfully matched feature points of the marker image and the camera image.
2. The marker-based camera image processing method according to claim 1, characterized in that, before step A, the method further comprises the steps of:
A1: acquiring the intrinsic parameter matrix of the camera, the intrinsic parameter matrix comprising parameter information of the camera;
A2: initializing the system environment and configuring system parameters.
3. The marker-based camera image processing method according to claim 1, characterized in that step A specifically comprises the step of:
performing pose transformation on the selected or extracted marker image using preset transformation matrices to generate the marker image sequence.
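The pose-transformation step of claim 3 can be sketched as follows. This is an illustrative reconstruction, not the applicant's implementation: the preset homographies and the marker size are assumed values, and only the warped corner coordinates of each sequence image are computed (a real implementation would also warp the pixels).

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of 2-D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

def marker_corner_sequence(w, h, homographies):
    """Warp the marker's four corners under each preset transform, giving
    the corner layout of every image in the marker image sequence."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
    return [apply_homography(H, corners) for H in homographies]

# Two preset views: identity, and a mild perspective tilt (assumed values).
H_list = [np.eye(3),
          np.array([[1.0, 0.1, 0.0],
                    [0.0, 1.0, 0.0],
                    [1e-4, 0.0, 1.0]])]
seq = marker_corner_sequence(640, 480, H_list)
```

In practice the patent's sequence would contain many such preset views so that the marker remains recognizable at steep viewing angles.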
4. The marker-based camera image processing method according to claim 1, characterized in that, before step C, the method further comprises:
B11: performing feature point extraction on all sequence images in the marker image sequence using a feature point extraction algorithm.
5. The marker-based camera image processing method according to claim 4, characterized in that step B11 comprises:
performing feature point extraction on all sequence images in the marker image sequence using the ORB feature point extraction algorithm.
6. The marker-based camera image processing method according to claim 4, characterized in that, before step C, the method further comprises:
B12: self-matching the sequence feature points of each sequence image extracted in step B11;
B13: deleting the sequence feature points for which self-matching succeeds, and retaining the sequence feature points for which self-matching fails.
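Steps B12/B13 can be sketched as follows: a feature point that closely matches *another* feature point in the same image (self-match success, e.g. repetitive texture) is ambiguous and discarded. This is an illustrative sketch; the distance metric and threshold are assumptions — the patent uses ORB descriptors, which are typically compared by Hamming distance rather than the Euclidean distance used here for simplicity.

```python
import numpy as np

def filter_self_matching(descs, thresh):
    """Keep only descriptors whose self-match fails, i.e. whose nearest
    OTHER descriptor in the same image is farther than `thresh`."""
    descs = np.asarray(descs, dtype=float)
    # pairwise Euclidean distances between all descriptors
    d = np.linalg.norm(descs[:, None, :] - descs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore the trivial self-pair
    keep = d.min(axis=1) > thresh        # nearest other descriptor too close?
    return np.flatnonzero(keep)

# Toy descriptors: the first two are near-duplicates and get discarded.
descs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
kept = filter_self_matching(descs, thresh=1.0)
```

Pruning ambiguous features up front is what lets the later camera-image matching stay both accurate and cheap.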
7. The marker-based camera image processing method according to claim 6, characterized in that step D specifically comprises the steps of:
D11: identifying a region of interest in the current camera image based on a preset extrinsic parameter matrix, and removing the non-region-of-interest;
D12: extracting feature points of interest of the region of interest in the current camera image;
D13: matching the feature points of interest of the region of interest with the sequence feature points retained in step B13.
8. The marker-based camera image processing method according to claim 7, characterized in that step D13 specifically comprises the steps of:
D131: obtaining a typical image from the marker image sequence according to the preset extrinsic parameter matrix, and obtaining typical feature points of the typical image;
D132: matching the feature points of interest of the region of interest with the typical feature points of the typical image.
9. The marker-based camera image processing method according to claim 8, characterized in that step D131 specifically comprises the steps of:
D1311: obtaining the sequence vertex coordinates corresponding to each sequence image in the marker image sequence;
D1312: computing the length of each edge of each sequence image based on the sequence vertex coordinates and saving the lengths in order, to obtain a first edge-length sequence for each sequence image;
D1313: normalizing the obtained first edge-length sequence of each sequence image;
D1314: obtaining the vertex-of-interest coordinates of the region of interest according to the preset extrinsic parameter matrix;
D1315: computing a second edge-length sequence of the region of interest based on the obtained vertex-of-interest coordinates, and normalizing the computed second edge-length sequence;
D1316: computing the Euclidean distance or Manhattan distance between the normalized first edge-length sequence of every sequence image from step D1313 and the normalized second edge-length sequence of the region of interest from step D1315;
D1317: making a determination according to the obtained Euclidean distances or Manhattan distances to obtain the typical image.
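Steps D1311–D1317 select the sequence image whose quadrilateral shape best matches the region of interest. A minimal sketch, assuming normalization by the sum of edge lengths (the claim does not fix the normalization scheme) and taking the minimum-distance image as "typical":

```python
import numpy as np

def edge_length_sequence(vertices):
    """Edge lengths of a quadrilateral, in vertex order (D1312/D1315)."""
    v = np.asarray(vertices, dtype=float)
    return np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1)

def normalize(seq):
    """Normalize an edge-length sequence so overall scale cancels (D1313)."""
    seq = np.asarray(seq, dtype=float)
    return seq / seq.sum()

def pick_typical_image(seq_vertices, roi_vertices, metric="euclidean"):
    """Index of the sequence image whose normalized edge-length sequence
    is closest to the ROI's (D1316/D1317)."""
    roi = normalize(edge_length_sequence(roi_vertices))
    dists = []
    for verts in seq_vertices:
        s = normalize(edge_length_sequence(verts))
        if metric == "manhattan":
            dists.append(np.abs(s - roi).sum())   # Manhattan distance
        else:
            dists.append(np.linalg.norm(s - roi)) # Euclidean distance
    return int(np.argmin(dists))

square = [[0, 0], [10, 0], [10, 10], [0, 10]]
wide   = [[0, 0], [20, 0], [20, 5], [0, 5]]
roi    = [[0, 0], [11, 0], [11, 10], [0, 10]]   # nearly square ROI
best = pick_typical_image([square, wide], roi)
```

Comparing normalized edge-length sequences is cheap (no feature matching yet), which keeps this pre-selection suitable for mobile devices.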
10. The marker-based camera image processing method according to claim 9, characterized in that step D132 specifically comprises:
D1321: matching the typical feature points of the typical image with the feature points of interest of the region of interest using a threshold method;
D1322: determining whether the pairing value of a typical feature point of the typical image matched with a feature point of interest of the region of interest is greater than a threshold, and if so, extracting the typical feature points whose pairing values are greater than the threshold;
D1323: removing, from the matching result, the typical feature points whose pairing values are greater than the threshold, to obtain the successfully matched feature point pairs.
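Reading "pairing value" as descriptor distance (an assumption — the claim does not define it), steps D1321–D1323 amount to discarding nearest-neighbour matches whose distance exceeds the threshold:

```python
import numpy as np

def threshold_match(desc_a, desc_b, thresh):
    """Nearest-neighbour matching with a distance threshold: pairs whose
    best distance exceeds `thresh` are removed from the matching result."""
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                        # best match per row
    ok = d[np.arange(len(a)), nearest] <= thresh      # keep close pairs only
    return [(i, int(nearest[i])) for i in np.flatnonzero(ok)]

typical = [[0.0, 0.0], [10.0, 0.0]]   # typical-image descriptors (toy)
roi     = [[0.2, 0.0], [50.0, 50.0]]  # ROI descriptors (toy)
pairs = threshold_match(typical, roi, thresh=1.0)
```

The surviving pairs are exactly the "successfully matched feature point pairs" consumed by the pose-estimation step E.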
11. The marker-based camera image processing method according to claim 1, characterized in that step E specifically comprises the step of:
computing the extrinsic parameter matrix of the current-frame camera from the successfully matched feature point pairs obtained in step D, in combination with the intrinsic parameter matrix of the camera, using the RPP algorithm.
12. The marker-based camera image processing method according to claim 1, characterized in that, after step E, the method further comprises:
F: performing error calculation on the extrinsic parameter matrix of the current-frame camera obtained in step E, to obtain an error result;
G: verifying, according to the error result, whether the extrinsic parameter matrix of the current-frame camera is correct.
13. The marker-based camera image processing method according to claim 12, characterized in that step F specifically comprises the steps of:
F1: based on the extrinsic parameter matrix of the current-frame camera obtained in step E, and in combination with the coordinates of all successfully matched feature points of the sequence image and the current camera image, computing the computed coordinates, in the camera coordinate system, of the sequence feature point coordinates of the sequence image;
F2: computing the error distance between the computed coordinates obtained in step F1 and the matched coordinates of the successfully matched feature points in the current camera image;
F3: computing the average error distance from all the obtained error distances;
and step G specifically comprises the steps of:
G1: determining whether the average error distance is greater than an average-error-distance threshold;
G2: if the average error distance is greater than the average-error-distance threshold, the extrinsic parameter matrix of the current-frame camera obtained in step E is wrong; otherwise, determining that the extrinsic parameter matrix of the current-frame camera obtained in step E is correct.
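The verification of claims 12–13 is a standard reprojection-error check: marker points are projected through K[R|t] and the mean pixel distance to their matched image coordinates is compared against a threshold. A sketch with assumed intrinsics, pose, and point values:

```python
import numpy as np

def mean_reprojection_error(K, Rt, marker_pts, image_pts):
    """Steps F1-F3: project marker feature points through K[R|t] and
    average the pixel distance to their matched camera-image coordinates."""
    pts_h = np.hstack([marker_pts, np.ones((len(marker_pts), 1))])  # X,Y,Z,1
    proj = (K @ Rt @ pts_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]          # divide by depth -> pixels
    return np.linalg.norm(proj - image_pts, axis=1).mean()

def extrinsics_ok(err, thresh):
    """Steps G1-G2: accept the pose when the mean error stays at or
    below the average-error-distance threshold."""
    return err <= thresh

# Assumed values: simple intrinsics, camera looking at a marker 1 m away.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.0]])])
marker = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])   # planar marker (m)
image  = np.array([[320.0, 240.0], [370.0, 240.0]])      # matched pixels
err = mean_reprojection_error(K, Rt, marker, image)
```

A pose that passes this check is then reused as the preset extrinsic matrix for the next frame (claim 14), which is what makes the ROI prediction of claim 7 possible.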
14. The marker-based camera image processing method according to claim 13, characterized in that the method further comprises:
if the extrinsic parameter matrix of the current-frame camera obtained in step E is correct, further performing the following steps:
updating the region of interest and the difference feature point table saved in the previous frame;
saving the extrinsic parameter matrix of the current-frame camera as the preset extrinsic parameter matrix for the next camera image frame.
15. A marker-based camera image processing apparatus, characterized in that the apparatus comprises:
a marker image sequence acquisition module, configured to select or extract a marker image and perform perspective transformation on the marker image to obtain a marker image sequence;
a first feature point extraction module, configured to extract sequence feature points of each sequence image in the marker image sequence;
a current camera image acquisition module, configured to acquire a current camera image;
a feature point matching module, configured to extract image feature points of the current camera image, match the image feature points of the current camera image with the sequence feature points of the sequence images, and obtain successfully matched feature point pairs;
an extrinsic parameter matrix calculation module, configured to compute the extrinsic parameter matrix of the current-frame camera from the successfully matched feature point pairs in combination with the intrinsic parameter matrix of the camera, the extrinsic parameter matrix of the current-frame camera being the coordinate correspondence between the successfully matched feature points of the marker image and of the camera image.
16. The marker-based camera image processing apparatus according to claim 15, characterized in that the apparatus further comprises:
an error verification module, configured to perform error calculation on the acquired extrinsic parameter matrix of the current-frame camera, obtain an error result, and verify, according to the error result, whether the extrinsic parameter matrix of the current-frame camera is correct.
17. A method for implementing augmented reality, characterized in that the method obtains the extrinsic parameter matrix of a camera using the marker-based camera image processing method according to any one of claims 1-14.
18. The method for implementing augmented reality according to claim 17, characterized in that the method further comprises:
drawing, in a preset model, a virtual graphic at the current position of the camera according to the intrinsic parameter matrix and the extrinsic parameter matrix of the camera;
compositing the obtained virtual graphic with the current camera image to obtain a composite image.
19. A device for implementing augmented reality, characterized in that the device comprises a processor, the processor being configured to execute a computer program stored in a memory to implement the steps of the method according to any one of claims 1-14.
20. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-14.
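The compositing step of claim 18 — overlaying the rendered virtual graphic on the camera frame — can be sketched with a simple alpha mask. This is an illustrative reconstruction; real AR pipelines render the virtual graphic with the recovered K[R|t] pose first, and the array shapes and mask here are assumed toy values.

```python
import numpy as np

def composite(camera_img, virtual_img, alpha_mask):
    """Blend the virtual graphic over the camera frame: where the mask is 1
    the virtual pixel wins, where it is 0 the camera pixel shows through."""
    a = alpha_mask[..., None].astype(float)   # broadcast over RGB channels
    return (a * virtual_img + (1.0 - a) * camera_img).astype(np.uint8)

cam  = np.full((2, 2, 3), 100, dtype=np.uint8)   # grey camera frame
virt = np.full((2, 2, 3), 255, dtype=np.uint8)   # white virtual graphic
mask = np.array([[1, 0], [0, 0]])                # graphic covers one pixel
out = composite(cam, virt, mask)
```

With a fractional mask the same formula gives semi-transparent overlays, which is the usual presentation for AR annotations.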
PCT/CN2017/108404 2017-10-30 2017-10-30 Marker-based camera image processing method, and augmented reality device WO2019084726A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780096283.6A CN111344740A (en) 2017-10-30 2017-10-30 Camera image processing method based on marker and augmented reality equipment
PCT/CN2017/108404 WO2019084726A1 (en) 2017-10-30 2017-10-30 Marker-based camera image processing method, and augmented reality device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/108404 WO2019084726A1 (en) 2017-10-30 2017-10-30 Marker-based camera image processing method, and augmented reality device

Publications (1)

Publication Number Publication Date
WO2019084726A1 true WO2019084726A1 (en) 2019-05-09

Family

ID=66332453

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108404 WO2019084726A1 (en) 2017-10-30 2017-10-30 Marker-based camera image processing method, and augmented reality device

Country Status (2)

Country Link
CN (1) CN111344740A (en)
WO (1) WO2019084726A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230274461A1 (en) * 2019-12-27 2023-08-31 Snap Inc. Marker-based shared augmented reality session creation

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520849A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification
CN101661617A (en) * 2008-08-30 2010-03-03 深圳华为通信技术有限公司 Method and device for camera calibration
US20140188669A1 (en) * 2012-12-11 2014-07-03 Holition Limited Augmented reality system and method
CN103955931A (en) * 2014-04-29 2014-07-30 江苏物联网研究发展中心 Image matching method and device
CN104050475A (en) * 2014-06-19 2014-09-17 樊晓东 Reality augmenting system and method based on image feature matching
CN104299215A (en) * 2014-10-11 2015-01-21 中国兵器工业第二O二研究所 Feature point calibrating and matching image splicing method
CN107038758A (en) * 2016-10-14 2017-08-11 北京联合大学 A kind of augmented reality three-dimensional registration method based on ORB operators
CN107248169A (en) * 2016-03-29 2017-10-13 中兴通讯股份有限公司 Image position method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8310539B2 (en) * 2009-05-29 2012-11-13 Mori Seiki Co., Ltd Calibration method and calibration device
CN103411553B (en) * 2013-08-13 2016-03-02 天津大学 The quick calibrating method of multi-linear structured light vision sensors
CN105701827B (en) * 2016-01-15 2019-04-02 中林信达(北京)科技信息有限责任公司 The parametric joint scaling method and device of Visible Light Camera and infrared camera
CN106127737A (en) * 2016-06-15 2016-11-16 王向东 A kind of flat board calibration system in sports tournament is measured
CN106874865A (en) * 2017-02-10 2017-06-20 深圳前海大造科技有限公司 A kind of augmented reality implementation method based on image recognition


Also Published As

Publication number Publication date
CN111344740A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
US11288492B2 (en) Method and device for acquiring 3D information of object
US11164323B2 (en) Method for obtaining image tracking points and device and storage medium thereof
JP6464934B2 (en) Camera posture estimation apparatus, camera posture estimation method, and camera posture estimation program
CN110648397B (en) Scene map generation method and device, storage medium and electronic equipment
WO2011161579A1 (en) Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation
JP2015079490A (en) Method, device and system for selecting frame
WO2019196745A1 (en) Face modelling method and related product
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN108345821B (en) Face tracking method and device
CN104050475A (en) Reality augmenting system and method based on image feature matching
CN108133492B (en) Image matching method, device and system
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
US20230252664A1 (en) Image Registration Method and Apparatus, Electronic Apparatus, and Storage Medium
CN113689503B (en) Target object posture detection method, device, equipment and storage medium
CN109902675B (en) Object pose acquisition method and scene reconstruction method and device
JP2017123087A (en) Program, device and method for calculating normal vector of planar object reflected in continuous photographic images
JP2021174554A (en) Image depth determination method and living creature recognition method, circuit, device, storage medium
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
CN114187333A (en) Image alignment method, image alignment device and terminal equipment
JP2017097578A (en) Information processing apparatus and method
CN106997366B (en) Database construction method, augmented reality fusion tracking method and terminal equipment
JP4550769B2 (en) Image detection apparatus and image detection method
JP6086491B2 (en) Image processing apparatus and database construction apparatus thereof

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17930515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 17930515

Country of ref document: EP

Kind code of ref document: A1