WO2022036478A1 - Machine vision-based augmented reality blind area assembly guidance method - Google Patents

Machine vision-based augmented reality blind area assembly guidance method

Info

Publication number
WO2022036478A1
WO2022036478A1 (PCT/CN2020/109426, CN2020109426W)
Authority
WO
WIPO (PCT)
Prior art keywords
ellipse
blind spot
camera
machine vision
augmented reality
Prior art date
Application number
PCT/CN2020/109426
Other languages
French (fr)
Chinese (zh)
Inventor
殷伟萍
罗赛
张志远
金星鉴
Original Assignee
江苏瑞科科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏瑞科科技有限公司 filed Critical 江苏瑞科科技有限公司
Priority to PCT/CN2020/109426 priority Critical patent/WO2022036478A1/en
Publication of WO2022036478A1 publication Critical patent/WO2022036478A1/en

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M5/00 Investigating the elasticity of structures, e.g. deflection of bridges or aircraft wings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems

Definitions

  • the invention relates to the field of augmented reality blind area assembly guidance, in particular to a machine vision-based augmented reality blind area assembly guidance method.
  • the main purpose of the present invention is to provide an augmented reality blind area assembly guidance method based on machine vision, which can effectively solve the problems in the background technology.
  • An augmented reality blind spot assembly guidance method based on machine vision includes identification and detection of marker ellipses, positioning and tracking of blind spot objects, and AR visualization of blind spot assembly information.
  • the identification and detection of the marker ellipse include image input, image graying, Gaussian filtering, edge detection, contour search, ellipse fitting, ellipse screening, and output of the ellipse center coordinates.
  • the input end of the marker-point ellipse identification and detection is connected to a camera.
  • the positioning and tracking of the blind spot object include machine vision-based tracking, sensor-based tracking, and hybrid tracking; the AR visualization of blind spot assembly information includes PC, handheld, head-mounted, and projection displays.
  • the camera is used to collect images of the blind spot assembly site
  • the image preprocessing is used to reduce image noise
  • the identification and detection of the elliptical marker points serves to locate and track, in real time, the blind-spot object to be assembled that is bound to the marker points.
  • the identification and detection process for the marker ellipse is as follows: the original image is first grayscaled, then Gaussian filtering is performed, and the image is binarized using the maximum inter-class variance method (OTSU) proposed by Hough PV.
  • the Canny edge detection operator is then used to extract the effective edge contours from the binary image. Because the resulting pixel sets on the boundary contours are not complete contour curves, the contour information, stored in a linked-list structure, is traversed and filtered, yielding complete two-dimensional contour curves.
  • the blind spot object is tracked indirectly: elliptical marker points are pasted on the surface of the blind spot object, and the object to be assembled is tracked by tracking the ellipses.
  • in order to determine the pose of the surface the ellipse is pasted on, the camera must first be calibrated to obtain the camera intrinsic matrix M_c, the intrinsic distortion parameter matrix, and so on; the conversion between coordinates in the real-world coordinate system (3D) and pixel coordinates (2D) is then determined by solving the PnP problem.
  • the PnP problem is a method for solving 3D-2D point-pair motion: it describes how to estimate the camera pose when the coordinates of n three-dimensional space points and their two-dimensional projection positions are known.
  • in a single image, knowing the spatial (3D) coordinates of as few as three points is enough to estimate the camera's motion and pose.
  • the present invention has the following beneficial effects: the machine vision-based augmented reality blind spot assembly guidance method uses a camera to collect images of the blind spot assembly site, reduces image noise through image preprocessing, and identifies the marker ellipses by contour search, ellipse fitting, and similar means. Because the markers are attached to the outer surface of the object to be assembled in the blind area, positioning and tracking the ellipses realizes positioning and tracking of the object itself, and solving the PnP problem converts the 3D world coordinates of the ellipse centers to 2D pixel coordinates.
  • this conversion yields the camera pose (R/T); the pose information is passed into Unity3D and registered with the model there to combine the virtual and the real.
  • based on the registration result, the assembly guidance information is superimposed on the assembly environment by projection, and AR visualization guides the assembly with the aid of the local error amplification principle.
  • the camera tracking algorithm based on artificial landmarks offers low algorithmic complexity, strong tracking stability, high tracking accuracy, and small drift. In the field of augmented reality assembly, mechanical products demand high tracking accuracy, so camera tracking based on artificial landmarks has significant advantages.
  • FIG. 1 is an overall schematic diagram of a machine vision-based augmented reality blind spot assembly guidance method of the present invention.
  • FIG. 2 is a working flow chart of a machine vision-based augmented reality blind spot assembly guidance method of the present invention.
  • FIG. 3 is a flowchart of ellipse recognition and detection of an augmented reality blind spot assembly guidance method based on machine vision of the present invention.
  • FIG. 4 is a frame diagram of positioning and tracking of objects in a blind spot of an augmented reality blind spot assembly guidance method based on machine vision of the present invention.
  • an augmented reality blind spot assembly guidance method based on machine vision includes identification and detection of marker ellipses, positioning and tracking of blind spot objects, and AR visualization of blind spot assembly information.
  • the identification and detection of marker ellipses include image input, image graying, Gaussian filtering, edge detection, contour search, ellipse fitting, ellipse screening, and output of the ellipse center coordinates; the input end of marker ellipse recognition and detection is connected to a camera; the positioning and tracking of blind spot objects includes machine vision-based tracking, sensor-based tracking, and hybrid tracking; and the AR visualization of blind spot assembly information includes PCs, handheld devices, head-mounted devices, and projections.
  • the camera is used to collect images of the blind spot assembly site, image preprocessing is used to reduce image noise, and the elliptical marker points are identified and detected in order to locate and track, in real time, the blind-spot objects to be assembled that are bound to the marker points.
  • the identification and detection process for the marker ellipse is as follows: the original image is first grayscaled, then Gaussian filtering is performed, the image is binarized using the maximum inter-class variance method (OTSU) proposed by Hough PV, and the Canny edge detection operator is used to extract the effective edge contours from the binary image. Because the resulting pixel sets on the boundary contours are not complete contour curves, the contour information, stored in a linked-list structure, is traversed and filtered to obtain complete two-dimensional contour curves.
  • blind spot object tracking pastes elliptical marker points on the surface of the blind spot object and indirectly tracks the object to be assembled by tracking the ellipses.
  • the PnP problem is a method for solving 3D-2D point-pair motion: it describes how to estimate the camera pose when the coordinates of n three-dimensional space points and their two-dimensional projection positions are known. In a single image, knowing the spatial (3D) coordinates of at least three points is enough to estimate the camera's motion and pose.
  • the present invention is an augmented reality blind spot assembly guidance method based on machine vision.
  • a camera is used to collect images of the blind spot assembly site, image noise is reduced through preprocessing, and the marker ellipses are identified by contour search, ellipse fitting, and similar means. Because the markers are attached to the outer surface of the object to be assembled in the blind area, positioning and tracking the ellipses realizes positioning and tracking of the object itself; solving the PnP problem converts the 3D world coordinates of the ellipse centers to 2D pixel coordinates and yields the camera pose (R/T). The pose information is passed into Unity3D and registered with the model there to combine the virtual and the real; finally, according to the registration result, the assembly guidance information is superimposed on the assembly environment by projection, and AR visualization guides the assembly with the aid of the local error amplification principle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed in the present invention is a machine vision-based augmented reality blind area assembly guidance method, comprising: identification and detection of a mark point ellipse, location tracking of a blind area object, and AR visualization of blind area assembly information. The identification and detection of the mark point ellipse comprises: image input, image graying, Gaussian filtering, edge detection, outline searching, ellipse fitting, ellipse screening, and ellipse center coordinates output. An input end of the identification and detection of the mark point ellipse is connected to a camera. The location tracking of the blind area object comprises: machine vision-based location tracking, sensor-based location tracking, and hybrid location tracking. The AR visualization of blind area assembly information comprises a PC, a handheld device, a head-mounted device, and projection. The machine vision-based augmented reality blind area assembly guidance method can significantly improve the efficiency of blind area assembly and can effectively reduce the assembly error rate.

Description

一种基于机器视觉的增强现实盲区装配引导方法 An Augmented Reality Blind Spot Assembly Guidance Method Based on Machine Vision 技术领域 Technical Field
本发明涉及增强现实盲区装配引导领域,特别涉及一种基于机器视觉的增强现实盲区装配引导方法。The invention relates to the field of augmented reality blind area assembly guidance, in particular to a machine vision-based augmented reality blind area assembly guidance method.
背景技术 Background Art
对于盲区手工装配,由于工人视线受阻,无法看到待装配零件的实时状态,对装配的效率和准确率造成了极大影响,针对这一问题,提出了一种基于机器视觉的增强现实盲区装配方法。For manual assembly in blind areas, workers cannot see the real-time status of the parts to be assembled because their line of sight is blocked, which greatly affects assembly efficiency and accuracy. To address this problem, an augmented reality blind-area assembly method based on machine vision is proposed.
技术问题 Technical Problem
本发明的主要目的在于提供一种基于机器视觉的增强现实盲区装配引导方法,可以有效解决背景技术中的问题。The main purpose of the present invention is to provide an augmented reality blind area assembly guidance method based on machine vision, which can effectively solve the problems in the background technology.
技术解决方案 Technical Solution
为实现上述目的,本发明采取的技术方案为。In order to achieve the above purpose, the technical solution adopted by the present invention is as follows.
一种基于机器视觉的增强现实盲区装配引导方法,包括标志点椭圆的识别和检测、盲区对象的定位追踪以及盲区装配信息的AR可视化,所述标志点椭圆的识别和检测包括图像输入、图像灰度化、高斯滤波、边缘检测、轮廓查找、椭圆拟合、椭圆筛选、椭圆中心点坐标输出,所述标志点椭圆的识别和检测的输入端连接有相机,所述盲区对象的定位追踪包括基于机器视觉的定位追踪、基于传感器的定位追踪以及混合定位追踪,所述盲区装配信息的AR可视化包括PC、手持式设备、头戴式设备以及投影。An augmented reality blind spot assembly guidance method based on machine vision includes identification and detection of marker ellipses, positioning and tracking of blind spot objects, and AR visualization of blind spot assembly information. The identification and detection of the marker ellipse include image input, image graying, Gaussian filtering, edge detection, contour search, ellipse fitting, ellipse screening, and output of the ellipse center coordinates; the input end of the marker ellipse identification and detection is connected to a camera; the positioning and tracking of the blind spot object includes machine vision-based tracking, sensor-based tracking, and hybrid tracking; and the AR visualization of blind spot assembly information includes PC, handheld, head-mounted, and projection displays.
优选的,所述相机用于采集盲区装配现场的图像,所述图像预处理用于降低图像噪声,所述识别和检测椭圆标志点是为了实时定位追踪与标志点绑定的盲区待装配对象。Preferably, the camera is used to collect images of the blind spot assembly site, the image preprocessing is used to reduce image noise, and the elliptical marker points are identified and detected in order to locate and track, in real time, the blind-spot object to be assembled that is bound to the marker points.
优选的,所述标志点椭圆的识别和检测过程为:首先对原始图像灰度化处理后进行高斯滤波,然后采用Hough PV 所提出的最大类间方差法 (OTSU)图像二值化处理,再利用 canny 边缘检测算子提取二值图像中的有效边缘轮廓,得到的边界轮廓上的像素点集并不是完整的轮廓曲线,因此再对使用链表结构形式储存的轮廓信息进行遍历筛选,经筛选处理得到完整的二维轮廓曲线。Preferably, the identification and detection process for the marker ellipse is as follows: the original image is first grayscaled, then Gaussian filtering is performed, and the image is binarized using the maximum inter-class variance method (OTSU) proposed by Hough PV; the Canny edge detection operator is then used to extract the effective edge contours from the binary image. Because the resulting pixel sets on the boundary contours are not complete contour curves, the contour information, stored in a linked-list structure, is traversed and filtered to obtain complete two-dimensional contour curves.
优选的,所述盲区对象的追踪是通过将椭圆标志点粘贴在盲区对象表面,通过追踪椭圆的方式间接追踪盲区待装配对象。Preferably, the blind spot object is tracked indirectly: elliptical marker points are pasted on the surface of the blind spot object, and the object to be assembled is tracked by tracking the ellipses.
优选的,所述为了确定椭圆粘贴表面的位姿,首先需要标定相机,获得相机的内参矩阵 M c 、相机内参畸变参数矩阵等,然后通过解 PnP 问题,确定相机在真实世界坐标系下的坐标(3D)与像素坐标下的坐标(2D)之间的转换关系。Preferably, in order to determine the pose of the surface the ellipse is pasted on, the camera must first be calibrated to obtain the camera intrinsic matrix M_c, the intrinsic distortion parameter matrix, and so on; the conversion between the camera's coordinates in the real-world coordinate system (3D) and the coordinates in pixel coordinates (2D) is then determined by solving the PnP problem.
优选的,所述PnP 问题是求解 3D-2D 点对运动的方法,描述了当知道 n 个三维空间点坐标及其二维投影位置时,如何估计相机的位姿,在 1 幅图像中,最少只要知道 3 个点的空间坐标即3D 坐标,就可以用于估计相机的运动以及相机的姿态。Preferably, the PnP problem is a method for solving 3D-2D point-pair motion: it describes how to estimate the camera pose when the coordinates of n three-dimensional space points and their two-dimensional projection positions are known. In a single image, knowing the spatial (3D) coordinates of as few as three points is enough to estimate the camera's motion and pose.
有益效果 Beneficial Effects
与现有技术相比,本发明具有如下有益效果:该基于机器视觉的增强现实盲区装配引导方法,使用相机采集盲区装配现场的图像,通过图像预处理降低图像噪声,采用轮廓查找、椭圆拟合等方式识别标志点椭圆,由于标志点贴在盲区待装配对象的外表面,所以通过对椭圆的定位追踪就可以实现盲区待装配对象的定位追踪,并通过解 PnP 问题,实现椭圆中心点 3D 世界坐标与 2D 像素坐标的转换,从而求得相机的位姿(R/T),将位姿信息传入 unity3D 中,与 unity3D中的模型实现虚实结合,最后根据虚实结合注册的结果,将装配引导信息通过投影的方式在装配环境中叠加,同时借助局部误差放大的原理进行 AR 可视化引导装配,基于人工标志点的摄像机追踪算法具有算法复杂度低、追踪稳定性强、追踪精度高、漂移小等特点,在增强现实装配领域,机械产品追踪精度要求高,因此基于人工标志点的摄像机追踪方法具有显著优势。Compared with the prior art, the present invention has the following beneficial effects: the machine vision-based augmented reality blind spot assembly guidance method uses a camera to collect images of the blind spot assembly site, reduces image noise through image preprocessing, and identifies the marker ellipses by contour search, ellipse fitting, and similar means. Because the markers are attached to the outer surface of the object to be assembled in the blind area, positioning and tracking the ellipses realizes positioning and tracking of the object itself; solving the PnP problem converts the 3D world coordinates of the ellipse centers to 2D pixel coordinates, yielding the camera pose (R/T). The pose information is passed into Unity3D and registered with the model there to combine the virtual and the real; finally, according to the registration result, the assembly guidance information is superimposed on the assembly environment by projection, and AR visualization guides the assembly with the aid of the local error amplification principle. The camera tracking algorithm based on artificial landmarks offers low algorithmic complexity, strong tracking stability, high tracking accuracy, and small drift; in the field of augmented reality assembly, mechanical products demand high tracking accuracy, so camera tracking based on artificial landmarks has significant advantages.
附图说明 Description of Drawings
图1为本发明一种基于机器视觉的增强现实盲区装配引导方法的整体方案图。FIG. 1 is an overall schematic diagram of a machine vision-based augmented reality blind spot assembly guidance method of the present invention.
图2为本发明一种基于机器视觉的增强现实盲区装配引导方法的工作流程图。FIG. 2 is a working flow chart of a machine vision-based augmented reality blind spot assembly guidance method of the present invention.
图3为本发明一种基于机器视觉的增强现实盲区装配引导方法的椭圆识别检测流程图。FIG. 3 is a flowchart of ellipse recognition and detection of an augmented reality blind spot assembly guidance method based on machine vision of the present invention.
图4为本发明一种基于机器视觉的增强现实盲区装配引导方法的盲区对象的定位追踪框架图。FIG. 4 is a frame diagram of positioning and tracking of objects in a blind spot of an augmented reality blind spot assembly guidance method based on machine vision of the present invention.
本发明的实施方式 Embodiments of the Present Invention
为使本发明实现的技术手段、创作特征、达成目的与功效易于明白了解,下面结合具体实施方式,进一步阐述本发明。In order to make the technical means, creative features, achievement goals and effects realized by the present invention easy to understand, the present invention will be further described below with reference to the specific embodiments.
如图1所示,一种基于机器视觉的增强现实盲区装配引导方法,包括标志点椭圆的识别和检测、盲区对象的定位追踪以及盲区装配信息的AR可视化,标志点椭圆的识别和检测包括图像输入、图像灰度化、高斯滤波、边缘检测、轮廓查找、椭圆拟合、椭圆筛选、椭圆中心点坐标输出,标志点椭圆的识别和检测的输入端连接有相机,盲区对象的定位追踪包括基于机器视觉的定位追踪、基于传感器的定位追踪以及混合定位追踪,盲区装配信息的AR可视化包括PC、手持式设备、头戴式设备以及投影。As shown in FIG. 1, an augmented reality blind spot assembly guidance method based on machine vision includes identification and detection of marker ellipses, positioning and tracking of blind spot objects, and AR visualization of blind spot assembly information. The identification and detection of marker ellipses include image input, image graying, Gaussian filtering, edge detection, contour search, ellipse fitting, ellipse screening, and output of the ellipse center coordinates; the input end of marker ellipse recognition and detection is connected to a camera; the positioning and tracking of blind spot objects includes machine vision-based tracking, sensor-based tracking, and hybrid tracking; and the AR visualization of blind spot assembly information includes PCs, handheld devices, head-mounted devices, and projections.
相机用于采集盲区装配现场的图像,图像预处理用于降低图像噪声,识别和检测椭圆标志点是为了实时定位追踪与标志点绑定的盲区待装配对象。The camera is used to collect images of the blind spot assembly site, image preprocessing is used to reduce image noise, and the elliptical marker points are identified and detected in order to locate and track, in real time, the blind-spot objects to be assembled that are bound to the marker points.
标志点椭圆的识别和检测过程为:首先对原始图像灰度化处理后进行高斯滤波,然后采用Hough PV 所提出的最大类间方差法 (OTSU)图像二值化处理,再利用 canny 边缘检测算子提取二值图像中的有效边缘轮廓,得到的边界轮廓上的像素点集并不是完整的轮廓曲线,因此再对使用链表结构形式储存的轮廓信息进行遍历筛选,经筛选处理得到完整的二维轮廓曲线。The identification and detection process for the marker ellipse is as follows: the original image is first grayscaled, then Gaussian filtering is performed, and the image is binarized using the maximum inter-class variance method (OTSU) proposed by Hough PV; the Canny edge detection operator is then used to extract the effective edge contours from the binary image. Because the resulting pixel sets on the boundary contours are not complete contour curves, the contour information, stored in a linked-list structure, is traversed and filtered to obtain complete two-dimensional contour curves.
盲区对象的追踪是通过将椭圆标志点粘贴在盲区对象表面,通过追踪椭圆的方式间接追踪盲区待装配对象。Blind spot object tracking pastes elliptical marker points on the surface of the blind spot object and indirectly tracks the object to be assembled by tracking the ellipses.
为了确定椭圆粘贴表面的位姿,首先需要标定相机,获得相机的内参矩阵 M c 、相机内参畸变参数矩阵等,然后通过解 PnP 问题,确定相机在真实世界坐标系下的坐标(3D)与像素坐标下的坐标(2D)之间的转换关系。In order to determine the pose of the surface the ellipse is pasted on, the camera must first be calibrated to obtain the camera intrinsic matrix M_c, the intrinsic distortion parameter matrix, and so on; the conversion between the camera's coordinates in the real-world coordinate system (3D) and the coordinates in pixel coordinates (2D) is then determined by solving the PnP problem.
PnP 问题是求解 3D-2D 点对运动的方法,描述了当知道 n 个三维空间点坐标及其二维投影位置时,如何估计相机的位姿,在 1 幅图像中,最少只要知道 3 个点的空间坐标即3D 坐标,就可以用于估计相机的运动以及相机的姿态。The PnP problem is a method for solving 3D-2D point-pair motion: it describes how to estimate the camera pose when the coordinates of n three-dimensional space points and their two-dimensional projection positions are known. In a single image, knowing the spatial (3D) coordinates of as few as three points is enough to estimate the camera's motion and pose.
需要说明的是,本发明为一种基于机器视觉的增强现实盲区装配引导方法,在使用时,使用相机采集盲区装配现场的图像,通过图像预处理降低图像噪声,采用轮廓查找、椭圆拟合等方式识别标志点椭圆,由于标志点贴在盲区待装配对象的外表面,所以通过对椭圆的定位追踪就可以实现盲区待装配对象的定位追踪,并通过解 PnP 问题,实现椭圆中心点 3D 世界坐标与 2D 像素坐标的转换,从而求得相机的位姿(R/T),将位姿信息传入 unity3D 中,与 unity3D中的模型实现虚实结合,最后根据虚实结合注册的结果,将装配引导信息通过投影的方式在装配环境中叠加,同时借助局部误差放大的原理进行 AR 可视化引导装配。It should be noted that the present invention is an augmented reality blind spot assembly guidance method based on machine vision. In use, a camera collects images of the blind spot assembly site, image noise is reduced through preprocessing, and the marker ellipses are identified by contour search, ellipse fitting, and similar means. Because the markers are attached to the outer surface of the object to be assembled in the blind area, positioning and tracking the ellipses realizes positioning and tracking of the object itself; solving the PnP problem converts the 3D world coordinates of the ellipse centers to 2D pixel coordinates, yielding the camera pose (R/T). The pose information is passed into Unity3D and registered with the model there to combine the virtual and the real; finally, according to the registration result, the assembly guidance information is superimposed on the assembly environment by projection, and AR visualization guides the assembly with the aid of the local error amplification principle.
以上显示和描述了本发明的基本原理和主要特征和本发明的优点。本行业的技术人员应该了解,本发明不受上述实施例的限制,上述实施例和说明书中描述的只是说明本发明的原理,在不脱离本发明精神和范围的前提下,本发明还会有各种变化和改进,这些变化和改进都落入要求保护的本发明范围内。本发明要求保护范围由所附的权利要求书及其等效物界定。The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments; the above embodiments and the description only illustrate the principle of the invention, and various changes and improvements can be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The claimed scope of the present invention is defined by the appended claims and their equivalents.

Claims (6)

  1. 一种基于机器视觉的增强现实盲区装配引导方法,包括标志点椭圆的识别和检测、盲区对象的定位追踪以及盲区装配信息的AR可视化,其特征在于:所述标志点椭圆的识别和检测包括图像输入、图像灰度化、高斯滤波、边缘检测、轮廓查找、椭圆拟合、椭圆筛选、椭圆中心点坐标输出,所述标志点椭圆的识别和检测的输入端连接有相机,所述盲区对象的定位追踪包括基于机器视觉的定位追踪、基于传感器的定位追踪以及混合定位追踪,所述盲区装配信息的AR可视化包括PC、手持式设备、头戴式设备以及投影。An augmented reality blind spot assembly guidance method based on machine vision, comprising identification and detection of marker ellipses, positioning and tracking of blind spot objects, and AR visualization of blind spot assembly information, characterized in that: the identification and detection of the marker ellipse include image input, image graying, Gaussian filtering, edge detection, contour search, ellipse fitting, ellipse screening, and output of the ellipse center coordinates; the input end of the marker ellipse identification and detection is connected to a camera; the positioning and tracking of the blind spot object includes machine vision-based tracking, sensor-based tracking, and hybrid tracking; and the AR visualization of blind spot assembly information includes PC, handheld, head-mounted, and projection displays.
  2. 根据权利要求1所述的一种基于机器视觉的增强现实盲区装配引导方法,其特征在于:所述相机用于采集盲区装配现场的图像,所述图像预处理用于降低图像噪声,所述识别和检测椭圆标志点是为了实时定位追踪与标志点绑定的盲区待装配对象。The machine vision-based augmented reality blind spot assembly guidance method according to claim 1, characterized in that: the camera is used to collect images of the blind spot assembly site, the image preprocessing is used to reduce image noise, and the elliptical marker points are identified and detected in order to locate and track, in real time, the blind-spot object to be assembled that is bound to the marker points.
  3. 根据权利要求1所述的一种基于机器视觉的增强现实盲区装配引导方法,其特征在于:所述标志点椭圆的识别和检测过程为:首先对原始图像灰度化处理后进行高斯滤波,然后采用Hough PV 所提出的最大类间方差法 (OTSU)图像二值化处理,再利用 canny 边缘检测算子提取二值图像中的有效边缘轮廓,得到的边界轮廓上的像素点集并不是完整的轮廓曲线,因此再对使用链表结构形式储存的轮廓信息进行遍历筛选,经筛选处理得到完整的二维轮廓曲线。The machine vision-based augmented reality blind spot assembly guidance method according to claim 1, characterized in that: the identification and detection process for the marker ellipse is as follows: the original image is first grayscaled, then Gaussian filtering is performed, and the image is binarized using the maximum inter-class variance method (OTSU) proposed by Hough PV; the Canny edge detection operator is then used to extract the effective edge contours from the binary image, and because the resulting pixel sets on the boundary contours are not complete contour curves, the contour information, stored in a linked-list structure, is traversed and filtered to obtain complete two-dimensional contour curves.
  4. 根据权利要求1所述的一种基于机器视觉的增强现实盲区装配引导方法,其特征在于:所述盲区对象的追踪是通过将椭圆标志点粘贴在盲区对象表面,通过追踪椭圆的方式间接追踪盲区待装配对象。The machine vision-based augmented reality blind spot assembly guidance method according to claim 1, characterized in that: the blind spot object is tracked indirectly by pasting elliptical marker points on the surface of the blind spot object and tracking the object to be assembled by tracking the ellipses.
  5. 根据权利要求1所述的一种基于机器视觉的增强现实盲区装配引导方法,其特征在于:所述为了确定椭圆粘贴表面的位姿,首先需要标定相机,获得相机的内参矩阵 M c 、相机内参畸变参数矩阵等,然后通过解 PnP 问题,确定相机在真实世界坐标系下的坐标(3D)与像素坐标下的坐标(2D)之间的转换关系。The machine vision-based augmented reality blind spot assembly guidance method according to claim 1, characterized in that: in order to determine the pose of the surface the ellipse is pasted on, the camera must first be calibrated to obtain the camera intrinsic matrix M_c, the intrinsic distortion parameter matrix, and so on; the conversion between the camera's coordinates in the real-world coordinate system (3D) and the coordinates in pixel coordinates (2D) is then determined by solving the PnP problem.
  6. 根据权利要求1所述的一种基于机器视觉的增强现实盲区装配引导方法,其特征在于:所述PnP 问题是求解 3D-2D 点对运动的方法,描述了当知道 n 个三维空间点坐标及其二维投影位置时,如何估计相机的位姿,在 1 幅图像中,最少只要知道 3 个点的空间坐标即3D 坐标,就可以用于估计相机的运动以及相机的姿态。The machine vision-based augmented reality blind spot assembly guidance method according to claim 1, characterized in that: the PnP problem is a method for solving 3D-2D point-pair motion, describing how to estimate the camera pose when the coordinates of n three-dimensional space points and their two-dimensional projection positions are known; in a single image, knowing the spatial (3D) coordinates of as few as three points is enough to estimate the camera's motion and pose.
PCT/CN2020/109426 2020-08-17 2020-08-17 Machine vision-based augmented reality blind area assembly guidance method WO2022036478A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/109426 WO2022036478A1 (en) 2020-08-17 2020-08-17 Machine vision-based augmented reality blind area assembly guidance method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/109426 WO2022036478A1 (en) 2020-08-17 2020-08-17 Machine vision-based augmented reality blind area assembly guidance method

Publications (1)

Publication Number Publication Date
WO2022036478A1 true WO2022036478A1 (en) 2022-02-24

Family

ID=80322446

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/109426 WO2022036478A1 (en) 2020-08-17 2020-08-17 Machine vision-based augmented reality blind area assembly guidance method

Country Status (1)

Country Link
WO (1) WO2022036478A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596355A (en) * 2022-03-16 2022-06-07 哈尔滨工业大学 High-precision pose measurement method and system based on cooperative target
CN114913140A (en) * 2022-04-29 2022-08-16 合肥工业大学 Image processing method for hole shaft assembly
CN116597551A (en) * 2023-06-21 2023-08-15 厦门万安智能有限公司 Intelligent building access management system based on private cloud
CN117152415A (en) * 2023-09-01 2023-12-01 北京奥乘智能技术有限公司 Method, device, equipment and storage medium for detecting marker of medicine package
CN117553756A (en) * 2024-01-10 2024-02-13 中国人民解放军32806部队 Off-target amount calculating method, device, equipment and storage medium based on target tracking

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6930715B1 (en) * 2000-07-21 2005-08-16 The Research Foundation Of The State University Of New York Method, system and program product for augmenting an image of a scene with information about the scene
US20080181454A1 (en) * 2004-03-25 2008-07-31 United States Of America As Represented By The Secretary Of The Navy Method and Apparatus for Generating a Precision Fires Image Using a Handheld Device for Image Based Coordinate Determination
CN108562274A (en) * 2018-04-20 2018-09-21 南京邮电大学 A kind of noncooperative target pose measuring method based on marker
CN109813510A (en) * 2019-01-14 2019-05-28 中山大学 High-speed rail bridge based on unmanned plane vertically moves degree of disturbing measurement method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, Zenglei; YAN, Yuxiang; HAN, Dechuan; BAI, Xiaoliang; ZHANG, Shusheng: "Product Blind Area Assembly Method Based on Augmented Reality and Machine Vision", JOURNAL OF NORTHWESTERN POLYTECHNICAL UNIVERSITY, Xibei Gongye Daxue, Shaanxi, CN, vol. 37, no. 3, 30 June 2019 (2019-06-30), pages 496-502, XP055902700, ISSN: 1000-2758, DOI: 10.1051/jnwpu/20193730496 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596355A (en) * 2022-03-16 2022-06-07 哈尔滨工业大学 High-precision pose measurement method and system based on cooperative target
CN114596355B (en) * 2022-03-16 2024-03-08 哈尔滨工业大学 High-precision pose measurement method and system based on cooperative targets
CN114913140A (en) * 2022-04-29 2022-08-16 合肥工业大学 Image processing method for hole shaft assembly
CN116597551A (en) * 2023-06-21 2023-08-15 厦门万安智能有限公司 Intelligent building access management system based on private cloud
CN116597551B (en) * 2023-06-21 2024-06-11 厦门万安智能有限公司 Intelligent building access management system based on private cloud
CN117152415A (en) * 2023-09-01 2023-12-01 北京奥乘智能技术有限公司 Method, device, equipment and storage medium for detecting marker of medicine package
CN117152415B (en) * 2023-09-01 2024-04-23 北京奥乘智能技术有限公司 Method, device, equipment and storage medium for detecting marker of medicine package
CN117553756A (en) * 2024-01-10 2024-02-13 中国人民解放军32806部队 Off-target amount calculating method, device, equipment and storage medium based on target tracking
CN117553756B (en) * 2024-01-10 2024-03-22 中国人民解放军32806部队 Off-target amount calculating method, device, equipment and storage medium based on target tracking

Similar Documents

Publication Publication Date Title
WO2022036478A1 (en) Machine vision-based augmented reality blind area assembly guidance method
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
CN108932475B (en) Three-dimensional target identification system and method based on laser radar and monocular vision
Banerjee et al. Online camera lidar fusion and object detection on hybrid data for autonomous driving
US20220152829A1 (en) Visual navigation inspection and obstacle avoidance method for line inspection robot
WO2020135446A1 (en) Target positioning method and device and unmanned aerial vehicle
Zhu et al. Online camera-lidar calibration with sensor semantic information
CN109270534A (en) A kind of intelligent vehicle laser sensor and camera online calibration method
CN106017477A (en) Visual navigation system of orchard robot
CN104400265B (en) A kind of extracting method of the welding robot corner connection characteristics of weld seam of laser vision guiding
CN106650701B (en) Binocular vision-based obstacle detection method and device in indoor shadow environment
CN106845491B (en) Automatic correction method based on unmanned plane under a kind of parking lot scene
CN104933718A (en) Physical coordinate positioning method based on binocular vision
CN101441769A (en) Real time vision positioning method of monocular camera
CN106774296A (en) A kind of disorder detection method based on laser radar and ccd video camera information fusion
CN112906797A (en) Plane grabbing detection method based on computer vision and deep learning
CN113570631B (en) Image-based pointer instrument intelligent identification method and device
CN104700385B (en) The binocular visual positioning device realized based on FPGA
CN105528789A (en) Robot vision positioning method and device, and visual calibration method and device
CN110334625A (en) A kind of parking stall visual identifying system and its recognition methods towards automatic parking
CN114140439A (en) Laser welding seam feature point identification method and device based on deep learning
TW202121331A (en) Object recognition system based on machine learning and method thereof
WO2024131200A1 (en) Monocular-vision-based vehicle 3d locating method and apparatus, and vehicle
CN108161930A (en) A kind of robot positioning system of view-based access control model and method
CN110533716A (en) A kind of semantic SLAM system and method based on 3D constraint

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20949691

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20949691

Country of ref document: EP

Kind code of ref document: A1