WO2019105044A1 - Method and system for lens distortion correction and feature extraction - Google Patents


Info

Publication number: WO2019105044A1
Authority: WIPO (PCT)
Prior art keywords: image, module, points, feature, contour
Application number: PCT/CN2018/096004
Other languages: French (fr), Chinese (zh)
Inventor
王峰
肖飞
汪进
黄祖德
邱文添
李诗语
曹彬
Original Assignee
东莞市普灵思智能电子有限公司
Application filed by 东莞市普灵思智能电子有限公司
Publication of WO2019105044A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; control thereof
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • The invention belongs to the field of computer vision detection, and in particular relates to a method and system for lens distortion correction and feature extraction that offers high computational efficiency and effectively reduces CPU running time.
  • Computer vision inspection has broad application prospects in the fields of robots, unmanned vehicles, and drones.
  • The amount of data to be processed is large, and the real-time requirements are high.
  • Lens distortion calibration and feature extraction have a wide range of applications in the field of computer vision detection, especially indoor positioning systems.
  • In high-precision indoor positioning systems, radio ranging is popular, for example ultra-wideband positioning or WiFi received-signal-strength positioning, with the triangulation principle then used to determine the camera's absolute pose. Radio positioning requires environmental information to be surveyed in advance; its positioning accuracy is high, but so are its cost and maintenance burden.
  • Other positioning methods, such as odometry and inertial-sensor positioning, accumulate displacement using the robot's or drone's own sensors after an initial pose is given.
  • A robot or drone can achieve high positioning accuracy over a short distance of travel, but longitudinal or lateral slippage introduces deviations in position and attitude angle, and odometers and inertial sensors themselves tend to accumulate error. If the error is not corrected in time, the positioning accuracy of the robot or drone degrades directly. Such methods are therefore unsuitable for long-term use, which hinders the adoption of robots and other devices.
  • In view of the inherent shortcomings of radio, odometer, and inertial-sensor positioning, visual-sensor positioning has gradually become mainstream in the field of computer vision detection. A visual sensor relies mainly on processing and recognizing captured images to perceive the surrounding environment and localize itself.
  • At present, visual sensors fall into three main types: panoramic, binocular, and monocular.
  • The panoramic sensor has a large observation range, but difficult manufacturing, high price, susceptibility to image distortion, and complicated image processing make it hard to apply in practice.
  • The binocular vision sensor is mainly used where scene-depth requirements are high; it also demands high image-processing capability from the computer, so it has not been widely adopted.
  • The monocular vision sensor is inexpensive, suits everyday environments where image-processing demands are modest, and can meet a wide range of application requirements.
  • In addition, a camera can be mounted on the bottom of a robot to capture and decode ground-based QR codes, from which the robot's position is calculated.
  • Distortion correction and feature extraction are especially important during visual sensor positioning.
  • The traditional distortion correction and feature extraction system is shown in FIG. 1. It generally consists of the following key modules: a camera calibration module, a distortion correction module, an inverse perspective mapping (IPM) module, an inverse-perspective-output image processing module, and a RANSAC (Random Sample Consensus) feature-graphic key-point extraction module.
  • Camera calibration is the earliest step, and the subsequent processing steps are inseparable from the camera parameters. Calibration uses the widely adopted Zhang Zhengyou method: the camera photographs a calibration board from different angles, the pictures are supplied to a calibration toolbox, and the toolbox outputs the camera's intrinsic matrix and distortion coefficients.
  • The calibration toolbox is a set of calibration programs. Because Zhang Zhengyou's method is so widely used, these programs can be downloaded online; both Matlab and OpenCV provide corresponding calibration toolboxes.
  • The lens first captures a picture, which is pre-processed, including adjusting its brightness and contrast, so that subsequent image-processing steps work better.
  • The next step is distortion correction. Because the distortion-corrected picture only removes the distortion while preserving the picture's original perspective effect, an inverse perspective transformation is then performed to obtain a picture without perspective. The inverse-perspective output is then processed: the color image is converted to grayscale, the picture is denoised, and edge detection is performed. The RANSAC algorithm is then used to extract the feature graphic and obtain the key points that characterize it.
  • Finally, the coordinates of the key points in physical space are computed to derive the equation of the feature graphic in the physical world, which makes it convenient to compute the positional relationship between the lens and the feature graphic.
  • Distortion correction and inverse perspective transformation are very time-consuming. Taking HDTV images as an example, a 1920 × 1080 picture requires processing roughly 2 million points of data, so neither the correction nor the inverse perspective transformation is efficient enough to meet real-time requirements.
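The scale of the problem can be checked with a line of arithmetic; the contour-point figure of roughly one thousand is the estimate quoted later in this text, not a measured value:

```python
# Data-volume comparison between whole-image warping and contour-point
# correction, using the figures quoted in the text (illustrative only).
full_hd_points = 1920 * 1080   # every pixel is warped by the traditional pipeline
contour_points = 1000          # approximate contour-point count quoted in the text

print(full_hd_points)                      # 2073600, i.e. about 2 million points
print(full_hd_points // contour_points)    # 2073, roughly three orders of magnitude
```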
  • The invention aims to solve the technical problem that prior-art distortion correction and feature extraction methods and systems involve a large data-processing load and high CPU usage, preventing real-time positioning of robots and drones, and provides a new method with high computational efficiency.
  • The key to the invention is this: with improved computation, the lens distortion of the image is quickly corrected and the feature points are extracted; these are then converted to physical space, where the analytical form of the feature points is obtained to realize real-time matching and positioning.
  • To this end, the present invention adopts the following technical solutions:
  • A method for lens distortion correction and feature extraction includes the following steps:
  • Image pre-processing includes converting the color image to grayscale, adjusting the picture's brightness and contrast, and performing edge detection to display the outline of the image;
  • S106: Calculate the physical-world equation of the key points, using the real physical-world coordinates of the key points obtained in S105 to compute the equation of the feature graphic in the physical-world coordinate system.
  • The invention also provides a system for lens distortion correction and feature extraction adapted to the above method, which comprises the following modules:
  • an input image module, for collecting picture information;
  • an image preprocessing module, configured to convert the color image to grayscale, adjust the picture's brightness and contrast, denoise the picture, and then perform edge detection to display the outline of the image;
  • a contour-point distortion correction module, configured to restore the contour points extracted by the image-contour-point extraction module to the undistorted plane;
  • a key-point inverse perspective transformation module, used to perform inverse perspective transformation on the key points extracted by the graphic-feature key-point extraction module, obtaining the key points in real physical-world coordinates;
  • a feature-graphic physical-world-equation module, which uses the real physical-world coordinates of the key points obtained by the key-point inverse perspective transformation module to compute the equation of the feature graphic in the physical-world coordinate system.
  • The present invention extracts the contour points of the image before distortion correction and then performs distortion correction only on those contour points, which greatly reduces the amount of data the distortion correction must process and greatly improves computational efficiency.
  • Likewise, the key points of the feature graphic are extracted before the inverse perspective transformation, reducing the data volume of that transformation and further improving computational efficiency. The present invention can therefore significantly reduce CPU computation under high-fidelity requirements, significantly lowering hardware cost while improving efficiency, and can quickly obtain ground-related physical feature information to achieve high-precision indoor positioning.
  • FIG. 1 is a block flow diagram of a conventional distortion correction and feature extraction system;
  • FIG. 2 is a block flow diagram of the lens distortion correction and feature extraction system of the present invention;
  • FIG. 3 is a flow chart of the lens distortion correction and feature extraction method of the present invention;
  • FIG. 4 shows the relationship between the camera and the physical-world coordinate system of the present invention.
  • The lens distortion correction and feature extraction system of the present invention comprises the following modules: an input image module, a picture preprocessing module, an image-contour-point extraction module, a contour-point distortion correction module, a graphic-feature key-point extraction module, a key-point inverse perspective transformation module, and a feature-graphic physical-world-equation module.
  • The method for lens distortion correction and feature extraction of the present invention comprises the following six steps: S101, acquire lens picture information; S102, picture pre-processing; S103, extract picture contour points and perform lens distortion correction; S104, extract graphic-feature key points; S105, key-point inverse perspective transformation; S106, calculate the key points' physical-world equation, as shown in FIG. 3. The specific steps are as follows:
  • Step S101 is performed by the input image module: use the camera to capture images, and measure and record the camera's height and attitude angle for subsequent module processing.
  • Step S102 is performed by the picture preprocessing module. This includes converting the color image to grayscale, adjusting the picture's brightness and contrast, denoising the picture, and then performing edge detection to display the outline of the image.
  • Converting the color image to grayscale greatly reduces the amount of data to be processed, while image information such as detail and texture is not reduced compared with the original. The subsequent processing steps use the image's detail information but do not need its color information, so converting to grayscale is a good choice. After conversion, the image is denoised.
  • The noise-reduction process suppresses noise in the target image while preserving detail features as much as possible, which increases the effectiveness and reliability of subsequent processing.
  • Edge detection can then be performed to display detailed information such as the outline of the image.
  • A picture-cutting step is performed after edge detection to remove interfering contour portions, further reducing the amount of data processed in subsequent operations.
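The preprocessing chain above (grayscale conversion, edge detection, cropping) can be sketched with plain NumPy; the BT.601 luma weights and the mean-plus-standard-deviation gradient threshold are common illustrative choices, not values from the patent:

```python
import numpy as np

def preprocess(rgb, crop_rows=0):
    """Grayscale -> gradient-magnitude edge map -> optional bottom crop.

    A minimal NumPy sketch of the preprocessing step; a real system would
    use a tuned Canny detector instead of this crude threshold."""
    # color -> grayscale (ITU-R BT.601 luma weights)
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # central finite differences as a crude stand-in for Sobel gradients
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    # binarize: keep only strong gradients as contour pixels
    edges = (mag > mag.mean() + mag.std()).astype(np.uint8)
    # cut the picture: drop bottom rows containing interfering fixed contours
    if crop_rows:
        edges = edges[:-crop_rows, :]
    return edges
```

The patent itself names the Canny operator for the edge-detection step in the embodiment; the finite-difference gradient here only stands in for it.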
  • Step S103 is performed by the image-contour-point extraction module and the contour-point distortion correction module.
  • First, the contour points displayed after processing by the image preprocessing module are extracted; then, using the calibrated camera intrinsic matrix and distortion coefficients, the contour points are corrected radially and tangentially by the correction formula and restored to the undistorted plane.
  • Extracting the image contour points means saving the coordinates of the contour points displayed on the binary image produced by edge detection.
  • When extracting, equidistant points can be taken along the columns, equidistant points of the same or a different spacing can be taken along the rows, or both can be used.
  • For example, with a column sampling interval of 5 and a row sampling interval of 3, a point is taken every 5 columns until the end of the row, then 3 rows are skipped and sampling every 5 columns continues.
  • In this way the amount of data can be reduced by about an order of magnitude.
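The equidistant sampling idea can be sketched as a coordinate-modulo filter over the binary edge map; this is a simplification of the row/column traversal described above, with `col_step` and `row_step` set to the example values of 5 and 3:

```python
import numpy as np

def subsample_contour_points(edge_map, col_step=5, row_step=3):
    """Equidistant sampling of contour points from a binary edge map.

    A sketch of the sampling idea in the text, implemented as a
    coordinate-modulo filter; the patent's exact traversal order is not
    specified here."""
    ys, xs = np.nonzero(edge_map)                    # coordinates of contour pixels
    keep = (ys % row_step == 0) & (xs % col_step == 0)
    return list(zip(ys[keep].tolist(), xs[keep].tolist()))
```

With the example intervals of 5 and 3, the surviving point count drops by roughly a factor of 15, consistent with the order-of-magnitude reduction claimed above.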
  • Contour-point distortion correction corrects the coordinates of the contour points saved on the distorted plane to coordinate points on the undistorted plane. On the undistorted plane, these corrected contour points show the true physical shape of the content in the image.
  • The imaging model of the camera established in the contour-point distortion correction module is as follows:
  • R and T denote the rotation matrix and the translation vector of the camera extrinsics, respectively, and (x, y, z) are the coordinates of a three-dimensional world point transformed into the camera coordinate system.
  • θ′ = θ(1 + k1·θ² + k2·θ⁴ + k3·θ⁶ + k4·θ⁸)
  • c_u: the lateral coordinate of the image center point corresponding to the camera's optical axis, obtained by camera calibration;
  • c_v: the longitudinal coordinate of the image center point corresponding to the camera's optical axis, obtained by camera calibration;
  • k1, k2, k3, k4: the distortion coefficients of the camera, obtained by camera calibration.
  • The above is the transformation from three-dimensional world coordinate points (X, Y, Z) to image coordinates (u, v).
  • Point correction is the inverse of the above process, and the same formulas still apply.
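Assuming the polynomial above is the standard fisheye (equidistant) model with a k4·θ⁸ term, the forward mapping and its iterative inverse can be sketched as follows; the focal lengths `fu`/`fv` and any coefficient values are illustrative assumptions, not the patent's calibration results:

```python
import numpy as np

def distort_theta(theta, k):
    """Forward fisheye polynomial applied to the incidence angle theta."""
    t2 = theta * theta
    return theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)

def undistort_theta(theta_d, k, iters=10):
    """Invert the polynomial by fixed-point iteration: point correction is
    the inverse of the forward model, as noted above."""
    theta = theta_d
    for _ in range(iters):
        t2 = theta * theta
        theta = theta_d / (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)
    return theta

def project_point(x, y, z, fu, fv, cu, cv, k):
    """Project a camera-frame 3-D point to distorted pixel coordinates
    (u, v); fu and fv are assumed focal lengths in pixels."""
    a, b = x / z, y / z
    r = np.hypot(a, b)
    theta_d = distort_theta(np.arctan(r), k)
    scale = theta_d / r if r > 0 else 1.0
    return fu * scale * a + cu, fv * scale * b + cv
```

Correcting a contour point amounts to running `undistort_theta` on its incidence angle and re-projecting without the polynomial, which is why only the saved contour points, not every pixel, need this computation.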
  • Step S104 is performed by the graphic-feature key-point extraction module.
  • The feature pattern is first fitted with RANSAC, and the key points that can characterize the feature pattern are extracted.
  • RANSAC extracts the feature key points on the undistorted plane.
  • The first task is to find the feature graphic, and then to extract its key points.
  • A reasonable number of representative key points can be selected as needed, without sending all the points to subsequent steps for processing. The representative key points should be chosen so that the original graphic can be reconstructed.
  • For a graphic with an analytical form, such as a line or a circle, it is not necessary to send all the contour points on the line or circle to subsequent steps: two points determine a straight line and three points determine a circle, so two contour points can be selected from a line's contour, and three contour points from a circle's, and sent on, which is sufficient to reconstruct the graphic in the physical world.
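As a concrete illustration of why two and three key points suffice, a line can be recovered from two points and a circle from three, e.g. by solving the perpendicular-bisector equations; this is a minimal sketch, not code from the patent:

```python
import numpy as np

def line_through(p1, p2):
    """Normalized line a*x + b*y + c = 0 through two points."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    n = np.hypot(a, b)
    return a / n, b / n, c / n

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points,
    found by solving the two perpendicular-bisector equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(A, b)          # equidistance conditions
    return (cx, cy), np.hypot(x1 - cx, y1 - cy)
```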
  • Step S105 is performed by the key-point inverse perspective transformation module.
  • The key-point inverse perspective transformation converts the coordinates of the extracted feature-graphic key points into real physical-world coordinates, expressed in the camera coordinate system; this step therefore requires the camera's height and attitude-angle information. The extracted key-point coordinates lie on the undistorted plane, converted from the distorted plane: the distortion has been eliminated, but the perspective effect of the camera shot has not. The inverse perspective transformation therefore maps the points on the undistorted plane into real physical space, where there is no perspective effect and the positional relationships between the transformed points truly reflect those in the physical world.
  • T is the transformation matrix, where:
  • c_u: the lateral coordinate of the image center point corresponding to the camera's optical axis, obtained by camera calibration;
  • c_v: the longitudinal coordinate of the image center point corresponding to the camera's optical axis, obtained by camera calibration;
  • h: the camera height, which can be obtained by measurement;
  • s1: the sine sin θ of the camera depression angle θ;
  • s2: the sine sin γ of the camera yaw angle γ.
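The entries of T are not reproduced in this text, so the following generic pinhole-plus-ground-plane sketch shows how quantities of this kind (c_u, c_v, focal lengths, height h, and the angle sines) can map an undistorted pixel to ground coordinates; the axis conventions and the focal lengths `fu`/`fv` are assumptions, not the patent's matrix:

```python
import numpy as np

def pixel_to_ground(u, v, fu, fv, cu, cv, h, pitch, yaw):
    """Back-project an undistorted pixel onto the ground plane.

    A generic stand-in for the patent's transformation matrix T: h is the
    camera height, pitch the depression angle, yaw the camera yaw angle."""
    # viewing ray in camera coordinates (x right, y down, z forward)
    ray = np.array([(u - cu) / fu, (v - cv) / fv, 1.0])
    # tilt the ray down by the depression angle (rotation about the x axis)
    cp, sp = np.cos(pitch), np.sin(pitch)
    d = np.array([ray[0], cp * ray[1] + sp * ray[2], -sp * ray[1] + cp * ray[2]])
    # rotate about the vertical (y) axis by the yaw angle
    cy, sy = np.cos(yaw), np.sin(yaw)
    d = np.array([cy * d[0] + sy * d[2], d[1], -sy * d[0] + cy * d[2]])
    if d[1] <= 0:
        raise ValueError("ray does not intersect the ground plane")
    t = h / d[1]                       # scale so the ray descends exactly h
    return t * d[0], t * d[2]          # lateral and forward ground offsets
```

For example, a camera 1 m above the ground looking 45° down maps its principal point to a ground point 1 m ahead, as expected geometrically.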
  • Step S106 is performed by the feature-graphic physical-world-equation module. Based on the coordinates of the obtained key points in the camera coordinate system, the equations of the feature patterns described by these key points can be recomputed, making it convenient to calculate the positional relationship between the camera and the feature graphic and to complete high-precision positioning.
  • The operational efficiency of the present invention increases by several orders of magnitude because it optimizes precisely the time-consuming distortion-correction and inverse-perspective-transformation steps of conventional methods.
  • The traditional method must process 2 million points of data in both the distortion correction and the inverse perspective transformation.
  • The new method only has to deal with about a thousand points: for a picture of this size, the extractable contour points number about a thousand, so only on the order of a thousand data points need processing. This is three orders of magnitude less than 2 million, so the correction efficiency is greatly improved.
  • A feature graphic can be represented by a few representative key points: for example, a line or line segment can be represented by two points, and a circle by three points on it.
  • The previously extracted contour points therefore still contain redundant data.
  • The application scenario of this embodiment is an indoor area where ground features can be extracted for indoor positioning.
  • The camera is used to obtain feature information of the edge lines of the indoor floor tiles.
  • The X-axis and Y-axis directions are established with reference to FIG. 4, and the position of the camera's center point is converted directly into position information relative to the lines in the captured picture.
  • S101: Acquire a lens image. Capture images for subsequent module processing.
  • S102: Image preprocessing. Convert the captured image from color to grayscale; perform edge detection to display the outline of the image; cut the picture to remove interfering contour portions.
  • Converting a picture from color to grayscale reduces the amount of subsequent computation, speeds up processing, and makes the image easier to handle.
  • Edge detection can be used to display the outline information of an image. Significant changes in image properties often reflect important events and changes in the scene. Edge detection identifies points in the image where the brightness changes sharply, and these points form the edge features of the image. Edge detection greatly reduces the amount of data relative to the original image and eliminates information that can be considered irrelevant while retaining the image's important structural properties, which facilitates the subsequent extraction of feature information. In this embodiment of the present invention, the Canny operator can be used to perform edge detection on the image.
  • The picture is cut because the camera is fixedly mounted on a vehicle and may capture the fixed contours of some parts of the vehicle itself, which are not useful information. Each captured frame can be differenced against a reference picture containing the vehicle's contour, removing the fixed-contour interference and enhancing the robustness of the fitted feature points.
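A sketch of suppressing the vehicle's fixed contour by masking against a reference edge map; the reference-mask setup (captured once in advance) is an assumption consistent with the differencing described above:

```python
import numpy as np

def remove_fixed_contours(edge_map, vehicle_mask):
    """Zero out edge pixels that belong to the vehicle's fixed contour.

    `vehicle_mask` is a reference binary map of the vehicle's own outline,
    assumed to be captured once in advance; each frame's edge map is then
    differenced against it, as the text describes."""
    return (edge_map.astype(bool) & ~vehicle_mask.astype(bool)).astype(np.uint8)
```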
  • S103: Extract picture contour points and correct them.
  • The contours displayed in the image processed in S102 are extracted, and the contour points are corrected radially and tangentially using the calibrated camera intrinsic matrix and distortion coefficients, so as to be restored to the undistorted plane.
  • This yields the contour-point information of the ground figure; combined with the calibrated camera intrinsic matrix and distortion coefficients, the obtained contour points are corrected efficiently.
  • Such point correction is far more efficient than correcting the entire picture, greatly reducing CPU runtime.
  • This method runs under the Linux environment of a Raspberry Pi 3, which greatly reduces hardware cost.
  • The obtained contour points can serve as parameters describing the characteristics of the ground image.
  • The formed feature graphic can be an analytic geometric shape describable with parameters; its specific shape may be a straight line, a circle, a rhombus, or the like, and is not limited in this embodiment of the present invention. Taking indoor floor tiles as an example, when the robot travels indoors, the feature pattern fitted in the image taken by the camera may be the edge line of the tiles.
  • The feature pattern may be fitted from the corrected feature points using a Hough transform, or fitted from the image using the RANSAC algorithm.
  • S104: RANSAC extracts key points.
  • The feature pattern is first fitted with RANSAC, and the key points that can characterize the feature pattern are extracted.
  • The system first fits a straight line to the contour points, because the floor seams are straight lines. The fitting uses the random-sample-consensus algorithm, and two key points are selected from the contour points of the found floor line, since a line needs only two points to be characterized. This further reduces the amount of data in subsequent operations.
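A minimal random-sample-consensus line fit over contour points might look as follows; the iteration count and inlier tolerance are illustrative, not values from the patent:

```python
import numpy as np

def ransac_line(points, iters=200, tol=2.0, seed=0):
    """Fit the dominant line through 2-D contour points with RANSAC and
    return (inlier_count, (p1, p2)): the two sample points characterizing
    the best line found."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_count, best_pair = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p1, p2 = pts[i], pts[j]
        d = p2 - p1
        n = np.hypot(d[0], d[1])
        if n == 0:
            continue  # degenerate sample: coincident points
        # perpendicular distance of every point to the candidate line
        dist = np.abs(d[1] * (pts[:, 0] - p1[0]) - d[0] * (pts[:, 1] - p1[1])) / n
        count = int((dist < tol).sum())
        if count > best_count:
            best_count, best_pair = count, (tuple(p1), tuple(p2))
    return best_count, best_pair
```

The two returned sample points are exactly the "two key points" the text says suffice to characterize the floor line.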
  • Step S105: Key-point inverse perspective transformation. Perform the inverse perspective transformation on the key points obtained in step S104 to obtain the physical coordinates of these key points in the real world.
  • The program is set up to fit at most two floor-tile lines at a time, so four key points can be obtained: the start point (u_s1, v_s1) and end point (u_e1, v_e1) of one line extracted by RANSAC, and the start point (u_s2, v_s2) and end point (u_e2, v_e2) of the other.
  • The matrix generated from the start and end points of each line is:
  • The generated matrix M is applied to the inverse perspective transformation matrix and mapped to the ground, yielding a matrix composed of the corresponding physical-world coordinate points.
  • The matrix form is as follows:
  • T is the inverse perspective transformation matrix.
  • The resulting points (x_s1, y_s1), (x_e1, y_e1), (x_s2, y_s2), (x_e2, y_e2) are the coordinates of the image key points transformed into the physical world.
  • These physical-world coordinates take the camera's optical-axis point as their reference.
  • The algorithm can also fit other geometric figures and compute the corresponding geometric equations to derive further physical information.
  • S106: Calculate the physical-world equation of the key points. Using the real-world physical coordinates of the key points obtained in S105, compute the equation of the feature graphic in the physical-world coordinate system.
  • The geometric equation represented by the key points can thus be obtained, completing the conversion of the geometric figure in the picture into physical-world coordinates. Using the straight-line equation in physical-world coordinates, the distance from the camera to the floor-tile edge is easily found, achieving high-precision indoor positioning.
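Given the two transformed key points of a floor-edge line, the camera-to-edge distance is the perpendicular distance from the coordinate origin (taken here, per the text, as the camera's optical-axis reference point) to that line:

```python
import numpy as np

def distance_to_edge(p_start, p_end):
    """Perpendicular distance from the origin of the ground coordinate
    system to the floor-edge line through two transformed key points."""
    p1 = np.asarray(p_start, dtype=float)
    p2 = np.asarray(p_end, dtype=float)
    d = p2 - p1
    # |cross(p1, d)| / |d| is the point-to-line distance from the origin
    return abs(d[0] * p1[1] - d[1] * p1[0]) / np.hypot(d[0], d[1])
```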


Abstract

Disclosed are a method and system for lens distortion correction and feature extraction. The method mainly comprises the following six steps: acquiring lens picture information; preprocessing the picture; extracting picture contour points and performing lens distortion correction; extracting graphic-feature key points; performing inverse perspective transformation on the key points; and computing an equation for the key points in the physical world. The system mainly comprises the hardware modules that provide an operating platform for the method of the present application: an image input module, a picture preprocessing module, an image contour-point extraction module, a contour-point distortion correction module, a graphic-feature key-point extraction module, a key-point inverse perspective transformation module, and a module for obtaining an equation for graphic features in the physical world. Compared with the conventional method and system, the method and system of the present application can reduce the amount of CPU computation by more than 6 times, significantly reducing hardware cost while improving computational efficiency; they can quickly obtain ground-related physical feature information and realize high-precision indoor positioning, and are particularly applicable to embedded systems with limited computational resources.

Description

一种镜头畸变矫正和特征提取的方法及系统Method and system for correcting lens distortion and feature extraction 技术领域Technical field
本发明属于计算机视觉检测领域,尤其涉及一种计算效率高、能有效减少CPU运行时间的镜头畸变矫正和特征提取的方法及系统。The invention belongs to the field of computer vision detection, and in particular relates to a method and a system for correcting lens distortion and feature extraction with high computational efficiency and effective reduction of CPU running time.
背景技术Background technique
计算机视觉检测在机器人、无人驾驶车辆、无人机等领域有广阔的应用前景,其待处理的数据量大,实时性要求高。镜头畸变校准和特征提取在计算机视觉检测领域有很广泛的应用,尤其是室内定位系统。高精度室内定位系统中,采用无线电方式测距比较流行,例如采用超宽带定位或者WiFi接受信号强度定位等,然后利用三角原理等方法确定摄像头的绝对位姿。无线电定位需要提前掌握环境信息,其定位精度高但成本亦高且维护麻烦。其它定位方法如里程计定位法、惯性传感器定位法等,在给定机器人或无人机等初始位姿后,利用机器人或无人机等自身的传感器对位移量进行累加。机器人或无人机等在移动的一小段距离内可以得到较高的定位精度,但是随着机器人或无人机等纵向或横向的滑动引起位置和姿态角的偏差,同时里程计或惯性传感器等本身容易产生误差累加。如果误差没有及时修正,将会直接导致机器人或者无人机定位精度变低。故不适合长时间的使用,不利于机器人或者其它设备的普及。Computer vision inspection has broad application prospects in the fields of robots, unmanned vehicles and drones. The amount of data to be processed is large and the real-time requirements are high. Lens distortion calibration and feature extraction have a wide range of applications in the field of computer vision detection, especially indoor positioning systems. In the high-precision indoor positioning system, radio ranging is popular, for example, using ultra-wideband positioning or WiFi to receive signal strength positioning, and then using the triangulation principle to determine the absolute position of the camera. Radio positioning requires early grasp of environmental information, which has high positioning accuracy but high cost and maintenance. Other positioning methods such as odometer positioning method, inertial sensor positioning method, etc., after the initial pose of a given robot or drone, the displacement amount is accumulated by using a sensor such as a robot or a drone. Robots or drones can obtain higher positioning accuracy within a short distance of movement, but deviations in position and attitude angles due to vertical or horizontal sliding of robots or drones, and odometers or inertial sensors, etc. It is easy to produce error accumulation itself. If the error is not corrected in time, it will directly lead to lower positioning accuracy of the robot or drone. Therefore, it is not suitable for long-term use, which is not conducive to the popularity of robots or other devices.
鉴于无线电、里程计、惯性传感器等定位法的固有缺陷,采用视 觉传感器进行定位逐渐成为计算机视觉检测领域主流。视觉传感器主要依靠对采集的图像进行处理识别,进而对周围环境进行感知,实现自身的定位。目前视觉传感器主要分为全景视觉传感器、双目视觉传感器、单目视觉传感器三种。其中全景传感器观测范围大,但是由于其加工困难、价格昂贵、图像容易畸变、图像处理复杂等,使得全景传感器难以得到实际应用。双目视觉传感器主要用于对景物深度要求较高的场合,其对计算机的图像处理能力要求也较高,因而无法得到广泛应用。单目视觉传感器,价格低,适用于一般的日常生活环境中对图像处理要求不高的场合,能够满足广泛的应用需求。此外,还可以将摄像头安装在机器人底部,基于地面的二维码,进行拍摄并解码,从而计算出机器人的位置。In view of the inherent defects of positioning methods such as radios, odometers, and inertial sensors, the use of optical sensors for positioning has gradually become the mainstream in the field of computer vision detection. The visual sensor mainly relies on the processing and recognition of the collected images, thereby sensing the surrounding environment and realizing its own positioning. At present, the visual sensors are mainly divided into three types: panoramic vision sensor, binocular vision sensor and monocular vision sensor. Among them, the panoramic sensor has a large observation range, but due to its difficult processing, high price, easy image distortion, and complicated image processing, it is difficult to obtain a panoramic sensor. The binocular vision sensor is mainly used in the case where the depth of the scene is high, and the image processing capability of the computer is also high, so it cannot be widely used. The monocular vision sensor is low in price and is suitable for applications where the image processing is not demanding in a general daily life environment, and can meet a wide range of application requirements. In addition, the camera can be mounted on the bottom of the robot and captured and decoded based on the ground-based QR code to calculate the position of the robot.
在视觉传感器定位过程中,畸变矫正和特征提取尤为重要。传统的畸变矫正和特征提取系统如图1所示。它一般由以下几个关键模块组成:摄像头标定模块,畸变矫正模块,逆透视变换(inverse perspective mapping,IPM)模块,逆透视变换输出图像处理模块,RANSAC(Random Sample Consensus,随机采样一致)提取特征图形关键点模块。Distortion correction and feature extraction are especially important during visual sensor positioning. The traditional distortion correction and feature extraction system is shown in Figure 1. It generally consists of the following key modules: camera calibration module, distortion correction module, inverse perspective mapping (IPM) module, inverse perspective transformation output image processing module, RANSAC (Random Sample Consensus) extraction feature Graphic key module.
Camera calibration is the first step, and all subsequent processing depends on the camera parameters. Calibration uses the widely adopted Zhang Zhengyou method: the camera photographs a calibration board from different angles, and the pictures are fed to a calibration toolbox, which outputs the camera's intrinsic matrix and distortion coefficients. A calibration toolbox is a set of calibration programs; because Zhang's method is so widely used, such programs can be downloaded online, and both Matlab and OpenCV provide calibration toolboxes.
The captured image is first preprocessed, including adjusting its brightness and contrast, so that subsequent image-processing steps work better. Next comes distortion correction. Because the corrected image only removes distortion while retaining the original perspective, an inverse perspective transform is then applied to obtain an image free of perspective. The IPM output is then processed: the color image is converted to grayscale, the image is denoised, and edge detection is performed. The RANSAC algorithm then extracts the feature figure and its representative key points; finally, the coordinates of the key points in physical space are computed to obtain the equation of the feature figure in the physical world, which makes it convenient to compute the positional relationship between the lens and the feature figure. Distortion correction and inverse perspective transformation are very time-consuming: taking a high-definition television image as an example, a 1920×1080 picture requires processing about two million points, so neither correction nor IPM is efficient enough to meet real-time requirements.
In practice, running a traditional distortion-correction and feature-extraction system in the Linux environment of a Raspberry Pi 3 consumes so much CPU time that a robot or drone cannot localize itself in real time.
Summary of the Invention
The present invention aims to solve the technical problem that prior-art distortion-correction and feature-extraction methods and systems process large amounts of data and occupy the CPU heavily, preventing robots or drones from positioning in real time, and provides a new, computationally efficient method and system for lens distortion correction and feature extraction.
The key of the invention is: through an improved order of computation, the lens distortion of the image is corrected quickly and the feature points are extracted, then converted into physical space, where the analytical form of the feature is obtained, realizing real-time matching and positioning.
To solve the above technical problems, the present invention adopts the following technical solution:
A method for lens distortion correction and feature extraction, comprising the following steps:
S101, acquiring lens image information;
S102, image preprocessing, which includes converting the color image to grayscale, adjusting image brightness and contrast, and performing edge detection so that the contours of the image are displayed;
S103, extracting image contour points and performing lens distortion correction: the contour points displayed in the image processed in S102 are extracted and then corrected with the correction formulas, restoring them to an undistorted plane;
S104, extracting feature key points: the RANSAC algorithm fits the feature figure, and the key points that can characterize it are extracted;
S105, inverse perspective transformation of the key points: the key points processed in S104 are inverse-perspective transformed to obtain their coordinates in the real physical world;
S106, computing the key points' physical-world equation: using the real physical-world coordinates of the key points obtained in S105, the equation of the feature figure in the physical-world coordinate system is computed.
The invention also provides a lens distortion correction and feature extraction system adapted to the above method, comprising the following modules:
an input image module, for capturing image information;
an image preprocessing module, for converting the color image to grayscale, adjusting image brightness and contrast, denoising the image, and then performing edge detection so that the contours of the image are displayed;
an image contour point extraction module, for extracting the contour points displayed after processing by the image preprocessing module;
a contour point distortion correction module, for restoring the contour points extracted by the image contour point extraction module to an undistorted plane;
a feature key point extraction module, for extracting the key points that can characterize the feature figure;
a key point inverse perspective transformation module, for inverse-perspective transforming the key points extracted by the feature key point extraction module to obtain their coordinates in the real physical world;
a feature-figure physical-world equation module, for computing the equation of the feature figure in the physical-world coordinate system from the real physical-world coordinates of the key points obtained by the key point inverse perspective transformation module.
Beneficial effects:
Compared with traditional distortion-correction and feature-extraction methods, the invention extracts the image's contour points before distortion correction and then corrects only those contour points, which greatly reduces the amount of data processed during correction and greatly improves computational efficiency. The feature-figure key points are extracted before the inverse perspective transform, reducing the data volume of that transform and further improving efficiency. The invention can therefore significantly reduce CPU load under high-fidelity requirements, markedly lower hardware cost while raising computational efficiency, and quickly obtain ground-related physical feature information to achieve high-precision indoor positioning.
Brief Description of the Drawings
Figure 1 is a module flowchart of a traditional distortion correction and feature extraction system;
Figure 2 is a module flowchart of the lens distortion correction and feature extraction system of the present invention;
Figure 3 is a flowchart of the lens distortion correction and feature extraction method of the present invention;
Figure 4 is a diagram of the relationship between the camera and the physical-world coordinate system in the present invention.
Detailed Description of the Embodiments
To enable those skilled in the art to understand the invention clearly, the invention is now described in detail with reference to specific embodiments.
As shown in Figure 2, the lens distortion correction and feature extraction system of the invention comprises the following modules: an input image module, an image preprocessing module, an image contour point extraction module, a contour point distortion correction module, a feature key point extraction module, a key point inverse perspective transformation module, and a feature-figure physical-world equation module.
Correspondingly, the lens distortion correction and feature extraction method of the invention comprises the following six steps, shown in Figure 3: S101 acquiring lens image information, S102 image preprocessing, S103 extracting image contour points and correcting lens distortion, S104 extracting feature key points, S105 inverse perspective transformation of the key points, and S106 computing the key points' physical-world equation. The specific steps are as follows:
S101: performed by the input image module. The camera captures images, and the camera's height and attitude angles are measured and recorded for subsequent modules to use.
S102: performed by the image preprocessing module. This step converts the color image to grayscale, adjusts brightness and contrast, denoises the image, and then performs edge detection so that the image's contours are displayed. Converting to grayscale greatly reduces the amount of data to process while preserving the image details and texture relative to the original; the subsequent steps use detail information, not color information, so grayscale conversion is a good choice. After conversion, the image is denoised, i.e., noise in the target image is suppressed while the detail features are preserved as far as possible, which increases the effectiveness and reliability of later processing. With these two steps completed, edge detection can display the contours and other details of the image. Preferably, a cropping step follows edge detection to remove interfering contour regions, further reducing the data volume of subsequent operations.
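The grayscale-denoise-edge-detect chain of S102 can be sketched in a few lines of numpy. The box blur and gradient-magnitude threshold below are simple stand-ins for the denoising and edge-detection operators (a real system would typically use, e.g., Gaussian blur and the Canny operator); `threshold` is an illustrative parameter, not a value from the invention.

```python
import numpy as np

def preprocess(rgb, threshold=0.2):
    """Grayscale conversion, crude denoising, and a gradient edge map.

    A minimal stand-in for the S102 pipeline; the box blur substitutes
    for denoising and the gradient threshold for edge detection.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # standard luminance weights
    # 3x3 box blur as a stand-in for the denoising step
    padded = np.pad(gray, 1, mode="edge")
    blurred = sum(padded[i:i + gray.shape[0], j:j + gray.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    # gradient-magnitude thresholding as a stand-in for edge detection
    gy, gx = np.gradient(blurred)
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)
```

The output is a binary contour image of the kind the later contour-point extraction step consumes.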
S103: performed in turn by the image contour point extraction module and the contour point distortion correction module. The contour points displayed after preprocessing are extracted; then, using the calibrated camera intrinsic matrix and distortion coefficients, the points are corrected radially and tangentially with the correction formulas, restoring them to the undistorted plane.
Extracting image contour points means saving the coordinates of the displayed contour points in the binary image produced by edge detection. To reduce the data volume, points may be sampled at equal intervals along columns, along rows (with the same or a different interval), or both together. For example, with a column sampling interval of 5 and a row sampling interval of 3, a point is taken every 5 columns to the end of the row, and sampling then continues every 5 columns after skipping 3 rows. Such a sampling process reduces the data volume by another order of magnitude. The sampling intervals must of course be chosen to suit the image size, or the contour information will be lost so severely that the feature figure can no longer be reconstructed in the physical world. Contour point distortion correction maps the saved coordinates of contour points on the distorted plane to coordinate points on the undistorted plane; on the undistorted plane, the corrected contour points reveal the true physical shape of the figure contained in the image.
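The equidistant sampling described above amounts to filtering the saved contour coordinates by their row and column indices. The helper below is one reading of that scheme, keeping every `col_step`-th column of every `row_step`-th row; the step values are parameters, with the text's example values as defaults.

```python
import numpy as np

def sample_contour_points(edge_img, col_step=5, row_step=3):
    """Collect contour-point coordinates from a binary edge image,
    keeping only every `col_step`-th column of every `row_step`-th row.
    One interpretation of the equidistant sampling in the description."""
    rows, cols = np.nonzero(edge_img)
    keep = (rows % row_step == 0) & (cols % col_step == 0)
    return list(zip(rows[keep].tolist(), cols[keep].tolist()))
```

Only the surviving (row, column) pairs are passed on to distortion correction, which is where the order-of-magnitude data reduction comes from.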
The camera imaging model established in the contour point distortion correction module is as follows:
Let (X, Y, Z) be a three-dimensional world coordinate point whose projection on the image has two-dimensional coordinates (u, v). The projection relationship is:
(x, y, z)^T = R·(X, Y, Z)^T + T
where R and T are the rotation matrix and translation vector of the camera extrinsics, respectively, and (x, y, z) is the point transformed from world coordinates into the camera coordinate system.
x′ = x/z
y′ = y/z
r^2 = x′^2 + y′^2
θ = atan(r)
θ′ = θ(1 + k_1·θ^2 + k_2·θ^4 + k_3·θ^6 + k_4·θ^8)
x″ = (θ′/r)·x′
y″ = (θ′/r)·y′
u = f_u·x″ + c_u
v = f_v·y″ + c_v
where:
f_u: camera horizontal focal length, obtained from calibration;
f_v: camera vertical focal length, obtained from calibration;
c_u: horizontal coordinate of the image center point on the camera's optical axis, obtained from calibration;
c_v: vertical coordinate of the image center point on the camera's optical axis, obtained from calibration;
k_1, k_2, k_3, k_4: distortion coefficients of the camera, obtained from calibration.
The above are the formulas that transform a three-dimensional world point (X, Y, Z) into image coordinates (u, v). Point correction is simply the inverse of this process, and the same formulas apply.
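The forward projection formulas above can be transcribed directly. The sketch below assumes, for simplicity, identity extrinsics (the point is already in camera coordinates); the parameter values in the test are illustrative, not calibration results.

```python
import math

def project_point(X, Y, Z, fu, fv, cu, cv, k):
    """Project a 3-D point to pixel coordinates with the distortion
    model above; k = (k_1, k_2, k_3, k_4).  Extrinsics are assumed to
    be identity here, i.e. (X, Y, Z) is already in camera coordinates."""
    xp, yp = X / Z, Y / Z                      # x' = x/z, y' = y/z
    r = math.hypot(xp, yp)                     # r^2 = x'^2 + y'^2
    theta = math.atan(r)
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                         + k[2] * theta**6 + k[3] * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0  # limit θ'/r -> 1 as r -> 0
    xpp, ypp = scale * xp, scale * yp          # x'' = (θ'/r)·x'
    return fu * xpp + cu, fv * ypp + cv        # u, v
```

Contour-point correction runs this mapping in reverse, taking saved pixel coordinates back to the undistorted plane.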
S104: performed by the feature key point extraction module. RANSAC first fits the feature figure, and the key points that can characterize it are then extracted.
RANSAC key-point extraction is done on the undistorted plane: the feature figure is found first, and its key points are then extracted. Depending on the figure, a reasonable number of representative key points can be selected as needed, rather than passing every point to the subsequent steps. The representative points should be chosen so that the original figure can certainly be recovered. For figures with an analytical form, such as straight lines or circles, there is no need to forward every contour point on the line or circle: two points determine a line and three points determine a circle, so selecting two contour points on a line and three on a circle and sending them on for processing is enough to reconstruct these figures in the physical world.
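A minimal RANSAC line fit along these lines: each hypothesis is built from the minimal 2-point sample, the consensus set is scored by point-to-line distance, and two extreme inliers are kept as the line's representative key points. The iteration count, tolerance, and scoring details are illustrative choices, not values fixed by the invention.

```python
import random

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit one line to `points` with RANSAC and return 2 representative
    key points (two extreme inliers), which suffice to characterize it."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # line through the 2 sampled points, in normal form a·x + b·y + c = 0
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # two extreme inliers are enough to represent the fitted line
    return min(best_inliers), max(best_inliers)
```

Only the two returned key points, not the full consensus set, need to be forwarded to the inverse perspective transform.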
S105: performed by the key point inverse perspective transformation module. The inverse perspective transform converts the coordinates of the extracted feature key points into real physical-world coordinates, expressed in the camera coordinate system, so this step requires the camera's height and attitude angles. The key-point coordinates lie on the undistorted plane, converted from the distorted plane: only the distortion has been removed, and the perspective effect of the camera's imaging remains. The inverse perspective transform therefore maps points on the undistorted plane into real physical space, where there is no perspective effect and the positional relationships among the transformed points truly reflect those in the physical world.
The formula of the inverse perspective transformation is as follows:
Figure PCTCN2018096004-appb-000002
T is the transformation matrix, where:
f_u: camera horizontal focal length, obtained from calibration;
f_v: camera vertical focal length, obtained from calibration;
c_u: horizontal coordinate of the image center point on the camera's optical axis, obtained from calibration;
c_v: vertical coordinate of the image center point on the camera's optical axis, obtained from calibration;
h: camera height, obtained by measurement;
s_1: sine sin α of the camera pitch (depression) angle α;
s_2: sine sin β of the camera yaw angle β;
c_1: cosine cos α of the camera pitch angle α;
c_2: cosine cos β of the camera yaw angle β.
Suppose a key point has coordinates (u, v); then:
Figure PCTCN2018096004-appb-000003
where (x, y) are the physical-world coordinates corresponding to (u, v).
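The homogeneous arithmetic of this transformation, multiplying (u, v, 1) by T and dividing out the scale, can be sketched as follows. The exact entries of T depend on the calibration and pose quantities listed above, so the matrix in the test is arbitrary and purely illustrative of the mechanics.

```python
import numpy as np

def ipm_keypoints(T, keypoints):
    """Map pixel key points (u, v) to ground coordinates (x, y) through
    a 3x3 inverse-perspective transformation matrix T, using homogeneous
    coordinates.  T's construction from f_u, f_v, c_u, c_v, h and the
    pitch/yaw angles follows the formula given in the description."""
    uv1 = np.column_stack([keypoints, np.ones(len(keypoints))])  # (u, v, 1)
    xyw = uv1 @ T.T                      # homogeneous ground coordinates
    return xyw[:, :2] / xyw[:, 2:3]      # divide out the scale w
```

Because only a handful of key points pass through this step, the per-frame cost is negligible compared with transforming every pixel.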
S106: performed by the feature-figure physical-world equation module. From the obtained coordinates of the key points in the camera coordinate system, the equations of the feature figures described by those key points can be recomputed, making it convenient to calculate the positional relationship between the camera and the feature figure and complete high-precision positioning.
Compared with traditional distortion-correction and feature-extraction methods and systems, the invention improves running efficiency by several orders of magnitude, because it optimizes precisely the most time-consuming steps of the traditional pipeline: distortion correction and inverse perspective transformation.
Taking a 1920×1080 picture as an example, the traditional method must process two million points in both distortion correction and inverse perspective transformation, whereas the new method processes only about a thousand: for an image of this size roughly a thousand contour points can be extracted, three orders of magnitude fewer than two million, so the efficiency of correction improves greatly. A feature figure can be represented by very few key points: a line or line segment by two points, and a circle by three points on it. Relative to the fitted figure, the previously extracted contour points still contain redundancy; only the representative key points that characterize the figure are sent to the inverse perspective transform, so the data volume of that step drops well below the thousand-odd contour points and efficiency improves again. After RANSAC extracts the feature key points, the number of points to process is three orders of magnitude smaller than the thousands of points before RANSAC. In experiments in the Linux environment of a Raspberry Pi 3, the method processes 30 frames per second, six times the 5 frames per second achieved by the traditional method, and the larger the image, the more pronounced the advantage. The method of the invention therefore provides good real-time performance.
Embodiment 1
The application scenario of this embodiment is indoor positioning in an indoor area whose ground features can be extracted. A camera first obtains the feature information of the edge lines of the indoor floor tiles; the X-axis and Y-axis directions are established with reference to Figure 4, and a fast transformation yields the position of the camera center point relative to the lines in the captured picture.
S101: acquire the lens image. Images are captured for subsequent modules to process.
S102: image preprocessing. The captured image is converted from color to grayscale; edge detection displays the contours of the image; the image is cropped to remove interfering contour regions.
Converting the image from color to grayscale reduces the amount of data in subsequent processing, speeds up execution, and makes the image simpler to handle.
Edge detection is used to display the contour information of the image. Significant changes in image properties usually reflect important events and changes in the scene; edge detection marks the points where brightness changes sharply, and these points form the image's edge features. Edge detection greatly reduces the data volume of the original image and discards information that can be considered irrelevant while retaining the image's important structural properties, facilitating the later extraction of feature information. In this embodiment, the Canny operator can be used for edge detection.
Cropping the image: because the camera is fixedly mounted on a cart, it may capture the fixed contours of parts of the cart itself, which carry no useful information. Each captured frame can be differenced against an original picture containing the cart's contour to remove this fixed interference, which improves the robustness of feature-point fitting.
S103: extract and correct the image contour points. The contours displayed in the image processed in S102 are extracted, and, using the calibrated camera intrinsic matrix and distortion coefficients, the contour points are corrected radially and tangentially with the correction formulas, restoring them to the undistorted plane.
With the contour-point information of the ground figure obtained in S102, plus the calibrated intrinsic matrix and distortion coefficients, the contour points can be corrected efficiently. Correcting points in this way is far more efficient than correcting the whole picture and greatly reduces CPU time. The method runs in the Linux environment of a Raspberry Pi 3, which greatly reduces hardware cost.
The obtained contour points can be the parameters used to describe the ground-image features. The resulting feature figure can be any analytical geometric figure describable by parameters; its specific shape can be a straight line, a circle, a rhombus, and so on, and is not limited in this embodiment. Taking indoor floor tiles as an example, when the robot travels in the room, the feature figure fitted from the camera images can be the edge lines of the tiles.
In this embodiment, the feature figure can be fitted from the corrected feature points by the Hough transform, or fitted from the image by the RANSAC algorithm.
S104: RANSAC extracts the key points. RANSAC first fits the feature figure, and the key points that characterize it are then extracted.
The system first fits straight lines to the contour points, because the floor lines are straight. Line fitting uses the random sample consensus algorithm, and from the contour points of each detected floor line only two key points are selected, since two points suffice to characterize a line. This further reduces the data volume of subsequent computation.
S105: inverse perspective transformation of the key points. The key points obtained in S104 are inverse-perspective transformed to obtain their physical coordinates in the real world.
Depending on the needs of the experiment, the algorithm extracts straight lines, and the program is set to extract at most two lines of the square floor at a time, so four key points are obtained: one line extracted by RANSAC has start point (u_s1, v_s1) and end point (u_e1, v_e1), and the other has start point (u_s2, v_s2) and end point (u_e2, v_e2). The matrix generated from the start and end points of each line is:
Figure PCTCN2018096004-appb-000004
Importing the generated matrix M into the inverse-perspective image-to-ground transformation matrix yields the matrix of the corresponding physical-world coordinate points, in the following form:
Figure PCTCN2018096004-appb-000005
where T is the inverse perspective transformation matrix, and the resulting points (x_s1, y_s1), (x_e1, y_e1), (x_s2, y_s2), (x_e2, y_e2) are the image key points transformed into physical-world coordinates, whose origin is the camera's optical-axis point.
The algorithm can also fit other geometric figures, compute the corresponding geometric equations, and further compute the corresponding physical information.
S106: compute the key points' physical-world equation. Using the real-world physical coordinates of the key points obtained in S105, the equation of the feature figure in the physical-world coordinate system is computed.
With the physical-world coordinates of the key points, the geometric equation they represent can be obtained. At this point the geometric figure in the picture has been converted into a geometric figure in physical-world coordinates, and the line equation in those coordinates makes it easy to compute the distance from the camera to a floor-edge line, achieving high-precision indoor positioning.
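As a sketch of this final step for the floor-line case: from the two physical-world key points of a line one can form its equation a·x + b·y + c = 0 and read off the distance from the coordinate origin (the camera's optical-axis point). The description does not spell out a distance formula; the standard point-to-line distance is used here.

```python
import math

def line_through(p1, p2):
    """Line a·x + b·y + c = 0 through two physical-world key points."""
    (x1, y1), (x2, y2) = p1, p2
    return y2 - y1, x1 - x2, x2 * y1 - x1 * y2

def distance_to_camera(p1, p2):
    """Distance from the origin (the camera's optical-axis point on the
    ground plane) to the floor-edge line through p1 and p2."""
    a, b, c = line_through(p1, p2)
    return abs(c) / math.hypot(a, b)   # |a·0 + b·0 + c| / sqrt(a² + b²)
```

Repeating this for both extracted floor lines gives the camera's offset along both tile directions, which is the positioning result of the embodiment.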
It is apparent that the above embodiments are merely illustrative examples and are not intended to limit the scope of implementation. Those skilled in the art may make other changes or modifications of various forms on the basis of the above description; it is neither necessary nor possible to enumerate all implementations here. Any common-sense modification made on the basis of the embodiments of the present invention falls within the protection scope of the present invention.

Claims (9)

  1. A method for lens distortion correction and feature extraction, comprising the following steps:
    S101, acquiring lens image information;
    S102, image preprocessing, wherein the image preprocessing includes converting a color image into a grayscale image, adjusting image brightness and contrast, applying noise reduction to the image, and then performing edge detection so as to display the contours of the image;
    S103, extracting image contour points and performing lens distortion correction: extracting the contour points displayed in the image processed by S102, then correcting these contour points with a correction formula so as to restore them onto an undistorted plane;
    S104, extracting key points of the graphic feature: fitting a feature figure with a feature-fitting algorithm, then extracting the key points that can characterize the feature figure;
    S105, inverse perspective transformation of the key points: applying an inverse perspective transformation to the key points processed in S104 to obtain the coordinates of these key points in the real physical world;
    S106, computing the physical-world equation of the key points: using the real physical-world coordinates of the key points obtained in S105, computing the equation of the feature figure in the physical-world coordinate system.
  2. The method according to claim 1, wherein step S102 further comprises image cropping, i.e. removing interfering contour portions.
  3. The method according to claim 1, wherein in S103 only the extracted contour points are corrected during distortion correction, rather than the entire image, and when extracting contour points, points are sampled at equal intervals along both the columns and the rows.
  4. The method according to claim 1, wherein in S103 the lens distortion correction includes radial and tangential corrections, and the correction coefficients are obtained from a prior lens calibration.
  5. The method according to claim 4, wherein the prior lens calibration comprises capturing calibration-board images and then inputting them into a calibration toolbox, which computes the camera intrinsic matrix and distortion coefficients.
  6. The method according to claim 1, wherein in S104, for a feature figure having an analytic function expression, the selected contour points are representative key points that are certain to determine the shape of the feature figure.
  7. A system for lens distortion correction and feature extraction, comprising the following modules:
    an input-image module for acquiring image information;
    an image-preprocessing module for converting a color image into a grayscale image, adjusting image brightness and contrast, applying noise reduction to the image, and then performing edge detection so as to display the contours of the image;
    an image-contour-point extraction module for extracting the contour points displayed after processing by the image-preprocessing module;
    a contour-point distortion-correction module for restoring the contour points extracted by the image-contour-point extraction module onto an undistorted plane;
    a graphic-feature key-point extraction module for extracting the key points that can characterize the feature figure;
    a key-point inverse-perspective-transformation module for applying an inverse perspective transformation to the key points extracted by the graphic-feature key-point extraction module, to obtain the coordinates of these key points in the real physical world;
    a feature-figure physical-world-equation module for computing the equation of the feature figure in the physical-world coordinate system, using the real physical-world coordinates of the key points obtained by the key-point inverse-perspective-transformation module.
  8. The system according to claim 7, wherein the contour-point distortion-correction module includes a camera-calibration module, which is mainly used to capture calibration-board images and then input them into a calibration toolbox, which computes the camera intrinsic matrix and distortion coefficients.
  9. The system according to claim 8, wherein the contour-point lens-distortion-correction module employs radial and tangential corrections, the correction coefficients being derived from the camera intrinsic matrix and distortion coefficients computed by the calibration toolbox.
PCT/CN2018/096004 2017-11-28 2018-07-17 Method and system for lens distortion correction and feature extraction WO2019105044A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711216895.2A CN109842756A (en) 2017-11-28 2017-11-28 A kind of method and system of lens distortion correction and feature extraction
CN201711216895.2 2017-11-28

Publications (1)

Publication Number Publication Date
WO2019105044A1 true WO2019105044A1 (en) 2019-06-06

Family

ID=66663837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096004 WO2019105044A1 (en) 2017-11-28 2018-07-17 Method and system for lens distortion correction and feature extraction

Country Status (2)

Country Link
CN (1) CN109842756A (en)
WO (1) WO2019105044A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110471086B (en) * 2019-09-06 2021-12-03 北京云迹科技有限公司 Radar fault detection system and method
CN111223055B (en) * 2019-11-20 2023-05-02 南京拓控信息科技股份有限公司 Train wheel tread image correction method
CN112037192A (en) * 2020-08-28 2020-12-04 西安交通大学 Method for collecting burial depth information in town gas public pipeline installation process
CN112019747B (en) * 2020-09-01 2022-06-17 北京德火科技有限责任公司 Foreground tracking method based on holder sensor
CN116399874B (en) * 2023-06-08 2023-08-22 华东交通大学 Method and program product for shear speckle interferometry to non-destructive detect defect size
CN117237669B (en) * 2023-11-14 2024-02-06 武汉海微科技有限公司 Structural member feature extraction method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175222B (en) * 2011-03-04 2012-09-05 南开大学 Crane obstacle-avoidance system based on stereoscopic vision
CN102982524B (en) * 2012-12-25 2015-03-25 北京农业信息技术研究中心 Splicing method for corn ear order images
CN104200454B (en) * 2014-05-26 2017-02-01 深圳市中瀛鑫科技股份有限公司 Fisheye image distortion correction method and device
CN106023170B (en) * 2016-05-13 2018-10-23 成都索贝数码科技股份有限公司 A kind of binocular 3D distortion correction methods based on GPU processors

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038163A (en) * 2007-02-07 2007-09-19 北京航空航天大学 Single-vision measuring method of space three-dimensional attitude of variable-focus video camera
US20150093042A1 (en) * 2012-06-08 2015-04-02 Huawei Technologies Co., Ltd. Parameter calibration method and apparatus
CN105224908A (en) * 2014-07-01 2016-01-06 北京四维图新科技股份有限公司 A kind of roadmarking acquisition method based on orthogonal projection and device
US20170270647A1 (en) * 2014-12-09 2017-09-21 SZ DJI Technology Co., Ltd. Image processing method, device and photographic apparatus
CN105825470A (en) * 2016-03-10 2016-08-03 广州欧科信息技术股份有限公司 Fisheye image correction method base on point cloud image
CN105957041A (en) * 2016-05-27 2016-09-21 上海航天控制技术研究所 Wide-angle lens infrared image distortion correction method

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686959A (en) * 2019-10-18 2021-04-20 菜鸟智能物流控股有限公司 Method and device for correcting image to be recognized
CN112686959B (en) * 2019-10-18 2024-06-11 菜鸟智能物流控股有限公司 Correction method and device for image to be identified
CN111199556A (en) * 2019-12-31 2020-05-26 同济大学 Indoor pedestrian detection and tracking method based on camera
CN111199556B (en) * 2019-12-31 2023-07-04 同济大学 Indoor pedestrian detection and tracking method based on camera
CN111260565A (en) * 2020-01-02 2020-06-09 北京交通大学 Distorted image correction method and system based on distorted distribution map
CN111260565B (en) * 2020-01-02 2023-08-11 北京交通大学 Distortion image correction method and system based on distortion distribution diagram
CN113324578A (en) * 2020-01-09 2021-08-31 西北农林科技大学 Appearance quality and storage index measuring instrument for fresh apple
CN111259971A (en) * 2020-01-20 2020-06-09 上海眼控科技股份有限公司 Vehicle information detection method and device, computer equipment and readable storage medium
CN111415307A (en) * 2020-03-13 2020-07-14 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111415307B (en) * 2020-03-13 2024-03-26 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111311693A (en) * 2020-03-16 2020-06-19 威海经济技术开发区天智创新技术研究院 Online calibration method and system for multi-view camera
CN111311693B (en) * 2020-03-16 2023-11-14 威海经济技术开发区天智创新技术研究院 Online calibration method and system for multi-camera
CN113822807A (en) * 2020-07-07 2021-12-21 湖北亿立能科技股份有限公司 Virtual ruler calculation method based on second-order radial distortion correction method
CN112819772B (en) * 2021-01-28 2024-05-03 南京挥戈智能科技有限公司 High-precision rapid pattern detection and recognition method
CN112819772A (en) * 2021-01-28 2021-05-18 南京挥戈智能科技有限公司 High-precision rapid pattern detection and identification method
CN112927306B (en) * 2021-02-24 2024-01-16 深圳市优必选科技股份有限公司 Calibration method and device of shooting device and terminal equipment
CN112927306A (en) * 2021-02-24 2021-06-08 深圳市优必选科技股份有限公司 Calibration method and device of shooting device and terminal equipment
CN113984569A (en) * 2021-10-26 2022-01-28 深圳市地铁集团有限公司 Hob abrasion image identification and measurement method, hob detection system and shield machine
CN114170246B (en) * 2021-12-08 2024-05-17 广东奥普特科技股份有限公司 Positioning method for precision displacement platform
CN114170246A (en) * 2021-12-08 2022-03-11 广东奥普特科技股份有限公司 Positioning method of precision displacement platform
CN114494038B (en) * 2021-12-29 2024-03-29 扬州大学 Target surface perspective distortion correction method based on improved YOLOX-S
CN114998571B (en) * 2022-05-27 2024-04-12 中国科学院重庆绿色智能技术研究院 Image processing and color detection method based on fixed-size markers
CN114998571A (en) * 2022-05-27 2022-09-02 中国科学院重庆绿色智能技术研究院 Image processing and color detection method based on fixed-size marker
CN115409980A (en) * 2022-09-02 2022-11-29 重庆众仁科技有限公司 Method and system for correcting distorted image
CN115409980B (en) * 2022-09-02 2023-12-22 重庆众仁科技有限公司 Distortion image correction method and system
CN115775282B (en) * 2023-01-29 2023-06-02 广州市易鸿智能装备有限公司 Method, device and storage medium for correcting image distortion at high speed on line
CN115775282A (en) * 2023-01-29 2023-03-10 广州市易鸿智能装备有限公司 Method, device and storage medium for high-speed online image distortion correction
CN117011185B (en) * 2023-08-21 2024-04-19 自行科技(武汉)有限公司 Electronic rearview mirror CMS image correction method and system and electronic rearview mirror
CN117011185A (en) * 2023-08-21 2023-11-07 自行科技(武汉)有限公司 Electronic rearview mirror CMS image correction method and system and electronic rearview mirror

Also Published As

Publication number Publication date
CN109842756A (en) 2019-06-04

Similar Documents

Publication Publication Date Title
WO2019105044A1 (en) Method and system for lens distortion correction and feature extraction
CN109544456B (en) Panoramic environment sensing method based on two-dimensional image and three-dimensional point cloud data fusion
CN109035320B (en) Monocular vision-based depth extraction method
CN109269430B (en) Multi-standing-tree breast height diameter passive measurement method based on deep extraction model
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
WO2022156755A1 (en) Indoor positioning method and apparatus, device, and computer-readable storage medium
CN114399554B (en) Calibration method and system of multi-camera system
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
CN107084680B (en) A kind of target depth measurement method based on machine monocular vision
CN111402330B (en) Laser line key point extraction method based on planar target
CN109859137B (en) Wide-angle camera irregular distortion global correction method
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
Kurban et al. Plane segmentation of kinect point clouds using RANSAC
CN111724446B (en) Zoom camera external parameter calibration method for three-dimensional reconstruction of building
CN103700082B (en) Image split-joint method based on dual quaterion relative orientation
CN115375842A (en) Plant three-dimensional reconstruction method, terminal and storage medium
CN107680035B (en) Parameter calibration method and device, server and readable storage medium
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
Xinmei et al. Passive measurement method of tree height and crown diameter using a smartphone
WO2008032375A1 (en) Image correcting device and method, and computer program
CN114529681A (en) Hand-held double-camera building temperature field three-dimensional model construction method and system
CN112508885B (en) Method and system for detecting three-dimensional central axis of bent pipe
CN107990825B (en) High-precision position measuring device and method based on priori data correction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18884190

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18884190

Country of ref document: EP

Kind code of ref document: A1
