CN117808880A - Monocular vision space cooperation target pose measurement method - Google Patents

Monocular vision space cooperation target pose measurement method

Info

Publication number
CN117808880A
CN117808880A
Authority
CN
China
Prior art keywords
image
pose
target
coordinates
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410012782.4A
Other languages
Chinese (zh)
Inventor
陈元枝
王展
姜文英
韦柳夏
聂锟
王旭哲
吴玉霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202410012782.4A priority Critical patent/CN117808880A/en
Publication of CN117808880A publication Critical patent/CN117808880A/en
Pending legal-status Critical Current


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of digital photogrammetry, and in particular to a monocular vision method for measuring the pose of a spatial cooperation target. Light-emitting diodes are used as target feature points so that the relative pose can be measured under poor lighting conditions. Image data of the feature target are acquired at different angles and distances, converted to grayscale, and filtered for noise reduction; the filtered images are cluster-analyzed to screen out the ROI region, the boundary point coordinates of the region are extracted, and an ellipse is fitted to the extracted boundary to obtain the center pixel coordinates of the feature points. The initial pose is calculated by combining these with the feature point coordinates in the target coordinate system and the calibrated camera intrinsic matrix, the obtained pose data are optimized with a Gauss-Newton optimization algorithm, and the pose data are finally output. Through this processing, the method obtains more accurate feature point center positioning coordinates and also alleviates the weak robustness of the EPnP algorithm to image noise.

Description

Monocular vision space cooperation target pose measurement method
Technical Field
The invention relates to the technical field of digital photogrammetry, in particular to a monocular vision space cooperation target pose measuring method.
Background
In recent years, with the continuous development of automation and intelligence, pose measurement has played an increasingly important role, especially in the fields of space rendezvous, mechanical assembly, robot control, virtual reality, docking of unmanned equipment, and related military applications. Accurately and rapidly acquiring the relative pose of two moving objects is an important guarantee that a control system can stably carry out further operations.
Most existing research on relative pose calculation adopts a single-sensor detection mode, such as inertial sensors, electromagnetic sensors, optical cameras, or laser sensors. Because only one detection mode is used, such systems are often limited by the positioning accuracy of feature points, a restricted detection range, and susceptibility to ambient light.
Therefore, a spatial pose detection system is needed that effectively improves the positioning accuracy of the target feature point center coordinates, avoids the inability to measure the relative pose in poorly lit and dim environments, and improves mobility and portability.
Disclosure of Invention
The invention aims to provide a monocular vision spatial cooperation target pose measurement method that addresses the insufficient robustness and accuracy of existing pose calculation methods under image noise interference, and that enables relative pose measurement between a camera and a target under poor lighting conditions.
In order to achieve the above purpose, the invention provides a monocular vision space cooperation target pose measurement method, which comprises the following steps:
step 1: designing a cooperative target comprising a light emitting diode;
step 2: calibrating a camera and acquiring 20 calibration plate images with different angles and different distances by using the camera;
step 3: acquiring an image by using the camera and inputting the image to a PC (personal computer) terminal;
step 4: preprocessing an input image, and fitting and obtaining a characteristic point center pixel coordinate by using an improved characteristic point center positioning algorithm;
step 5: performing pose calculation and solving the optimal solution by using the Gauss-Newton method.
Optionally, the cooperative target is a square target with a black background, and five feature points are formed by 5 light-emitting diodes, one placed at each of the four corner points and one at the center of the square.
Optionally, in step 2, the camera is calibrated through a MATLAB toolbox to obtain the camera intrinsic matrix K.
Optionally, the improved feature point center positioning algorithm execution process includes the following steps:
carrying out grayscale conversion on the frame image to generate a two-dimensional grayscale image;
filtering noise using median filtering, mean filtering and Gaussian filtering;
performing cluster analysis on the noise-reduced image using the K-means clustering method, dividing the whole image into three parts: background, halo and light source;
determining an ROI region and highlighting a light source region;
processing the image by using a findContours function to obtain boundary coordinates of the ROI;
and carrying out ellipse fitting by using a fitEllipse function to obtain the central pixel coordinates of the feature points.
Optionally, the background, halo and light source regions correspond to black background image pixels, feature point halo image pixels and feature point source image pixels, respectively.
Optionally, the process of determining the ROI region and highlighting the light source region specifically comprises: setting a threshold and binarizing the image with an image binarization function, so that the background and halo regions of the cluster-analyzed image are merged and the light source region is highlighted.
Optionally, in step 5, the obtained feature point center pixel coordinates, the feature point coordinates in the target coordinate system, and the calibrated camera intrinsic matrix are taken as input; the initial data are calculated using the EPnP algorithm, and the optimal solution is then obtained with the Gauss-Newton method, yielding a more accurate rotation matrix R and translation vector t of the relative pose.
The invention provides a monocular vision spatial cooperation target pose measurement method that uses light-emitting diodes as target feature points to enable relative pose measurement under poor lighting conditions. Image data of the feature target are collected at different angles and distances; the images are converted to grayscale, filtered for noise reduction, and cluster-analyzed; the ROI regions are screened out and their boundaries extracted to obtain the boundary point coordinates of the regions; an ellipse is fitted to the extracted boundary points to obtain the center pixel coordinates of the feature points; the initial pose is calculated by combining the feature point coordinates in the target coordinate system with the calibrated camera intrinsic matrix; and the obtained pose data are optimized with the Gauss-Newton optimization algorithm before the final pose data are output. Through these processing steps, the invention obtains more accurate feature point center positioning coordinates and reduces the weak robustness of the EPnP algorithm to image noise.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of the execution steps of a monocular vision spatial cooperation target pose measurement method according to the present invention.
FIG. 2 is a flowchart of the overall flow of the monocular vision spatial cooperation target pose measurement method in an embodiment of the present invention;
FIG. 3 is a schematic diagram of camera calibration with the checkerboard calibration plate in an embodiment of the present invention;
FIG. 4 is a schematic diagram of pinhole imaging in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the cooperative target structure in an embodiment of the present invention;
FIG. 6 is a comparison of feature point center positioning results between the method of the present invention and a conventional algorithm in an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
Referring to fig. 1, the invention provides a monocular vision spatial cooperation target pose measurement method, which comprises the following steps:
s1: designing a cooperative target comprising a light emitting diode;
s2: calibrating a camera and acquiring 20 calibration plate images with different angles and different distances by using the camera;
s3: acquiring an image by using the camera and inputting the image to a PC (personal computer) terminal;
s4: preprocessing an input image, and fitting and obtaining a characteristic point center pixel coordinate by using an improved characteristic point center positioning algorithm;
s5: performing pose calculation and solving the optimal solution by using the Gauss-Newton method.
The following is further described in connection with the specific implementation steps:
the specific flow of the monocular vision space cooperation target pose measuring method is shown as 2, and the whole flow is as follows: calibrating a camera, designing a cooperative target with characteristic points which can be effectively identified, collecting image data of the characteristic target at different angles and distances, carrying out graying treatment on the image, carrying out filtering noise reduction treatment, carrying out clustering analysis on the filtered image, screening out a region of interest (ROI), carrying out boundary extraction on the region of interest (ROI) to obtain boundary point coordinates of the region, carrying out ellipse fitting on the extracted boundary according to the boundary point coordinates to obtain center pixel coordinates of the characteristic points, solving an initial pose by combining the coordinates of a characteristic point target coordinate system and a calibrated camera internal reference matrix, optimizing the obtained pose data by using a Gaussian Newton optimization algorithm, and finally outputting the pose data.
For camera calibration, the Camera Calibrator app in the MATLAB toolbox is used to calibrate the camera intrinsics, as shown in fig. 3. The calibration plate is a black-and-white checkerboard with 15 mm × 15 mm squares. Twenty calibration plate images in different poses and positions are acquired and input into MATLAB for calibration, and the calibrated camera intrinsic matrix is obtained and stored in the MATLAB workspace.
The camera intrinsic matrix obtained by calibration takes the standard form

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},$$

with the calibrated focal lengths $f_x, f_y$ and principal point $(c_x, c_y)$ expressed in pixels.
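The filing performs this calibration with MATLAB's Camera Calibrator. Purely as an illustrative sketch of the same checkerboard procedure (an assumption, not the patent's tooling: the Python/OpenCV functions cv2.findChessboardCorners and cv2.calibrateCamera, the 9 × 6 inner-corner grid, and the calib/*.png paths are all stand-ins), K could be estimated as follows:

```python
import glob
import cv2
import numpy as np

# 15 mm checkerboard squares; the inner-corner grid size is an assumption
SQUARE_MM, GRID = 15.0, (9, 6)

# 3D corner coordinates in the calibration-plate frame (Z = 0 plane)
objp = np.zeros((GRID[0] * GRID[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:GRID[0], 0:GRID[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):  # the 20 calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, GRID)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 intrinsic matrix; dist holds lens distortion coefficients
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms, "\nK =\n", K)
```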
For target design, since the measurement system adopts the EPnP pose calculation method, and since with fewer than 3 feature points there are infinitely many correspondences between the pixel projection points and the three-dimensional space points so that the relative pose cannot be obtained, the number of feature points is set to 5. For the camera field-of-view calculation, the camera adopts a 4MP CMOS imaging sensor with a video resolution of 640 × 480, a focal length of 3.6 mm, and a pixel size u of 9.7 µm; the horizontal and vertical view angles of the camera can be obtained from

$$\theta_h = 2\arctan\frac{640\,u}{2f}, \qquad \theta_v = 2\arctan\frac{480\,u}{2f}.$$
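A quick numeric check of the view-angle formula above (a sketch only; the filing's own computed angles are not reproduced here):

```python
import math

f = 3.6e-3          # focal length [m]
u = 9.7e-6          # pixel size [m]
nx, ny = 640, 480   # video resolution

fov_h = 2 * math.degrees(math.atan(nx * u / (2 * f)))
fov_v = 2 * math.degrees(math.atan(ny * u / (2 * f)))
print(f"horizontal FOV ~ {fov_h:.1f} deg, vertical FOV ~ {fov_v:.1f} deg")
# with these values: roughly 81.5 x 65.8 degrees
```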
the working distance of the camera is set to be 0.1m-2m, in order to ensure that the cooperative target is in the field of view in the measuring process, the characteristic points can be effectively identified, and the pixels occupied by the cooperative target and the characteristic points at the relative distances of 0.1m and 2m are calculated according to the principle of small-hole imaging.
The pinhole imaging schematic is shown in fig. 4, in which W represents the actual size of an object, d is the object distance, f is the camera focal length, and $x_{res}$ and $y_{res}$ represent the number of pixels the object occupies in the x and y directions of the sensor. These can be obtained from

$$x_{res} = \frac{W_x f}{d\,u}, \qquad y_{res} = \frac{W_y f}{d\,u},$$

which gives the number of pixels occupied by the object at different object distances. The cooperative target structure determined from the above requirements and the calculation results is shown in fig. 5.
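As a worked example of the pinhole relation above (the cooperative target's physical size is not stated in this text, so the 100 mm width W below is a hypothetical placeholder):

```python
f = 3.6e-3   # focal length [m]
u = 9.7e-6   # pixel size [m]
W = 0.100    # hypothetical target width [m] -- not given in the patent text

def pixels_occupied(W, d, f=f, u=u):
    """Pixels spanned by an object of size W at object distance d (pinhole model)."""
    return W * f / (d * u)

for d in (0.1, 2.0):  # the stated working-distance extremes
    print(f"d = {d} m -> ~{pixels_occupied(W, d):.0f} px")
# d = 0.1 m -> ~371 px; d = 2.0 m -> ~19 px
```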
the method comprises the steps of carrying out comprehensive filtering on an image, adopting a mode of combining median filtering and mean filtering, defining a 3*3 sliding window, moving the window to each pixel position of the image, sequencing pixels in the window, removing intermediate values to serve as new values of central pixels, and repeating the process until the whole image is covered. And then, carrying out average filtering on the image, defining a 3*3 sliding window, moving the window to each pixel position of the image, averaging the pixel values in the window to obtain a new value of a central pixel, and repeating the process until the whole image is covered.
The filtered image is then cluster-analyzed. The original image is first converted from a 480-row, 640-column matrix into a column vector with 307200 elements using the reshape() function; this column vector holds the gray-level feature values used during clustering. The number of clusters is set to 3 and the column vector is clustered with the kmeans() function, which proceeds as follows (a code sketch follows the list):
(1) Randomly selecting K data points as initial clustering centers;
(2) Assigning each data point to the cluster in which the cluster center closest to it is located;
(3) Calculating the mean value of all members of each cluster, and taking the mean value as a new cluster center;
(4) Repeating steps (2) and (3) until the cluster centers change little or the preset number of iterations is reached, finally obtaining a clustered image with three gray values.
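The clustering step sketched above could look as follows in Python with OpenCV's cv2.kmeans (a stand-in assumption for the MATLAB kmeans() call named in the text; `gray` is the filtered image from the previous step):

```python
import cv2
import numpy as np

pixels = gray.reshape(-1, 1).astype(np.float32)   # 307200x1 feature vector

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# replace every pixel with its cluster center -> image with three gray values
clustered = centers[labels.flatten()].reshape(gray.shape).astype(np.uint8)
```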
To extract the ROI region, the clustered image is binarized. A threshold T is set that is smaller than the gray value of the light-emitting region and larger than the gray values of the background and halo regions; the gray value at each (x, y) coordinate of the clustered image is traversed, values smaller than T are set to 0 and values larger than T are set to 1, finally yielding an ROI region with a clear boundary.
Specifically, a global threshold for image binarization is first calculated using the graythresh function, which computes the threshold with the Otsu method, i.e., by minimizing the intra-class variance of the black and white pixels in the image. The obtained global threshold T and the image to be binarized are then passed to an image binarization function, which divides the pixels into two classes according to whether their gray value is above or below the global threshold: white pixels and black pixels. This completes the extraction of the ROI region (the white area is the ROI region).
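An equivalent sketch of the Otsu binarization in Python/OpenCV (the text uses MATLAB's graythresh; cv2.threshold with THRESH_OTSU computes the same Otsu global threshold, so this is a substitute, not the filing's own code):

```python
import cv2

# `clustered` is the three-gray-level image from the k-means step
T, roi_mask = cv2.threshold(clustered, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu global threshold:", T)
# roi_mask: background + halo -> 0 (black), light-source regions -> 255 (white)
```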
A boundary coordinate matrix of the region with n rows and 2 columns (n being the number of coordinate points) is then obtained using the findContours() function; a combined sketch of this step and the ellipse fitting follows the ellipse-fitting discussion below.
Ellipse fitting is then performed on the obtained boundary coordinate matrix using fitEllipse. The ellipse fitting process comprises collecting the data points and initializing the parameters: center coordinates, semi-major axis and semi-minor axis. The standard (axis-aligned) equation of an ellipse is

$$\frac{(x - x_c)^2}{a^2} + \frac{(y - y_c)^2}{b^2} = 1.$$

In the ellipse fitting process, the least squares method is generally adopted to find the optimal ellipse parameters so as to minimize the fitting error; its goal is to minimize the sum of squared distances between the actual data points and the fitted ellipse. Writing the ellipse equation implicitly as

$$F(x, y; a, b, x_c, y_c) = 0,$$

the least squares optimization problem is to minimize

$$\sum_i F(x_i, y_i; a, b, x_c, y_c)^2,$$

where $(x_i, y_i)$ are the input boundary coordinates. Finally, the optimal ellipse parameters (center coordinates, semi-major axis and semi-minor axis) are obtained from the fitting result; the center coordinates are the required feature point center pixel coordinates.
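The boundary extraction and ellipse fitting described above can be sketched together with OpenCV's findContours and fitEllipse (the same function names the text cites; the Python binding shown here is an assumption):

```python
import cv2

# `roi_mask` is the binarized ROI image from the previous step
contours, _ = cv2.findContours(roi_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)

centers = []
for cnt in contours:
    if len(cnt) >= 5:                       # fitEllipse needs at least 5 points
        (cx, cy), (major, minor), angle = cv2.fitEllipse(cnt)
        centers.append((cx, cy))            # feature point center pixel coords
print(centers)   # expected: 5 LED centers for the cooperative target
```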
In the center positioning process, the results obtained with this method and with conventional center point positioning are shown in fig. 6: in the figure, circles represent the true feature point center pixel coordinates, crosses represent the feature point center pixel coordinates obtained by a traditional feature point positioning algorithm, and plus signs represent the feature point center pixel coordinates obtained by the method of the invention.
Pose calculation is performed with the EPnP algorithm. Let the coordinates of the feature points in the target coordinate system be $P_i^w$ $(i = 1, \ldots, n)$ and their coordinates in the camera coordinate system be $P_i^c$ $(i = 1, \ldots, n)$. Four control points $c_j^w$ $(j = 1, \ldots, 4)$ are selected in the target coordinate system, with camera-frame counterparts $c_j^c$; all of $P_i^w$, $P_i^c$, $c_j^w$, $c_j^c$ are non-homogeneous coordinates. Each feature point is expressed as a weighted sum of the control point coordinates,

$$P_i^w = \sum_{j=1}^{4} \alpha_{ij} c_j^w, \qquad \sum_{j=1}^{4} \alpha_{ij} = 1,$$

where the $\alpha_{ij}$ are weighting coefficients, so that the same weighted-sum relation holds in the camera coordinate system,

$$P_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c.$$

From the projective transformation relation, with the camera intrinsic matrix K already obtained during the earlier calibration,

$$w_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K \sum_{j=1}^{4} \alpha_{ij} c_j^c,$$

where $\{u_i\}_{i=1,\ldots,n}$ are the pixel coordinates of the feature points $\{p_i\}_{i=1,\ldots,n}$; two linear equations in the control point coordinates can be obtained from the above equation for each feature point.
Concatenating the equations of all feature points yields a linear system $Mx = 0$, where $x = [c_1^{cT}, c_2^{cT}, c_3^{cT}, c_4^{cT}]^T$ collects the camera-frame control point coordinates. The solution is expressed through the eigenvectors $v_i$ of $M^T M$ as $x = \sum_{i=1}^{N} \beta_i v_i$, where the coefficients $\{\beta_i\}_{i=1,\ldots,N}$ remain to be determined. The number N of relevant eigenvalues of $M^T M$ is related to the camera focal length, and the EPnP algorithm suggests considering only N = 1, 2, 3, 4. Since the camera transformation is only a coordinate change that does not alter the distances between the feature points, the constraints

$$\lVert c_a^c - c_b^c \rVert^2 = \lVert c_a^w - c_b^w \rVert^2$$

are imposed to solve for $\{\beta_i\}_{i=1,\ldots,N}$.
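Rather than re-deriving the control point system, a sketch can lean on OpenCV's built-in EPnP solver (cv2.solvePnP with the SOLVEPNP_EPNP flag implements the same algorithm; the LED layout below is hypothetical since the target's physical dimensions are not stated in this text, and K and `centers` come from the earlier steps):

```python
import cv2
import numpy as np

# Hypothetical LED layout: four corners of a 100 mm square plus its center [m]
obj_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0],
                    [0, 0.1, 0], [0.05, 0.05, 0]], dtype=np.float64)
img_pts = np.array(centers, dtype=np.float64)   # 5 fitted ellipse centers
dist = np.zeros(5)                              # assume distortion corrected

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)    # rotation matrix R and translation vector tvec
```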
Gauss-Newton optimization is then performed, with the objective function

$$\min_{\beta} \sum_{(a,b)} \left( \lVert c_a^c - c_b^c \rVert^2 - \lVert c_a^w - c_b^w \rVert^2 \right)^2.$$

Finally, the relative pose of the camera and the target is calculated: compute the coordinates of the control (virtual) points in the camera coordinate system, then the coordinates of the feature points in the camera frame, $P_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c$; compute the centroid $\bar{P}^w$ of $\{P_i^w\}_{i=1,\ldots,n}$ and the centered matrix A, and the centroid $\bar{P}^c$ of $\{P_i^c\}_{i=1,\ldots,n}$ and the centered matrix B; compute $H = B^T A$ and its SVD $H = U\Sigma V^T$; the rotation matrix of the pose is $R = UV^T$, and if $\det(R) < 0$, set R(2,:) = -R(2,:); the translation of the pose is

$$t = \bar{P}^c - R\,\bar{P}^w,$$

which completes the pose solution.
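The closing alignment step (centroids, H = BᵀA, SVD, reflection fix, translation) is the standard SVD-based rigid registration; a minimal NumPy sketch under that reading, using the equivalent H = AᵀB, R = VUᵀ convention:

```python
import numpy as np

def rigid_align(Pw, Pc):
    """R, t such that Pc ~ R @ Pw + t, via centroids + SVD (Kabsch)."""
    mw, mc = Pw.mean(axis=0), Pc.mean(axis=0)
    A, B = Pw - mw, Pc - mc            # centered nx3 point sets
    H = A.T @ B                        # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # standard reflection fix (the filing
        Vt[2, :] *= -1                 # negates a row of R instead)
        R = Vt.T @ U.T
    t = mc - R @ mw
    return R, t

# Pw: feature points in the target frame; Pc: the same points reconstructed
# in the camera frame from the EPnP control points
```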
The above disclosure is only a preferred embodiment of the present invention, and it should be understood that the scope of the invention is not limited thereto, and those skilled in the art will appreciate that all or part of the procedures described above can be performed according to the equivalent changes of the claims, and still fall within the scope of the present invention.

Claims (7)

1. A monocular vision spatial cooperation target pose measurement method, characterized by comprising the following steps:
step 1: designing a cooperative target comprising a light emitting diode;
step 2: calibrating a camera and acquiring 20 calibration plate images with different angles and different distances by using the camera;
step 3: acquiring an image by using the camera and inputting the image to a PC (personal computer) terminal;
step 4: preprocessing an input image, and fitting and obtaining a characteristic point center pixel coordinate by using an improved characteristic point center positioning algorithm;
step 5: performing pose calculation and solving the optimal solution by using the Gauss-Newton method.
2. The monocular vision spatial cooperation target pose measurement method according to claim 1, wherein
the cooperative target is a square target with a black background, and five feature points are formed by 5 light-emitting diodes, one placed at each of the four corner points and one at the center of the square.
3. The monocular vision spatial cooperation target pose measurement method according to claim 2, wherein
in step 2, the camera is calibrated through a MATLAB toolbox to obtain the camera intrinsic matrix K.
4. The monocular vision spatial cooperation target pose measurement method according to claim 3, wherein
the improved characteristic point center positioning algorithm execution process comprises the following steps:
carrying out grayscale conversion on the frame image to generate a two-dimensional grayscale image;
filtering noise using median filtering, mean filtering and Gaussian filtering;
performing cluster analysis on the noise-reduced image using the K-means clustering method, dividing the whole image into three parts: background, halo and light source;
determining an ROI region and highlighting a light source region;
processing the image by using a findContours function to obtain boundary coordinates of the ROI;
and carrying out ellipse fitting by using a fitEllipse function to obtain the central pixel coordinates of the feature points.
5. The monocular vision spatial cooperation target pose measurement method according to claim 4, wherein
the background, halo and light source regions correspond to black background image pixels, feature point halo image pixels and feature point light source image pixels, respectively.
6. The monocular vision spatial cooperation target pose measurement method according to claim 5, wherein
the ROI region is determined and the light source region highlighted by setting a threshold and binarizing the image with an image binarization function, so that the background and halo regions of the cluster-analyzed image are merged and the light source region is highlighted.
7. The monocular vision spatial cooperation target pose measurement method according to claim 6, wherein
in step 5, the obtained feature point center pixel coordinates, the feature point coordinates in the target coordinate system and the calibrated camera intrinsic matrix are taken as input; initial data are calculated using the EPnP algorithm, and the optimal solution is then obtained with the Gauss-Newton method, yielding a more accurate rotation matrix R and translation vector t of the relative pose.
CN202410012782.4A 2024-01-04 2024-01-04 Monocular vision space cooperation target pose measurement method Pending CN117808880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410012782.4A CN117808880A (en) 2024-01-04 2024-01-04 Monocular vision space cooperation target pose measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410012782.4A CN117808880A (en) 2024-01-04 2024-01-04 Monocular vision space cooperation target pose measurement method

Publications (1)

Publication Number Publication Date
CN117808880A true CN117808880A (en) 2024-04-02

Family

ID=90433105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410012782.4A Pending CN117808880A (en) 2024-01-04 2024-01-04 Monocular vision space cooperation target pose measurement method

Country Status (1)

Country Link
CN (1) CN117808880A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination