CN111145201B - Steady and fast unmanned aerial vehicle photogrammetry mark detection and positioning method - Google Patents


Info

Publication number
CN111145201B
CN111145201B (application CN201911369130.1A)
Authority
CN
China
Prior art keywords
mark
image
coordinate
white
black
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911369130.1A
Other languages
Chinese (zh)
Other versions
CN111145201A (en)
Inventor
戴吾蛟 (Dai Wujiao)
邢磊 (Xing Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN201911369130.1A
Publication of CN111145201A
Application granted
Publication of CN111145201B
Active legal status
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/136: Segmentation; edge detection involving thresholding
    • G06T7/187: Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; photographic image
    • G06T2207/10024: Color image

Abstract

The invention provides a robust and rapid unmanned aerial vehicle photogrammetry mark detection and positioning method. First, the photographic image is subjected to enhancement and graying processing. Then, the barycentric coordinates of each pair of black and white connected domains in the image are compared; when the distance between the two centroids does not exceed a set pixel threshold, the region is considered a candidate target region and the centroid coordinate of the white connected domain is taken as the initial detection coordinate of the candidate mark. The number of zero-crossing points in the neighborhood of the candidate mark is then detected by a zero-crossing detector: the candidate target is retained if and only if the number of zero-crossing points is 4, and deleted otherwise. Finally, polynomial surface fitting is performed on the mark-image gray-intensity response map generated by the local radon transform, and the precise coordinate of the maximum point of the fitted surface, i.e. the high-precision coordinate of the measurement mark, is solved iteratively. The invention has strong anti-interference capability, improves computational efficiency and mark-positioning precision, and is suitable for processing large-resolution images.

Description

Steady and fast unmanned aerial vehicle photogrammetry mark detection and positioning method
Technical Field
The invention belongs to the technical field of engineering measurement, and particularly relates to a robust and rapid unmanned aerial vehicle photogrammetry mark detection and positioning method.
Background
With the gradual maturity of unmanned aerial vehicle hardware technology and computer vision algorithms, photogrammetry methods based on unmanned aerial vehicle platforms have been widely applied in the field of engineering survey. Using photogrammetric markers to determine the coordinate reference frame of the three-dimensional photogrammetric model is one of the main research topics of unmanned-aerial-vehicle-based photogrammetry, and research on automatic marker detection and positioning methods offering high accuracy, high-precision positioning and freedom from manual intervention is of great importance for improving the automation and measurement precision of the technology. However, achieving excellent survey-mark detection performance remains a challenging task owing to the complex natural environment, the small imaged size of the survey mark, the long camera shooting distance, blurred marker texture information, and so on.
Determining the model coordinate reference frame is one of the main tasks of unmanned aerial vehicle photogrammetry. Two main types of methods exist at present. The first uses a GNSS/INS module coupled with the camera, which greatly shortens field working time; however, in low-cost, lightweight unmanned aerial vehicle applications the precision of small GNSS/INS modules is poor. The second estimates the pose parameters of the model using ground measurement marks and bundle adjustment; although this approach involves heavy field work and high labor cost, its good pose-calculation precision makes it still the preferred method in precision engineering measurement applications. However, even setting aside the laborious field work of laying out survey marks, manually selecting the image coordinates of the marks on the photographic images (i.e., the photographs taken by the drone that include the marks) is time-consuming, tedious, and highly prone to human error, and relying solely on manual selection is impractical, especially in large-scale photogrammetry projects. Therefore, intelligent, high-precision detection and positioning of the measurement marks is of great importance for determining the model coordinate reference frame.
The ground measuring marks are various in types, and compared with the traditional circular mode measuring marks, the cross mode measuring marks (cross marks for short) have the advantages of obvious central contrast, no eccentricity difference and the like. In recent years, many researchers have conducted research on the detection and positioning method of cross-shaped markers, and the basic steps include: 1. detecting a center corner point, wherein the purpose is to roughly determine a candidate target which is possibly a measuring mark on an original image, and screening the target to remove an error target; 2. and calculating the accurate coordinates of the target sub-pixel level to realize the accurate positioning of the measuring mark.
Harris corner detection is the most common corner detection method, but it detects not only cross-shaped corners but also other corner types such as spike-shaped corners, and cannot screen them apart, which seriously affects the effectiveness and reliability of unmanned aerial vehicle survey-mark detection. In addition, the computer vision library OpenCV provides a typical cross-shaped mark detection method that segments the black and white rectangular squares and uses their edge features to detect the corner coordinates; Wang et al. likewise proposed detecting the corner coordinates by fitting the edge lines of the mark. However, both methods rely on image edge features, which are susceptible to image noise, image blur and other factors, so the positioning accuracy of the marker is not high. Geiger et al. proposed a template-matching-based initial detection method that achieves good experimental results, but it is a sliding-window search with high computational complexity, so its efficiency is low and it lacks practicality.
In summary, a cross-shaped mark detection and positioning method with higher computational efficiency and higher mark-positioning accuracy is needed.
Disclosure of Invention
The invention aims to provide a robust and rapid unmanned aerial vehicle photogrammetry mark detection and positioning method, which has the following specific scheme:
a robust and fast unmanned aerial vehicle photogrammetry mark detection and positioning method comprises the following steps:
firstly, a mark is arranged in a measuring area, and the mark comprises a cross-shaped feature formed by a black area and a white area.
Step two, carrying out initial detection on the mark;
2.1) image preprocessing based on enhanced gray scale conversion;
the photographic image is subjected to enhancement and graying processing according to the formula (1),
GRAY_img(u, v) = R_img(u, v) + G_img(u, v) + B_img(u, v) − δ  (1)

wherein img is an abbreviation of the English word "image"; u and v represent the horizontal-axis and vertical-axis coordinates of the image coordinate system, respectively; GRAY_img(u, v) represents the converted grayscale image; R_img(u, v), G_img(u, v) and B_img(u, v) represent the R, G and B bands of the color image, respectively; and δ is a constant;
in the image preprocessing, gray values greater than 255 or less than 0 are automatically cut off to meet the format requirements of an 8-bit computer image. Without interference from external factors, black in an RGB image is (0, 0, 0) and white is (255, 255, 255); therefore, after processing by formula (1), white or near-white regions of the mark image have large gray values while black or near-black regions have small gray values, so the black and white parts of the measurement mark are clearly distinguished;
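As an illustrative sketch (not part of the patent text), the enhanced graying of formula (1), including the cut-off to the 8-bit range described above, might be implemented as follows; the function name and the use of NumPy are assumptions:

```python
import numpy as np

def enhanced_gray(rgb, delta=10):
    """Enhanced graying per formula (1): GRAY = R + G + B - delta,
    with values outside [0, 255] cut off to fit an 8-bit image."""
    s = rgb.astype(np.int32).sum(axis=-1) - delta  # int32 avoids uint8 overflow
    return np.clip(s, 0, 255).astype(np.uint8)
```

Because the three bands are summed before clipping, any moderately bright pixel saturates to 255 and any dark pixel to 0, which is what separates the black and white parts of the mark so sharply.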
2.2) candidate mark detection based on image connected domain extraction and centroid extraction;
extracting a black connected domain in the image and calculating a centroid of the black connected domain, extracting a white connected domain in the image and calculating a centroid of the white connected domain, comparing centroid coordinates of the two connected domains, and when the distance between the centroid coordinates of a pair of black connected domain and white connected domain is less than or equal to a set pixel threshold, considering the region as a candidate target region, and using the centroid coordinates of the white or black connected domain as initial detection coordinates of a candidate mark.
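The pairing of black and white connected domains can be sketched as below; the binarization thresholds and the 3-pixel distance are illustrative assumptions (the patent leaves the pixel threshold configurable), and a simple 4-connected flood fill stands in for a library connected-component routine:

```python
import numpy as np
from collections import deque

def _centroids(mask):
    """Centroids (row, col) of 4-connected components of a boolean mask."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                q, acc_r, acc_c, n = deque([(r, c)]), 0, 0, 0
                seen[r, c] = True
                while q:                       # breadth-first flood fill
                    y, x = q.popleft()
                    acc_r += y; acc_c += x; n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w and mask[yy, xx] and not seen[yy, xx]:
                            seen[yy, xx] = True
                            q.append((yy, xx))
                out.append((acc_r / n, acc_c / n))
    return out

def candidate_marks(gray, black_thresh=50, white_thresh=200, pix_thresh=3):
    """Keep white-component centroids that nearly coincide with a
    black-component centroid; these are the initial detection coordinates."""
    blacks = _centroids(gray <= black_thresh)
    whites = _centroids(gray >= white_thresh)
    return [wc for wc in whites
            if any(np.hypot(wc[0] - bc[0], wc[1] - bc[1]) <= pix_thresh for bc in blacks)]
```

A dilation of the white mask before labelling (as in step 2.2 of the embodiment) would ensure the two fan-shaped regions merge into one connected domain; it is omitted here for brevity.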
And step three, further screening the candidate marks.
Fourthly, accurately positioning the mark;
4.1) generating a mark image response diagram based on the local radon transform principle;
calculating the integral of all possible central lines in the marker coordinate neighborhood range according to the formulas (2) and (3), returning the square of the difference between the maximum integral value and the minimum integral value, further obtaining the gray intensity response graph of the marker image,
Rf_local[u, v, α] = Σ_{i = −m/2}^{m/2} f[u + i·cos α, v + i·sin α]  (2)

f_c[u, v] = ( max_α Rf_local[u, v, α] − min_α Rf_local[u, v, α] )²  (3)
wherein m represents the size of the image region to be calculated, i.e. the selected image region is a square of side length m; Rf_local[u, v, α] denotes the integral value of the central line obtained after the image is rotated by an angle α; max and min denote taking the maximum and minimum of Rf_local[u, v, α] over the different rotation angles; and f_c[u, v] represents the gray-intensity response map of the original image after the local radon transform;
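The response of formulas (2) and (3) at a single pixel might be computed as follows; the discrete sampling of the central line and the number of rotation angles are implementation assumptions:

```python
import numpy as np

def radon_response(gray, u, v, m=15, n_angles=36):
    """Local-Radon gray-intensity response at pixel (u, v): integrate the
    image along length-m lines through (u, v) at sampled orientations and
    return the squared difference between the largest and smallest line
    integral, per formulas (2) and (3)."""
    ts = np.arange(m) - m // 2                     # offsets along the line
    best, worst = -np.inf, np.inf
    for alpha in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        rows = np.clip(np.round(v + ts * np.sin(alpha)).astype(int), 0, gray.shape[0] - 1)
        cols = np.clip(np.round(u + ts * np.cos(alpha)).astype(int), 0, gray.shape[1] - 1)
        s = gray[rows, cols].astype(float).sum()   # line integral at angle alpha
        best, worst = max(best, s), min(worst, s)
    return (best - worst) ** 2
```

At the center of a cross mark, the brightest central line (through the two white sectors) and the darkest one (through the two black sectors) differ strongly, so the response is large; on homogeneous regions it is near zero.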
4.2) accurately positioning the coordinates of the markers on the response graph based on a polynomial surface fitting method;
first, let f(x) denote the response surface and x* the estimated coordinate of the mark. With the known initial mark coordinate x_0, substitute into the least-squares fit and iteratively compute the maximum-point coordinate x_T of the polynomial surface according to formula (4); the iteration ends when the change in x_T is smaller than the set change threshold,
x_T = argmax_x f̂(x)  (4)
the fitting process of the polynomial surface can be expressed by equation (5),
f(x_ti) = f̂(x_ti) + Δ_i,  i = 1, 2, …, N  (5)
wherein

f̂(x, y) = c_1 + c_2·x + c_3·y + c_4·x² + c_5·x·y + c_6·y²

represents the polynomial function (a general second-order polynomial surface) and x and y represent the input coordinates; c = [c_1, c_2, …, c_6]^T is the parameter vector of the polynomial function, N is the number of sample points in f(x), x_t is a possible maximum coordinate, and Δ_i is the error term;
a linear system Ac − b = 0 is then constructed to solve for the parameter vector c, where b = [f(x_t1), …, f(x_tN)]^T, and the matrix A can be expressed by equation (6),
A = [ 1   x_t1   y_t1   x_t1²   x_t1·y_t1   y_t1²
      …    …      …      …         …          …
      1   x_tN   y_tN   x_tN²   x_tN·y_tN   y_tN² ]  (6)
when the accurate positions of all candidate targets are calculated, the same neighborhood radius r and the same image coordinate system are uniformly used, so that the matrix A only needs to be calculated once;
and finally, the Gauss-Newton method is used to iteratively solve for the precise coordinate of the maximum point of the fitted surface, which is the high-precision coordinate of the measurement mark.
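A condensed sketch of step 4.2: the design matrix A follows equation (6) and the parameter vector is solved by least squares. For a quadratic surface the maximum can be read off in closed form from the stationary-point condition, which is used here in place of the Gauss-Newton iteration; all names and the neighborhood handling are assumptions:

```python
import numpy as np

def fit_quadratic_peak(response, cx, cy, r=7):
    """Fit f(x,y) ~ c1 + c2*x + c3*y + c4*x^2 + c5*x*y + c6*y^2 over the
    (2r+1)x(2r+1) neighborhood of (cx, cy) on the response map, then return
    the stationary point of the fitted surface as the refined coordinate."""
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]          # local coordinates
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])  # Eq. (6)
    b = response[cy - r:cy + r + 1, cx - r:cx + r + 1].ravel().astype(float)
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    # grad f = 0:  [[2*c4, c5], [c5, 2*c6]] @ [x, y] = [-c2, -c3]
    H = np.array([[2 * c[3], c[4]], [c[4], 2 * c[5]]])
    px, py = np.linalg.solve(H, [-c[1], -c[2]])
    return cx + px, cy + py
```

Since the same radius r and local coordinate grid are reused for every candidate, A (and its factorization) indeed needs to be built only once, as the text notes.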
Preferably, in the first step, the black area is a circle, and the white area is two sectors of 90 ° concentrically arranged in the circle; or the white area is circular, and the black area is two 90-degree sectors concentrically arranged in the circle.
Preferably, in 2.2) of the second step, the connected domain centroid of the circular region is extracted, then the image is expanded to ensure that the two fan-shaped regions in the measurement mark can be connected together, and then the connected domain centroid of the fan-shaped region is extracted.
Preferably, in the third step, the zero-crossing detector is used to detect the number of zero-crossing points in the neighborhood of the candidate mark, if and only if the number of zero-crossing points is 4, the candidate target is retained, otherwise the candidate target is deleted.
The zero-crossing detector finds the points where the value of the image crosses zero after Laplacian filtering, i.e. the points where the sign of the Laplacian image changes. These points are typically distributed at image edge locations, i.e. where the image gray level changes rapidly. Because the measurement mark adopted by the invention is cross-shaped, there are exactly 4 edges with rapid gray-intensity change around the neighborhood of the mark center point; candidate targets can therefore be screened according to whether the number of zero-crossing points in their neighborhood is 4.
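A sketch of such a screening test: the Laplacian of the image is sampled on a ring around the candidate and circular sign changes are counted; the 4-neighbour Laplacian, ring radius and sample count are assumptions:

```python
import numpy as np

def zero_crossings_on_ring(gray, cx, cy, radius=5, n_samples=64):
    """Count sign changes of the image Laplacian sampled on a circle around
    a candidate; a cross-shaped mark should yield exactly 4."""
    g = gray.astype(float)
    lap = (-4 * g
           + np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1))  # 4-neighbour Laplacian
    th = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    rows = np.round(cy + radius * np.sin(th)).astype(int)
    cols = np.round(cx + radius * np.cos(th)).astype(int)
    signs = np.sign(lap[rows, cols])
    signs = signs[signs != 0]                        # skip flat regions
    return int(np.sum(signs != np.roll(signs, 1)))   # circular transitions
```

A candidate would then be kept only when this count equals 4 and deleted otherwise, mirroring the screening rule above.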
Preferably, in the second step, δ in the formula (1) takes the value of 10; in the fourth step, the value of the neighborhood radius r is 7.
Preferably, the white areas of the measuring marks are made of white crystal colored lattice reflective cloth, and the black areas are made of black flocked light absorbing cloth.
The reflective strength of the material used to make the mark has a great influence on mark detection and positioning. The crystal-color-lattice reflective cloth adopts an advanced micro-lattice total-reflection technology with super-strong reflective intensity, with a reflection coefficient of up to 200 cd/lux/m², and can effectively suppress the influence of uneven illumination and shadow occlusion on the imaging of the white part of the mark. Meanwhile, the black flocked light-absorbing cloth has the advantages of good light absorption and a pure, distinct black, and is commonly used in indoor photography. Combining the two materials greatly improves the imaging quality of the measurement mark under complex environmental conditions.
The technical scheme provided by the invention at least has the following beneficial effects:
1. Based on existing research, the invention provides a new measurement-mark detection and positioning method. In the initial mark detection, the image is converted from an RGB color image to a grayscale image in a way that greatly enhances contrast, particularly between black and white regions, overcoming the influence of interference factors such as image blur, image noise and uneven illumination on target detection. The mark-positioning computation is transferred from the original image to the gray-intensity response map and combined with polynomial surface fitting, so the mark coordinates are computed with higher accuracy and efficiency; the method is well suited to processing large-resolution images and facilitates efficient, fast photogrammetric data processing on unmanned aerial vehicle platforms.
2. The invention designs a cross-shaped feature which is formed by a black circular area and two white fan-shaped areas together on the measuring mark, and combines white crystal color lattice reflective cloth and black flocking light absorption cloth during manufacturing, so that the mark has obvious contrast and a special pattern, and the measuring mark has better uniqueness and is beneficial to being identified on an image.
3. Because the cross mark in the invention is highly associated with the boundary of the black-white area, the candidate target which accords with the graph characteristic of the measuring mark of the invention is screened out by utilizing the characteristic that the zero-crossing detector detects the position of the rapid change of the image gray scale, thereby having obvious advantages in the detection effectiveness and reliability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, it is obvious that the drawings described below are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without inventive efforts, wherein:
FIG. 1 is a flow chart of the unmanned aerial vehicle photogrammetry marker detection and positioning method of the present invention;
FIG. 2 is a block diagram of a cross-shaped measurement marker used in the present invention;
FIG. 3 is a graph comparing the time consumed in the initial detection of the marker by the method of the present invention with that of the prior art;
FIG. 4 is a graph comparing the mark positioning accuracy under the condition of Gaussian blur addition for the method of the present invention and the prior art;
FIG. 5 is a graph comparing the mark positioning accuracy under the condition of Gaussian noise addition in the method of the present invention and the prior art;
FIG. 6 is a comparison graph of the positioning accuracy of the mark under the condition of affine transformation of the image by the method of the present invention and the prior art.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a robust and fast method for detecting and positioning a photogrammetric survey mark of an unmanned aerial vehicle comprises the following steps:
step one, arranging marks in a measurement area;
referring to fig. 2, the background of the measurement mark is white and includes a black circular area and two equal 90 ° white sector areas arranged in the black circular area, the centers of the three areas coincide and the two white sector areas are arranged at an interval of 90 °, and the boundaries of the black area and the white area together form a cross-shaped feature of the measurement mark. In this embodiment, the white area is made of white crystal color lattice reflective cloth, and the black area is made of black flocking light absorption cloth.
Step two, carrying out initial detection on the mark;
2.1) image preprocessing based on enhanced gray scale conversion;
the photographic image is subjected to enhancement and graying processing according to the formula (1),
GRAY_img(u, v) = R_img(u, v) + G_img(u, v) + B_img(u, v) − δ  (1)

wherein img is an abbreviation of the English word "image"; u and v represent the horizontal-axis and vertical-axis coordinates of the image coordinate system, respectively; GRAY_img(u, v) represents the converted grayscale image; R_img(u, v), G_img(u, v) and B_img(u, v) represent the R, G and B bands of the color image, respectively; and δ is a constant, taken as 10 in this embodiment;
2.2) candidate mark detection based on image connected domain extraction and centroid extraction;
firstly, the black connected domain in the image is extracted and its centroid calculated; then the image is dilated to ensure that the two white fan-shaped regions of the measurement mark are connected together; next, the white connected domain is extracted and its centroid calculated; finally, the centroid coordinates of the two connected domains are compared, and when the centroids of a pair of black and white connected domains coincide, i.e. the distance between the two coordinates is no more than 3 pixels, the region is considered a candidate target region and the centroid coordinate of the white connected domain is taken as the initial detection coordinate of the candidate mark.
Step three, further screening the candidate marks;
and detecting the number of zero-crossing points in the neighborhood of the candidate mark by using a zero-crossing detector, if and only if the number of the zero-crossing points is 4, keeping the candidate target, and if not, deleting the candidate target.
Fourthly, accurately positioning the mark;
4.1) generating a mark image response diagram based on the local radon transform principle;
calculating the integral of all possible central lines in the marker coordinate neighborhood range according to the formulas (2) and (3), returning the square of the difference between the maximum integral value and the minimum integral value, further obtaining the gray intensity response graph of the marker image,
Rf_local[u, v, α] = Σ_{i = −m/2}^{m/2} f[u + i·cos α, v + i·sin α]  (2)

f_c[u, v] = ( max_α Rf_local[u, v, α] − min_α Rf_local[u, v, α] )²  (3)

wherein m represents the size of the image region to be calculated, i.e. the selected image region is a square of side length m; Rf_local[u, v, α] denotes the integral value of the central line obtained after the image is rotated by an angle α; max and min denote taking the maximum and minimum of Rf_local[u, v, α] over the different rotation angles; and f_c[u, v] represents the gray-intensity response map of the original image after the local radon transform;
4.2) accurately positioning the coordinates of the markers on the response graph based on a polynomial surface fitting method;
first, let f(x) denote the response surface and x* the estimated coordinate of the mark. With the known initial mark coordinate x_0, substitute into the least-squares fit and iteratively compute the maximum-point coordinate x_T of the polynomial surface according to formula (4); the iteration ends when the change in x_T is smaller than the set change threshold τ = 3,
x_T = argmax_x f̂(x)  (4)
the fitting process of the polynomial surface can be expressed by equation (5),
f(x_ti) = f̂(x_ti) + Δ_i,  i = 1, 2, …, N  (5)
wherein

f̂(x, y) = c_1 + c_2·x + c_3·y + c_4·x² + c_5·x·y + c_6·y²

represents the polynomial function (a general second-order polynomial surface) and x and y represent the input coordinates; c = [c_1, c_2, …, c_6]^T is the parameter vector of the polynomial function, N is the number of sample points in f(x), x_t is a possible maximum coordinate, and Δ_i is the error term;
a linear system Ac − b = 0 is then constructed to solve for the parameter vector c, where b = [f(x_t1), …, f(x_tN)]^T, and the matrix A can be expressed by equation (6),
A = [ 1   x_t1   y_t1   x_t1²   x_t1·y_t1   y_t1²
      …    …      …      …         …          …
      1   x_tN   y_tN   x_tN²   x_tN·y_tN   y_tN² ]  (6)
when calculating the accurate positions of all candidate targets, the same neighborhood radius r = 7 and the same image coordinate system are used throughout, so the matrix A only needs to be computed once;
and finally, the Gauss-Newton method is used to iteratively solve for the precise coordinate of the maximum point of the fitted surface, which is the high-precision coordinate of the measurement mark.
First, calculation efficiency contrast experiment
The experimental image data have a resolution of 4000 × 3000, and each image contains 4 valid measurement marks, all of which are the cross marks of the present invention. The proposed method (RFDM) is compared with three existing methods: initial mark detection is performed on a single image and the computation time of each of the four methods is recorded. The comparison results are shown in FIG. 3.
The three prior methods as comparative examples are:
1. The template-matching/sliding-window initial detection method proposed by Geiger et al.; reference: Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the IEEE International Conference on Robotics & Automation, 2012; pp. 3936–3943;
2. The initial calibration method based on precise detection and localization of chessboard corners proposed by Duda et al.; reference: Duda, A.; Frese, U. Accurate Detection and Localization of Checkerboard Corners for Calibration. 2018, 126;
3. The classic Harris corner detection method; reference: Harris, C.; Stephens, M. A Combined Corner and Edge Detector. 1988, 147–151.
As can be seen from the experimental result in FIG. 3, compared with the existing method, the method of the present invention has obvious advantages in computational efficiency, and is very suitable for processing the image data of the unmanned aerial vehicle with large resolution.
Second, positioning accuracy contrast experiment
Two existing mark coordinate positioning methods are selected, which respectively comprise:
1. The sub-pixel localization method provided by the computer vision library OpenCV; reference: Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision in C++ with the OpenCV Library; 2013;
2. The Hyowon method, which fits a polynomial function to the gray-level distribution in the neighborhood of the measurement mark on the original image; reference: Ha, H.; Perdoch, M.; Alismail, H.; Kweon, I.S.; Sheikh, Y. Deltille Grids for Geometric Camera Calibration. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), 2017; pp. 5354–5362.
In order to realistically simulate the imaging conditions of the measurement mark in the actual environment, the experiment simulates reality by applying affine transformation, Gaussian noise and Gaussian blur to the images. Namely: the image is transformed by a zenith-angle rotation φ (degrees) to obtain affine-transformed simulated data; Gaussian noise with mean 0 and intensity standard deviation σ_n (percent) is added to the image; and the image is Gaussian-blurred with mean 0 and standard deviation σ (pixels).
The specific implementation steps are as follows: 1) for Gaussian blur, φ is kept at 0° and σ_n at 3%, while σ takes the values 0, 1, 2, 3, 4 and 5; 2) for Gaussian noise, σ = 1 and φ = 0° are held constant, while σ_n takes the values 0%, 1%, 2%, 3%, 4% and 5%; 3) for affine transformation, σ = 1 and σ_n = 3% are held constant, while φ takes the values 0°, 15°, 30°, 45°, 60° and 75°.
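A sketch of the blur-and-noise part of this simulation protocol (the affine step is omitted); the separable Gaussian kernel and the interpretation of σ_n as a fraction of the 255 intensity range are implementation assumptions:

```python
import numpy as np

def degrade(img, sigma_blur=1.0, sigma_noise=0.03, rng=None):
    """Simulate imaging degradation: separable Gaussian blur (sigma in
    pixels) followed by additive Gaussian noise whose standard deviation
    is the fraction sigma_noise of the 255 intensity range."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = img.astype(float)
    if sigma_blur > 0:
        r = max(1, int(3 * sigma_blur))                 # kernel half-width
        k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma_blur) ** 2)
        k /= k.sum()
        g = np.apply_along_axis(np.convolve, 0, g, k, mode='same')
        g = np.apply_along_axis(np.convolve, 1, g, k, mode='same')
    g += rng.normal(0.0, sigma_noise * 255, g.shape)    # additive noise
    return np.clip(np.round(g), 0, 255).astype(np.uint8)
```

Sweeping sigma_blur over 0–5 and sigma_noise over 0–0.05 would reproduce the parameter grid of steps 1) and 2) above.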
Finally, 18 simulated images in total of 3 groups of data are obtained, the images are positioned by using the method (RFDM) of the invention and the two existing methods, the experimental results are counted, and the analysis is as follows:
1) Gaussian blur test: the experimental results are shown in FIG. 4; the method of the present invention has significant advantages in positioning performance compared with Hyowon and OpenCV.
2) Gaussian noise test: the experimental results are shown in FIG. 5. As image noise increases, the positioning accuracy of Hyowon and OpenCV is significantly affected, especially OpenCV: when σ_n reaches 5%, its positioning error is about 4 times the initial positioning error, whereas the method of the present invention maintains good positioning performance over the range σ_n ≤ 5% and is superior to both Hyowon and OpenCV;
3) Affine transformation test: the experimental results are shown in FIG. 6. When the zenith-angle rotation φ is 60° or more, the affine transformation has an obvious influence on all three methods; moreover, the affine transformation reduces the number of effective pixels of the mark image. Even so, although the method of the present invention does not show an obvious advantage over the other two methods here, it still achieves excellent positioning performance.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention, and various modifications and changes may be made by those skilled in the art. Any improvement or equivalent replacement directly or indirectly applied to other related technical fields within the spirit and principle of the invention and the contents of the specification and the drawings of the invention shall be included in the protection scope of the invention.

Claims (6)

1. A robust and fast unmanned aerial vehicle photogrammetry mark detection and positioning method is characterized by comprising the following steps:
firstly, arranging a mark in a measurement region, wherein the mark comprises a cross-shaped feature formed by a black region and a white region;
step two, carrying out initial detection on the mark;
2.1) image preprocessing based on enhanced gray scale conversion;
the photographic image is subjected to enhancement and graying processing according to the formula (1),
GRAY_img(u,v) = R_img(u,v) + G_img(u,v) + B_img(u,v) − δ    (1)
wherein img abbreviates the English word "image"; u and v denote the horizontal-axis and vertical-axis coordinates of the image coordinate system, respectively; GRAY_img(u,v) denotes the converted grayscale image; R_img(u,v), G_img(u,v) and B_img(u,v) denote the R band, G band and B band of the color image, respectively; and δ is a constant;
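As an illustrative sketch (not part of the claim language), the enhanced grayscale conversion of formula (1) can be written in Python with NumPy; clipping the result to the displayable 0–255 range is an assumption, since the claim only states the sum-minus-δ form:

```python
import numpy as np

def enhanced_gray(rgb, delta=10):
    """Enhanced grayscale conversion per Eq. (1): GRAY = R + G + B - delta.

    Summing the three bands (instead of averaging) stretches contrast so
    white marker regions saturate; delta suppresses low-level background.
    Clipping to 0..255 is an assumption not stated in the claim.
    """
    rgb = rgb.astype(np.int32)  # avoid uint8 overflow when summing bands
    gray = rgb[..., 0] + rgb[..., 1] + rgb[..., 2] - delta
    return np.clip(gray, 0, 255).astype(np.uint8)
```

With δ = 10 (the value given in claim 5), marker whites saturate at 255 while dark background pixels are pushed to 0, which benefits the subsequent black/white connected-domain extraction.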
2.2) candidate mark detection based on image connected domain extraction and centroid extraction;
extracting a black connected domain in the image and calculating a centroid of the black connected domain, extracting a white connected domain in the image and calculating a centroid of the white connected domain, comparing centroid coordinates of the two connected domains, and when the distance between the centroid coordinates of a pair of black connected domains and the centroid coordinates of the white connected domain is less than or equal to a set pixel threshold, considering the region as a candidate target region, and using the centroid coordinates of the white or black connected domain as initial detection coordinates of a candidate mark;
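A minimal sketch of the centroid-pairing rule of step 2.2, assuming the black and white centroid lists have already been produced by any connected-component labeller (e.g. scipy.ndimage.label plus center_of_mass); the 2-pixel threshold is illustrative, as the claim leaves the value open:

```python
import numpy as np

def pair_centroids(black_centroids, white_centroids, pix_thresh=2.0):
    """Candidate-mark detection per step 2.2: a region is a candidate when
    a black-region centroid and a white-region centroid nearly coincide.

    Returns the white centroids of all matched pairs, which serve as the
    initial detection coordinates of the candidate marks.
    """
    candidates = []
    for bc in black_centroids:
        for wc in white_centroids:
            # Euclidean distance between the two centroids, in pixels
            if np.hypot(bc[0] - wc[0], bc[1] - wc[1]) <= pix_thresh:
                candidates.append(wc)
    return candidates
```

For a genuine cross target, the black and white regions are concentric, so their centroids fall within a pixel or two of each other, while unrelated dark and bright blobs rarely satisfy this constraint.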
step three, further screening the candidate marks;
fourthly, accurately positioning the mark;
4.1) generating a mark image response diagram based on the local radon transform principle;
calculating the integrals of all possible center lines within the neighborhood of the mark coordinate according to formulas (2) and (3), and returning the square of the difference between the maximum and minimum integral values, thereby obtaining the gray-intensity response map of the mark image,
Rf_local[u, v, α] = Σ_{t = −(m−1)/2}^{(m−1)/2} f[u + t·cos α, v + t·sin α]    (2)

fc[u, v] = ( max_α Rf_local[u, v, α] − min_α Rf_local[u, v, α] )²    (3)
wherein m denotes the size of the image region to be computed, i.e. the selected image region is a square of side length m; Rf_local[u,v,α] denotes the integral value of the center line obtained after the image is rotated by angle α; max and min denote taking the maximum and minimum of Rf_local[u,v,α] over the different rotation angles; and fc[u,v] denotes the gray-intensity response map of the original grayscale image f[u,v] after the local radon transform;
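The local radon response of formulas (2) and (3) can be sketched as follows; nearest-neighbour sampling along the rotated center line and 36 rotation angles are assumptions, since the claim fixes neither the interpolation nor the angular step:

```python
import numpy as np

def local_radon_response(gray, u, v, m=15, n_angles=36):
    """Local Radon response at pixel (u, v), sketching Eqs. (2)-(3).

    For each rotation angle alpha, sum the gray values along the line of
    length m passing through (u, v).  The response is the squared
    difference between the largest and smallest line integrals, which
    peaks at the center of a black/white cross target: there, one center
    line runs entirely through white while another runs through black.
    """
    h, w = gray.shape
    half = m // 2
    t = np.arange(-half, half + 1)  # sample offsets along the line
    integrals = []
    for alpha in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        # nearest-neighbour sampling of the rotated center line
        uu = np.clip(np.round(u + t * np.cos(alpha)).astype(int), 0, h - 1)
        vv = np.clip(np.round(v + t * np.sin(alpha)).astype(int), 0, w - 1)
        integrals.append(gray[uu, vv].sum())
    integrals = np.asarray(integrals, dtype=float)
    return (integrals.max() - integrals.min()) ** 2
```

Evaluating this response over the candidate neighbourhood yields the gray-intensity response map fc[u,v] on which step 4.2 performs the sub-pixel fit.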
4.2) accurately positioning the coordinates of the markers on the response graph based on a polynomial surface fitting method;
first, let f (x) be the estimated coordinate x of the mark*The initial value of the coordinate of the known mark is x0Substituting the coordinate x into least square method, and calculating the coordinate x of maximum value of polynomial surface according to formula (4) by iterationTWhen x isTWhen the variation of the second threshold value is smaller than the set variation threshold value, the iterative computation is ended,
xT = arg max_x f(x)    (4)
the fitting process of the polynomial surface can be expressed by formula (5),
f(x_ti) = p(x_ti; c) + Δi,  i = 1, 2, …, N    (5)
wherein

p(x, y; c) = c1 + c2·x + c3·y + c4·x² + c5·x·y + c6·y²

denotes the polynomial function, x and y denote the coordinates, c = [c1, c2, …, c6]^T is the parameter vector of the polynomial function, N is the number of sample points of f(x), x_t is a possible maximum-value coordinate, and Δi is the error term;
then the linear equation Ac − b = 0 is constructed to solve the parameter vector c, where b = [f(x_t1), …, f(x_tN)]^T, and the matrix A can be expressed by formula (6),
A = [ 1  x_t1  y_t1  x_t1²  x_t1·y_t1  y_t1²
      ⋮    ⋮     ⋮     ⋮       ⋮        ⋮
      1  x_tN  y_tN  x_tN²  x_tN·y_tN  y_tN² ]    (6)
when the accurate positions of all candidate targets are calculated, the same neighborhood radius r and the same image coordinate system are uniformly used, so that the matrix A only needs to be calculated once;
and finally, the precise coordinate at which the maximum point of the fitted surface is located, i.e. the high-precision coordinate of the survey mark, is solved iteratively by the Gauss-Newton method.
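Step 4.2 can be sketched as below. Because the fitted surface p(x, y; c) is exactly quadratic, the stationary point where its gradient vanishes has a closed form; this closed-form solve is the fixed point that the claim's Gauss-Newton iteration converges to, so no explicit iteration loop is shown here:

```python
import numpy as np

def refine_peak(points, values):
    """Sub-pixel peak via quadratic surface fitting, sketching Eqs. (4)-(6).

    Fits p(x, y) = c1 + c2*x + c3*y + c4*x^2 + c5*x*y + c6*y^2 to the
    response samples by linear least squares (Ac = b), then returns the
    stationary point, i.e. where the gradient of p vanishes.
    """
    x, y = points[:, 0], points[:, 1]
    # Design matrix of Eq. (6): one row [1, x, y, x^2, x*y, y^2] per sample
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    c, *_ = np.linalg.lstsq(A, values, rcond=None)
    # grad p = 0  <=>  [[2*c4, c5], [c5, 2*c6]] @ [x, y] = -[c2, c3]
    H = np.array([[2 * c[3], c[4]], [c[4], 2 * c[5]]])
    return np.linalg.solve(H, -np.array([c[1], c[2]]))
```

As the claim notes, when all candidates are refined with the same neighbourhood radius r and the same local coordinate grid, the matrix A (and its factorization) can be computed once and reused.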
2. The unmanned aerial vehicle photogrammetry mark detection and positioning method of claim 1, wherein in step one, the black area is a circle, and the white area is two 90 ° sectors concentrically arranged within the circle; or the white area is circular, and the black area is two 90-degree sectors concentrically arranged in the circle.
3. The unmanned aerial vehicle photogrammetry mark detection and positioning method according to claim 2, wherein in step 2.2), the connected-domain centroid of the circular region is extracted first, the image is then dilated so that the two sector regions of the survey mark become connected, and the connected-domain centroid of the sector region is extracted afterwards.
4. The UAV photogrammetry marker detection and localization method according to claim 3, wherein in step three, a zero-crossing detector is used to detect the number of zero-crossing points in the neighborhood of the candidate marker, if and only if the number of zero-crossing points is 4, the candidate target is retained, otherwise the candidate target is deleted.
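The zero-crossing screening of claim 4 might be sketched by sampling the gray image on a circle around the candidate and counting sign changes of the mean-removed profile; the circle radius and sample count below are illustrative assumptions, as the claim does not fix them:

```python
import numpy as np

def count_zero_crossings(gray, u, v, radius=5, n_samples=64):
    """Screening per claim 4: count zero crossings around a candidate.

    Samples the gray image on a circle of `radius` pixels around (u, v),
    removes the mean, and counts sign changes (with wrap-around, so the
    circle is closed).  A genuine cross target alternates black/white
    four times along the circle, so a candidate is kept iff the count is 4.
    """
    theta = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    uu = np.round(u + radius * np.cos(theta)).astype(int)
    vv = np.round(v + radius * np.sin(theta)).astype(int)
    profile = gray[uu, vv].astype(float)
    signs = np.sign(profile - profile.mean())
    signs = signs[signs != 0]  # drop exact zeros before counting flips
    return int(np.sum(signs != np.roll(signs, 1)))
```

False candidates such as isolated bright or dark blobs produce 0 or 2 crossings and are rejected, which makes this check a cheap complement to the centroid pairing of step 2.2.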
5. The unmanned aerial vehicle photogrammetry mark detection and positioning method according to any one of claims 1-4, characterized in that in the second step, the value of δ in the formula (1) is 10; in the fourth step, the value of the neighborhood radius r is 7.
6. The UAV photogrammetry mark detection and positioning method of claim 5, wherein the white areas of the survey mark are made of white crystal colored grid reflective cloth, and the black areas are made of black flocked light absorbing cloth.
CN201911369130.1A 2019-12-26 2019-12-26 Steady and fast unmanned aerial vehicle photogrammetry mark detection and positioning method Active CN111145201B (en)


Publications (2)

Publication Number    Publication Date
CN111145201A          2020-05-12
CN111145201B          2021-10-08



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant