CN109186550B - Coding decoding and measuring method for codable close-range photogrammetric mark - Google Patents

Coding, decoding and measuring method for a codable close-range photogrammetric mark

Info

Publication number: CN109186550B
Application number: CN201810803118.6A
Authority: CN (China)
Prior art keywords: coding, matrix, mark, image, point
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109186550A
Inventor: 潘玥
Current and original assignee: Shanghai Saihei Intelligent Technology Co ltd
Application CN201810803118.6A filed by Shanghai Saihei Intelligent Technology Co ltd; published as CN109186550A; granted and published as CN109186550B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G01C11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding

Abstract

The invention relates to a coding, decoding and measuring method for a codable close-range photogrammetric mark, comprising the following steps: 1) encoding process: 11) setting the coding magnification; 12) generating a coding dictionary; 13) generating a coding mark; 2) decoding process: 21) preprocessing the coded image; 22) extracting the central point; 23) converting to and decoding a one-dimensional barcode; 3) measuring process: after decoding is finished, an improved Fourier-Mellin transform algorithm is adopted to solve the rotation-scale relation and the translation between the positioning mark points of the coding elements on the two images. Compared with the prior art, the method has the advantages of fast decoding, controllable capacity, accurate positioning and improved rotation measurement precision.

Description

Coding decoding and measuring method for codable close-range photogrammetric mark
Technical Field
The invention relates to the field of digital close-range photogrammetry, and in particular to a coding, decoding and measuring method for a codable close-range photogrammetric mark.
Background
Owing to its good extensibility and strong functionality, digital close-range photogrammetry has gradually become the technology best suited to field measurement in modern reverse engineering, and in recent years it has been widely applied to field scene control, large-scale structure monitoring, and the geometric measurement of industrial products during design and manufacturing.
As measurement tasks involving large-size objects with complex structures increase, the insufficient total number of coded marks (also called coded targets) in existing measurement systems has restricted the overall measurement advantage. Expanding the number of codes and improving data processing capability have become urgent needs for guaranteeing measurement accuracy and working efficiency. Meanwhile, the positioning information of traditional photogrammetric mark points depends heavily on external control information: a control network must be set up additionally, and the geometric positioning information of the mark points is obtained by measurement adjustment over several groups of data. Using coded mark points alone, traditional designs can hardly obtain geometric positioning information directly, especially the high-precision relative displacement and rotation information required by large-scale structure monitoring and industrial product measurement.
Disclosure of Invention
The present invention aims to provide a coding, decoding and measuring method for a codable close-range photogrammetric mark that overcomes the above drawbacks of the prior art.
The purpose of the invention can be realized by the following technical scheme:
A coding, decoding and measuring method for a codable close-range photogrammetric mark comprises the following steps:
1) Encoding process:
11) Setting the coding magnification: construct several coding elements of the sector one-dimensional code and set their coding magnification;
12) Generating the coding dictionary: form a coding dictionary from several coding elements corresponding to different coding information;
13) Generating the coding mark: combine several coding elements into a complete coding mark;
2) Decoding process:
21) Preprocessing the coded image: obtain, from the input coded mark image, a binary image containing only edge information;
22) Extracting the central point: obtain the central point of the coding mark from the edge-only binary image with a least-squares ellipse-fitting adjustment algorithm;
23) Converting to and decoding the one-dimensional barcode: with an arc search of given radius (a 360-degree search of black and white gray values on a circle of the given radius centered on the extracted central point), convert the coding mark into a one-dimensional barcode; then traverse the coding dictionary and match against the included-angle ratio and black/white pixel values of each coding element to obtain the complete coding information value and finish the decoding;
3) Measuring process:
After decoding is finished, an improved Fourier-Mellin transform algorithm is adopted to solve the rotation-scale relation and the translation between the positioning mark points of the coding elements on the two images.
In step 11), each coding element of the sector one-dimensional code is formed by arranging and combining a positioning mark point with sector patches of four different included angles. The positioning mark point consists of several alternating black and white concentric circles and provides initial positioning. The sector patches comprise wide and narrow patches, giving four types: narrow black, narrow white, wide black and wide white. The coding magnification δ is the ratio of the central angles of the wide and narrow patches.
The sector patches of each coding element consist of three wide patches and three narrow patches; the information contained in each coding element is determined by the distribution of the wide and narrow patches and the arrangement of black and white.
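As a concrete illustration, the central angles of one coding element can be sketched as follows; the 90-degree element span and the wide/narrow ordering are hypothetical parameters, while the three-wide/three-narrow composition and the ratio δ come from the description above.

```python
def patch_angles(delta=3.0, element_span=90.0, pattern=(1, 0, 1, 0, 1, 0)):
    """Central angle of every sector patch in one coding element.

    pattern: 1 = wide patch, 0 = narrow patch (three of each, per the text);
    the wide and narrow central angles keep the ratio delta.
    """
    n_wide = sum(pattern)
    n_narrow = len(pattern) - n_wide
    narrow = element_span / (n_wide * delta + n_narrow)
    return [delta * narrow if p else narrow for p in pattern]

# With delta = 3 and a 90-degree span: narrow patches get 7.5 degrees,
# wide patches get 22.5 degrees, and the six patches fill the span exactly.
print(patch_angles())
```

Increasing δ widens the wide patches relative to the narrow ones, which is exactly the knob the text uses to control the dictionary capacity.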
In step 12), the information of the coding dictionary includes numbers, letters and special symbols. The numbers and letters are used to generate coding information, while the special symbols occupy the blank area at the upper-left (fourth-quadrant) position of the coding mark and are used to mark the start and stop of the code and its calibration information.
The capacity of the coding dictionary is adjusted by adjusting the coding magnification δ.
Step 21) specifically comprises the following steps:
211) Generating a binary image: segment the input coded mark image with a given gray threshold to generate a binary image;
212) Extracting the edge contour: extract the edge information of the coding mark from the binary image with a Canny edge detection operator.
In step 23), the given radius is 0.6 times the design radius of the mark point.
In step 23), the search starts at an arbitrary angle; the longest white-gray-value interval found on the search path marks the start and stop of the code, and the width ratio of the one-dimensional barcode replaces the included-angle ratio of the coding elements.
Step 3) specifically comprises the following steps:
31) Obtain the translation between the reference image G1 of the original mark point and the measurement image G2 of the mark point whose rotation and translation angle are to be measured, specifically:
311) Perform a discrete Fourier transform on the reference image G1 and the measurement image G2 and compute the cross-power spectrum matrix Q(u, v):

Q(u, v) = S2(u, v)·S1*(u, v) / |S2(u, v)·S1*(u, v)|

S2(u, v) = S1(u, v)·exp{-i(u·x0 + v·y0)}

where S2(u, v) is the value, at the point corresponding to the pixel (x, y) of the image matrix G2, of the matrix S2 obtained by discrete Fourier transform; S1*(u, v) is the corresponding value of the conjugate matrix S1* of the matrix S1 obtained by discrete Fourier transform of G1; u and v are the abscissa and ordinate of the corresponding point in the Fourier matrix S of an image; and x0 and y0 are the displacement values, in the abscissa and ordinate directions, between corresponding points of the image matrices G1 and G2 before the discrete Fourier transform;
312) Express the cross-power spectrum matrix Q(u, v) in polar form to obtain the phase angle matrix ψ(u, v), that is:

ψ(u, v) = ∠Q(u, v) = u·x0 + v·y0

Carry out plane fitting with a random sample consensus algorithm; the fitted plane expression is:

a1·u + b1·v + c1 = d1

where a1, b1 and c1 are the slope parameters of the plane equation, with c1 = 0, and d1 is the constant term of the plane equation, d1 = ψ(u, v);
313) Solve for the slope parameters a1 and b1 of the plane equation to obtain the displacement values of the two mark-point images: x0 = a1 and y0 = b1.
32) Obtain the rotation-scale relation of the images through log-polar transformation, including the rotation angle θ0 and the scaling scale γ, specifically:
321) Perform log-polar conversion on the reference image G1 of the original mark point and the measurement image G2 of the mark point whose rotation and translation angle are to be measured;
322) Perform a discrete Fourier transform on the converted images to obtain the cross-power spectrum matrix Q(m, n) in log-polar coordinates, and obtain its phase angle matrix ψ(m, n):

ψ(m, n) = ∠Q(m, n) = m·ln γ + n·θ0

323) Fit a plane to the phase angle matrix ψ(m, n) with a random sample consensus algorithm, then solve for the slope parameters a2 and b2 of the plane equation, finally obtaining the rotation angle θ0 = b2 and the scaling scale γ = e^(a2) of the two mark-point images.
The unwrapping step before plane fitting is:
To eliminate the influence of cross-power spectrum noise on phase unwrapping, the Fourier-Mellin transform algorithm adopts a vector filtering algorithm for denoising: the real and imaginary parts of the cross-power spectrum are separated and each is denoised with small-window mean filtering; the arc tangent of the filtered matrices gives the denoised wrapped phase angle matrix, which is then unwrapped with a phase unwrapping algorithm based on minimum-cost network flow.
Compared with the prior art, the invention has the following advantages:
First, fast decoding: the invention proposes an arc search strategy based on a given radius, which realizes fast conversion from sector coding elements to a one-dimensional barcode, avoids the instability of extracting straight lines from the image, and achieves fast and robust decoding.
Second, controllable capacity: the invention takes the highly practical concentric-ring mark point as a prototype and adds a coding-element array design based on sector one-dimensional codes, enabling coded designs. The dimension of the coding-element dictionary can be customized according to actual needs, so the coding capacity can be reduced or expanded.
Third, accurate positioning: an improved Fourier-Mellin transform algorithm accurately extracts the rotation angle and translation distance of the mark points, yielding the positioning information of the photogrammetric mark points.
Fourth, improved rotation measurement precision: a random sample consensus (RANSAC) algorithm, instead of least squares, robustly estimates the parameters a and b of the plane model a·u + b·v − ψ(u, v) = 0; only data consistent with the unwrapped plane model are selected for phase offset estimation, and data affected by gross errors are eliminated as outliers in the robust estimation. This weakens the influence of such errors on least-squares estimation in the correlation process, improves the precision and stability of the offset values, and further improves the precision of rotation measurement.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a coded meta dictionary.
Fig. 3 shows two coded signatures using the same coding dictionary but containing different coding information.
Fig. 4 is a flow chart of the decoding process.
Fig. 5 is a comparison of results before and after image preprocessing, where fig. (5a) is an original image, (5b) is a binarization result, and (5c) is an edge extraction result.
Fig. 6 is an example of ellipse fitting results of an edge image.
Fig. 7 is a diagram of one-dimensional transcoding based on an arc search strategy of a given radius.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Embodiment:
The invention provides a coding, decoding and measuring method for a codable close-range photogrammetric mark, which comprises three core parts: the coding design of the mark, the decoding design of the mark, and the measurement of the mark's displacement and rotation.
1. Coding design of close-range photogrammetry mark
As shown in fig. 1, the coding design of the new close-range photogrammetric mark is divided into three main steps: determining coding multiplying power, generating a coding dictionary and generating a coding mark.
1.1 Setting the coding magnification
The core of the whole coding design scheme is the coding element based on the sector one-dimensional code. Fig. 2 shows samples of sector coding elements. Each sector coding element is an arrangement and combination of sector patches with four different included angles. The sector patches come in four types: black (narrow), white (narrow), black (wide), white (wide). The ratio of the central angles of the wide and narrow patches is called the coding magnification δ. The coding magnification directly determines the number of coding elements that a maximal coding mark can accommodate; in principle, the more coding elements a coding mark contains, the higher its coding capacity. The coding magnification of the elements shown in fig. 2 is δ = 3, i.e. the included angle of a wide patch is three times that of a narrow patch. Normally, each coding element consists of three wide patches and three narrow patches. The distribution of the wide and narrow patches and the arrangement of black and white determine the information contained in a coding element.
1.2 Generation of the coding dictionary
Several groups of coding elements containing different coding information constitute a coding-element dictionary (see fig. 2), in which different coding elements correspond to different coding information. The dictionary given in fig. 2 contains the digits 0-9, the letters A-F, and four special symbols (including + and -). In the design of the whole dictionary, the digits and letters are mainly used to generate the coded information, while the special symbols mainly identify the start/stop and calibration information of the code. The dictionary shown in fig. 2 contains 20 elements in total, but in practice the capacity of the coding dictionary can be customized by adjusting the coding magnification δ.
1.3 Generation of the coding mark
A combination of multiple coding elements constitutes a complete coding mark. Fig. 3 shows two coding marks using the same coding dictionary but containing different coding information. When no rotation has occurred, a blank area is reserved at the upper-left (fourth-quadrant) position of each coding mark to identify the start/stop and calibration information of the sector coding element sequence. In addition, a positioning mark point composed of four concentric circles sits at the center of the coding mark; it provides the initial position of the sector coding elements during decoding. The circular edges are extracted and determined by a least-squares fit of an ellipse equation. Meanwhile, the alternating black-and-white concentric design adds error redundancy: by extracting the center point several times, or by overall least-squares fitting, it overcomes the traditional mark point's susceptibility to noise and low image brightness during center extraction.
2. Decoding of close-range photogrammetric markers
As shown in FIG. 4, the decoding scheme of the present invention is also divided into three main steps: preprocessing of a coded image, extraction of a central point, conversion and decoding of one-dimensional coding.
2.1 preprocessing of coded images
The core purpose of preprocessing the coded image is to derive, from the conventional color/grayscale image of the coded mark, a binary image containing only edge information, to facilitate the subsequent center point extraction and decoding. Specifically, the preprocessing comprises two main steps: generating a binary image and extracting the edge contour.
A binary image is a digital image with only two possible values per pixel. Compared with the original input image, the binary image eliminates the adverse effects of image contrast and retains only the geometric information of the image. This scheme generates the binary image by segmentation with a given gray threshold θ: each pixel of the original input image matrix is traversed and judged against θ. Equation 1 gives the conversion from the original image G(x, y) to the binary image B(x, y):

B(x, y) = 1, if G(x, y) ≥ θ; otherwise B(x, y) = 0 (1)
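A minimal NumPy sketch of this threshold segmentation; the 0/1 output convention of the binary image is an assumption:

```python
import numpy as np

def binarize(G, theta=128):
    """Eq. (1): B(x, y) = 1 where G(x, y) >= theta, else 0."""
    return (np.asarray(G) >= theta).astype(np.uint8)

G = np.array([[10, 200],
              [128, 50]])
print(binarize(G))  # [[0 1]
                    #  [1 0]]
```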
After obtaining the binary image, the invention uses the Canny edge detection operator to extract the edge information of the coded mark from it. The Canny edge detection operator is a multi-stage edge detection algorithm developed by John F. Canny in 1986.
It mainly comprises the following steps:
1. Apply Gaussian filtering to smooth the image and remove noise.
2. Find the intensity gradients of the image.
3. Apply non-maximum suppression to eliminate spurious edge responses.
4. Apply a double-threshold method to determine potential boundaries.
5. Track the boundaries with a hysteresis technique. Fig. 5 shows the comparison of the original image, the binary image and the edge extraction result.
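The gradient and double-threshold stages (steps 2 and 4) can be sketched in NumPy as follows; Gaussian smoothing, non-maximum suppression and hysteresis (steps 1, 3 and 5) are omitted, so this is only a partial illustration of the pipeline:

```python
import numpy as np

def sobel_gradient(img):
    """Step 2: intensity gradient magnitude via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):           # cross-correlate both kernels
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)

def double_threshold(mag, lo, hi):
    """Step 4: classify pixels as strong (2), weak (1) or suppressed (0)."""
    out = np.zeros_like(mag, dtype=np.uint8)
    out[mag >= lo] = 1
    out[mag >= hi] = 2
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                  # vertical step edge
mag = sobel_gradient(img)
print(double_threshold(mag, 1.0, 3.0))
```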
2.2 extraction of center Point
After obtaining the edge image of the mark point, morphology-based erosion and dilation operations are used to process the edge image, eliminating edge trailing, edge burrs and similar artifacts produced during edge extraction. Then a least-squares fitting algorithm based on the ellipse equation is applied to the edge image. Equation set 2 gives the adjustment equations of the ellipse fit:
Ax′² + Bx′y′ + Cy′² + Dx′ + Ey′ + 1 = 0 (2.1)

V = AX − L, X = [A B C D E]ᵀ (2.2)

where each row of the design matrix A is [x′ᵢ², x′ᵢy′ᵢ, y′ᵢ², x′ᵢ, y′ᵢ] for an edge point (x′ᵢ, y′ᵢ), and L = [−1, −1, …, −1]ᵀ.
wherein the first row is the parametric equation of the ellipse and the second and third rows are the least-squares adjustment equations of the ellipse; x0 and y0 are the initial values of the adjustment, and θ is the eccentricity of the fitting result. Least-squares fitting with the ellipse formula, rather than the circle formula used for traditional mark points, effectively handles the deformation of circular patterns caused by projection errors and increases the fitting redundancy. Different initial values x0 and y0 yield different ellipse fitting results, and the residual vector V of each adjustment result indicates the validity of the ellipse fit. Ideally, the smaller the residual V, the more reliable the fitting result and the more likely it is to belong to the candidate set of concentric circles around the center point. In addition, the eccentricity θ obtained from the fitting result helps judge whether the fitted contour is a concentric circle corresponding to the central point. After screening the fitting results under different parameters, the common circle-center coordinate point (x′, y′) of the four best fits is taken as the central point of the coded mark point.
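The least-squares solve of equation set 2 can be sketched directly in NumPy; the design-matrix layout follows the conic equation, and the centered circle used as input is a hypothetical example (a radius-2 circle satisfies −(x² + y²)/4 + 1 = 0, i.e. A = C = −0.25):

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares fit of A x^2 + B x y + C y^2 + D x + E y + 1 = 0.

    Returns the conic parameters X = [A B C D E]^T and residuals V = AX - L.
    """
    A_design = np.column_stack([x**2, x * y, y**2, x, y])
    L = -np.ones(len(x))                       # move the constant 1 across
    X, *_ = np.linalg.lstsq(A_design, L, rcond=None)
    V = A_design @ X - L
    return X, V

t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
X, V = fit_ellipse(2.0 * np.cos(t), 2.0 * np.sin(t))
print(np.round(X, 6))  # ~[-0.25  0  -0.25  0  0]
```

A small residual vector V signals a valid fit, matching the screening criterion described above.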
2.3 decoding of encoded landmark points
After the center point of the coded mark is determined, an arc search strategy based on a given radius is applied in the decoding process. Taking the determined center point as the center, black and white gray values are searched on a circle of the given radius (0.6 times the design radius of the mark point), and the 360-degree search result is recorded to obtain the code converted into a one-dimensional barcode. Fig. 7 gives a schematic diagram of this search process. The search starts at an arbitrary angle, and the longest white-gray-value interval obtained on the search path marks the start and stop of the code. The width ratios of the one-dimensional barcode correspond, after conversion, to the included-angle ratios of the sector coding elements. Using the barcode width ratio in place of the included-angle ratio during decoding effectively overcomes the angle errors caused by projective distortion and the errors of extracting lines for angle measurement.
After the one-dimensional barcode is obtained by conversion, a group of four-dimensional vector values is derived from the relative width ratios and the black/white pixel values. Traversal matching against the included-angle ratio and black/white pixel values of each element of the coding dictionary then yields the complete coding information value and completes the decoding.
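The radius-based arc search and its run-length conversion to a one-dimensional barcode can be sketched as follows; the image indexing convention and the merging of the wrap-around run at the arbitrary start angle are assumptions:

```python
import numpy as np

def ring_profile(img, cx, cy, r, n=360):
    """Sample black/white values on a circle of radius r around (cx, cy)."""
    ang = np.deg2rad(np.arange(n))
    xs = np.clip(np.round(cx + r * np.cos(ang)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(ang)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def run_lengths(profile):
    """Collapse the 360-sample ring into (value, width) runs: the 1-D barcode."""
    vals, widths = [profile[0]], [1]
    for v in profile[1:]:
        if v == vals[-1]:
            widths[-1] += 1
        else:
            vals.append(v)
            widths.append(1)
    # the search starts at an arbitrary angle, so merge the wrap-around run
    if len(vals) > 1 and vals[0] == vals[-1]:
        widths[0] += widths.pop()
        vals.pop()
    return list(zip(vals, widths))
```

The resulting widths are the relative barcode widths matched against the dictionary's included-angle ratios.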
3. Rotation and displacement measurement based on improved Fourier-Mellin transform
After the decoding of the coded mark is completed, the rotation and translation of the mark point need to be solved. This scheme provides an improved Fourier-Mellin transform algorithm whose improvement is that, in the phase-difference fitting step of the Fourier-Mellin transform, the cross-power spectrum is fitted with a random sample consensus algorithm instead of the traditional least-squares algorithm, so the translation and rotation of the mark points can be estimated robustly against noise.
The core of the algorithm is the shift property of the Fourier transform: by the phase correlation principle, a shift g2(x, y) = g1(x + x0, y + y0) between images corresponds, at each point s(u, v) of the Fourier spectra, to a phase difference of the cross-power phase spectrum:

Q(u, v) = exp{−i·t0}, t0 = u·x0 + v·y0

where t0 is the phase difference in Fourier space. In phase space, the phase difference is expressed as an angle difference within 0-2π, i.e. the phase angle. The phase difference t0 determines the value of each element of the phase angle matrix, and either one can be computed from the other. After estimating the phase difference, the inverse Fourier transform can recover the displacement (x0, y0) in Euclidean space. The calculation of (x0, y0) is described in step (b).
Calculating the displacement and rotation of two images G1 and G2 comprises the following steps:
(a) Discrete Fourier transform of the images G1 and G2
The correspondence in the frequency domain after the Fourier transform and the cross-power spectrum matrix Q(u, v) can be expressed as:

S2(u, v) = S1(u, v)·exp{−i(u·x0 + v·y0)} (3.1)

Q(u, v) = S2(u, v)·S1*(u, v) / |S2(u, v)·S1*(u, v)| (3.2)

where S1 and S2 are the Fourier transforms of G1 and G2, and S1 and S1* are conjugate to each other. G1 is the reference image of the original mark point and G2 the measurement image of the mark point whose displacement and rotation angle are to be measured; S2(u, v) is the value, at the point corresponding to the pixel (x, y) of the image matrix G2, of the matrix S2 obtained by discrete Fourier transform; S1*(u, v) is the corresponding value of the conjugate matrix S1*; u and v are the abscissa and ordinate of the corresponding point in the Fourier matrix; x0 and y0 are the displacement values of corresponding points of G1 and G2 in the x- and y-directions;
(b) Obtain the displacement relation of the images by estimating the phase difference
Expressing the above equation in polar form gives:

ψ(u, v) = ∠Q(u, v) = u·x0 + v·y0 (4)

Here ψ(u, v) is the angle matrix of the real and imaginary parts of the Q(u, v) matrix, i.e. the phase angle matrix. Equation 4 in fact corresponds to the expression of a plane equation:

a·u + b·v + c = d (5)

where a, b and c are the slope parameters of the plane equation and d is the constant term, i.e. ψ(u, v); the parameter c is constantly equal to 0. Fitting the plane of equation 4 from the relation between Q(u, v) and (u, v) and solving for the slope parameters a and b yields the displacement values of the two mark-point images, x0 = a and y0 = b, without interpolation or an inverse Fourier transform. The specific plane fitting process is described in step (d).
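The translation estimation of steps (a)-(b) can be illustrated with NumPy; for compactness this sketch recovers the integer shift from the peak of the inverse-transformed cross-power spectrum (classical phase correlation) rather than the plane fit described above:

```python
import numpy as np

def estimate_translation(G1, G2):
    """Integer shift between G1 and G2 via the normalized cross-power spectrum.

    Sign convention: positive dx/dy mean G2 equals G1 shifted right/down.
    """
    S1, S2 = np.fft.fft2(G1), np.fft.fft2(G2)
    cross = S2 * np.conj(S1)
    Q = cross / (np.abs(cross) + 1e-12)      # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(Q))           # phase-correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = G1.shape                          # fold wrap-around to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy

rng = np.random.default_rng(0)
G1 = rng.random((64, 64))
G2 = np.roll(G1, (3, 5), axis=(0, 1))        # shift down 3, right 5
print(estimate_translation(G1, G2))          # → (5, 3)
```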
(c) Obtain the rotation-scale relation of the images through log-polar transformation
If the two mark-point images have, in addition to the displacement, a rotation angle θ0 and a scaling scale γ:

G2(x, y) = G1(γ(x·cos θ0 + y·sin θ0), γ(−x·sin θ0 + y·cos θ0)) (5.1)

Convert the image matrices G1 and G2 to log-polar coordinates:

G(x, y) = K(λ, θ) (5.2)

K2(λ, θ) = K1(λ + ln γ, θ + θ0) (5.3)

where K is the image matrix expressed in log-polar coordinates. The rotation and scaling relation is thus converted into a displacement relation (ln γ, θ0) in log-polar coordinates, and estimating the phase difference as in step (b) yields the rotation angle θ0 and the scaling scale γ: computing the cross-power spectrum of the discrete Fourier transforms of K1 and K2 by equation 3.2 gives an expression analogous to equation 4:
ψ(m, n) = ∠Q(m, n) = m·ln γ + n·θ0 (6)

where m and n are the abscissa and ordinate of the corresponding point of K in the Fourier matrix obtained after the discrete Fourier transform. Here ψ(m, n) is the angle matrix of the real and imaginary parts of the Q(m, n) matrix, i.e. the phase angle matrix. Equation 6 in fact corresponds to a plane equation analogous to equation 5:

a·m + b·n + c = d (7)

where a, b and c are the slope parameters of the plane equation and d is the constant term, i.e. ψ(m, n); the parameter c is constantly equal to 0. Fitting the plane of equation 6 from the relation between Q(m, n) and (m, n) and solving for the slope parameters a and b yields the rotation θ0 and the scale γ of the two mark-point images:

γ = e^a (8.1)

θ0 = b (8.2)

where e is the base of the natural logarithm. The specific plane fitting process is described in step (d).
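Equations 6-8 convert a shift of the log-polar magnitude images back into rotation and scale. Assuming the log-polar resampling has already produced K1 and K2 (simulated below by a circular shift), the recovery can be sketched as:

```python
import numpy as np

def rotation_scale(K1, K2, logr_step):
    """Recover theta0 (degrees) and gamma from log-polar magnitude images.

    Rows index log-radius (bin size logr_step), columns index angle in
    degrees; a shift along each axis maps to scale/rotation (Eqs. 6-8).
    The sign convention depends on the resampling orientation.
    """
    S1, S2 = np.fft.fft2(K1), np.fft.fft2(K2)
    cross = S2 * np.conj(S1)
    corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    d_logr, d_theta = np.unravel_index(np.argmax(corr), corr.shape)
    n_logr, n_theta = K1.shape
    if d_logr > n_logr // 2:
        d_logr -= n_logr
    if d_theta > n_theta // 2:
        d_theta -= n_theta
    theta0 = d_theta * (360.0 / n_theta)
    gamma = float(np.exp(d_logr * logr_step))
    return theta0, gamma

rng = np.random.default_rng(1)
K1 = rng.random((64, 360))
K2 = np.roll(K1, (2, 30), axis=(0, 1))       # 30-degree rotation, 2-bin scale
print(rotation_scale(K1, K2, logr_step=0.05))
```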
(d) Estimate phase angles by robust plane fitting
The phase angle (phase difference) matrix of the cross-power spectrum obtained after the Fourier transform of the images theoretically corresponds to a plane model wrapped by 2π phases along the row-column directions. The traditional method fits the two-dimensional plane directly with least-squares estimation, which can only solve offsets between −0.5 and 0.5 pixel without unwrapping the phase angles, relying on the integer-pixel part being error-free. Before plane fitting, the phase angle matrix therefore needs to be unwrapped. To eliminate the influence of cross-power spectrum noise on phase unwrapping, the improved Fourier-Mellin scheme denoises with a vector filtering algorithm: the real and imaginary parts of the cross-power spectrum Q(u, v) are separated, each is denoised with small-window mean filtering, and the arc tangent of the filtered matrices yields the denoised wrapped phase angle matrix. The scheme unwraps the phase angle matrix with a phase unwrapping algorithm based on minimum-cost network flow (MCNF), achieving a better and faster unwrapping effect.
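The vector filtering step, separating real and imaginary parts, mean-filtering each, then taking the arc tangent, can be sketched as follows (the 3x3 window size is an assumption):

```python
import numpy as np

def vector_filter_phase(Q, k=3):
    """Denoised wrapped phase angle of a cross-power spectrum Q.

    Re and Im are mean-filtered separately in a k x k window (edge-padded),
    then recombined through arctan2 into the wrapped phase angle matrix.
    """
    def box_mean(M):
        pad = np.pad(M, k // 2, mode="edge")
        h, w = M.shape
        out = np.zeros((h, w))
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + h, dx:dx + w]
        return out / (k * k)

    return np.arctan2(box_mean(Q.imag), box_mean(Q.real))
```

Filtering the components rather than the angles themselves avoids artifacts at the 2π wrap boundaries, which is the point of the vector (complex-domain) formulation.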
To address the shortcomings of the plane-fitting phase-correlation algorithm, plane-fitting phase correlation is combined with the random sample consensus (RANSAC) robust estimation algorithm: RANSAC replaces the least-squares algorithm in robustly estimating the parameters a and b of the plane model au + bv - ψ(u, v) = 0. Only data consistent with the unwrapped plane model are used to estimate the phase offset, and data affected by deviations are rejected as gross errors in the robust estimation. This weakens the influence of such deviations on the least-squares estimate in the correlation process, improves the precision and stability of the offset values, and thereby improves the precision of the rotation measurement.
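The RANSAC replacement for least squares can be sketched as follows. The phase matrix is assumed already unwrapped; the iteration count, inlier tolerance and the two-point minimal sample (two unknowns for a plane through the origin) are illustrative assumptions, as are the function and parameter names.

```python
import numpy as np

def ransac_plane(u, v, psi, iters=200, tol=0.05, seed=0):
    """Robustly estimate (a, b) in the plane model a*u + b*v = psi(u, v)
    with RANSAC: repeatedly fit (a, b) exactly from 2 random samples, keep
    the model with the most inliers, then refit by least squares on the
    inliers only. iters/tol are assumed tuning values, not from the patent."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([u, v])
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(u), size=2, replace=False)
        try:
            ab = np.linalg.solve(A[idx], psi[idx])   # exact fit of 2 points
        except np.linalg.LinAlgError:
            continue                                  # degenerate sample
        inliers = np.abs(A @ ab - psi) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit using consistent data only
    a, b = np.linalg.lstsq(A[best_inliers], psi[best_inliers], rcond=None)[0]
    return a, b
```

Points corrupted by unwrapping residues fall outside the inlier band and never reach the final least-squares refit, which is the mechanism by which the gross errors are "removed" in the robust estimation.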
On the basis of an analysis of the characteristics of the existing bar-coded and concentric-ring coded mark points, the invention provides a new coded-mark design scheme. The scheme takes the highly practical concentric-ring mark point as a prototype and adds a coding-element array design based on sector-shaped one-dimensional codes, enabling the code design. The scheme further provides an arc search strategy at a given radius, realizing fast conversion from the sector coding elements to a one-dimensional barcode, avoiding the instability of straight-line extraction in the image, and achieving fast and stable decoding. Finally, to obtain the relative geometric positioning information of the coded mark points (i.e., rotation and displacement), the scheme provides a displacement and rotation measurement algorithm based on an improved Fourier-Mellin transform, which directly obtains the displacement and rotation of the mark points quickly and effectively under different illumination and noise conditions.
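The arc search strategy described above can be sketched as follows: sample the gray values on a circle of the given radius about the fitted centre, then collapse the circular black/white sequence into run lengths, rotating so the longest white run (the start-stop indicator) comes first. Nearest-neighbour sampling, the gray threshold of 128 and the sample count are illustrative assumptions.

```python
import numpy as np

def arc_search(img, cx, cy, radius, samples=720):
    """Sample the mark image along a circle of the given radius about the
    fitted centre (cx, cy), yielding the black/white sequence that plays
    the role of a one-dimensional barcode. Sketch only."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return (img[ys, xs] > 128).astype(np.uint8)   # 1 = white, 0 = black

def runs(bits):
    """Collapse the circular bit sequence into (value, length) runs,
    merging the wrap-around run and rotating so the longest white run
    (the assumed start-stop indicator) comes first."""
    change = np.flatnonzero(np.diff(bits)) + 1
    rl = [(int(s[0]), len(s)) for s in np.split(bits, change)]
    if len(rl) > 1 and rl[0][0] == rl[-1][0]:     # merge wrap-around run
        rl[0] = (rl[0][0], rl[0][1] + rl.pop()[1])
    white = [i for i, (v, _) in enumerate(rl) if v == 1]
    start = max(white, key=lambda i: rl[i][1]) if white else 0
    return rl[start:] + rl[:start]
```

The run-length ratios produced by `runs` correspond to the included-angle ratios of the sector patches, so matching against the coding dictionary reduces to comparing width ratios rather than extracting lines or angles from the image.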

Claims (9)

1. An encoding, decoding and measuring method for an encodable close-range photogrammetric mark, comprising the steps of:
1) and (3) an encoding process:
11) setting of coding magnification: constructing a plurality of coding elements of the sector one-dimensional code, and setting the coding multiplying power of the coding elements;
12) and (3) generation of a coding dictionary: forming a coding dictionary by a plurality of coding elements corresponding to different coding information;
13) generation of a coding mark: combining a plurality of coding elements to form a complete coding mark;
2) and (3) decoding process:
21) preprocessing a coded image, namely acquiring a binary image only containing edge information from an input coded mark image;
22) extracting a central point: acquiring the central point of the coding mark by adopting a least square ellipse fitting adjustment algorithm according to the binary image only containing the edge information;
23) conversion to and decoding of the one-dimensional barcode: adopting an arc search method at a given radius, namely, with the extracted central point as the centre, performing a 360-degree search of the black and white gray values at the given radius to convert the coding mark into the code of a one-dimensional barcode, and then performing traversal matching against the included-angle ratio and the black and white pixel values corresponding to each coding element of the coding dictionary to obtain the complete coding information value and complete the corresponding decoding;
3) the measurement process comprises the following steps:
after decoding is finished, an improved Fourier-Mellin transform algorithm is adopted to solve the rotation scale relation and the translation amount generated on the positioning mark points of the coding elements on the two images, and the method specifically comprises the following steps:
31) obtaining the translation amount between the reference image G1 of the original mark point and the measurement image G2 of the mark point whose rotation and translation angles are to be measured, specifically:
311) performing the discrete Fourier transform on the reference image G1 of the original mark point and on the measurement image G2 of the mark point whose rotation and translation angles are to be measured, and calculating the cross-power spectrum matrix Q(u, v), whose calculation formula is:
Q(u, v) = S2(u, v)·S1*(u, v) / |S2(u, v)·S1*(u, v)|
S2(u, v) = S1(u, v)·exp{-i(ux0 + vy0)}
wherein S2(u, v) is the value at the corresponding point of the matrix S2 obtained by the discrete Fourier transform of the pixel with coordinates (x, y) in the image matrix G2; S1*(u, v) is the value at the corresponding point of the conjugate matrix S1* of the matrix S1 obtained by the discrete Fourier transform of the pixel with coordinates (x, y) in the image matrix G1; u and v are respectively the abscissa and ordinate of the corresponding point in the Fourier matrix S of the image after the discrete Fourier transform; x0 and y0 are the displacement values, in the abscissa and ordinate directions respectively, between corresponding points of the image matrices G1 and G2 before the discrete Fourier transform;
312) converting the cross-power spectrum matrix Q(u, v) to polar (modulus-argument) form to obtain the phase-angle matrix ψ(u, v), that is:
ψ(u, v) = ∠Q(u, v) = ux0 + vy0
carrying out plane fitting by adopting a random sampling consistency algorithm to obtain a fitted plane expression as follows:
a1u+b1v+c1+d1=0
wherein a1, b1 and c1 are the slope parameters of the plane equation, with c1 = 0, and d1 is the constant term of the plane equation, with d1 = ψ(u, v);
313) solving for the slope parameters a1 and b1 of the plane equation to obtain the displacement values of the two mark-point images: x0 = a1 and y0 = b1;
32) obtaining the rotation-scale relation of the images through log-polar coordinate transformation, including the rotation angle θ0 and the scaling scale γ, specifically:
321) performing log-polar coordinate transformation on the reference image G1 of the original mark point and on the measurement image G2 of the mark point whose rotation and translation angles are to be measured;
322) performing the discrete Fourier transform on the log-polar-transformed images to obtain the cross-power spectrum matrix Q(m, n) in log-polar coordinates, and obtaining the phase-angle matrix ψ(m, n) of the matrix Q(m, n):
ψ(m, n) = ∠Q(m, n) = m·ln γ + n·θ0
323) fitting a plane to the phase-angle matrix ψ(m, n) by the random sample consensus algorithm, then solving for the final slope parameters a2 and b2 of the plane equation, and finally obtaining the rotation angle θ0 = b2 of the two mark-point images and the scaling scale
γ = e^(a2)
2. The method as claimed in claim 1, wherein in step 11), each coding element of the sector one-dimensional code is formed by arranging and combining a positioning mark point and four sector patches with different included angles; the positioning mark point consists of a plurality of concentric circles of alternating black and white to realize initial positioning; the sector patches comprise wide patches and narrow patches, specifically narrow black, narrow white, wide black and wide white; and the coding magnification δ is the ratio of the sizes of the central angles corresponding to the wide and narrow patches.
3. The method as claimed in claim 2, wherein the sector-shaped patches of each code element are composed of three wide patches and three narrow patches, and each code element contains information determined by the distribution of the wide and narrow patches and the black and white color configuration.
4. The method as claimed in claim 1, wherein in step 12), the information of the coding dictionary comprises numbers, letters and special symbols; the numbers and letters are used to generate the coded information, and the special symbol is a blank area at the upper-left quadrant of the coding element used to identify the start-stop and calibration information of the code.
5. The method as claimed in claim 1, wherein the code dictionary is adjustable in capacity by adjusting the coding magnification δ.
6. The method as claimed in claim 1, wherein the step 21) comprises the following steps:
211) generation of a binary image: dividing the input coding mark image by a given gray threshold value to generate a binary image;
212) extracting an edge contour: and extracting the edge information of the coding mark from the binary image through a Canny edge detection operator.
7. The method as claimed in claim 1, wherein the given radius in step 23) is 0.6 times the radius of the patch.
8. The method as claimed in claim 1, wherein the search in step 23) starts at an arbitrary angle; the maximum white gray-value interval found on the search path is used as the indicator of the start-stop of the code, and the bar-width ratio of the one-dimensional barcode is used in place of the included-angle ratio of the coding elements.
9. The method of claim 1, wherein the step of unwrapping before the plane fitting comprises:
in order to eliminate the influence of noise in the cross-power spectrum data on the phase unwrapping, the Fourier-Mellin transform algorithm denoises with a vector filtering algorithm: the real and imaginary parts of the cross-power spectrum are separated, each is smoothed by small-window mean filtering, the denoised wrapped phase-angle matrix is obtained from the arctangent of the filtered matrices, and the phase-angle matrix is then unwrapped by a phase unwrapping algorithm based on minimum cost network flow.
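A compact sketch of the cross-power spectrum of claim step 311). For a quick self-check, the classic inverse-FFT peak is used here to read off the integer displacement; the patent itself estimates (x0, y0) by robustly fitting a plane to the phase of Q (steps 312-313). The epsilon guard and function names are assumptions of this sketch.

```python
import numpy as np

def cross_power_spectrum(g1, g2, eps=1e-12):
    """Q(u, v) = S2 * conj(S1) / |S2 * conj(S1)| for images g1, g2
    (claim step 311). eps guards the division; sketch only."""
    S1, S2 = np.fft.fft2(g1), np.fft.fft2(g2)
    P = S2 * np.conj(S1)
    return P / (np.abs(P) + eps)

def translation_by_peak(g1, g2):
    """Classic phase-correlation shift estimate: the inverse FFT of Q
    peaks at the displacement (x0, y0). Shown as a compact check; the
    patent instead fits a plane to the phase of Q (steps 312-313)."""
    corr = np.abs(np.fft.ifft2(cross_power_spectrum(g1, g2)))
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)
    return x0, y0  # integer shift, modulo the image size

# shifted copy of a random pattern: 5 rows down, 3 columns right
rng = np.random.default_rng(1)
g1 = rng.random((64, 64))
g2 = np.roll(g1, shift=(5, 3), axis=(0, 1))
```

The peak method only resolves integer shifts; the plane-fitting formulation of the claims recovers the sub-pixel part as the slope of the phase plane, which is why the patent combines it with unwrapping and RANSAC.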
CN201810803118.6A 2018-07-20 2018-07-20 Coding decoding and measuring method for codable close-range photogrammetric mark Active CN109186550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810803118.6A CN109186550B (en) 2018-07-20 2018-07-20 Coding decoding and measuring method for codable close-range photogrammetric mark


Publications (2)

Publication Number Publication Date
CN109186550A CN109186550A (en) 2019-01-11
CN109186550B true CN109186550B (en) 2021-03-12

Family

ID=64936559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810803118.6A Active CN109186550B (en) 2018-07-20 2018-07-20 Coding decoding and measuring method for codable close-range photogrammetric mark

Country Status (1)

Country Link
CN (1) CN109186550B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210431B (en) * 2019-06-06 2021-05-11 上海黑塞智能科技有限公司 Point cloud semantic labeling and optimization-based point cloud classification method
CN112432612B (en) * 2020-10-22 2022-08-16 中国计量科学研究院 High-precision micro rotation angle measuring method based on monocular vision
CN113129394B (en) * 2020-12-23 2022-09-06 合肥工业大学 Parallelogram coding mark based on region division coding and coding method thereof
CN113538483B (en) * 2021-06-28 2022-06-14 同济大学 Coding and decoding method and measuring method of high-precision close-range photogrammetry mark

Citations (5)

Publication number Priority date Publication date Assignee Title
DE10347608A1 (en) * 2003-10-09 2005-05-19 Daimlerchrysler Ag Reference mark for photogrammetry/metrophotography has elements for fixing the center of the mark and for variable encoding of data on a circle around the center of the mark
CN102609979A (en) * 2012-01-17 2012-07-25 北京工业大学 Fourier-Mellin domain based two-dimensional/three-dimensional image registration method
CN104154904A (en) * 2014-07-18 2014-11-19 上海珞琪软件有限公司 Circular coded mark based on graphic topology relationship and identification method of circular coded mark
CN105760879A (en) * 2016-01-14 2016-07-13 西安电子科技大学 Fourier-Mellin transform-based image geometric matching method
CN106682689A (en) * 2016-12-16 2017-05-17 西安汇明光电技术有限公司 Image matching method based on multiscale Fourier-Mellin transform


Non-Patent Citations (1)

Title
Automatic detection of mark points in close-range photogrammetry; Wei Peng; Metrology & Measurement Technique; 2017-08-30; Vol. 44, No. 8; pp. 9-12 *


Similar Documents

Publication Publication Date Title
CN109186550B (en) Coding decoding and measuring method for codable close-range photogrammetric mark
CN111160337B (en) Automatic identification method, system, medium and equipment for reading of pointer instrument
US8498467B2 (en) Method for processing a three-dimensional image of the surface of a tire so that it can be used to inspect the said surface
CN109903313B (en) Real-time pose tracking method based on target three-dimensional model
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
WO2019042232A1 (en) Fast and robust multimodal remote sensing image matching method and system
CN108764004B (en) Annular coding mark point decoding and identifying method based on coding ring sampling
CN112465912B (en) Stereo camera calibration method and device
CN109215016B (en) Identification and positioning method for coding mark
CN111402330B (en) Laser line key point extraction method based on planar target
CN106296587B (en) Splicing method of tire mold images
JP6188052B2 (en) Information system and server
CN115880373A (en) Calibration plate and calibration method of stereoscopic vision system based on novel coding characteristics
CN112734729A (en) Water gauge water level line image detection method and device suitable for night light supplement condition and storage medium
CN108537832A (en) Method for registering images, image processing system based on local invariant gray feature
Xia et al. A table method for coded target decoding with application to 3-D reconstruction of soil specimens during triaxial testing
Huang et al. Multimodal image matching using self similarity
CN109191489B (en) Method and system for detecting and tracking aircraft landing marks
CN113538404A (en) Cover plate glass defect detection method and system based on anchor point selection
Luft et al. Content-based Image Retrieval for Map Georeferencing
Hancer et al. A hybrid method to the reconstruction of contour lines from scanned topographic maps
Guo et al. Automatic shape-based target extraction for close-range photogrammetry
CN104637055B (en) A kind of high precision image matching process based on small scale features point
CN113096163B (en) Satellite-borne SAR image high-precision matching method without priori lifting rail information
CN113706620B (en) Positioning method, positioning device and movable platform based on reference object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant