CN115797185B - Coordinate conversion method based on image processing and complex sphere - Google Patents

Coordinate conversion method based on image processing and complex sphere

Info

Publication number
CN115797185B
CN115797185B (application CN202310080890.0A)
Authority
CN
China
Prior art keywords
image
coordinate
dimensional
conversion
coordinate conversion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310080890.0A
Other languages
Chinese (zh)
Other versions
CN115797185A (en)
Inventor
刘振丰
张婷婷
赵波
敬志远
罗世杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jingwu Track Traffic Technology Co ltd
Original Assignee
Sichuan Jingwu Track Traffic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jingwu Track Traffic Technology Co ltd filed Critical Sichuan Jingwu Track Traffic Technology Co ltd
Priority to CN202310080890.0A priority Critical patent/CN115797185B/en
Publication of CN115797185A publication Critical patent/CN115797185A/en
Application granted granted Critical
Publication of CN115797185B publication Critical patent/CN115797185B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a coordinate conversion method based on image processing and the complex sphere, which comprises image recognition, image conversion, and coordinate conversion. Image recognition matches the pixel two-dimensional coordinate information of the feature points of the template image and the intrusion image and finds the matching points; the matching points are subjected to epipolar constraint, and the two-dimensional coordinates of the shooting device are obtained by the image conversion method; coordinate conversion converts the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system using the coordinate conversion principle of the complex sphere, and obtains the rotation and tilt angles required for the measuring device to point at the three-dimensional coordinates. By means of the standard two-dimensional shooting device coordinates corresponding to the pixel coordinates under the preset template image, the invention realizes the conversion from two-dimensional coordinates to three-dimensional coordinates, and the three-dimensional coordinate information enables automatic alignment of the measuring optical machine in the measuring system, so that the whole measuring process is simple and convenient.

Description

Coordinate conversion method based on image processing and complex sphere
Technical Field
The invention belongs to the technical field of limit (clearance) measurement and detection, and particularly relates to a coordinate conversion method based on image processing and the complex sphere for limit measurement and detection.
Background
The era has developed rapidly, and vehicles have changed most visibly: from the horse-drawn carriage of the past to today's aircraft and high-speed rail, each can be called a representative product at the forefront of its era. Urban development is accelerating, cities are expanding internally, and communication between cities keeps strengthening, which has been accompanied by the rapid development of light, fast rail transit such as subways and high-speed rail.
In rail transit, trains run at high speed along a fixed track, and this usually has to take place within a specific space whose size is the so-called limit. Taking the subway limit as an example of its specific definition: combining the outline dimensions of the subway vehicle, the relevant technical parameters of the vehicle, its running dynamic performance, and the relevant conditions of the track and the contact net or contact rail, and designing according to equipment and installation errors and a specified calculation method, the limit is the contour line that the running vehicle, and the structures around the track area, must not exceed. It ensures the profile for safe subway operation, limits the cross-section size of the vehicle and the installation size of equipment along the line, and determines the effective clearance size of the building structure. Since such a specification exists, measurements must be made to determine whether it is met. Current limit measuring methods are of two types: contact and non-contact.
Contact measurement appeared earlier. Accurate data for a limit point on a cross-section can be obtained by means of a probe and a protractor, and the data can be recorded manually or by an optical encoder and a potentiometer. The method is low-cost and simple to use, with a static accuracy as high as ±0.5 mm, but the workload is large, much manual intervention is needed, the measurement speed is low, and not all points on a cross-section can be measured.
The non-contact type serves the same purpose as the contact type, but often adopts a principle similar to optical speed measurement, namely triangulation based on a laser and optical readout; its disadvantage is that the material cost and the labor cost are high.
Among these two main methods, equipment that performs autonomous measurement with mechanical assistance often relies on optical imaging technology: for example, several groups of special 3D depth cameras are used to determine the position of the camera module, the camera optical imaging principle is then used to reconstruct the measured cross-section outline in three dimensions (e.g., as a three-dimensional point cloud), and the specific coordinates of the measurement points are obtained to derive the rotation angle of the measuring optical machine.
Disclosure of Invention
The invention aims to provide a coordinate conversion method based on image processing and the complex sphere, so as to overcome the defects of the prior art that the contact method involves a large workload and the non-contact method has a high detection cost.
In order to solve the technical problems, the invention adopts the following technical scheme:
the method based on image processing and coordinate conversion of complex sphere comprises image recognition, image conversion and coordinate conversion;
and (3) image identification: presetting a template image; acquiring an intrusion image;
performing feature recognition on the intrusion image and the template image to obtain two-dimensional coordinate information of the feature point pixels;
matching the two-dimensional coordinate information of the feature point pixels of the intrusion image and the template image and finding out a plurality of groups of matching points with similarity higher than a set value;
the method comprises the steps that an intrusion image is obtained by a shooting device, and a template image is a known image;
image conversion: performing epipolar constraint on the matching points;
according to the essence matrix
Figure SMS_1
And base matrix->
Figure SMS_2
Decomposing to obtain shooting device change->
Figure SMS_3
、 />
Figure SMS_4
Two-dimensional coordinates on an intrusion image
Figure SMS_5
Conversion of camera motion change to template image
Figure SMS_6
Placing the shooting device at a position, which is right in front of the measuring frame and is away from the center point of the surface plane structure of the measuring frame, with the length of D, and keeping the shooting device and the measuring device as the same horizontal line;
the two-dimensional coordinates of the shooting device under the dimension D can be expressed as follows by a matrix relation:
Figure SMS_7
wherein
Figure SMS_8
Two-dimensional coordinates of the shooting device;
coordinate conversion: the method comprises static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
judging whether the measurement state is static or dynamic, and if the measurement state is static, executing static coordinate conversion;
if the dynamic coordinate transformation is dynamic, executing the dynamic coordinate transformation;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system by utilizing a spherical coordinate conversion principle of a complex spherical surface, and obtaining a rotation dip angle required by the measuring device to align the three-dimensional coordinates;
transmitting the rotation tilt data to a measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex spherical surface, and obtaining a rotation dip angle required by the measuring device to align the three-dimensional coordinates;
the rotational tilt data is transferred to the measuring device.
Further, the shooting device adopts a camera with calibrated intrinsic parameters; in the detection environment, a standard preset template image is obtained by shooting under the preset template image acquisition requirements; when feature points are identified, an image pyramid is constructed; after the shooting device moves, a matching image from before the movement is found in the image pyramid; meanwhile, when feature points are identified, the gray centroid method is used to introduce a rotation characteristic.
Further, in the gray centroid calculation method: a small image block A is first selected and the moments of the image block are defined; the centroid of the image block is found from the moments, and the geometric center O and the centroid C of the image block A are connected to obtain a direction vector \vec{OC}; the direction of the feature point can be determined by this direction vector.
Further, after the template image is obtained at the distance D, feature point identification is performed on the two images by a feature operator with scale and rotation invariance, and a group of successfully matched corresponding feature point pairs is then obtained by brute-force matching;
epipolar constraint is performed on the feature point pairs to obtain the rotation and translation relation of the two images.
Further, a spherical coordinate system based on the complex sphere principle is established to represent three-dimensional world coordinates: the fixed measuring optical machine is taken as the north pole, the foot of the perpendicular intersecting the plane of the camera shooting template is taken as the circle center, and the distance r from the north pole to the circle center is the radius. The sphere S is expressed as
S: x^2 + y^2 + z^2 = r^2;
the camera template plane is recorded as
Π: z = 0;
the north pole N has the coordinates N = (0, 0, r). For any point z on Π, the straight line joining N and z intersects the sphere S at a point P.
Further, in the static coordinate conversion, the horizontal rotation angle and the vertical rotation angle required for the measuring device to point at P are obtained by vectors; since N, z, and P are collinear, for z = (x_1, y_1) they can be written as
\alpha = \arctan\frac{y_1}{x_1}, \qquad \beta = \arctan\frac{\sqrt{x_1^2 + y_1^2}}{r}.
Further, in the dynamic coordinate conversion, the direction vector required after the displacement of the measuring device is obtained according to the displacement L of the measuring device and the principle of similar triangles:
\vec{v} = (x_1, y_1, -(r + L));
the horizontal rotation angle and the vertical rotation angle required for the measuring optical machine to point at the point P are
\alpha = \arctan\frac{y_1}{x_1}, \qquad \beta = \arctan\frac{\sqrt{x_1^2 + y_1^2}}{r + L}.
Compared with the prior art, the invention has the following beneficial effects:
Using existing image processing principles, a camera calibrated with intrinsic parameters obtains a standard preset template picture under the template image acquisition requirements. By means of the standard two-dimensional camera coordinates corresponding to the pixel coordinates under the preset template image, the conversion from two-dimensional to three-dimensional coordinates is realized, and the three-dimensional coordinate information enables automatic alignment of the measuring optical machine in the measuring system, so that the whole measuring process is simple and convenient.
The intrusion coordinate information can be transferred to the template image by feature point matching, and the two-dimensional coordinate information in the template image is converted into the two-dimensional coordinate information of the camera through the epipolar constraint; in these two ways the intrusion image is connected with the camera in the measuring system. Because feature point matching is used, the requirements on the intrusion image at acquisition time are not high: when the measuring equipment is installed, the intrusion image only needs to be acquired somewhere around the specified position, and image conversion turns an image acquired within this fuzzy range into relatively accurate coordinate information through image processing. This simplifies the overall work before measurement and reduces workload and human error, and the preset template image can be reused for the same environment, so the measuring process is repeatable.
The invention directly converts the two-dimensional coordinates in the camera into the rotation and tilt angles required for aligning the measuring optical machine by the complex sphere conversion principle, simplifying the whole process from two-dimensional to three-dimensional coordinates. The accuracy after conversion can be effectively improved through the two modes of static and dynamic coordinate conversion, i.e., automatic alignment of the measuring optical machine can be effectively ensured.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the protection scope of the invention.
Examples
The coordinate conversion method based on image processing and the complex sphere comprises image recognition, image conversion, and coordinate conversion.
Before automatic alignment can be achieved by clicking the screen, two-dimensional coordinates in the image need to be established, because for the camera imaging system the image pixel coordinates must be converted into real-world coordinates in space, which is normally done through calibration. The intrinsic parameters are stable once calibrated, but the extrinsic parameters of a camera system change with the camera position and the environment of use. This means that, before each measurement using the camera imaging principle, calibration would have to be performed again, which is a complex and error-prone process. Therefore, relying on existing image processing principles, a camera calibrated with intrinsic parameters obtains a standard preset template picture under the preset template image acquisition requirements, so that the standard two-dimensional camera coordinates under the preset template image can be obtained through the intrinsic parameters.
Image recognition: presetting a template image; acquiring an intrusion image;
performing feature recognition on the intrusion image and the template image to obtain two-dimensional coordinate information of the feature point pixels;
matching the two-dimensional coordinate information of the feature point pixels of the intrusion image and the template image and finding out a plurality of groups of matching points with similarity higher than a set value.
The limit intrusion image is obtained by a shooting device, the template image is a known image, and the shooting device adopts a camera.
First, the two-dimensional pixel coordinate information of the feature points of the intrusion image and the template image is identified, and then this information is matched. Brute-force matching is adopted: the descriptor of each feature in the intrusion image is compared with all feature descriptors in the template image, the similarity between the two descriptors is calculated (typically as a distance between the descriptors, e.g., the Euclidean distance), and the pair of matching points with the highest similarity is returned as the final matching result, as illustrated in the sketch below. When a group of corresponding matching points has been found, the spatial positions represented by the successfully matched pixel points can essentially be considered the same in the preset template image and the intrusion image, so the relation between the whole images can be obtained from the relation between the matching points; through this relation, points of unknown condition on the intrusion image are converted into the corresponding points under the known conditions of the preset template image.
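As a concrete illustration of the matching step above, the following is a minimal sketch using OpenCV's ORB detector and brute-force matcher; the file names and the distance threshold are hypothetical, not taken from the patent. Note that ORB's binary descriptors are compared with the Hamming distance rather than the Euclidean distance mentioned above.

```python
import cv2

# Hypothetical input files: the preset template image and the intrusion image.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
intrusion = cv2.imread("intrusion.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB feature points and compute their binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(intrusion, None)
kp2, des2 = orb.detectAndCompute(template, None)

# Brute-force matching: each descriptor of the intrusion image is compared
# with all descriptors of the template image.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Keep matches whose similarity exceeds a set value
# (smaller distance = higher similarity; the threshold 40 is illustrative).
good = [m for m in matches if m.distance < 40]
pts_intrusion = [kp1[m.queryIdx].pt for m in good]
pts_template = [kp2[m.trainIdx].pt for m in good]
```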
Performing epipolar constraint on the matching points: according to the essential matrix E = t^{\wedge}R and the fundamental matrix F = K^{-T}EK^{-1}, decomposing to obtain the shooting device motion R, t, and converting the two-dimensional coordinates (x_1, y_1) on the intrusion image to (x_2, y_2) on the template image through the camera motion.
In detail, the epipolar constraint is used to obtain the motion between the intrusion image and the template image; let the motion from the intrusion image to the template image be R, t. The camera intrinsic matrix K is known. For a certain point P in space, its spatial coordinates in the intrusion image frame are P = [X, Y, Z]^T. According to the camera model, P corresponds to the pixel points p_1 and p_2 in the two images, with the relations
s_1 p_1 = K P, \qquad s_2 p_2 = K(R P + t),
from which the epipolar constraint
p_2^T K^{-T} t^{\wedge} R K^{-1} p_1 = 0
is obtained, wherein x_1 = K^{-1} p_1 and x_2 = K^{-1} p_2 are the coordinates on the normalized plane, E = t^{\wedge} R is the essential matrix, and F = K^{-T} E K^{-1} is the fundamental matrix. According to the essential matrix E and the fundamental matrix F, the camera motion R, t is obtained by decomposition.
After the camera motion relation is obtained, the two-dimensional coordinates (x_1, y_1) on the intrusion image are converted to (x_2, y_2) on the template image through the camera motion. Specifically, the camera system is placed directly in front of the measuring equipment frame, at a distance D from the center point of the frame surface plane and on the same horizontal line as the measuring optical machine; the two-dimensional coordinates of the camera at the distance D can be represented by the matrix relation
(X, Y, D)^T = D\,K^{-1}(u, v, 1)^T,
wherein (X, Y) are the camera two-dimensional coordinates. The shooting device adopts a camera, and the measuring device adopts the measuring optical machine.
Let the motion from the intrusion image to the template image be R (rotation matrix) and t (translation vector).
The camera intrinsic matrix K is known. For a point P in space, its spatial coordinates in the first image frame are
P = [X, Y, Z]^T.
According to the camera model, P corresponds to the pixel points p_1 and p_2 in the two images, with the relations
s_1 p_1 = K P, \qquad s_2 p_2 = K(R P + t),
wherein s_1 and s_2 are the corresponding scale (depth) factors.
According to the imaging relation, s p and p describe the same projection, so they are equal in the sense of scale (projective equality, written ≃); thus s_1 p_1 ≃ p_1 and s_2 p_2 ≃ p_2, and the two relations can be rewritten as
p_1 ≃ K P, \qquad p_2 ≃ K(R P + t).
Let
x_1 = K^{-1} p_1, \qquad x_2 = K^{-1} p_2
denote the coordinates of the two pixel points on the normalized plane; substituting into the above formulas and simplifying gives
x_2 ≃ R x_1 + t.
Left-multiplying both sides by the antisymmetric matrix t^{\wedge} gives
t^{\wedge} x_2 ≃ t^{\wedge} R x_1,
and then left-multiplying both sides by x_2^T gives
x_2^T t^{\wedge} x_2 ≃ x_2^T t^{\wedge} R x_1.
The expression on the left is identically 0 (t^{\wedge} x_2 is perpendicular to x_2), and 0 times any constant is 0, so the scale-sense equality ≃ can be rewritten as a strict equality, and therefore
x_2^T t^{\wedge} R x_1 = 0.
Substituting x_1 = K^{-1} p_1 and x_2 = K^{-1} p_2 back in conveniently gives
p_2^T K^{-T} t^{\wedge} R K^{-1} p_1 = 0.
The middle parts are denoted as two matrices, the essential matrix E and the fundamental matrix F:
E = t^{\wedge} R, \qquad F = K^{-T} E K^{-1},
so that x_2^T E x_1 = 0 and p_2^T F p_1 = 0. According to the essential matrix E and the fundamental matrix F, the camera motion R, t is obtained by decomposition.
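As a sketch of this decomposition step (assuming the matched point lists from the sketch above and an intrinsic matrix K from prior calibration; the numbers in K are placeholders), OpenCV's built-in estimators can recover R and t; this is an illustration, not the patent's own solver:

```python
import cv2
import numpy as np

# Hypothetical intrinsic matrix from factory calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

p1 = np.asarray(pts_intrusion, dtype=np.float64)  # pixels in the intrusion image
p2 = np.asarray(pts_template, dtype=np.float64)   # matched pixels in the template image

# Essential matrix E = t^R, estimated from the epipolar constraint with RANSAC.
E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)

# Decompose E to recover the rotation R and the (unit-scale) translation t.
_, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)

# The fundamental matrix follows as F = K^{-T} E K^{-1}.
K_inv = np.linalg.inv(K)
F = K_inv.T @ E @ K_inv
```

Note that the translation recovered from E is defined only up to scale, which is one reason a known distance such as D is needed downstream.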
After the epipolar constraint, the conversion relation between the two groups of two-dimensional points is known. Through this conversion relation, the two-dimensional pixel coordinates on the less standard intrusion image can be converted onto the standard preset template image through the camera motion relation, and the converted pixel coordinates are then related to the measuring frame surface through the shooting data of the standard preset template image.
After the above image processing, the correspondence R, t of the pixel points between the intrusion image and the preset template image is determined. When the pixel coordinates p_1 of the current intrusion point are obtained through the marks in the previously stored intrusion image, then, since a camera system calibrated with factory intrinsic parameters is used, the intrinsic matrix K is known, and the calculation
s_2 p_2 = K(R\, s_1 K^{-1} p_1 + t)
yields the point p_2 on the preset template image corresponding to p_1. However, since p_2 at this time is still an image pixel coordinate, it cannot be related to three-dimensional coordinates without a three-dimensional reconstruction.
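A minimal sketch of this point transfer, under the assumption that the intrusion point lies on the frame plane so that its depth can be taken as the known distance D (fixing the otherwise unknown scale s_1 in the formula above):

```python
import numpy as np

def transfer_point(p1_uv, K, R, t, depth):
    """Map pixel p1 in the intrusion image to pixel p2 in the template image,
    via s2 * p2 = K (R * depth * K^{-1} p1 + t)."""
    p1 = np.array([p1_uv[0], p1_uv[1], 1.0])
    P = depth * (np.linalg.inv(K) @ p1)  # back-project to 3-D at the known depth
    q = K @ (R @ P + t.ravel())          # project into the template image
    return q[:2] / q[2]                  # normalize by the scale s2
```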
First, the camera system is placed directly in front of the measuring equipment frame, at a distance D from the center point of the frame surface plane and on the same horizontal line as the measuring optical machine.
At this time, according to the imaging structure principle of the camera, the two-dimensional coordinates of the camera at the distance D can be expressed by the matrix relation
(X, Y, D)^T = D\,K^{-1}(u, v, 1)^T,
wherein (X, Y) are the camera two-dimensional coordinates, i.e., the two-dimensional coordinates on the frame surface plane.
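A minimal sketch of this matrix relation, using the same hypothetical intrinsic matrix K: a pixel (u, v) back-projects to the frame-plane coordinates (X, Y) at the known distance D.

```python
import numpy as np

def pixel_to_plane_xy(u, v, K, D):
    """Back-project pixel (u, v) to two-dimensional coordinates on the frame plane,
    via (X, Y, D)^T = D * K^{-1} * (u, v, 1)^T."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    X, Y, _ = D * ray  # third component equals D because the last row of K is (0, 0, 1)
    return X, Y
```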
In the actual measurement process, the equipment must first be installed, and in traditional measurement modes the installation process has high requirements. For example, if spatial information is acquired through camera calibration, the equipment often has to be installed according to specified installation requirements before measurement; if it is not installed strictly according to those requirements, it has to be calibrated on site every time, which is a complex operation. In particular, under the specified installation requirements, any deviation by the measuring personnel during installation introduces a certain error into the whole measurement result. In order to simplify the installation requirements and avoid unnecessary manual error, once a preset template image has been obtained, reinstalling the equipment no longer has to meet the high specification used when the preset template image was acquired; it suffices to install the equipment at a position, within a fuzzy range, where information similar to the preset template image can be obtained, and the computer then performs correction through the steps above. This simplifies the setup work before measurement, reduces workload and manual error, and allows the preset template image to be reused for the same environment, making the measurement process repeatable. The image processing technique is chosen precisely to simplify the operation steps and avoid the error problems easily brought about by installation.
Through the above steps, the two-dimensional coordinate points on the measurement frame plane in the preset template image shot by a group of cameras are obtained. However, these two-dimensional coordinates have no directionality in three-dimensional space; that is, two-dimensional coordinates in a plane cannot provide the rotation and tilt angles required for the measuring equipment to align automatically with a designated position. A method is therefore needed to map the two-dimensional coordinates to spatial three-dimensional coordinates in the measurement environment, and the correspondence must have the property that every point on the obtained two-dimensional plane has exactly one corresponding spatial three-dimensional point. This guarantees that the spatial direction information contained in the three-dimensional coordinates is reliable for the corresponding two-dimensional point: when the measuring equipment points at the spatial three-dimensional coordinates, it simultaneously points at the position of the corresponding two-dimensional point.
Coordinate conversion: this comprises static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on the complex sphere;
judging whether the measurement state is static or dynamic; if static, executing static coordinate conversion;
if dynamic, executing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system by the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation and tilt angles required for the measuring device to point at the three-dimensional coordinates;
transmitting the rotation and tilt angle data to the measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation and tilt angles required for the measuring device to point at the three-dimensional coordinates;
transmitting the rotation and tilt angle data to the measuring device.
First, a spherical system is established. The fixed measuring optical machine is taken as the north pole N; the foot of the perpendicular from N to the plane of the camera shooting template is taken as the circle center O; and the distance r from the north pole to the circle center is the radius. The sphere S can be expressed as
S: x^2 + y^2 + z^2 = r^2.
The plane of the template shot by the camera is recorded as
Π: z = 0,
and the north pole N then has the coordinates N = (0, 0, r). For any point z on Π, the straight line joining N and z must intersect the sphere S at a point P. It is easy to see that when |z| > r, P is in the northern hemisphere; when |z| < r, P is in the southern hemisphere; and when |z| = r, P and z coincide. As z tends to infinity, P tends to the north pole N.
Thus, according to the vector similarity and the spherical equation, when the two-dimensional camera coordinates of the point z are (x_1, y_1), the corresponding point P is
P_x = \frac{2r^2 x_1}{x_1^2 + y_1^2 + r^2}, \qquad P_y = \frac{2r^2 y_1}{x_1^2 + y_1^2 + r^2}, \qquad P_z = \frac{r(x_1^2 + y_1^2 - r^2)}{x_1^2 + y_1^2 + r^2}.
At this time, when the measuring optical machine rotates to point at the point P, its line of sight also falls on the point z, and the horizontal rotation angle and the vertical rotation angle required by the optical machine can be obtained directly by vectors: since N, z, and P are collinear, the direction N→z = (x_1, y_1, -r) gives
\alpha = \arctan\frac{y_1}{x_1} \text{ (horizontal rotation angle)}, \qquad \beta = \arctan\frac{\sqrt{x_1^2 + y_1^2}}{r} \text{ (vertical rotation angle)}.
The rotation angles required for the measuring optical machine to point at P are thus obtained from the two-dimensional coordinates; the two-dimensional point is embodied in three dimensions, and the correspondence is unique and reliable.
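Collecting the formulas above into a minimal sketch: the inverse stereographic projection of the plane point z = (x1, y1) onto the complex sphere of radius r (with the optical machine at the north pole N = (0, 0, r)), and the two rotation angles. The angle conventions follow the reconstruction given above, which is an assumption; the original equations survive only as images.

```python
import math

def plane_to_sphere(x1, y1, r):
    """Point P where the line through N = (0, 0, r) and z = (x1, y1, 0) meets the sphere."""
    d = x1 * x1 + y1 * y1
    s = d + r * r
    return (2 * r * r * x1 / s,
            2 * r * r * y1 / s,
            r * (d - r * r) / s)

def static_rotation_angles(x1, y1, r):
    """Horizontal and vertical rotation angles to point at P (and hence at z),
    since N, z and P are collinear."""
    alpha = math.atan2(y1, x1)                # horizontal rotation angle
    beta = math.atan2(math.hypot(x1, y1), r)  # vertical rotation away from the N-O axis
    return alpha, beta
```

As a quick sanity check, plane_to_sphere(r, 0.0, r) returns (r, 0, 0) on the equator, matching the case |z| = r where P and z coincide.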
In a preferred embodiment, the template image is acquired as follows: the shooting device adopts a camera with calibrated intrinsic parameters;
in a preferred embodiment, when the feature points are identified, firstly selecting one pixel point p in the intrusion image or the template image, wherein the brightness of the pixel point p is U;
selecting 16 nearest pixel points by taking the pixel point p as a center; among the selected pixel points, p points with continuous N points with brightness greater than U-T and less than U+T are regarded as characteristic points; wherein N is 9, 11 or 12;
That is, feature point identification employs the ORB algorithm. ORB feature points require computing FAST key points and BRIEF descriptors. FAST compares only pixel brightness: a pixel p with brightness U is selected in the image, a threshold T related to U is set, and the 16 pixels closest to p are selected around it; if N consecutive pixels among them have brightness greater than U+T or less than U-T, p can be regarded as a feature point, with N generally chosen as 9, 11, or 12; these steps are repeated until every pixel has been processed. Since only brightness differences are compared, FAST feature point computation is very fast, but the test is simple, which brings weak repeatability and uneven distribution; moreover, FAST corners carry no direction information, and there is a scale problem. Therefore, in ORB, scale invariance is obtained by constructing an image pyramid: when the camera moves forward or backward, a match can still be found between the appropriate levels of the pyramids of the two images. The rotation characteristic is introduced by the gray centroid method. The centroid here is the center of weight of the image gray values, and the gray centroid of the image in the vicinity of the feature point needs to be calculated. The specific method is as follows:
firstly, selecting a small image block A, defining the moment of the image block as
Figure SMS_99
Then find the centroid of the image block by moment
Figure SMS_100
After finding the centroid, connecting the geometric center O and the centroid C of the image block A to obtain a direction vector
Figure SMS_101
Defining the direction of the characteristic point at the moment as
Figure SMS_102
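A minimal sketch of the gray centroid computation just described, using NumPy; the patch stands for any small image block A around the feature point.

```python
import numpy as np

def gray_centroid_angle(patch):
    """Feature direction from the gray centroid of an image block (ORB-style):
    the angle of the vector from the geometric center O to the centroid C."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    m00 = patch.sum()
    m10 = (xs * patch).sum()  # first-order moment in x, weighted by gray value
    m01 = (ys * patch).sum()  # first-order moment in y
    cx, cy = m10 / m00, m01 / m00                                    # centroid C
    ox, oy = (patch.shape[1] - 1) / 2.0, (patch.shape[0] - 1) / 2.0  # center O
    return np.arctan2(cy - oy, cx - ox)  # direction of the vector OC
```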
The BRIEF descriptor is a binary descriptor whose description vector consists of many 0s and 1s: when comparing the pixel values of two points p and q chosen by a random pattern, the corresponding bit is taken as 1 if p is greater than q, and 0 otherwise. Because BRIEF compares randomly selected points, it is very fast, and its binary form is convenient for computer storage. The original BRIEF descriptor likewise has no rotation invariance; in ORB, the direction characteristic added in the FAST corner extraction stage is used, so that BRIEF acquires good rotation invariance from the direction information.
In a preferred embodiment, the template image is obtained and stored for the detection environment before initial use; that is, a camera calibrated with intrinsic parameters obtains a standard preset template image under the preset template image acquisition requirements. The measuring frame, the camera, and the measuring optical machine among the physical tools used in the coordinate conversion method are a matched set.
After the template image is obtained at the distance D, the feature points of the two images are identified by a feature operator with scale and rotation invariance, and a group of successfully matched corresponding feature point pairs is then obtained by brute-force matching;
epipolar constraint is performed on the feature point pairs to obtain the rotation and translation relation of the two images.
In general, when an intrusion point is detected during traveling, the equipment cannot stop instantaneously and precisely at the cross-section of the frame structure for an accurate measurement; a braking and sliding process occurs in between. Three-dimensional coordinate conversion is therefore required, so that the displaced measuring optical machine can still turn accurately to the direction at which it needs to point.
After the template image is obtained at the standard distance, the feature points of the two images are identified by feature operators with scale and rotation invariance, a group of successfully matched corresponding feature point pairs is obtained by brute-force matching, and finally the rotation and translation relation of the two images is obtained from this group of corresponding feature points by epipolar constraint, so that points on the non-standard acquired image can be converted into points on the standard template image, facilitating subsequent measurement.
Assuming that the displacement of the measuring optical machine is L (obtainable by subtracting odometer mileage readings), the direction vector required by the displaced optical machine can be obtained through similar triangles.
In the coordinate system at the moment the intrusion point is detected, the optical machine has moved along the N-O axis, so it now lies at a distance r + L from the template plane (L signed by the direction of travel), while the plane point z = (x_1, y_1, 0) is unchanged; from the similar-triangle relationship, the required direction vector is
\vec{v} = (x_1, y_1, -(r + L)),
and the horizontal rotation angle and the vertical rotation angle required by the measuring optical machine are
\alpha = \arctan\frac{y_1}{x_1}, \qquad \beta = \arctan\frac{\sqrt{x_1^2 + y_1^2}}{r + L}.
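A minimal sketch of the dynamic case under the same assumed conventions: the travel displacement L moves the optical machine along the N-O axis, so the similar-triangle direction vector toward z = (x1, y1, 0) becomes (x1, y1, -(r + L)), and only the vertical angle changes.

```python
import math

def dynamic_rotation_angles(x1, y1, r, L):
    """Rotation angles after the measuring optical machine has slid a further
    distance L (L signed by the direction of travel)."""
    alpha = math.atan2(y1, x1)                    # horizontal angle unchanged
    beta = math.atan2(math.hypot(x1, y1), r + L)  # vertical angle shrinks with distance
    return alpha, beta
```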
In a preferred embodiment, the measuring optical machine provides at least a 270° pitch rotation range, a 450° yaw rotation range, and a 50-meter measuring range.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A coordinate conversion method based on image processing and the complex sphere, characterized by comprising image recognition, image conversion, and coordinate conversion;
image recognition: presetting a template image; acquiring an intrusion image;
performing feature recognition on the intrusion image and the template image to obtain two-dimensional coordinate information of the feature point pixels;
matching the two-dimensional coordinate information of the feature point pixels of the intrusion image and the template image and finding out a plurality of groups of matching points with similarity higher than a set value;
wherein the intrusion image is obtained by a shooting device, and the template image is a preset template image obtained, under the preset template image acquisition requirements, by a camera calibrated with intrinsic parameters;
image conversion: performing epipolar constraint on the matching points;
according to the essential matrix E = t^{\wedge}R and the fundamental matrix F = K^{-T}EK^{-1}, decomposing to obtain a rotation matrix and a translation matrix;
converting the two-dimensional coordinates (x_1, y_1) on the intrusion image to the template image through the rotation matrix and the translation matrix to obtain (x_2, y_2);
placing the shooting device directly in front of the measuring frame at a distance D from the center point of the frame surface plane, and keeping the shooting device and the measuring device on the same horizontal line;
the two-dimensional coordinates of the shooting device at the distance D are expressed by the matrix relation
(X, Y, D)^T = D\,K^{-1}(u, v, 1)^T,
wherein (X, Y) are the two-dimensional coordinates of the shooting device;
coordinate conversion: comprising static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on the complex sphere;
judging whether the measurement state is static or dynamic; if static, executing static coordinate conversion;
if dynamic, executing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system by the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation and tilt angles required for the measuring device to point at the three-dimensional coordinates;
transmitting the rotation and tilt angle data to the measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation and tilt angles required for the measuring device to point at the three-dimensional coordinates;
transmitting the rotation and tilt angle data to the measuring device.
2. The coordinate conversion method based on image processing and the complex sphere according to claim 1, characterized in that: the shooting device adopts a camera with calibrated intrinsic parameters; in the detection environment, a standard preset template image is obtained by shooting under the preset template image acquisition requirements; when feature points are identified, an image pyramid is constructed; after the shooting device moves, a matching image from before the movement is found in the image pyramid; meanwhile, when feature points are identified, the gray centroid method is used to introduce a rotation characteristic.
3. The coordinate conversion method based on image processing and the complex sphere according to claim 2, characterized in that: in the gray centroid calculation method, a small image block A is first selected and the moments of the image block are defined; the centroid of the image block is found from the moments, and the geometric center O and the centroid C of the image block A are connected to obtain a direction vector \vec{OC}; the direction of the feature point can be determined by the direction vector.
4. The coordinate conversion method based on image processing and the complex sphere according to claim 1, characterized in that: after the template image is obtained at the distance D, feature point identification is performed on the two images by a feature operator with scale and rotation invariance, and a group of successfully matched corresponding feature point pairs is obtained by brute-force matching;
epipolar constraint is performed on the feature point pairs to obtain the rotation and translation relation of the two images.
5. The coordinate conversion method based on image processing and the complex sphere according to claim 1, characterized in that: a spherical coordinate system based on the complex sphere principle is established to represent three-dimensional world coordinates, with the position of the fixed measuring optical machine as the north pole, the foot of the perpendicular intersecting the plane of the camera shooting template as the circle center, and the distance r from the north pole to the circle center as the radius; the sphere S is expressed as
S: x^2 + y^2 + z^2 = r^2;
the camera template plane is recorded as
Π: z = 0;
the north pole N has the coordinates N = (0, 0, r); for any point z on Π, the straight line joining N and z intersects the sphere S at a point P.
6. The coordinate conversion method based on image processing and the complex sphere according to claim 5, characterized in that: in the static coordinate conversion, the horizontal rotation angle and the vertical rotation angle required for the measuring device to point at P are obtained by vectors; wherein, for z = (x_1, y_1),
\alpha = \arctan\frac{y_1}{x_1}, \qquad \beta = \arctan\frac{\sqrt{x_1^2 + y_1^2}}{r}.
7. The coordinate conversion method based on image processing and the complex sphere according to claim 5, characterized in that: in the dynamic coordinate conversion, the direction vector required after the displacement of the measuring device is obtained according to the displacement L of the measuring device and the principle of similar triangles:
\vec{v} = (x_1, y_1, -(r + L));
the horizontal rotation angle and the vertical rotation angle required for the measuring optical machine to point at the point P are
\alpha = \arctan\frac{y_1}{x_1}, \qquad \beta = \arctan\frac{\sqrt{x_1^2 + y_1^2}}{r + L}.
CN202310080890.0A 2023-02-08 2023-02-08 Coordinate conversion method based on image processing and complex sphere Active CN115797185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310080890.0A CN115797185B (en) 2023-02-08 2023-02-08 Coordinate conversion method based on image processing and complex sphere

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310080890.0A CN115797185B (en) 2023-02-08 2023-02-08 Coordinate conversion method based on image processing and complex sphere

Publications (2)

Publication Number Publication Date
CN115797185A (en) 2023-03-14
CN115797185B (en) 2023-05-02

Family

ID=85430462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310080890.0A Active CN115797185B (en) 2023-02-08 2023-02-08 Coordinate conversion method based on image processing and complex sphere

Country Status (1)

Country Link
CN (1) CN115797185B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102798456A (en) * 2012-07-10 2012-11-28 中联重科股份有限公司 Method, device and system for measuring working range of engineering mechanical arm frame system
CN107820012A (en) * 2017-11-21 2018-03-20 暴风集团股份有限公司 A kind of fish eye images processing method, device, server and system
CN113902810A (en) * 2021-09-16 2022-01-07 南京工业大学 Robot gear chamfering processing method based on parallel binocular stereo vision

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2523149A (en) * 2014-02-14 2015-08-19 Nokia Technologies Oy Method, apparatus and computer program product for image-driven cost volume aggregation
CN106251395A (en) * 2016-07-27 2016-12-21 中测高科(北京)测绘工程技术有限责任公司 A kind of threedimensional model fast reconstructing method and system
CN106530218B (en) * 2016-10-28 2020-04-10 浙江宇视科技有限公司 Coordinate conversion method and device
CN107845096B (en) * 2018-01-24 2021-07-27 西安平原网络科技有限公司 Image-based planet three-dimensional information measuring method
CN108828606B (en) * 2018-03-22 2019-04-30 中国科学院西安光学精密机械研究所 One kind being based on laser radar and binocular Visible Light Camera union measuring method
CN109949232A (en) * 2019-02-12 2019-06-28 广州南方卫星导航仪器有限公司 Measurement method, system, electronic equipment and medium of the image in conjunction with RTK
CN109916304B (en) * 2019-04-01 2021-02-02 易思维(杭州)科技有限公司 Mirror surface/mirror surface-like object three-dimensional measurement system calibration method
CN111160232B (en) * 2019-12-25 2021-03-12 上海骏聿数码科技有限公司 Front face reconstruction method, device and system
CN111024003B (en) * 2020-01-02 2021-12-21 安徽工业大学 3D four-wheel positioning detection method based on homography matrix optimization
WO2023276567A1 (en) * 2021-06-29 2023-01-05 富士フイルム株式会社 Image processing device, image processing method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102798456A (en) * 2012-07-10 2012-11-28 中联重科股份有限公司 Method, device and system for measuring working range of engineering mechanical arm frame system
CN107820012A (en) * 2017-11-21 2018-03-20 暴风集团股份有限公司 A kind of fish eye images processing method, device, server and system
CN113902810A (en) * 2021-09-16 2022-01-07 南京工业大学 Robot gear chamfering processing method based on parallel binocular stereo vision

Also Published As

Publication number Publication date
CN115797185A (en) 2023-03-14

Similar Documents

Publication Publication Date Title
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
CN103559711B (en) Based on the method for estimating of three dimensional vision system characteristics of image and three-dimensional information
CN109523595B (en) Visual measurement method for linear angular spacing of building engineering
CN109297436B (en) Binocular line laser stereo measurement reference calibration method
CN106996748A (en) A kind of wheel footpath measuring method based on binocular vision
CN111932565B (en) Multi-target recognition tracking calculation method
CN106971408A (en) A kind of camera marking method based on space-time conversion thought
Nagy et al. Online targetless end-to-end camera-LiDAR self-calibration
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN114372992A (en) Edge corner point detection four-eye vision algorithm based on moving platform
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
Meng et al. Defocused calibration for large field-of-view binocular cameras
CN109506629B (en) Method for calibrating rotation center of underwater nuclear fuel assembly detection device
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN115797185B (en) Coordinate conversion method based on image processing and complex sphere
CN116958218A (en) Point cloud and image registration method and equipment based on calibration plate corner alignment
CN116202487A (en) Real-time target attitude measurement method based on three-dimensional modeling
CN116091603A (en) Box workpiece pose measurement method based on point characteristics
CN109815966A (en) A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm
CN111854678B (en) Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision
CN115147344A (en) Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance
CN115359119A (en) Workpiece pose estimation method and device for disordered sorting scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant