CN115797185B - Coordinate conversion method based on image processing and complex sphere - Google Patents
- Publication number: CN115797185B (application CN202310080890.0A)
- Authority: CN (China)
- Legal status (an assumption, not a legal conclusion): Active
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a coordinate conversion method based on image processing and the complex sphere, comprising image recognition, image conversion and coordinate conversion. Image recognition matches the pixel two-dimensional coordinate information of the feature points of the template image and the intrusion image and finds matching points. The matching points are subjected to epipolar constraint, and the two-dimensional coordinates of the shooting device are obtained by the image conversion method. Coordinate conversion converts the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system using the coordinate conversion principle of the complex sphere, and obtains the rotation inclination angles the measuring device needs in order to align with the three-dimensional coordinates. By means of the standard two-dimensional shooting-device coordinates corresponding to the pixel coordinates under a preset template image, the invention realizes the conversion from two-dimensional to three-dimensional coordinates, and uses the three-dimensional coordinate information to realize automatic alignment of the measuring optical machine in the measuring system, so that the whole measuring process is simple and convenient.
Description
Technical Field
The invention belongs to the technical field of limit (clearance) measurement and detection, and particularly relates to a coordinate conversion method based on image processing and the complex sphere for limit measurement detection.
Background
Transportation has changed dramatically with the times, from the horse-drawn carriage of the past to today's aircraft and high-speed rail, each representing the forefront of its era. As urban development accelerates, cities expand internally and inter-city links grow ever stronger, bringing with them the rapid development of light and rapid rail transit such as subways and high-speed rail.
In rail transit, trains run at high speed along a fixed track, which must be done within a specific space; the size of this space is the so-called limit. Taking the subway limit as an example, the limit is designed by combining the outline dimensions of the subway vehicle, the relevant technical parameters of the vehicle, its running dynamic performance, and the relevant conditions of the track and the contact net or contact rail, together with equipment and installation errors and a specified calculation method. It is the contour line that the running vehicle and the structures around the track area must not exceed, and it guarantees the outline for safe subway operation: the limit on the vehicle cross-section size, the limit on the installation size of equipment along the line, and the determined effective clearance size of the building structure. Since there is a specification, measurements must be made to determine whether the specification is met. Current limit measuring methods are of two types: contact and non-contact.
Contact measurement appeared earlier. Accurate data of a limit point on a cross-section can be obtained by means of a probe and a protractor, and data recording can be performed manually or by an optical encoder and a potentiometer. The method has the advantages of low cost, simple use and high static precision of ±0.5 mm, but the workload is large, much manual intervention is needed, the measurement speed is low, and not all points on a cross-section can be measured.
Non-contact methods serve the same purpose but often adopt a principle similar to optical velocimetry, using laser-based triangulation with optical readout; their disadvantage is high material and labor cost.
Among these two main methods, equipment that performs autonomous measurement with mechanical assistance often relies on optical imaging technology: for example, several groups of dedicated 3D depth cameras are used to determine the position of a camera module, three-dimensional reconstruction of the measured cross-section outline (for example, a three-dimensional point cloud) is performed using the camera optical imaging principle, and the specific coordinates of the measurement points are obtained to yield the rotation angle of the measuring optical machine.
Disclosure of Invention
The invention aims to provide a coordinate conversion method based on image processing and the complex sphere, to remedy the defects of the prior art: the large workload of the contact method and the high detection cost of the non-contact method.
In order to solve the technical problems, the invention adopts the following technical scheme:
a coordinate conversion method based on image processing and the complex sphere, comprising image recognition, image conversion and coordinate conversion;
and (3) image identification: presetting a template image; acquiring an intrusion image;
performing feature recognition on the intrusion image and the template image to obtain two-dimensional coordinate information of the feature point pixels;
matching the two-dimensional coordinate information of the feature point pixels of the intrusion image and the template image and finding out a plurality of groups of matching points with similarity higher than a set value;
the intrusion image is obtained by a shooting device, and the template image is a known image;
image conversion: performing epipolar constraint on the matching points;
two-dimensional coordinates on the intrusion image are converted to the template image through the camera motion change;
placing the shooting device directly in front of the measuring frame at a distance D from the center point of the surface plane structure of the measuring frame, and keeping the shooting device and the measuring device on the same horizontal line;
the two-dimensional coordinates of the shooting device at the scale D can be expressed by the matrix relation [X, Y, D]^T = D·K^{-1}·[u, v, 1]^T, i.e. X = (u − c_x)D/f_x, Y = (v − c_y)D/f_y;
coordinate conversion: the method comprises static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
judging whether the measurement state is static or dynamic; if static, executing static coordinate conversion;
if dynamic, executing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system by utilizing a spherical coordinate conversion principle of a complex spherical surface, and obtaining a rotation dip angle required by the measuring device to align the three-dimensional coordinates;
transmitting the rotation tilt data to a measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex spherical surface, and obtaining a rotation dip angle required by the measuring device to align the three-dimensional coordinates;
the rotational tilt data is transferred to the measuring device.
Further, the shooting device adopts a camera with calibrated internal references; in the detection environment, shooting is performed under the preset template image acquisition requirements to obtain a standard preset template image; when feature points are identified, an image pyramid is constructed; after the shooting device moves, the matching image from before the movement is found in the image pyramid; meanwhile, when feature points are identified, the gray centroid method is used to introduce a rotation feature.
Further, in the gray centroid calculation method: first select a small image block A and define the moments of the image block as m_pq = Σ_{x,y ∈ A} x^p y^q I(x, y), p, q ∈ {0, 1}; find the centroid C = (m10/m00, m01/m00) of the image block from the moments; connect the geometric center O and the centroid C of image block A to obtain the direction vector OC, from which the direction of the feature point, θ = arctan(m01/m10), can be determined.
Further, after a template image is obtained at the scale D, feature point identification is performed on the two images by a feature operator with scale and rotation invariance, and a group of successfully matched corresponding feature point pairs is then obtained by brute-force matching;
and performing epipolar constraint on the characteristic point group to obtain a rotation and translation relation of the two images.
Further, a spherical coordinate system based on the complex-sphere principle is established to represent three-dimensional world coordinates. The fixed measuring optical machine is taken as the north pole; the foot of the perpendicular from the north pole to the plane of the camera template is taken as the circle center; and the distance from the north pole to the circle center is the radius r. With the circle center as the origin, the sphere S is expressed as: x² + y² + z² = r².
The camera template plane is recorded as the plane {(x, y, 0)}.
The north pole N has the coordinates (0, 0, r). For any point z on the plane, the straight line joining N and z intersects the sphere S at a point P.
Further, in the static coordinate conversion, the horizontal rotation angle and the vertical rotation angle required for the measuring device to point at P are obtained in vector form from the direction vector NP = P − N.
Further, in the dynamic coordinate conversion, the direction vector required after the displacement of the measuring device is obtained from the displacement L of the measuring device and the principle of similar triangles, giving the horizontal rotation angle and vertical rotation angle required for the measuring optical machine to point at the point P.
compared with the prior art, the invention has the following beneficial effects:
the invention uses the existing image processing principle to make a part of cameras calibrated by internal references obtain a standard preset template picture under the requirement of template image acquisition, and uses the standard two-dimensional camera coordinates corresponding to the pixel coordinates under the preset template image to realize the conversion from two-dimensional coordinates to three-dimensional coordinates, and realizes the automatic alignment of a measuring optical machine in a measuring system by three-dimensional coordinate information, thus the whole measuring process is simple and convenient.
The coordinate information of the intrusion image can be transferred to the template image by feature point matching, and the two-dimensional coordinate information in the template image is converted into the two-dimensional coordinate information of the camera through the epipolar constraint; in these two ways the intrusion image is connected with the camera in the measuring system. Because feature point matching is used, the requirements on the intrusion image during acquisition are not high: through image conversion, it only needs to be acquired somewhere around a specified position when the measuring equipment is installed. In other words, an intrusion image acquired within a fuzzy range is converted into relatively accurate coordinate information through image processing, which simplifies the overall work before measurement and reduces workload and human error; and the preset template image can be reused for the same environment, making the measuring process repeatable.
By the complex-sphere conversion principle, the invention directly converts the two-dimensional coordinates in the camera into the rotation inclination angles required to align the measuring optical machine, simplifying the whole process from two-dimensional to three-dimensional coordinates. The two modes of static and dynamic coordinate conversion effectively improve the accuracy after conversion, that is, they effectively guarantee the automatic alignment of the measuring optical machine.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples
A coordinate conversion method based on image processing and the complex sphere comprises image recognition, image conversion and coordinate conversion;
before automatic alignment by clicking the screen can be achieved, two-dimensional coordinates in the image need to be established, because for the camera imaging system the image pixel coordinates must be converted into real-world coordinates in space, which requires calibration. The internal parameters have good stability after one calibration, but the external parameters of a camera system change with the camera position and the use environment. This means that, before completing a measurement using the camera imaging principle, calibration would be needed every time, and calibration is a complex and error-prone process. Therefore, relying on existing image processing principles, a camera calibrated with internal references obtains a standard preset template picture under the preset template image acquisition requirements, so that the standard two-dimensional camera coordinates under the preset template image can be obtained through the internal references.
And (3) image identification: presetting a template image; acquiring an intrusion image;
performing feature recognition on the intrusion image and the template image to obtain two-dimensional coordinate information of the feature point pixels;
matching the two-dimensional coordinate information of the feature point pixels of the intrusion image and the template image and finding out a plurality of groups of matching points with similarity higher than a set value;
the limit intrusion image is obtained by a shooting device, the template image is a known image, and the shooting device is a camera.
First, the two-dimensional pixel coordinate information of the feature points of the intrusion image and the template image is identified, and then this information is matched. Brute-force matching is adopted: the descriptor of each feature in the intrusion image is matched against all feature descriptors in the template image, the similarity between two descriptors (generally, the Euclidean distance between the pixel points) is computed, and the pair of matching points with the highest similarity is returned as the final matching result. Once a group of corresponding matching points is found, the spatial positions represented by the successfully matched pixel points can essentially be considered the same in the preset template image and the intrusion image. The relation between the whole images can therefore be obtained from the relation between the matching points, and through this relation points on the intrusion image are converted: point positions under the unknown conditions of the intrusion image are converted into the corresponding point positions under the known conditions of the preset template image.
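As a minimal sketch of the brute-force matching step just described: every descriptor from the intrusion image is compared against every descriptor from the template image by Euclidean distance, and the closest pair is kept. The toy descriptor values and the `max_dist` threshold are illustrative assumptions, not values from the patent.

```python
import math

def euclidean(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def brute_force_match(intrusion_desc, template_desc, max_dist=10.0):
    """For each intrusion descriptor, find the nearest template descriptor.

    Returns (intrusion_index, template_index) pairs whose distance is
    below max_dist, i.e. whose similarity is above a set value.
    """
    matches = []
    for i, d1 in enumerate(intrusion_desc):
        best_j, best_dist = None, float("inf")
        for j, d2 in enumerate(template_desc):
            dist = euclidean(d1, d2)
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j is not None and best_dist < max_dist:
            matches.append((i, best_j))
    return matches

# Toy descriptors: intrusion feature 0 is closest to template feature 1.
intrusion = [[0.0, 1.0], [5.0, 5.0]]
template = [[9.0, 9.0], [0.1, 1.1]]
print(brute_force_match(intrusion, template))  # -> [(0, 1), (1, 0)]
```

In practice a binary (BRIEF-style) descriptor would be compared by Hamming distance instead, but the nearest-neighbor structure is the same.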
Performing epipolar constraint on the matching points;
according to the essence matrixAnd base matrix->Decomposing to obtain shooting device change->、 />;/>
two-dimensional coordinates on the intrusion image are converted to the template image through the camera motion change;
placing the shooting device directly in front of the measuring frame at a distance D from the center point of the surface plane structure of the measuring frame, and keeping the shooting device and the measuring device on the same horizontal line;
the two-dimensional coordinates of the shooting device at the scale D can be expressed by the matrix relation [X, Y, D]^T = D·K^{-1}·[u, v, 1]^T, i.e. X = (u − c_x)D/f_x, Y = (v − c_y)D/f_y,
where (X, Y) are the two-dimensional coordinates of the camera. The shooting device is a camera, and the measuring device is a measuring optical machine.
The epipolar constraint is used for obtaining the motion condition between the intrusion image and the template image, and the motion from the intrusion image to the template image is set as R, t;
the camera reference matrix K is known, and for a certain point P in space, the spatial coordinates of the point P in the intrusion image are
According to the camera model, which corresponds to the pixel points p1 and p2 in the two images, there is the following relationship
Obtaining
wherein
According to the essence matrixAnd base matrix->Decomposing to obtain camera motion variation->、 />;
After obtaining the camera motion change relationship, two-dimensional coordinates on the intrusion image are obtainedConversion to template image by camera motion change>;
Specifically, the camera system is placed directly in front of the measuring equipment frame at a distance D from the center point of the surface plane structure of the measuring frame, and kept on the same horizontal line as the measuring optical machine; the two-dimensional coordinates of the camera at the scale D can then be represented by the matrix relation [X, Y, D]^T = D·K^{-1}·[u, v, 1]^T.
Let the motion from the intrusion image to the template image be R (rotation matrix) and t (translation vector).
The camera internal reference matrix K is known. For a point P in space, its spatial coordinates in the first image frame are P = [X, Y, Z]^T.
According to the camera model, P corresponds to the pixel points p1 and p2 in the two images, with the relationship s1·p1 = K·P, s2·p2 = K·(R·P + t).
According to the imaging relationship, p1 and K·P (and likewise p2 and K·(R·P + t)) are related by projection, so they are equal in the sense of scale: p1 ≃ K·P, p2 ≃ K·(R·P + t).
So the two relations can be rewritten as x2 ≃ R·x1 + t,
where x1 = K^{-1}·p1 and x2 = K^{-1}·p2 denote the coordinates of the two pixel points on the normalized plane. Left-multiplying both sides by the skew-symmetric matrix t^ and then by x2^T gives x2^T·t^·x2 ≃ x2^T·t^·R·x1.
The left-hand side is identically 0 (t^·x2 is perpendicular to x2), and 0 times any constant is 0, so the scale equivalence can be rewritten as an equality: x2^T·t^·R·x1 = 0, i.e. p2^T·K^{-T}·t^·R·K^{-1}·p1 = 0.
The middle parts are marked as two matrices: the essential matrix E = t^·R and the fundamental matrix F = K^{-T}·E·K^{-1}, giving x2^T·E·x1 = 0 and p2^T·F·p1 = 0.
According to the essential matrix E and the fundamental matrix F, decomposition yields the camera motion change R, t.
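The epipolar derivation above can be checked numerically: with E = t^·R, the normalized coordinates of the same space point in the two views must satisfy x2^T·E·x1 = 0. The rotation, translation and space point below are arbitrary illustrative values, not values from the patent.

```python
import math

def mat_vec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_mat(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def skew(t):
    """Skew-symmetric matrix t^ such that (t^)v = t x v."""
    return [[0, -t[2], t[1]],
            [t[2], 0, -t[0]],
            [-t[1], t[0], 0]]

# Rotation about the z-axis by 10 degrees, plus a small translation.
a = math.radians(10)
R = [[math.cos(a), -math.sin(a), 0],
     [math.sin(a),  math.cos(a), 0],
     [0, 0, 1]]
t = [0.3, -0.1, 0.05]

P = [0.4, 0.2, 2.0]                                # point in frame 1
P2 = [p + ti for p, ti in zip(mat_vec(R, P), t)]   # same point in frame 2

x1 = [P[0] / P[2], P[1] / P[2], 1.0]      # normalized plane, frame 1
x2 = [P2[0] / P2[2], P2[1] / P2[2], 1.0]  # normalized plane, frame 2

E = mat_mat(skew(t), R)                   # essential matrix E = t^ R
residual = sum(x2[i] * mat_vec(E, x1)[i] for i in range(3))
print(abs(residual) < 1e-12)              # epipolar constraint holds
```

The residual vanishes because (R·P + t)·(t × R·P) = 0 term by term, exactly as in the derivation.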
After the epipolar constraint, the conversion relation between the two groups of two-dimensional points is known. Through this relation, the two-dimensional pixel coordinates on the less standard intrusion image can be converted, via the camera motion change relationship, onto the standard preset template image, and the converted pixel coordinates are then referred to the measuring frame surface through the shooting data of the standard preset template image.
After the image processing, the correspondence R, t of pixel points between the intrusion image and the preset template image is determined. Once the pixel coordinates p of the current intrusion point are obtained through the marks in the previously stored intrusion image, then, since a camera system calibrated with factory internal references is used, the internal reference matrix K is known, and the normalized coordinates are calculated by the formula x = K^{-1}·p.
However, at this time the depth is still unknown, so without three-dimensional reconstruction the image pixel coordinates cannot yet be put into correspondence with three-dimensional coordinates.
First, the camera system is placed directly in front of the measuring equipment frame, at a distance D from the center point of the surface plane structure of the frame, on the same horizontal line as the measuring optical machine.
At this time, according to the imaging structure principle of the camera, the two-dimensional coordinates of the camera at the scale D can be expressed by the matrix relation [X, Y, D]^T = D·K^{-1}·[u, v, 1]^T, i.e. X = (u − c_x)D/f_x, Y = (v − c_y)D/f_y,
where (X, Y) is taken as the camera two-dimensional coordinates, i.e. the two-dimensional coordinates on the surface plane of the frame.
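As a sketch of this back-projection at a known depth D: under the standard pinhole model, a pixel (u, v) maps to plane coordinates (X, Y) = ((u − c_x)·D/f_x, (v − c_y)·D/f_y). The intrinsic values f_x, f_y, c_x, c_y below are illustrative assumptions, not calibration data from the patent.

```python
def pixel_to_plane(u, v, D, fx, fy, cx, cy):
    """Back-project pixel (u, v) to 2-D frame-plane coordinates at
    a known depth D, using the pinhole camera model."""
    X = (u - cx) * D / fx
    Y = (v - cy) * D / fy
    return X, Y

# A pixel at the principal point maps to the plane center point,
# regardless of the distance D.
print(pixel_to_plane(320.0, 240.0, 5.0, 800.0, 800.0, 320.0, 240.0))
# -> (0.0, 0.0)
```

Note that D must be known (here, measured during installation); this is exactly the missing-depth problem the text describes, resolved by placing the camera at a fixed distance from the frame.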
In the actual measurement process, the equipment must first be installed. In traditional measurement modes the installation process has high requirements: for example, if spatial information is acquired through camera calibration, the equipment must often be installed according to specified installation requirements before subsequent measurement can proceed; if it is not installed strictly to those requirements, on-site calibration is needed each time, which is a complex operation. In particular, under specified installation requirements, any deviation by the operator during installation tends to introduce error into the whole measurement result. To simplify installation and avoid unnecessary human error, once a preset template image has been obtained, re-installation of the equipment for measurement no longer needs to follow the high specification required when the preset template image was acquired; the equipment only needs to be installed at a position, within a fuzzy range, from which information similar to the preset template image can be obtained, and correction is then performed by computer through the steps above. This simplifies the setup work before measurement, reduces workload and human error, and allows the preset template image to be reused for the same environment, making the measuring process repeatable. Image processing is chosen precisely to simplify the operation steps and avoid the errors easily introduced by installation.
Through the above steps, two-dimensional coordinate points on the measuring frame plane in a preset template image shot by a group of cameras are obtained. However, these two-dimensional coordinates have no directionality in three-dimensional space; that is, two-dimensional coordinates in a plane cannot provide the rotation inclination angles required for the measuring equipment to align automatically with a designated position. A method is therefore needed to map the two-dimensional coordinates to spatial three-dimensional coordinates in the measurement environment. The correspondence must have the property that every point on the two-dimensional plane has exactly one corresponding spatial three-dimensional point, so that the spatial direction information contained in the three-dimensional coordinates is reliable for the corresponding two-dimensional point: when the measuring equipment points at the spatial three-dimensional coordinates, it simultaneously points at the position of the corresponding two-dimensional coordinate point.
Coordinate conversion: the method comprises static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
judging whether the measurement state is static or dynamic; if static, executing static coordinate conversion;
if dynamic, executing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system by utilizing a spherical coordinate conversion principle of a complex spherical surface, and obtaining a rotation dip angle required by the measuring device to align the three-dimensional coordinates;
transmitting the rotation tilt data to a measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex spherical surface, and obtaining a rotation dip angle required by the measuring device to align the three-dimensional coordinates;
the rotational tilt data is transferred to the measuring device.
First, a spherical system is established. The fixed measuring optical machine is taken as the north pole N; the foot of the perpendicular from N to the plane of the camera template is taken as the circle center O; and the distance from the north pole to the circle center is the radius r. With O as the origin, the sphere S can be expressed as x² + y² + z² = r².
The plane of the template shot by the camera is recorded as the plane {(x, y, 0)}.
Then the north pole N has the coordinates (0, 0, r). For any point z = (x, y, 0) on the plane, the straight line joining N and z must intersect the sphere S at a point P. It is easy to see that when x² + y² > r², P is in the northern hemisphere; when x² + y² < r², P is in the southern hemisphere; and when x² + y² = r², P and z coincide.
As z tends to infinity, P tends toward the north pole N.
Thus, according to vector similarity and the spherical equation, when the two-dimensional camera coordinates of the point z are (x, y), the corresponding P is P = ( 2r²x/(x² + y² + r²), 2r²y/(x² + y² + r²), r(x² + y² − r²)/(x² + y² + r²) ).
At this time, when the measuring optical machine rotates to point at the point P, its line of sight also falls on the point z, and the horizontal rotation angle and vertical rotation angle required by the optical machine can be obtained directly in vector form.
The rotation angles required for the measuring optical machine to point at P are thus obtained from the two-dimensional coordinates; the embedding of the two dimensions into three dimensions is unique and reliable.
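The complex-sphere mapping above can be sketched as an inverse stereographic projection from the north pole, with the rotation angles read off the direction vector P − N. The angle conventions below (azimuth via atan2, depression below the horizontal) are illustrative assumptions; the patent's formula images do not survive in this text.

```python
import math

def plane_to_sphere(x, y, r):
    """Map plane point z = (x, y, 0) to the point P on the sphere
    x^2 + y^2 + z^2 = r^2 along the line through N = (0, 0, r)."""
    s = x * x + y * y + r * r
    return (2 * r * r * x / s,
            2 * r * r * y / s,
            r * (x * x + y * y - r * r) / s)

def rotation_angles(P, r):
    """Horizontal (azimuth) and vertical (depression) angles, in
    radians, for the machine at N = (0, 0, r) to point at P."""
    dx, dy, dz = P[0], P[1], P[2] - r
    horizontal = math.atan2(dy, dx)
    vertical = math.atan2(-dz, math.hypot(dx, dy))
    return horizontal, vertical

r = 2.0
# A point on the circle x^2 + y^2 = r^2 maps to itself, as stated above.
P = plane_to_sphere(2.0, 0.0, r)
print(P)   # -> (2.0, 0.0, 0.0): P and z coincide
print(rotation_angles(P, r))
```

The mapping also satisfies the other stated properties: points outside the circle land in the northern hemisphere, points inside in the southern, and P approaches N as z recedes to infinity.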
In a preferred embodiment, the template image is acquired as follows: the shooting device adopts a camera with calibrated internal references;
in a preferred embodiment, when feature points are identified, a pixel point p with brightness U is first selected in the intrusion image or the template image;
the 16 nearest pixel points are selected with the pixel point p as the center; if, among the selected pixel points, there are N consecutive points with brightness greater than U+T or less than U−T, p is regarded as a feature point, where N is 9, 11 or 12;
That is, feature point identification uses the ORB algorithm. Using ORB feature points requires computing FAST key points and BRIEF descriptors. FAST compares only pixel brightness: first select a pixel p in the image with brightness U, set a threshold T related to U, and select the 16 pixels closest to p with p as the center; if N consecutive pixels among them have brightness greater than U+T or less than U−T, p can be regarded as a feature point. N is generally chosen as 9, 11 or 12, and these steps are repeated until every pixel has been processed. Since FAST only compares brightness differences, it is very fast, but the criterion is simplistic, and it suffers from weak repeatability and uneven distribution; FAST corners also carry no direction information and have a scale problem. Therefore, in ORB, scale invariance is obtained by constructing an image pyramid: when the camera moves forward or backward, a match can be found between the upper layers of one image's pyramid and the lower layers of the other's. Rotation features are introduced with the gray centroid method. The centroid is the center of the block weighted by image gray values; here the gray centroid of the neighborhood of the feature point needs to be computed. The specific method is as follows:
First select a small image block A and define the moments of the image block as m_pq = Σ_{x,y ∈ A} x^p y^q I(x, y), p, q ∈ {0, 1}.
Then find the centroid of the image block from the moments: C = ( m10/m00, m01/m00 ).
After finding the centroid, connect the geometric center O and the centroid C of the image block A to obtain the direction vector OC; the direction of the feature point at this moment is defined as θ = arctan( m01/m10 ).
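The gray centroid computation can be sketched directly from the moment definitions: m00, m10 and m01 of a small image block give the centroid C, and the feature direction follows from atan2(m01, m10). The 3x3 block below is an illustrative example.

```python
import math

def gray_centroid_direction(block):
    """Return (centroid, angle) of a 2-D brightness block.

    m_pq = sum over the block of x^p * y^q * I(x, y);
    C = (m10 / m00, m01 / m00); angle = atan2(m01, m10).
    """
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(block):
        for x, I in enumerate(row):
            m00 += I
            m10 += x * I
            m01 += y * I
    cx, cy = m10 / m00, m01 / m00
    return (cx, cy), math.atan2(m01, m10)

# All brightness in the right column pulls the centroid to x = 2.
block = [[0, 0, 9],
         [0, 0, 9],
         [0, 0, 9]]
(cx, cy), theta = gray_centroid_direction(block)
print(cx, cy)   # -> 2.0 1.0
```

Here the centroid sits at the center row (cy = 1.0) but at the right edge (cx = 2.0), so the vector from the block center (1, 1) to C points right, giving the feature its orientation.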
The BRIEF descriptor is a binary descriptor whose description vector consists of a number of 0s and 1s: when the intensities of two randomly chosen pixels p and q are compared, the corresponding bit is 1 if p is greater than q, and 0 otherwise. Because BRIEF compares randomly selected point pairs, it is very fast, and its binary form is convenient for computer storage. The original BRIEF descriptor has no rotation invariance; in ORB, the orientation obtained during FAST corner extraction is added, so that the BRIEF descriptor gains good rotation invariance from this direction information.
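The intensity-comparison scheme of BRIEF can be sketched as follows (illustrative only, not the patent's implementation; a fixed random seed stands in for the pre-chosen sampling pattern, and the non-rotated variant is shown):

```python
import random

def brief_descriptor(patch, n_bits=128, seed=42):
    """Sketch of a (non-rotated) BRIEF binary descriptor: for each of
    n_bits pre-chosen random point pairs (p, q) inside the patch, emit
    bit 1 if intensity(p) > intensity(q), else 0."""
    h, w = len(patch), len(patch[0])
    rng = random.Random(seed)  # fixed seed: same sampling pattern for every patch
    bits = []
    for _ in range(n_bits):
        py, px = rng.randrange(h), rng.randrange(w)
        qy, qx = rng.randrange(h), rng.randrange(w)
        bits.append(1 if patch[py][px] > patch[qy][qx] else 0)
    return bits

def hamming(d1, d2):
    """Descriptor distance: number of differing bits (fast on binary codes)."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))
```

Because the descriptor is binary, matching reduces to Hamming distance, which is why brute-force matching of BRIEF descriptors is cheap.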
In a preferred embodiment, a template image of the detection environment is obtained and stored before first use; that is, a camera calibrated with internal parameters captures a standard preset template image under the conditions required for obtaining the preset template image. The measuring frame, camera and measuring optical machine used as the physical tools of this coordinate conversion method are a matched set.
After the template image is obtained at scale D, feature points of the two images are identified by a feature operator with scale and rotation invariance, and a group of successfully matched corresponding feature point pairs is then obtained by brute-force matching;
epipolar constraint is applied to the feature point pairs to obtain the rotation-translation relation between the two images.
In general, when an intrusion limit point is detected while traveling, the vehicle cannot stop instantaneously and precisely at the cross-section of the frame structure for accurate measurement; there is a braking and coasting process in between. Three-dimensional coordinate conversion is therefore required so that, after displacement, the measuring optical machine can still turn accurately to the direction it needs to point.
After the template image is obtained at the standard scale, feature points of the two images are identified by a feature operator with scale and rotation invariance, a group of successfully matched corresponding feature point pairs is obtained by brute-force matching, and finally the rotation-translation relation between the two images is obtained from these corresponding pairs through an epipolar constraint, so that points on a non-standard acquired image can be converted to points on the standard template image, facilitating subsequent measurement.
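The epipolar constraint invoked above can be illustrated self-containedly (pure NumPy; no particular vision library is assumed, and the function names are illustrative): given a relative pose (R, t) between the two views, the essential matrix E = [t]× R must satisfy x2ᵀ E x1 = 0 for every matched pair of normalized image points.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_matrix(R, t):
    """E = [t]_x R, linking normalized points of the two views through
    the epipolar constraint x2^T E x1 = 0."""
    return skew(t) @ R

def epipolar_residual(E, x1, x2):
    """|x2^T E x1| for homogeneous normalized coordinates x1, x2;
    near zero for a correctly matched pair."""
    return abs(x2 @ E @ x1)
```

A synthetic correspondence (a 3-D point projected into both camera frames) gives a residual at machine precision, which is the consistency check underlying the decomposition of E into the rotation and translation of the two images.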
Assuming the displacement of the measuring optical machine is L (obtained as the difference in odometer mileage), the direction vector required by the displaced optical machine can be obtained through similar triangles;
then, in the coordinate system at the moment the intrusion point is detected, the similarity relation of the triangles yields the direction vector required by the measuring optical machine after displacement.
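As an illustrative sketch (the patent's own formulas are not reproduced in this text): if the intrusion point had coordinates (x, y, z) in the frame where it was detected, and the machine has since advanced a distance L along the travel axis, taken here as the x-axis by assumption, the similar-triangle relation amounts to shifting the viewpoint, after which the required pointing angles follow directly:

```python
import math

def aim_after_displacement(p, L):
    """Direction vector and pointing angles after advancing by L.

    Assumes travel along the x-axis (an assumption; the patent's axis
    convention is not given here).  The direction vector from the
    displaced machine to the point p = (x, y, z) is (x - L, y, z)."""
    x, y, z = p
    dx, dy, dz = x - L, y, z
    yaw = math.atan2(dy, dx)                      # horizontal rotation angle
    pitch = math.atan2(dz, math.hypot(dx, dy))    # vertical rotation angle
    return (dx, dy, dz), yaw, pitch
```

For a point straight ahead, both angles are zero; as the machine overtakes the point's x-coordinate, the yaw swings toward ±π/2.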
In a preferred embodiment, the measuring optical machine provides a pitch rotation range of at least 270°, a yaw rotation range of at least 450°, and a measuring range of at least 50 meters.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A coordinate conversion method based on image processing and a complex sphere, characterized by comprising the following steps: image recognition, image conversion and coordinate conversion;
and (3) image identification: presetting a template image; acquiring an intrusion image;
performing feature recognition on the intrusion image and the template image to obtain two-dimensional coordinate information of the feature point pixels;
matching the two-dimensional coordinate information of the feature point pixels of the intrusion image and the template image, and finding a plurality of groups of matching points whose similarity is higher than a set value;
the intrusion image is obtained by a shooting device, and the template image is a preset template image obtained by a camera calibrated with internal parameters under the requirements for obtaining the preset template image;
image conversion: performing epipolar constraint on the matching points;
decomposing the essential matrix E and the fundamental matrix F to obtain a rotation matrix and a translation matrix;
converting the two-dimensional coordinates p on the intrusion image to the template image through the rotation matrix and the translation matrix to obtain p′;
placing the shooting device directly in front of the measuring frame at a distance D from the center point of the planar surface structure of the measuring frame, and keeping the shooting device and the measuring device on the same horizontal line;
the two-dimensional coordinates of the shooting device at scale D are expressed by the matrix relation s·[u, v, 1]ᵀ = K·[R | t]·[X, Y, Z, 1]ᵀ, where K is the internal-parameter matrix of the calibrated camera;
coordinate conversion: the method comprises static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
judging whether the measurement state is static or dynamic; if static, executing static coordinate conversion;
if dynamic, executing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system using the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation inclination angle required for the measuring device to aim at the three-dimensional coordinates;
transmitting the rotation inclination angle data to the measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation inclination angle required for the measuring device to aim at the three-dimensional coordinates;
transmitting the rotation inclination angle data to the measuring device.
2. The method for image processing-based coordinate conversion of a complex sphere according to claim 1, wherein: the shooting device adopts a camera calibrated with internal parameters; in the detection environment, a standard preset template image is captured under the requirements for acquiring the preset template image; when feature points are identified, an image pyramid is constructed; after the shooting device moves, a match with the pre-movement image is found in the image pyramid; meanwhile, when feature points are identified, the gray centroid method is used to introduce a rotation characteristic.
3. The method for image processing-based coordinate conversion of a complex sphere according to claim 2, wherein: in the gray centroid calculation method, a small image block A is first selected, the moments of the image block are defined, the centroid of the image block is found from the moments, and the geometric center O and centroid C of image block A are connected to obtain a direction vector OC; the direction of the feature point is determined from this direction vector.
4. The method for image processing-based coordinate conversion of a complex sphere according to claim 1, wherein: after a template image is obtained at scale D, feature point identification is performed on the two images by a feature operator with scale and rotation invariance, and a group of successfully matched corresponding feature point pairs is obtained by brute-force matching;
and epipolar constraint is performed on the feature point pairs to obtain the rotation-translation relation between the two images.
5. The method for image processing-based coordinate conversion of a complex sphere according to claim 1, wherein: a spherical coordinate system is established on the complex sphere principle to represent three-dimensional world coordinates, taking the position of the fixed measuring optical machine as the north pole, the foot of the perpendicular from the north pole to the plane of the camera template as the circle center, and the distance r from the north pole to the circle center as the radius; with the circle center as the origin and the north pole at (0, 0, r), the spherical surface S is expressed as: x² + y² + z² = r²;
the camera template plane is recorded as: z = 0.
7. The method for image processing-based coordinate conversion of a complex sphere according to claim 5, wherein: in the dynamic coordinate conversion, the direction vector required after displacement is obtained from the displacement L of the measuring device and the principle of similar triangles;
and the horizontal rotation angle and vertical rotation angle required for the measuring optical machine to point at point P are obtained; wherein, for P = (x, y, z), the horizontal rotation angle is arctan(y/x) and the vertical rotation angle is arctan(z/√(x² + y²)).
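The complex-sphere model of the claims can be illustrated with an inverse stereographic projection sketch. This is a reconstruction under stated assumptions, since the patent's own formulas are not reproduced in this text: the sphere is taken as radius r centered at the circle center, with the north pole (the optical machine) at (0, 0, r) and the camera template plane at z = 0.

```python
def plane_to_sphere(X, Y, r):
    """Inverse stereographic projection onto the complex sphere.

    Assumed placement (not given in this text): sphere x^2 + y^2 + z^2 = r^2
    centered at the origin, north pole N = (0, 0, r), template plane z = 0.
    The plane point (X, Y) is mapped to the sphere point on the line
    through N and (X, Y, 0)."""
    s = X * X + Y * Y
    d = s + r * r
    x = 2 * r * r * X / d
    y = 2 * r * r * Y / d
    z = r * (s - r * r) / d
    return x, y, z
```

Under this placement, the plane's origin maps to the south pole (0, 0, -r), points on the circle of radius r map to themselves on the equator, and every plane point lands exactly on the sphere, which is what makes the subsequent conversion to spherical rotation angles well defined.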
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310080890.0A CN115797185B (en) | 2023-02-08 | 2023-02-08 | Coordinate conversion method based on image processing and complex sphere |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115797185A (en) | 2023-03-14
CN115797185B (en) | 2023-05-02
Family
ID=85430462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310080890.0A Active CN115797185B (en) | 2023-02-08 | 2023-02-08 | Coordinate conversion method based on image processing and complex sphere |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797185B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102798456A (en) * | 2012-07-10 | 2012-11-28 | 中联重科股份有限公司 | Method, device and system for measuring working range of engineering mechanical arm frame system |
CN107820012A (en) * | 2017-11-21 | 2018-03-20 | 暴风集团股份有限公司 | A kind of fish eye images processing method, device, server and system |
CN113902810A (en) * | 2021-09-16 | 2022-01-07 | 南京工业大学 | Robot gear chamfering processing method based on parallel binocular stereo vision |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2523149A (en) * | 2014-02-14 | 2015-08-19 | Nokia Technologies Oy | Method, apparatus and computer program product for image-driven cost volume aggregation |
CN106251395A (en) * | 2016-07-27 | 2016-12-21 | 中测高科(北京)测绘工程技术有限责任公司 | A kind of threedimensional model fast reconstructing method and system |
CN106530218B (en) * | 2016-10-28 | 2020-04-10 | 浙江宇视科技有限公司 | Coordinate conversion method and device |
CN107845096B (en) * | 2018-01-24 | 2021-07-27 | 西安平原网络科技有限公司 | Image-based planet three-dimensional information measuring method |
CN108828606B (en) * | 2018-03-22 | 2019-04-30 | 中国科学院西安光学精密机械研究所 | One kind being based on laser radar and binocular Visible Light Camera union measuring method |
CN109949232A (en) * | 2019-02-12 | 2019-06-28 | 广州南方卫星导航仪器有限公司 | Measurement method, system, electronic equipment and medium of the image in conjunction with RTK |
CN109916304B (en) * | 2019-04-01 | 2021-02-02 | 易思维(杭州)科技有限公司 | Mirror surface/mirror surface-like object three-dimensional measurement system calibration method |
CN111160232B (en) * | 2019-12-25 | 2021-03-12 | 上海骏聿数码科技有限公司 | Front face reconstruction method, device and system |
CN111024003B (en) * | 2020-01-02 | 2021-12-21 | 安徽工业大学 | 3D four-wheel positioning detection method based on homography matrix optimization |
WO2023276567A1 (en) * | 2021-06-29 | 2023-01-05 | 富士フイルム株式会社 | Image processing device, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||