CN115797185A - Method for converting coordinates based on image processing and complex spherical surface - Google Patents
- Publication number
- CN115797185A (application CN202310080890.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- point
- dimensional
- coordinate conversion
- dimensional coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a coordinate conversion method based on image processing and the complex sphere, comprising image recognition, image conversion and coordinate conversion. Image recognition matches the two-dimensional pixel coordinates of feature points between the template image and the intrusion image and finds matching points; image conversion applies an epipolar constraint to the matching points to obtain the two-dimensional coordinates of the shooting device; coordinate conversion uses the coordinate conversion principle of the complex sphere to convert the two-dimensional coordinates of the shooting device into three-dimensional coordinates in a spherical coordinate system, and obtains the rotation and inclination angles required for the measuring device to point at those three-dimensional coordinates. By means of the standard two-dimensional shooting-device coordinates corresponding to the pixel coordinates under the preset template image, the invention realizes the conversion from two-dimensional to three-dimensional coordinates, uses the three-dimensional coordinate information to realize automatic alignment of the measuring optical machine in the measuring system, and keeps the whole measuring process simple and convenient.
Description
Technical Field
The invention belongs to the technical field of clearance (limit) measurement and detection, and particularly relates to a coordinate conversion method based on image processing and the complex sphere.
Background
The most remarkable change of this rapidly developing era is the change in vehicles: from the horse-drawn carriage of the past to the airplanes and high-speed rail of today, each vehicle can be regarded as a representative product at the leading edge of its era. As urbanization accelerates, not only is the scale within cities gradually expanding, but the connections between cities are also growing stronger, accompanied by the rapid development of light, fast rail transit such as subways and high-speed rail.
In rail transit, trains run at high speed along fixed rails, and this operation must take place within a specific space whose size is the so-called clearance limit. Taking the subway as an example, the limit is designed according to a specified calculation method from the contour dimensions of the vehicle, related technical parameters, operational dynamic performance, the related conditions of the track and the contact network or contact rail, and equipment and installation errors. Colloquially, it is a contour line that constrains both the running vehicle and the structures around the track area: an image contour that guarantees safe subway operation, limits the cross-sectional dimensions of the vehicle, limits the installation dimensions of equipment along the line, and determines the effective clearance dimensions of building structures. Since such a regulation exists, measurements must be taken to determine compliance with it. Two clearance-measurement methods are currently in use: contact and non-contact.
The contact measurement method is the older of the two. Accurate data for a finite number of points on a cross-section are usually obtained with a probe and a protractor, and the data are recorded either manually or with an optical encoder and a potentiometer. The method is low in cost and simple to use, and its accuracy under static conditions can reach ±0.5 mm; however, the workload is large, much manual intervention is required, the measurement speed is very low, and not all points on the cross-section can be measured.
The non-contact method serves the same purpose, but usually adopts a principle similar to optical speed measurement, using lasers and optically read triangulation; its drawback is that both material cost and labor cost are high.
Among the mainstream approaches above, devices that perform autonomous measurement with mechanical assistance usually depend on optical imaging technology: for example, several groups of special 3D depth-of-field cameras are used to determine the position of the camera module, a three-dimensional reconstruction (e.g., a three-dimensional point cloud) of the measured cross-sectional profile is performed with the aid of the optical imaging principle of the camera, the specific coordinates of a measurement point are obtained, and those coordinates are converted to obtain the rotation angle of the measuring optical machine.
Disclosure of Invention
The invention aims to provide a coordinate conversion method based on image processing and the complex sphere, in order to overcome the defects of the prior detection techniques described in the background art: the large workload of the contact method and the higher detection cost of the non-contact method.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a coordinate conversion method based on image processing and the complex sphere comprises image recognition, image conversion and coordinate conversion;
image recognition: presetting a template image; acquiring an intrusion image;
carrying out feature recognition on the intrusion image and the template image to obtain the two-dimensional pixel coordinates of the feature points;
matching the two-dimensional pixel coordinates of the feature points of the intrusion image and the template image, and finding several groups of matching points whose similarity is higher than a set value;
wherein the intrusion image is obtained by a shooting device, and the template image is a known image;
image conversion: carrying out epipolar constraint on the matching points;
the two-dimensional coordinates on the intrusion image are converted onto the template image through the motion change of the camera;
placing the shooting device directly in front of the measuring frame at a distance D from the center point of the planar surface structure of the measuring frame, and keeping the shooting device and the measuring device on the same horizontal line;
the two-dimensional coordinates of the shooting device at the distance D can be expressed through the matrix relationship D·[u, v, 1]^T = K·[X, Y, D]^T, where (u, v) is the pixel coordinate, K is the camera intrinsic matrix, and (X, Y) are the two-dimensional coordinates of the shooting device;
coordinate conversion: comprising static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
judging whether the measurement state is static or dynamic; if static, executing static coordinate conversion;
if dynamic, executing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system using the spherical coordinate-conversion principle of the complex sphere, and obtaining the rotation and inclination angles required for the measuring device to point at the three-dimensional coordinates;
transmitting the rotation- and inclination-angle data to the measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system according to the displacement of the measuring device and the spherical coordinate-conversion principle of the complex sphere, and obtaining the rotation and inclination angles required for the measuring device to point at the three-dimensional coordinates;
and transmitting the rotation- and inclination-angle data to the measuring device.
Further, the shooting device adopts a camera whose intrinsic parameters have been calibrated; in the detection environment, a standard preset template image is obtained by shooting under the template-image acquisition requirements; an image pyramid is constructed during feature-point recognition; after the shooting device moves, the matching image from before the movement is found within the image pyramid; meanwhile, when feature points are recognized, a rotation characteristic is introduced using the gray-centroid method.
Further, in the calculation of the gray centroid, a small image block A is first selected and the moments of the image block are defined; the centroid of the image block is found through its moments; the geometric center O and the centroid C of the image block A are connected to obtain a direction vector OC, and the direction of the feature point is determined through this direction vector.
Further, after the template image is obtained at the distance D, feature-point recognition is performed on the two images with a feature operator having scale and rotation invariance, and a group of successfully matched corresponding feature points is obtained through brute-force matching;
and carrying out epipolar constraint on the feature point group to obtain a rotation and translation relation of the two images.
Further, a spherical coordinate system is established based on the complex-sphere principle to represent three-dimensional world coordinates: the fixed position of the measuring optical machine is taken as the north pole N; the foot of the perpendicular from N to the camera template plane is taken as the circle center O; and the distance from the north pole to the circle center is taken as the radius r. With O as the origin, the spherical surface S is expressed as S = {(x, y, z) : x² + y² + z² = r²};
the camera template plane is recorded as C = {(x, y, 0)}, each point identified with the complex number z = x + iy;
the coordinate of the north pole N is (0, 0, r); for any point z on C, the straight line connecting N and z intersects the spherical surface S at a point P.
Further, in the static coordinate conversion, the horizontal rotation angle and the vertical rotation angle required for the measuring device to point at P are obtained in vector form; wherein, since the direction vector NP is collinear with Nz = (a, b, -r) for a plane point z = (a, b, 0), the horizontal rotation angle is arctan(a / r) and the vertical rotation angle is arctan(b / √(a² + r²)).
further, in the dynamic coordinate conversion, the direction vector required after the displacement of the measuring device is obtained from the displacement L of the measuring device and the principle of similar triangles: for a plane point z = (a, b, 0), the direction vector becomes (a, b, -(r - L));
from it are obtained the horizontal rotation angle arctan(a / (r - L)) and the vertical rotation angle arctan(b / √(a² + (r - L)²)) required for the measuring optical machine to point at the point P.
compared with the prior art, the invention has the following beneficial effects:
the invention obtains a standard preset template picture by a camera after internal reference calibration under the requirement of template image acquisition by means of the existing image processing principle, realizes the conversion from a two-dimensional coordinate to a three-dimensional coordinate by means of a standard two-dimensional camera coordinate corresponding to a pixel coordinate under the preset template image, realizes the automatic alignment of a measuring optical machine in a measuring system by three-dimensional coordinate information, and has simple and convenient whole measuring process.
The coordinate information of the intrusion can be converted onto the template image through feature-point matching, and the two-dimensional coordinate information in the template image is converted into the two-dimensional coordinate information of the camera through the epipolar constraint; through these two steps the intrusion image is linked with the camera in the measuring system. Because feature-point matching is used, the requirements on the intrusion image are not high when it is acquired, and the image conversion ensures that the measuring equipment only needs to be installed near a specified position: an intrusion image obtained within a fuzzy range is converted into relatively accurate coordinate information through image processing. This simplifies the overall work before measurement, reduces workload and human error, and allows the preset template image to be reused for the same environment, so that the measuring process is repeatable.
According to the invention, the two-dimensional coordinates in the camera are directly converted into the rotation and inclination angles required for aligning the measuring optical machine by the principle of complex-sphere conversion, simplifying the whole process from two-dimensional to three-dimensional coordinates. The two modes of static and dynamic coordinate conversion effectively improve the accuracy after conversion, that is, they effectively guarantee the automatic alignment of the measuring optical machine.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Examples
A coordinate conversion method based on image processing and the complex sphere comprises image recognition, image conversion and coordinate conversion.
Before automatic alignment can be realized by clicking the screen, two-dimensional coordinates in the image must be established, because in a camera imaging system the image pixel coordinates must be converted into actual world coordinates in space, which is done through calibration. Although the intrinsic parameters have good stability once calibrated, the extrinsic parameters of a camera system change with the camera position and the use environment; this means that calibration and similar operations would otherwise be needed every time before measurement is completed with the camera imaging principle, and calibration is a complicated and error-prone process. Therefore, by means of existing image-processing theory, a standard preset template image is obtained with an intrinsically calibrated camera under the preset-template-image acquisition requirements, so that the standard two-dimensional camera coordinates under the preset template image can be obtained through the intrinsic parameters.
Image recognition: as shown in fig. 1, a template image is preset and an intrusion image is acquired;
as shown in fig. 1, feature recognition is performed on the intrusion image and the template image to obtain the two-dimensional pixel coordinates of the feature points;
as shown in fig. 1, the two-dimensional pixel coordinates of the feature points of the intrusion image and the template image are matched, and several groups of matching points whose similarity is higher than a set value are found;
wherein the intrusion image is obtained by a shooting device, the template image is a known image, and the shooting device adopts a camera.
The two-dimensional pixel coordinates of the feature points of the intrusion image and the template image are identified first, and then matched. Brute-force matching is adopted: the descriptor of each feature in the intrusion image's feature-point set is compared with all feature descriptors in the template image's feature-point set, the similarity between the two descriptors (generally speaking, the Euclidean distance between the pixel points) is computed, and the pair of matching points with the highest similarity is returned as the final matching result. Once a group of corresponding matching points is found, the spatial positions represented by the successfully matched pixels are considered essentially the same on the preset template image and the intrusion image, so the relationship between the whole images can be obtained from the relationship between the matching points. Through this relationship, the points on the intrusion image, whose conditions are unknown, are converted into corresponding points on the preset template image, whose conditions are known.
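The brute-force matching step described above can be sketched as follows. This is a minimal illustration with toy three-element descriptors and Euclidean distance; the names and values are hypothetical, and real binary descriptors would be compared with the Hamming distance instead.

```python
import math

def match_brute_force(desc_a, desc_b):
    """For every descriptor in desc_a, find the closest descriptor in
    desc_b by Euclidean distance and return (index_a, index_b, dist)."""
    matches = []
    for i, da in enumerate(desc_a):
        best_j, best_d = -1, float("inf")
        for j, db in enumerate(desc_b):
            d = math.dist(da, db)  # Euclidean distance between descriptors
            if d < best_d:
                best_j, best_d = j, d
        matches.append((i, best_j, best_d))
    return matches

# Toy descriptors: feature 0 of image A should match feature 1 of image B.
A = [(0.0, 0.0, 1.0), (5.0, 5.0, 5.0)]
B = [(4.9, 5.1, 5.0), (0.1, 0.0, 1.0)]
matches = match_brute_force(A, B)
```

Each returned pair with the smallest distance is taken as the final matching result, exactly as in the description above.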
As shown in fig. 1, epipolar constraint is performed on matching points;
as shown in fig. 1, the motion changes R and t of the shooting device are obtained by decomposition of the essential matrix E and the fundamental matrix F, where R is the rotation matrix and t is the translation vector;
the two-dimensional coordinates on the intrusion image are converted onto the template image through the camera's motion change;
the camera system is placed directly in front of the measuring frame at a distance D from the center point of the planar surface structure of the measuring frame, with the shooting device and the measuring device kept on the same horizontal line;
the two-dimensional coordinates of the shooting device at the distance D can be expressed through the matrix relationship D·[u, v, 1]^T = K·[X, Y, D]^T;
wherein (X, Y) are the two-dimensional coordinates of the shooting device. The shooting device adopts a camera, and the measuring device adopts a measuring optical machine.
The epipolar constraint is used to solve for the motion between the intrusion image and the template image; let the motion from the intrusion image to the template image be R and t.
Knowing the camera intrinsic matrix K, for a certain point P in space, its spatial coordinates in the intrusion-image frame are P = [X, Y, Z]^T.
According to the camera model, P corresponds to the pixel points p1 and p2 in the two images, with the relationships s1·p1 = K·P and s2·p2 = K·(R·P + t),
from which is obtained p2^T·K^{-T}·t^∧·R·K^{-1}·p1 = 0,
wherein x1 = K^{-1}·p1 and x2 = K^{-1}·p2 are the normalized-plane coordinates, the essential matrix is E = t^∧·R, and the fundamental matrix is F = K^{-T}·E·K^{-1}, so that x2^T·E·x1 = p2^T·F·p1 = 0.
Decomposing the essential matrix E and the fundamental matrix F yields the camera motion changes R and t.
after the camera motion change relation is obtained, the two-dimensional coordinates on the image are subjected to limit invasionObtained by converting camera motion changes to template images;
Specifically, the camera system is placed at a position which is right in front of the measuring equipment frame and has a length D from the center point of the surface plane structure of the measuring frame, and the camera system and the measuring light machine are kept on the same horizontal line; the two-dimensional coordinates of the camera at the dimension D can be represented by a matrix relationship:
Let the motion from the intrusion image to the template image be R (rotation matrix) and t (translation vector).
Knowing the camera intrinsic matrix K, for a point P in space, the spatial coordinates in the first image frame are P = [X, Y, Z]^T.
According to the camera model, the corresponding pixel points p1 and p2 in the two images satisfy s1·p1 = K·P and s2·p2 = K·(R·P + t).
According to the imaging relationship of the image sensor, s1·p1 and K·P are projectively related, as are s2·p2 and K·(R·P + t), so each pair is equal in the sense of scale.
The two relationships can therefore be rewritten as p1 ≃ K·P and p2 ≃ K·(R·P + t).
Let x1 = K^{-1}·p1 and x2 = K^{-1}·p2 denote the coordinates of the two pixel points on the normalized plane; substituting into the formulas above gives x2 ≃ R·x1 + t.
Left-multiplying both sides by t^∧ eliminates t, since t^∧·t = t × t = 0; left-multiplying again by x2^T makes the left-hand side x2^T·t^∧·x2, which is always 0 (and 0 multiplied by any scale constant is 0), so the relation can be rewritten as x2^T·t^∧·R·x1 = 0; then there is p2^T·K^{-T}·t^∧·R·K^{-1}·p1 = 0.
Denote the middle parts as two matrices: the essential matrix E = t^∧·R and the fundamental matrix F = K^{-T}·E·K^{-1}, so that x2^T·E·x1 = p2^T·F·p1 = 0.
Decomposing the essential matrix E and the fundamental matrix F yields the camera motion changes R and t.
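The epipolar relation above can be checked numerically. The sketch below uses an arbitrary example rotation and translation (not values from the patent), builds the essential matrix E = t^∧·R, and verifies that the residual x2^T·E·x1 vanishes for a synthetic space point:

```python
import math

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def skew(t):
    """t^ : the antisymmetric matrix such that (t^)v = t x v."""
    return [[0, -t[2], t[1]],
            [t[2], 0, -t[0]],
            [-t[1], t[0], 0]]

# Example camera motion: rotation about the y-axis by angle th, translation t.
th = 0.1
R = [[math.cos(th), 0.0, math.sin(th)],
     [0.0, 1.0, 0.0],
     [-math.sin(th), 0.0, math.cos(th)]]
t = [0.5, 0.0, 0.1]

E = mat_mul(skew(t), R)  # essential matrix E = t^ R

# A space point P in the first frame, projected to normalized coordinates
# x1 = P/Z1 in the first view and x2 = (R P + t)/Z2 in the second view.
P = [0.3, -0.2, 2.0]
x1 = [P[0] / P[2], P[1] / P[2], 1.0]
P2 = [mat_vec(R, P)[i] + t[i] for i in range(3)]
x2 = [P2[0] / P2[2], P2[1] / P2[2], 1.0]

Ex1 = mat_vec(E, x1)
residual = sum(x2[i] * Ex1[i] for i in range(3))  # x2^T E x1, ideally 0
```

The residual is zero up to floating-point error for any point and any rigid motion, which is exactly the constraint that the decomposition step exploits.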
As shown in fig. 1, after the epipolar constraint, the conversion relationship between the two sets of two-dimensional points is known. Through this relationship, the two-dimensional pixel coordinates on the less standard intrusion image can be converted onto the standard preset template image via the camera motion change, and the converted pixel coordinates are then placed onto the surface plane of the measuring frame using the shooting data of the standard preset template image.
After the image processing above, the correspondence R, t between pixel points of the intrusion image and the preset template image is determined. Using the mark saved in the earlier intrusion image, the pixel coordinate p = [u, v, 1]^T of the current intrusion point is obtained; then, since a camera system with factory-calibrated intrinsics is used, the intrinsic matrix K is known, and the normalized coordinate x = K^{-1}·p follows.
However, x is only a direction on the normalized plane: without three-dimensional reconstruction, the pixel coordinates still cannot be connected to three-dimensional coordinates.
To this end, the camera system is first placed directly in front of the measuring-equipment frame at a distance D from the center point of the planar surface structure of the frame, with the camera system and the measuring optical machine kept on the same horizontal line.
At this time, according to the imaging principle of the camera, the two-dimensional coordinates of the camera at the distance D can be expressed through the matrix relationship D·[u, v, 1]^T = K·[X, Y, D]^T, i.e. X = D·(u − c_x)/f_x and Y = D·(v − c_y)/f_y;
wherein (X, Y) are the two-dimensional coordinates of the camera, that is, the two-dimensional coordinates on the surface plane of the frame.
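A minimal sketch of this back-projection onto the frame plane at depth D, assuming the standard pinhole relation X = D·(u − c_x)/f_x and Y = D·(v − c_y)/f_y; the intrinsic values below are hypothetical, chosen only for illustration:

```python
def pixel_to_plane(u, v, K, D):
    """Back-project pixel (u, v) onto the frame plane at depth D using the
    pinhole model D*[u, v, 1]^T = K*[X, Y, D]^T."""
    fx, cx = K[0][0], K[0][2]
    fy, cy = K[1][1], K[1][2]
    X = D * (u - cx) / fx
    Y = D * (v - cy) / fy
    return X, Y

# Hypothetical intrinsic matrix (focal lengths and principal point in pixels).
K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]

# A pixel 80 columns right of the principal point, at depth D = 2 m.
X, Y = pixel_to_plane(400.0, 240.0, K, D=2.0)
```

The principal point itself maps to the plane origin, and offsets scale linearly with D, which is why the two-dimensional plane coordinates depend on the fixed installation distance.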
In the actual measurement process, the device must be installed first, and in conventional measurement modes the installation process has strict requirements. For example, if spatial information is acquired through camera calibration, subsequent measurement often must follow installation according to specified requirements; if the device is not installed strictly to those requirements, it must be calibrated on site every time, which is cumbersome. In particular, once a deviation arises during installation under the specified requirements, the measuring personnel introduce a certain error into the whole measurement result. To simplify the installation requirements and avoid unnecessary human error, when measuring after the preset template image has been obtained, the equipment need not be re-installed to the strict specification under which the preset template image was acquired; it is instead installed within a fuzzy range, at any position where information similar to the preset template image can be obtained, and the computer then performs correction through the steps above. This simplifies the setup work before measurement, reduces workload and human error, and lets the preset template image be reused for the same environment, making the measurement process repeatable. The image-processing technique is therefore chosen to simplify the operating steps and avoid the errors easily caused by installation.
Through the steps above, the two-dimensional coordinate point on the measuring-frame plane in the preset template image shot by the camera is obtained. However, this two-dimensional coordinate has no directionality in three-dimensional space; that is, a two-dimensional coordinate in the plane cannot by itself provide the rotation and inclination angles required for the measuring device to align automatically with the designated position. A method is therefore needed that maps a two-dimensional coordinate to a spatial three-dimensional coordinate in the measurement environment, with the property that every point on the two-dimensional plane has exactly one corresponding spatial three-dimensional point. The spatial direction information contained in that three-dimensional coordinate is then reliable for the corresponding two-dimensional point: when the measuring device points at the spatial three-dimensional coordinate, it simultaneously points at the position of the corresponding two-dimensional coordinate point.
Coordinate conversion: comprising static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
as shown in fig. 1, it is determined whether the measurement state is static or dynamic; if static, static coordinate conversion is performed;
if dynamic, dynamic coordinate conversion is performed;
static coordinate conversion: as shown in fig. 1, the spherical coordinate-conversion principle of the complex sphere is used to convert the two-dimensional coordinates of the shooting device into three-dimensional coordinates in the spherical coordinate system, and the rotation and inclination angles required for the measuring device to point at those three-dimensional coordinates are obtained;
the rotation- and inclination-angle data are transmitted to the measuring device;
dynamic coordinate conversion: as shown in fig. 1, the two-dimensional coordinates of the shooting device are converted into three-dimensional coordinates in the spherical coordinate system according to the displacement of the measuring device and the spherical coordinate-conversion principle of the complex sphere, and the rotation and inclination angles required for the measuring device to point at those three-dimensional coordinates are obtained;
and the rotation- and inclination-angle data are transmitted to the measuring device.
First, a spherical system is established: the fixed measuring optical machine is taken as the north pole N; the foot of the perpendicular from N to the camera template plane is taken as the circle center O; and the distance from the north pole to the circle center is taken as the radius r. With O as the origin, the spherical surface S can be expressed as S = {(x, y, z) : x² + y² + z² = r²}.
The camera template plane is recorded as C = {(x, y, 0)}, each point identified with the complex number z = x + iy.
The north pole N then has coordinates (0, 0, r). For any point z on C, the straight line joining N and z must intersect the sphere S at a point P. It is easy to see that when |z| > r, P lies in the northern hemisphere; when |z| < r, P lies in the southern hemisphere; and when |z| = r, P and z coincide.
As z tends toward ∞, P tends toward the north pole N.
Thus, according to vector similarity and the spherical equation, when the two-dimensional camera coordinates of the point z are (a, b), P is
P = ( 2r²a/(a² + b² + r²), 2r²b/(a² + b² + r²), r(a² + b² − r²)/(a² + b² + r²) ).
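This stereographic mapping from the plane onto the sphere (radius r centered at O, north pole N = (0, 0, r)) can be sketched and checked directly; the boundary cases |z| = r and z → ∞ behave as described:

```python
def plane_to_sphere(a, b, r):
    """Map the plane point z = (a, b, 0) to the point P where the line
    through the north pole N = (0, 0, r) and z meets the sphere
    x^2 + y^2 + z^2 = r^2."""
    s = a * a + b * b + r * r
    return (2 * r * r * a / s,
            2 * r * r * b / s,
            r * (a * a + b * b - r * r) / s)

r = 2.0
# |z| = r: the point lies on the equator circle and maps to itself.
P = plane_to_sphere(2.0, 0.0, r)
# |z| very large: P approaches the north pole N = (0, 0, r).
Q = plane_to_sphere(1e9, 0.0, r)
```

Because the line N-z-P is unique for every plane point, each two-dimensional coordinate has exactly one spherical image, which is the one-to-one property the method relies on.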
When the measuring optical machine is now rotated to point at the point P, its line of sight must fall on the position of the point z; through the vector form, the horizontal rotation angle and vertical rotation angle that the optical machine needs at this moment can be obtained directly.
That is, the rotation angles required for the optical machine to point at P can be obtained from the two-dimensional coordinates, and the mapping of the two dimensions into three dimensions is unique and reliable.
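A sketch of the angle computation, under an assumed pan/tilt convention in which the boresight runs from N to the circle center O; the exact angle formulas are not spelled out in this text, so this convention is an assumption for illustration:

```python
import math

def pointing_angles(a, b, r):
    """Pan and tilt (degrees) for the optical machine at N = (0, 0, r) to
    point at the plane point (a, b, 0), measured from the boresight N->O.
    Since N, z and P are collinear, pointing at z also points at P."""
    pan = math.degrees(math.atan2(a, r))                 # horizontal rotation
    tilt = math.degrees(math.atan2(b, math.hypot(a, r))) # vertical rotation
    return pan, tilt

# A point level with the boresight, offset sideways by the full distance r:
# pan should be 45 degrees and tilt zero.
pan, tilt = pointing_angles(1.0, 0.0, 1.0)
```

The same two angles are what the coordinate-conversion step transmits to the measuring device.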
In a preferred embodiment, the template image is obtained as follows:
the shooting device adopts a camera whose intrinsic parameters have been calibrated;
in a preferred embodiment, when identifying feature points, a pixel point p with brightness U is first selected in the intrusion image or the template image;
the 16 pixels on a circle closest to the pixel point p are selected; if, among the selected pixels, there exist N contiguous points whose brightness is greater than U + T or less than U − T (where T is a threshold related to U), the point p is regarded as a feature point; wherein N is 9, 11 or 12;
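The contiguity test just described can be sketched as follows. This is a simplified version that takes the 16 ring intensities as given rather than sampling them from an image, and treats the ring circularly:

```python
def is_fast_corner(circle, U, T, N=12):
    """Simplified FAST test: `circle` holds the 16 intensities sampled on a
    ring around p (whose brightness is U); p is a corner if N contiguous
    samples are all brighter than U+T or all darker than U-T."""
    n = len(circle)
    for sign in (+1, -1):  # +1: brighter-than test, -1: darker-than test
        run = 0
        # Walk the ring twice so runs that wrap around are counted.
        for i in range(2 * n):
            if sign * (circle[i % n] - U) > T:
                run += 1
                if run >= N:
                    return True
            else:
                run = 0
    return False

bright = [200] * 12 + [100] * 4  # 12 contiguous bright samples -> corner
flat = [100] * 16                # uniform ring -> not a corner
corner = is_fast_corner(bright, U=100, T=30)
not_corner = is_fast_corner(flat, U=100, T=30)
```

Only brightness comparisons are involved, which is why this detector is so fast in practice.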
That is, feature-point identification adopts the ORB algorithm, computing FAST key points and BRIEF descriptors. FAST compares only pixel brightness: a pixel point p with brightness U is selected in the image, a threshold T related to U is set, and the 16 pixels on a circle centered on p are examined; when N contiguous points among them have brightness greater than U + T or less than U − T, p is regarded as a feature point, with N generally chosen as 9, 11 or 12. These steps are repeated until every pixel has been examined. Because FAST compares only brightness differences, it is very fast to compute, but the test is simplistic, which brings the defects of weak repeatability and non-uniform distribution; moreover, FAST corners carry no direction information and suffer from the scale problem. Therefore, in ORB, an image pyramid is constructed to add scale invariance: when the camera moves forward or backward, matches can be found between an upper layer of the previous image's pyramid and a lower layer of the next image's pyramid, or vice versa. Rotation is handled by introducing the gray-centroid method. The centroid is the center weighted by the image gray values, and the gray centroid of the image near the feature point must be computed. The specific method is as follows:
First, a small image block A is selected, and the moments of the image block are defined as

\[ m_{pq} = \sum_{x,y \in A} x^p y^q I(x,y), \quad p, q \in \{0, 1\} \]

The centroid of the image block is found from the moments:

\[ C = \left( \frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}} \right) \]

After finding the centroid, the geometric center O and the centroid C of the image block A are connected to obtain the direction vector \( \vec{OC} \); the direction of the feature point is then defined as

\[ \theta = \arctan\left( m_{01} / m_{10} \right) \]
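The gray-centroid orientation can be sketched numerically as follows; measuring coordinates from the geometric center of the patch is an assumption of this sketch:

```python
import numpy as np

def orientation(patch):
    """Feature direction from the gray-level centroid of a patch:
    m10 and m01 are intensity-weighted coordinate sums measured from the
    geometric center O, and theta = atan2(m01, m10) points from O toward C."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]].astype(float)
    xs -= (patch.shape[1] - 1) / 2.0   # coordinates relative to the center O
    ys -= (patch.shape[0] - 1) / 2.0
    m10 = float((xs * patch).sum())
    m01 = float((ys * patch).sum())
    return np.arctan2(m01, m10)
```

A patch whose mass sits to the right of the center yields theta = 0; mass below the center yields theta = pi/2 (image y axis pointing down).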
The BRIEF descriptor is a binary descriptor whose vector consists of many 0s and 1s: for a random pair of pixels p and q, the bit is 1 if the intensity at p is greater than that at q, and 0 otherwise. Because BRIEF compares randomly selected points and stores the result in binary, it is very fast to compute and convenient for a computer to store. The original BRIEF descriptor has no rotation invariance; in ORB, the direction computed during the FAST corner extraction stage is used, so the BRIEF descriptor gains good rotation invariance from that direction information.
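A minimal sketch of the binary test and the Hamming-distance comparison used to match such descriptors; the pair layout and function names are illustrative assumptions:

```python
import numpy as np

def brief(patch, pairs):
    """Binary descriptor: bit i is 1 iff patch[p_i] > patch[q_i],
    where pairs is a fixed list of ((y1, x1), (y2, x2)) point pairs."""
    return np.array([1 if patch[p] > patch[q] else 0 for p, q in pairs], np.uint8)

def hamming(d1, d2):
    """Hamming distance between two binary descriptors (count of differing bits)."""
    return int(np.count_nonzero(d1 != d2))
```

Brute-force matching then pairs each descriptor in one image with the smallest-Hamming-distance descriptor in the other.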
In a preferred embodiment, before initial use, a template image of the detection environment is acquired and stored; that is, a camera with calibrated intrinsic parameters captures a standard preset template image under the preset template image acquisition requirements. The measuring frame, camera and measuring optical machine in the physical tooling used by the coordinate conversion method are a matched set.
After the template image is obtained at scale D, feature point identification is performed on the two images with a feature operator having scale and rotation invariance, and brute-force matching then yields a group of successfully matched corresponding feature point pairs;
and epipolar constraint is applied to this feature point group to obtain the rotation and translation relation between the two images.
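The epipolar relation behind this step can be sketched numerically: given camera motion (R, t), matched normalized points satisfy x2' E x1 = 0 with essential matrix E = [t]x R. In practice E is estimated from the matched pairs and then decomposed back into R and t (e.g. via SVD), which this illustrative sketch omits:

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix so that skew(t) @ v equals np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = [t]_x R for the camera motion P2 = R @ P1 + t."""
    return skew(t) @ R

def epipolar_residual(E, x1, x2):
    """x2^T E x1 -- zero (up to noise) for a correctly matched point pair."""
    return float(x2 @ E @ x1)
```

Feeding a 3D point through a known (R, t) and checking the residual is a standard sanity test for this construction.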
Generally, when a gauge-intrusion point is detected while traveling, the measurement cannot stop instantly and exactly on that cross-section of the frame structure; a braking and coasting process intervenes, so the three-dimensional coordinates must be converted so that the displaced measuring optical machine can still be steered accurately to the intended direction.
After the template image is obtained at the standard scale, feature point identification is performed on the two images with a feature operator having scale and rotation invariance, brute-force matching yields a group of successfully matched corresponding feature point pairs, and finally the epipolar constraint applied to these corresponding pairs gives the rotation and translation relation between the two images, so that points on a non-standard acquired image can be converted to points on the standard template image, facilitating subsequent measurement.
If the displacement of the measuring optical machine is L (obtained from the difference of odometer mileage readings), the direction vector required by the optical machine after the displacement can be obtained through similar triangles. In the coordinate system at the moment the intrusion point was detected, the similar-triangle relationship yields this direction vector, and from it the rotation angles required by the measuring optical machine.
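A minimal numerical sketch of this similar-triangle re-aiming; taking the direction of travel as the z axis and the angle conventions below are assumptions of the sketch, not taken from the patent:

```python
import numpy as np

def redirect_after_travel(P, L):
    """New pointing angles toward the intrusion point P = (x, y, z) after the
    measuring rig has advanced by L along the track (assumed to be the z axis):
    the forward leg of the triangle shrinks from z to z - L while the lateral
    offsets x, y are unchanged."""
    x, y, z = P
    z_new = z - L                                # similar-triangle shortened forward leg
    yaw = np.arctan2(x, z_new)                   # horizontal rotation angle
    pitch = np.arctan2(y, np.hypot(x, z_new))    # vertical rotation angle
    return yaw, pitch
```

For example, a point 10 m ahead and 5 m to the side requires a 45-degree yaw after advancing 5 m.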
In a preferred embodiment, the measuring optical machine provides at least a 270-degree pitch rotation range, a 450-degree yaw rotation range, and a 50-meter measuring range.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A coordinate conversion method based on image processing and a complex sphere, characterized in that: the method comprises the steps of image recognition, image conversion and coordinate conversion;
image recognition: presetting a template image; acquiring an intrusion image;
performing feature identification on the intrusion image and the template image to obtain two-dimensional pixel coordinate information of the feature points;
matching the feature-point pixel two-dimensional coordinate information of the intrusion image and the template image, and finding a plurality of groups of matching points whose similarity is higher than a set value;
wherein the intrusion image is obtained by a shooting device, and the template image is a known image;
image conversion: carrying out epipolar constraint on the matching points;
decomposing the essential matrix E and the fundamental matrix F to obtain the rotation R and translation t of the shooting device;
converting two-dimensional coordinates on the intrusion image to the template image according to the motion change of the camera;
placing the shooting device directly in front of the measuring frame, at a distance D from the center point of the surface plane structure of the measuring frame, and keeping the shooting device and the measuring device on the same horizontal line;
and expressing the two-dimensional coordinates of the shooting device at scale D through a matrix relation:
and (3) coordinate conversion: the method comprises the steps of static coordinate conversion and dynamic coordinate conversion;
establishing a spherical coordinate system based on a complex spherical surface;
judging whether the measurement state is static or dynamic; if static, performing static coordinate conversion;
if dynamic, performing dynamic coordinate conversion;
static coordinate conversion: converting the two-dimensional coordinates from the shooting device into three-dimensional coordinates in the spherical coordinate system by the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation inclination angle required for the measuring device to point at the three-dimensional coordinates;
transmitting the rotation inclination angle data to the measuring device;
dynamic coordinate conversion: converting the two-dimensional coordinates from the shooting device into three-dimensional coordinates in the spherical coordinate system according to the displacement of the measuring device and the spherical coordinate conversion principle of the complex sphere, and obtaining the rotation inclination angle required for the measuring device to point at the three-dimensional coordinates;
and transmitting the rotation inclination angle data to a measuring device.
2. The method of claim 1, characterized in that: the shooting device is a camera with calibrated intrinsic parameters; in the detection environment, a standard preset template image is obtained by shooting under the preset template image acquisition requirements; an image pyramid is constructed during feature point identification; after the shooting device moves, the matching image from before the movement is found in the image pyramid; meanwhile, during feature point identification, the rotation characteristic is introduced using the gray centroid method.
3. The method of claim 2, characterized in that, in the calculation of the gray centroid: first a small image block A is selected and the moments of the image block are defined; the centroid of the image block is found from its moments; the geometric center O and the centroid C of the image block A are connected to obtain a direction vector, and the direction of the feature point is determined from this direction vector.
4. The method of claim 1, characterized in that, after the template image is obtained at scale D, feature point identification is performed on the two images with a feature operator having scale and rotation invariance, and brute-force matching yields a group of successfully matched corresponding feature point pairs;
and carrying out epipolar constraint on the feature point group to obtain a rotation and translation relation of the two images.
5. The method of claim 1, characterized in that a spherical coordinate system is established based on the complex-sphere principle to represent three-dimensional world coordinates: the fixed position of the measuring optical machine is taken as the north pole, the foot of the perpendicular from the pole to the camera-shot template plane is taken as the circle center, and the distance from the north pole to the circle center is taken as the radius r; the spherical surface is expressed as:
the camera template plane is recorded as:
7. The method of claim 5, characterized in that, in the dynamic coordinate conversion, the direction vector required after the displacement of the measuring device is obtained from the displacement L of the measuring device and the similar-triangle principle:
and the horizontal rotation angle and vertical rotation angle required for the measuring optical machine to point at the point P are obtained; wherein,
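The static conversion of claims 5 and 7 can be sketched as follows; the axis conventions, the function name, and measuring the plane point from the circle center are assumptions of this illustration, not taken from the claims:

```python
import numpy as np

def plane_point_to_angles(u, v, r):
    """Angles steering the measuring optical machine (placed at the sphere's
    north pole) toward the point (u, v) on the template plane, where the plane
    lies a perpendicular distance r from the pole and (u, v) is measured from
    the circle center (the foot of that perpendicular)."""
    yaw = np.arctan2(u, r)                    # horizontal rotation angle
    pitch = np.arctan2(v, np.hypot(u, r))     # vertical rotation angle
    return yaw, pitch
```

A point at the circle center needs no rotation; a point offset laterally by r needs a 45-degree horizontal rotation.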
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310080890.0A CN115797185B (en) | 2023-02-08 | 2023-02-08 | Coordinate conversion method based on image processing and complex sphere |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115797185A true CN115797185A (en) | 2023-03-14 |
CN115797185B CN115797185B (en) | 2023-05-02 |
Family
ID=85430462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310080890.0A Active CN115797185B (en) | 2023-02-08 | 2023-02-08 | Coordinate conversion method based on image processing and complex sphere |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797185B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102798456A (en) * | 2012-07-10 | 2012-11-28 | 中联重科股份有限公司 | Method, device and system for measuring working range of engineering mechanical arm frame system |
CN106251395A (en) * | 2016-07-27 | 2016-12-21 | 中测高科(北京)测绘工程技术有限责任公司 | A kind of threedimensional model fast reconstructing method and system |
CN106530218A (en) * | 2016-10-28 | 2017-03-22 | 浙江宇视科技有限公司 | Coordinate conversion method and apparatus |
US20170178353A1 (en) * | 2014-02-14 | 2017-06-22 | Nokia Technologies Oy | Method, apparatus and computer program product for image-driven cost volume aggregation |
CN107820012A (en) * | 2017-11-21 | 2018-03-20 | 暴风集团股份有限公司 | A kind of fish eye images processing method, device, server and system |
CN107845096A (en) * | 2018-01-24 | 2018-03-27 | 西安平原网络科技有限公司 | Planet three-dimensional information assay method based on image |
CN108828606A (en) * | 2018-03-22 | 2018-11-16 | 中国科学院西安光学精密机械研究所 | One kind being based on laser radar and binocular Visible Light Camera union measuring method |
CN109916304A (en) * | 2019-04-01 | 2019-06-21 | 易思维(杭州)科技有限公司 | Mirror surface/class mirror surface three-dimensional measurement of objects system calibrating method |
CN109949232A (en) * | 2019-02-12 | 2019-06-28 | 广州南方卫星导航仪器有限公司 | Measurement method, system, electronic equipment and medium of the image in conjunction with RTK |
CN111024003A (en) * | 2020-01-02 | 2020-04-17 | 安徽工业大学 | 3D four-wheel positioning detection method based on homography matrix optimization |
CN111160232A (en) * | 2019-12-25 | 2020-05-15 | 上海骏聿数码科技有限公司 | Front face reconstruction method, device and system |
CN113902810A (en) * | 2021-09-16 | 2022-01-07 | 南京工业大学 | Robot gear chamfering processing method based on parallel binocular stereo vision |
WO2023276567A1 (en) * | 2021-06-29 | 2023-01-05 | 富士フイルム株式会社 | Image processing device, image processing method, and program |
Non-Patent Citations (5)
Title |
---|
AO SUN等: "3D Estimation of Single Image based on Homography Transformation" * |
刘喜兵: "基于三维重建的遥感影像点云生成关键技术研究" * |
焦增涛: "基于结构先验的规则场景三维重建技术研究" * |
陈华等: "基于AD7606-6的STATCOM信号采集模块设计" * |
黄清杰等: "轨道交通高架车站清水混凝土轨道梁预制技术" * |
Also Published As
Publication number | Publication date |
---|---|
CN115797185B (en) | 2023-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
CN109523595B (en) | Visual measurement method for linear angular spacing of building engineering | |
CN106650701B (en) | Binocular vision-based obstacle detection method and device in indoor shadow environment | |
CN106996748A (en) | A kind of wheel footpath measuring method based on binocular vision | |
CN112648976B (en) | Live-action image measuring method and device, electronic equipment and storage medium | |
Xia et al. | Global calibration of non-overlapping cameras: State of the art | |
Wang et al. | Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle | |
CN114998448A (en) | Method for calibrating multi-constraint binocular fisheye camera and positioning space point | |
Meng et al. | Defocused calibration for large field-of-view binocular cameras | |
Wang et al. | Corners positioning for binocular ultra-wide angle long-wave infrared camera calibration | |
CN110197104B (en) | Distance measurement method and device based on vehicle | |
Ohno et al. | Study on real-time point cloud superimposition on camera image to assist environmental three-dimensional laser scanning | |
CN113393524A (en) | Target pose estimation method combining deep learning and contour point cloud reconstruction | |
CN113409242A (en) | Intelligent monitoring method for point cloud of rail intersection bow net | |
CN112767459A (en) | Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion | |
CN108564626B (en) | Method and apparatus for determining relative pose angle between cameras mounted to an acquisition entity | |
Goto et al. | 3D environment measurement using binocular stereo and motion stereo by mobile robot with omnidirectional stereo camera | |
CN116091603A (en) | Box workpiece pose measurement method based on point characteristics | |
CN113790711B (en) | Unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and storage medium | |
CN116202487A (en) | Real-time target attitude measurement method based on three-dimensional modeling | |
CN115797185B (en) | Coordinate conversion method based on image processing and complex sphere | |
CN109815966A (en) | A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm | |
CN111854678B (en) | Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision | |
CN115147344A (en) | Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance | |
CN114626112A (en) | Unknown object surface measurement viewpoint planning method based on boundary inspection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||