CN118209087B - Space point and plane positioning calibration method based on photogrammetry - Google Patents
Space point and plane positioning calibration method based on photogrammetry
- Publication number
- CN118209087B (application CN202410608326.6A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- camera
- point
- target
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
  - G01—MEASURING; TESTING
    - G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
      - G01C1/00—Measuring angles
      - G01C3/00—Measuring distances in line of sight; Optical rangefinders
      - G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
        - G01C11/04—Interpretation of pictures
Abstract
The application relates to the technical field of distance and azimuth measurement, and in particular to a positioning and calibration method for space points and planes based on photogrammetry. A camera is mounted at a space target point M so that it faces a target surface N perpendicularly, and a point coordinate system S, a surface coordinate system D and a camera coordinate system C are established. Space coordinates are obtained from the pixel coordinates of the RGB image captured by the camera together with the camera intrinsic matrix K, and finally the deviations in distance and orientation between the actual position and orientation and the theoretical position and orientation are obtained, which guides how the distance and angle should be adjusted, thereby achieving the purpose of calibration. The invention can accurately measure the distance and relative orientation between any target point M and target surface N in space, as well as the spatial distance and Euler angles between the current state of the target point M and/or the target surface N and a preset target state, so that the positions and angles of the target point M and the target surface N can be adjusted under this guidance, achieving accurate installation and debugging.
Description
Technical Field
The invention relates to the technical field of distance and azimuth measurement, and in particular to a method for using photogrammetry to measure the relative position or relative orientation between a specific point and a specific surface in space, or to measure the deviation of the current position and orientation from a standard target position and orientation; more particularly, it relates to a positioning and calibration method for space points and planes based on photogrammetry.
Background
Quantitative measurement of the distance and/or orientation between two arbitrary objects in space is generally required in the installation and calibration of high-precision equipment. However, because many influencing factors and variables exist in space, parameterized calibration of the spatial distance and orientation between two specific objects is difficult. Taking the field of mechanical installation as an example, when a three-jaw chuck of a machine tool is installed and positioned, the coaxiality of the mounting points and the perpendicularity to the mounting surface of the machine tool must be ensured; even a slight deviation causes eccentric rotation. Likewise, in the field of X-ray inspection technology, the central ray of the X-ray beam must coincide with the perpendicular passing through the center point of the detector, which constitutes the optimal installation relationship between an X-ray machine and the detector receiving the X-ray signals.
Various techniques for spatial measurement already exist in the prior art. For example, Chinese patent application publication No. CN108896030A discloses a spatial positioning device that can determine the absolute position of an object in space in real time, but it cannot measure the displacement difference and angle difference between the current position and a theoretical or target position.
In order to determine the difference between the actual position and the theoretical optimal position or target position, so that adjustment can be made and accurate positioning and installation achieved, the invention provides a positioning and calibration method for space points and planes based on photogrammetry to solve the above technical problem.
Disclosure of Invention
In order to solve the problem of distance and/or orientation deviation between the current position and the target position of a measured object in space, the application provides a positioning and calibration method for space points and planes based on photogrammetry. The method measures the relative distance and orientation between a point on one specific object and the plane of another specific object in a spatial coordinate system, as well as the distance and/or orientation deviation between the current position of that plane and its theoretical/target position, so that during equipment installation and precision debugging the equipment can be adjusted to the theoretical optimum or a preset target position by measuring the deviations in distance and orientation.
In order to achieve the above purpose, the technical scheme adopted by the application is as follows:
the invention provides a positioning calibration method based on photogrammetry space points and planes, which comprises the following steps:
Step STP100: establish a space coordinate system at the space target point M, denoted the point coordinate system S; then establish a space coordinate system at the center point of the space target surface N, denoted the surface coordinate system D; and obtain the conversion matrix T_SD that converts any point k in the point coordinate system S into coordinates in the surface coordinate system D,
Wherein SID is the linear distance between the target point M and the target surface N;
Step STP200: mount a camera on the perpendicular passing through the target point M, directly in front of the target point M and facing the target surface N; establish a space coordinate system at the imaging center point of the camera, denoted the camera coordinate system C; and obtain the conversion matrix T_SC that converts any point k in the point coordinate system S into coordinates in the camera coordinate system C,
wherein d is the linear distance between the target point M and the imaging center point of the camera;
Step STP300: calculate, from the conversion matrix T_SD and the conversion matrix T_SC, the conversion matrix between the surface coordinate system D and the camera coordinate system C;
Step STP400: capture an RGB image containing the target surface N with the camera; from the pixel coordinates of the four points located at the four corners of the rectangular region of the target surface N, combined with the camera intrinsic matrix K, calculate the space coordinates of the four points in the camera coordinate system C; and from these space coordinates, calculate the unit vectors of the x-axis, y-axis and z-axis of the surface coordinate system D expressed in the camera coordinate system C;
Step STP500: from the unit vectors, calculate the rotation matrix from the camera coordinate system C to the surface coordinate system D, whose rows are the transposes of the x-axis, y-axis and z-axis unit vectors of the surface coordinate system D expressed in the camera coordinate system C; thereby obtain the coordinates of the origin of the current-state surface coordinate system D in the camera coordinate system C, and the conversion matrix from the camera coordinate system C to the current-state surface coordinate system D;
Step STP600: combining this with the matrix relation obtained in step STP300, obtain the conversion matrix that converts any point k in the point coordinate system S into coordinates in the current-state surface coordinate system D′, and thereby obtain the conversion matrix between the current-state surface coordinate system D′ and the standard surface coordinate system D; from the relation between the two conversion matrices, calculate the Euler angles between the current-state surface coordinate system D′ and the standard surface coordinate system D, which are computed from the element values in row 3 column 1, row 3 column 2 and row 3 column 3 of that conversion matrix, and the three-axis translation amounts, which are computed from the element values in row 1 column 4, row 2 column 4 and row 3 column 4 of that conversion matrix;
Step STP700: adjust according to the Euler angles and the three-axis translation amounts obtained in step STP600 so that the current-state target surface N coincides with the theoretical standard installation position of the target surface N.
Preferably, in step STP400, the space coordinates of the four points at the upper-left, upper-right, lower-left and lower-right corners of the target surface N in the camera coordinate system C are calculated as follows: with the known width and height of the target surface N, equations ① to ⑧ are established, relating the pixel coordinates of the four corner points to their space coordinates through the inverse K⁻¹ of the camera intrinsic matrix K together with the known width and height; solving equations ① to ⑧ simultaneously yields the space coordinates of the four corner points in the camera coordinate system C.
Preferably, the x-axis, y-axis and z-axis unit vectors of the surface coordinate system D expressed in the camera coordinate system C in step STP400 are calculated from the space coordinates of the four corner points.
Beneficial effects:
The invention can accurately measure the distance and relative orientation between any target point M and target surface N in space, as well as the spatial distance and Euler angles between the current state of the target point M and/or the target surface N and a preset target state, so that the positions and angles of the target point M and the target surface N can be adjusted under this guidance, achieving accurate installation and debugging.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings described below show only some embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a target surface N.
Fig. 2 is a schematic diagram of the spatial relative position between the target point M and the target plane N.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be noted that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer", where used, indicate orientations or positional relationships based on those shown in the drawings, or the orientations or positional relationships in which the product of the application is conventionally placed in use. They are used only for convenience of describing the application and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the application. Furthermore, terms such as "first" and "second", where used in the description of the application, serve only to distinguish between descriptions and do not indicate or imply relative importance.
Furthermore, terms such as "horizontal" and "vertical", where used in the description of the application, do not require that the component be absolutely horizontal or suspended; it may be slightly inclined. "Horizontal" merely means that the direction is closer to horizontal than to "vertical"; it does not mean that the structure must be perfectly horizontal, and it may be slightly inclined.
In the description of the present application, it should also be noted that, unless expressly specified and limited otherwise, the terms "disposed", "mounted", "connected" and "coupled" should be construed broadly: the connection may, for example, be fixed, detachable or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or a communication between two elements. The specific meaning of the above terms in the present application can be understood in specific cases by those of ordinary skill in the art.
Example 1:
This embodiment is described by taking conventional vertical drilling in space as an example. Specifically, this embodiment provides a positioning and calibration method for space points and planes based on photogrammetry, which includes the following steps:
Step STP100: the target point M is the position where the drill bit is installed; the connecting line PM in fig. 2 is the perpendicular to the target surface N passing through its center point P; the target surface N is the structural surface in which a vertical hole must be drilled; and the theoretically optimal drilling direction is the direction of the line PM. Since the line PM is a virtual direction and is not visible, it is not known whether the axial direction of the actually installed drill bit spatially coincides with the line PM. If it does not coincide, the position of the drilled hole on the target surface N may deviate from the point P; even if the hole is started at the point P, a spatial angle deviation may exist between the drilling direction and the line PM, so that the actual drilling position and direction are rejected because of the deviation. To avoid this problem, this embodiment provides a calibration method. Specifically, first establish a space coordinate system at the space target point M, denoted the point coordinate system S; then establish a space coordinate system at the center point of the space target surface N, denoted the surface coordinate system D; and obtain the conversion matrix T_SD that converts any point k in the point coordinate system S into coordinates in the surface coordinate system D,
wherein SID is the linear distance between the target point M and the target surface N. It should be noted that the conversion matrix here is a theoretical conversion relationship and does not yet have specific values; by analogy, rectangular area = length × width, and the area can only be evaluated once specific values of length and width are available. Similarly, once coordinates in one specific coordinate system are known, the corresponding coordinates in the other coordinate system can be obtained through the conversion matrix.
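Purely as an illustration of what such a homogeneous conversion matrix looks like, here is a minimal Python/NumPy sketch, assuming the surface coordinate system D differs from the point coordinate system S only by a translation of SID along the common z-axis (the PM direction); the helper name make_translation, the axis convention and the numeric value of SID are assumptions of this sketch, not taken from the patent.

```python
import numpy as np

def make_translation(tx: float, ty: float, tz: float) -> np.ndarray:
    """4x4 homogeneous transform representing a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

SID = 1000.0  # assumed linear distance M -> target surface N, in mm

# Assumed convention: a point k given in S maps to D by shifting -SID along z (the PM axis).
T_SD = make_translation(0.0, 0.0, -SID)

k_S = np.array([10.0, 5.0, 0.0, 1.0])  # an arbitrary point k in homogeneous S coordinates
k_D = T_SD @ k_S                       # the same point expressed in the surface coordinate system D
```

Only once SID has a concrete value does T_SD have concrete entries, which mirrors the area = length × width analogy in the text above.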
Step STP200: mount a camera on the perpendicular passing through the target point M, directly in front of the target point M and facing the target surface N. After mounting, ensure that the camera is coaxial with the bore in which the drill bit at the target point M is installed, so that the central photographing beam of the camera is concentric and coaxial with the axial direction of the drill bit; any deviation in position and orientation between the camera's central photographing beam and the target surface N then necessarily produces a corresponding deviation when the drill bit drills. Then establish a space coordinate system at the imaging center point of the camera, denoted the camera coordinate system C, and obtain the conversion matrix T_SC that converts any point k in the point coordinate system S into coordinates in the camera coordinate system C,
wherein d is the linear distance between the target point M and the imaging center point of the camera; in this embodiment, since the actual drilling direction of the drill bit coincides with the camera's central photographing beam, the value of d has no influence on the result.
Step STP300: calculate, from the conversion matrix T_SD and the conversion matrix T_SC, the conversion matrix between the surface coordinate system D and the camera coordinate system C.
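The patent does not spell out the composition formula here, but chaining homogeneous transforms through the shared point coordinate system S is the standard way to relate D and C given T_SD and T_SC; the following sketch assumes exactly that (the matrix name T_DC, the pure-translation form of the transforms and the numeric distances are illustrative assumptions).

```python
import numpy as np

def make_translation(tx: float, ty: float, tz: float) -> np.ndarray:
    """4x4 homogeneous transform representing a pure translation (same helper as above)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

SID, d = 1000.0, 100.0                   # assumed example distances, in mm

T_SD = make_translation(0.0, 0.0, -SID)  # S -> D (assumed pure translation along the PM axis)
T_SC = make_translation(0.0, 0.0, -d)    # S -> C (assumed pure translation along the PM axis)

# D-coordinates map to C-coordinates by going D -> S (inverse of T_SD) and then S -> C (T_SC).
T_DC = T_SC @ np.linalg.inv(T_SD)

p_D = np.array([0.0, 0.0, 0.0, 1.0])     # origin of D, i.e. the center point of the target surface N
p_C = T_DC @ p_D                         # the same point expressed in the camera coordinate system C
```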
Step STP400: capture an RGB image containing the target surface N with the camera, and obtain the pixel coordinates of the four points located at the upper-left, upper-right, lower-left and lower-right corners of the target surface N, referring to fig. 1. Combined with the camera intrinsic matrix K, calculate the space coordinates of the four points in the camera coordinate system C, and from these space coordinates calculate the unit vectors of the x-axis, y-axis and z-axis of the surface coordinate system D expressed in the camera coordinate system C.
In this step, the space coordinates of the four points at the upper-left, upper-right, lower-left and lower-right corners of the target surface N in the camera coordinate system C are calculated as follows: with the known width and height of the target surface N, equations ① to ⑧ are established, relating the pixel coordinates of the four corner points to their space coordinates through the inverse K⁻¹ of the camera intrinsic matrix K together with the known width and height; solving equations ① to ⑧ simultaneously yields the space coordinates of the four corner points in the camera coordinate system C. The x-axis, y-axis and z-axis unit vectors of the surface coordinate system D expressed in the camera coordinate system C are then calculated from these space coordinates.
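The explicit forms of equations ① to ⑧ are not reproduced in this text. A common way to set up such a system, which the sketch below assumes, is to write each corner point as an unknown depth times the back-projected pixel ray K⁻¹[u, v, 1]ᵀ and to constrain the four depths with the known width and height of the rectangle; the function names, the corner ordering and the least-squares solver are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def corner_rays(K, pixels):
    """Back-project the four corner pixels into rays K^-1 [u, v, 1]^T (one row per corner)."""
    K_inv = np.linalg.inv(K)
    return np.array([K_inv @ [u, v, 1.0] for u, v in pixels])

def recover_corners(K, pixels, width, height):
    """Estimate depths s1..s4 so that the reconstructed rectangle has the known width and height."""
    rays = corner_rays(K, pixels)

    def residuals(s):
        P = rays * s[:, None]  # candidate corner points in the camera frame C
        # Assumed corner order: upper-left, upper-right, lower-left, lower-right.
        return [np.linalg.norm(P[0] - P[1]) - width,
                np.linalg.norm(P[2] - P[3]) - width,
                np.linalg.norm(P[0] - P[2]) - height,
                np.linalg.norm(P[1] - P[3]) - height]

    s = least_squares(residuals, x0=np.full(4, 1000.0)).x
    return rays * s[:, None]

def surface_axes(P):
    """Unit vectors of the surface coordinate system D expressed in the camera frame C."""
    x_axis = (P[1] - P[0]) / np.linalg.norm(P[1] - P[0])  # along the top edge
    y_axis = (P[2] - P[0]) / np.linalg.norm(P[2] - P[0])  # along the left edge
    z_axis = np.cross(x_axis, y_axis)
    return x_axis, y_axis, z_axis / np.linalg.norm(z_axis)
```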
Step STP500: from the unit vectors, calculate the rotation matrix from the camera coordinate system C to the surface coordinate system D, whose rows are the transposes of the x-axis, y-axis and z-axis unit vectors of the surface coordinate system D expressed in the camera coordinate system C. Thereby obtain the coordinates of the origin of the current-state surface coordinate system D in the camera coordinate system C, and the conversion matrix from the camera coordinate system C to the current-state surface coordinate system D.
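A minimal sketch of assembling that rotation and the corresponding 4×4 conversion matrix, assuming the rotation's rows are the transposed axis unit vectors and taking the surface origin as the mean of the four corner points (that origin choice and the function name are assumptions of this sketch):

```python
import numpy as np

def camera_to_surface_transform(x_axis, y_axis, z_axis, P):
    """Build the 4x4 transform taking camera-frame coordinates to surface-frame coordinates."""
    R_CD = np.vstack([x_axis, y_axis, z_axis])  # rows = transposed unit vectors of D's axes in C
    origin_C = P.mean(axis=0)                   # assumed: origin of D = mean of the four corners

    T_CD = np.eye(4)
    T_CD[:3, :3] = R_CD
    T_CD[:3, 3] = -R_CD @ origin_C              # so that p_D = R_CD @ (p_C - origin_C)
    return T_CD
```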
Step STP600: combining this with the matrix relation obtained in step STP300, obtain the conversion matrix that converts any point k in the point coordinate system S into coordinates in the current-state surface coordinate system D′, and thereby obtain the conversion matrix between the current-state surface coordinate system D′ and the standard surface coordinate system D. From the relation between the two conversion matrices, calculate the Euler angles between the current-state surface coordinate system D′ and the standard surface coordinate system D, which are computed from the element values in row 3 column 1, row 3 column 2 and row 3 column 3 of that conversion matrix, and the three-axis translation amounts, which are computed from the element values in row 1 column 4, row 2 column 4 and row 3 column 4 of that conversion matrix.
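The patent identifies which entries of the conversion matrix drive the Euler angles (row 3, columns 1 to 3) and the translations (rows 1 to 3 of column 4), but the explicit angle formulas are not recoverable from this text; the sketch below therefore assumes a conventional Z-Y-X (roll-pitch-yaw) extraction purely for illustration.

```python
import numpy as np

def euler_and_translation(T):
    """Extract Z-Y-X Euler angles (assumed convention) and the 3-axis translation from a 4x4 transform."""
    R, t = T[:3, :3], T[:3, 3]
    pitch = np.arcsin(-R[2, 0])            # uses row 3, column 1
    roll = np.arctan2(R[2, 1], R[2, 2])    # uses row 3, columns 2 and 3
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw]), t  # angles in degrees; t is the 3-axis translation
```

When the extracted angles and translations are all close to zero, the current-state surface coordinate system already coincides with the standard one and no adjustment is needed in step STP700.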
Step STP700: adjust according to the Euler angles and the three-axis translation amounts obtained in step STP600 so that the installation position of the current-state target surface N coincides with that of the theoretical standard target surface N. The actual drilling direction of the drill bit then coincides with the line PM, so that the drill bit drills from the point P on the target surface N and the drilling direction lies along the straight line on which PM lies, achieving drilling at the preset position and in the preset direction.
Example 2:
This embodiment is described by taking the field of DR (digital radiography) measurement as an example, using an X-ray machine and a detector. The embodiment provides a positioning and calibration method for space points and planes based on photogrammetry, which includes the following steps:
Step STP100: establish a space coordinate system at the center of the bulb tube of the X-ray machine, denoted the bulb tube coordinate system S; then establish a space coordinate system at the center point of the detector, denoted the detector coordinate system D; and obtain the conversion matrix T_SD that converts any point k in the bulb tube coordinate system S into coordinates in the detector coordinate system D,
Wherein SID is the linear distance between the center of the bulb tube of the X-ray machine and the detector;
Step STP200: mount a camera on the perpendicular that passes through the center of the bulb tube of the X-ray machine and faces the detector; establish a space coordinate system at the imaging center point of the camera, denoted the camera coordinate system C; and obtain the conversion matrix T_SC that converts any point k in the bulb tube coordinate system S into coordinates in the camera coordinate system C,
Wherein d is the linear distance between the center of the bulb tube of the X-ray machine and the imaging center point of the camera;
Step STP300: calculate, from the conversion matrix T_SD and the conversion matrix T_SC, the conversion matrix between the detector coordinate system D and the camera coordinate system C.
Step STP400: capture an RGB image containing the detector with the camera; from the pixel coordinates of the four points located at the four corners of the rectangular detector, combined with the camera intrinsic matrix K, calculate the space coordinates of the four points in the camera coordinate system C; and from these space coordinates, calculate the unit vectors of the x-axis, y-axis and z-axis of the detector coordinate system D expressed in the camera coordinate system C.
The space coordinates of the four points at the upper-left, upper-right, lower-left and lower-right corners of the detector in the camera coordinate system C are calculated as follows: with the known width and height of the detector, equations ① to ⑧ are established, relating the pixel coordinates of the four corner points to their space coordinates through the inverse K⁻¹ of the camera intrinsic matrix K together with the known width and height; solving equations ① to ⑧ simultaneously yields the space coordinates of the four corner points in the camera coordinate system C. The x-axis, y-axis and z-axis unit vectors of the detector coordinate system D expressed in the camera coordinate system C are then calculated from these space coordinates.
Step STP500: from the unit vectors, calculate the rotation matrix from the camera coordinate system C to the detector coordinate system D, whose rows are the transposes of the x-axis, y-axis and z-axis unit vectors of the detector coordinate system D expressed in the camera coordinate system C. Thereby obtain the coordinates of the origin of the current-state detector coordinate system D in the camera coordinate system C, and the conversion matrix from the camera coordinate system C to the current-state detector coordinate system D.
Step STP600: combining this with the matrix relation obtained in step STP300, obtain the conversion matrix that converts any point k in the bulb tube coordinate system S into coordinates in the current-state detector coordinate system D′, and thereby obtain the conversion matrix between the current-state detector coordinate system D′ and the standard detector coordinate system D. From the relation between the two conversion matrices, calculate the Euler angles between the current-state detector coordinate system D′ and the standard detector coordinate system D, which are computed from the element values in row 3 column 1, row 3 column 2 and row 3 column 3 of that conversion matrix, and the three-axis translation amounts, which are computed from the element values in row 1 column 4, row 2 column 4 and row 3 column 4 of that conversion matrix.
Step STP700: adjust according to the Euler angles and the three-axis translation amounts obtained in step STP600 so that the current-state detector coincides with the theoretical standard detector mounting position.
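Purely to illustrate how the steps chain together in this DR example, the following sketch reuses the hypothetical helper functions from the sketches in Example 1 (recover_corners, surface_axes, camera_to_surface_transform, euler_and_translation); the intrinsic matrix, corner pixel coordinates, detector dimensions and the identity "standard pose" are made-up values, not data from the patent.

```python
import numpy as np

# Made-up inputs for illustration only.
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])                    # assumed camera intrinsics
pixels = [(300, 200), (980, 210), (295, 820), (985, 830)]  # detector corners in the RGB image
width, height = 430.0, 430.0                               # assumed detector size, in mm

P = recover_corners(K, pixels, width, height)              # STP400: corner points in camera frame C
x_axis, y_axis, z_axis = surface_axes(P)                   # STP400: detector axes expressed in C
T_CD_current = camera_to_surface_transform(x_axis, y_axis, z_axis, P)  # STP500

# STP600 (sketch): compare against an assumed standard pose to obtain the deviation, then read
# off the Euler angles and translations that guide the physical adjustment in STP700.
T_CD_standard = np.eye(4)                                  # assumed ideal detector pose
T_deviation = T_CD_current @ np.linalg.inv(T_CD_standard)
angles_deg, translation = euler_and_translation(T_deviation)
```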
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (3)
1. A positioning and calibration method for space points and planes based on photogrammetry, characterized by comprising the following steps:
step STP100: establishing a space coordinate system at the space target point M, denoted the point coordinate system S; then establishing a space coordinate system at the center point of the space target surface N, denoted the surface coordinate system D; and obtaining the conversion matrix T_SD that converts any point k in the point coordinate system S into coordinates in the surface coordinate system D,
wherein SID is the linear distance between the target point M and the target surface N;
step STP200: mounting a camera on the perpendicular passing through the target point M, directly in front of the target point M and facing the target surface N; establishing a space coordinate system at the imaging center point of the camera, denoted the camera coordinate system C; and obtaining the conversion matrix T_SC that converts any point k in the point coordinate system S into coordinates in the camera coordinate system C,
wherein d is the linear distance between the target point M and the imaging center point of the camera;
step STP300: calculating, from the conversion matrix T_SD and the conversion matrix T_SC, the conversion matrix between the surface coordinate system D and the camera coordinate system C;
step STP400: capturing an RGB image containing the target surface N with the camera; from the pixel coordinates of the four points located at the four corners of the rectangular region of the target surface N, combined with the camera intrinsic matrix K, calculating the space coordinates of the four points in the camera coordinate system C; and from these space coordinates, calculating the unit vectors of the x-axis, y-axis and z-axis of the surface coordinate system D expressed in the camera coordinate system C;
step STP500: from the unit vectors, calculating the rotation matrix from the camera coordinate system C to the surface coordinate system D, whose rows are the transposes of the x-axis, y-axis and z-axis unit vectors of the surface coordinate system D expressed in the camera coordinate system C; thereby obtaining the coordinates of the origin of the current-state surface coordinate system D in the camera coordinate system C, and the conversion matrix from the camera coordinate system C to the current-state surface coordinate system D;
step STP600: combining this with the matrix relation obtained in step STP300, obtaining the conversion matrix that converts any point k in the point coordinate system S into coordinates in the current-state surface coordinate system D′, and thereby obtaining the conversion matrix between the current-state surface coordinate system D′ and the standard surface coordinate system D; from the relation between the two conversion matrices, calculating the Euler angles between the current-state surface coordinate system D′ and the standard surface coordinate system D, which are computed from the element values in row 3 column 1, row 3 column 2 and row 3 column 3 of that conversion matrix, and the three-axis translation amounts, which are computed from the element values in row 1 column 4, row 2 column 4 and row 3 column 4 of that conversion matrix;
step STP700: adjusting according to the Euler angles and the three-axis translation amounts obtained in step STP600 so that the current-state target surface N coincides with the theoretical standard installation position of the target surface N.
2. The positioning and calibration method for space points and planes based on photogrammetry according to claim 1, characterized in that the space coordinates of the four points at the upper-left, upper-right, lower-left and lower-right corners of the target surface N in the camera coordinate system C in step STP400 are calculated as follows: with the known width and height of the target surface N, equations ① to ⑧ are established, relating the pixel coordinates of the four corner points to their space coordinates through the inverse K⁻¹ of the camera intrinsic matrix K together with the known width and height; solving equations ① to ⑧ simultaneously yields the space coordinates of the four corner points in the camera coordinate system C.
3. The positioning and calibration method for space points and planes based on photogrammetry according to claim 2, characterized in that the x-axis, y-axis and z-axis unit vectors of the surface coordinate system D expressed in the camera coordinate system C in step STP400 are calculated from the space coordinates of the four corner points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410608326.6A CN118209087B (en) | 2024-05-16 | 2024-05-16 | Space point and plane positioning calibration method based on photogrammetry |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410608326.6A CN118209087B (en) | 2024-05-16 | 2024-05-16 | Space point and plane positioning calibration method based on photogrammetry |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118209087A CN118209087A (en) | 2024-06-18 |
CN118209087B true CN118209087B (en) | 2024-07-23 |
Family
ID=91450524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410608326.6A Active CN118209087B (en) | 2024-05-16 | 2024-05-16 | Space point and plane positioning calibration method based on photogrammetry |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118209087B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101427153A (en) * | 2006-04-20 | 2009-05-06 | 法罗技术股份有限公司 | Camera based six degree-of-freedom target measuring and target tracking device |
CN107449402A (en) * | 2017-07-31 | 2017-12-08 | 清华大学深圳研究生院 | A kind of measuring method of the relative pose of noncooperative target |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2557212A (en) * | 2016-11-30 | 2018-06-20 | Nokia Technologies Oy | Methods and apparatuses for determining positions of multi-directional image capture apparatuses |
CN112006710B (en) * | 2020-11-02 | 2021-02-02 | 晓智未来(成都)科技有限公司 | Dynamic photogrammetry system and method based on X-ray machine detector |
CN112419425B (en) * | 2020-11-20 | 2022-10-28 | 南京理工大学 | Anti-disturbance high-precision camera group measuring method for structural deformation measurement |
CN113180709B (en) * | 2021-07-01 | 2021-09-07 | 晓智未来(成都)科技有限公司 | Human body to-be-detected part posture recognition method based on photogrammetry |
CN115824190A (en) * | 2022-10-25 | 2023-03-21 | 天津大学 | Vision and GPS-based target ship fusion positioning method |
CN116309798A (en) * | 2023-02-08 | 2023-06-23 | 南京理工大学 | Unmanned aerial vehicle imaging positioning method |
- 2024-05-16: CN CN202410608326.6A patent/CN118209087B/en, Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101427153A (en) * | 2006-04-20 | 2009-05-06 | 法罗技术股份有限公司 | Camera based six degree-of-freedom target measuring and target tracking device |
CN107449402A (en) * | 2017-07-31 | 2017-12-08 | 清华大学深圳研究生院 | A kind of measuring method of the relative pose of noncooperative target |
Also Published As
Publication number | Publication date |
---|---|
CN118209087A (en) | 2024-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111060025B (en) | Pose calibration method and system for in-situ mounting line laser sensor of five-axis machine tool | |
US8520067B2 (en) | Method for calibrating a measuring system | |
CN106920261B (en) | A kind of Robot Hand-eye static demarcating method | |
US4889425A (en) | Laser alignment system | |
CN110618408B (en) | System calibration method for antenna phase center of precision distance measurement system | |
CN108489401A (en) | Split type calibration target, calibrating installation and its calibration method with the target | |
CN102699733A (en) | Method and device for measuring movement locus of automatic tool changing mechanical arm | |
CN111536954B (en) | Drilling position positioning system and positioning method of drilling machine | |
CN103026310B (en) | Method for realizing the spatial transformation from machining points to reference points of installation survey | |
CN111649667A (en) | Flange pipeline end measuring method, measuring device and adapter structure | |
CN107390155A (en) | A kind of Magnetic Sensor calibrating installation and method | |
CN118209087B (en) | Space point and plane positioning calibration method based on photogrammetry | |
CN110211175B (en) | Method for calibrating space pose of collimated laser beam | |
Gao et al. | Novel precision vision measurement method between area-array imaging and linear-array imaging especially for dynamic objects | |
CN116673796B (en) | Calibration tool and calibration method for robot hole making system | |
CN207630072U (en) | A kind of robot coordinate system's calibration tool | |
CN112697074A (en) | Dynamic object angle measuring instrument and measuring method | |
CN109990801A (en) | Level meter rigging error scaling method based on plumb line | |
CN117029684A (en) | Wide-range high-precision space assembly position absolute coordinate measurement system and measurement method | |
CN216645327U (en) | Complete set of cable strand height difference measuring equipment based on machine vision | |
CN112815841B (en) | Position calibration method and device for normal measurement sensor | |
CN114136357A (en) | Testing method and testing system suitable for surface structure light sensor | |
CN110866951B (en) | Method for correcting optical axis inclination of monocular camera | |
Chen et al. | A novel positioning method for Hall magnetic field measurement of heavy ion accelerator | |
CN219694180U (en) | Datum point detects frock |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |