CN118258377A - Bridge swivel monitoring method and system based on three-dimensional orthogonal cursor - Google Patents
Bridge swivel monitoring method and system based on three-dimensional orthogonal cursor
- Publication number: CN118258377A
- Application number: CN202410365220.8A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/60—Rotation of whole images or parts thereof
- G06T7/70—Determining position or orientation of objects or cameras
Abstract
The invention discloses a bridge swivel monitoring method and system based on a three-dimensional orthogonal cursor. The method comprises the following steps: arranging an orthogonal three-dimensional cursor and an image acquisition device, and establishing the coordinate systems; acquiring an initial-attitude image of the three-dimensional orthogonal cursor before swivel construction, extracting imaging information of the beam projections, and determining the transformation relations among the coordinate systems at the initial attitude; establishing the mapping relation between each light beam and its projection, and solving the initial attitude of the cursor by the variational principle and numerical iteration; continuously acquiring attitude images at different moments during the bridge swivel, and solving the bridge swivel attitudes at those moments; and calculating the relative rotation matrix of the bridge from the initial attitude to the attitude at any moment, and decomposing it to obtain the relative rotation Euler angles. With the three-dimensional orthogonal cursor as visual input, the method accurately calculates the relative rotation angle of the swivel bridge from the initial attitude to any attitude and realizes real-time monitoring of the whole swivel process; it is accurate, efficient, low-cost, safe, and non-contact.
Description
Technical Field
The invention relates to a bridge swivel monitoring method and system based on a three-dimensional orthogonal cursor, and belongs to the technical field of bridge swivel construction monitoring.
Background
With the continuous development of China's infrastructure, the number of grade-separated crossing lines has increased greatly, and avoiding interference with existing lines during overpass bridge construction has become a hot topic in engineering research. The bridge swivel construction method is regarded as a non-interfering construction method for overpass bridges and has received wide attention and application in recent years. It has been successfully applied to hundreds of bridges worldwide, including continuous beam bridges, arch bridges, and cable-stayed bridges. Swivel construction remarkably simplifies and accelerates bridge erection and offers good economy and timeliness.
On the one hand, during rotational traction of the bridge, if the swivel overshoots the design angle, jacks must be used to push it back, which places high demands on rotation accuracy. On the other hand, because a gap exists between the supporting legs fixed to the superstructure and the slideway, the beam body not only rotates about the vertical axis during the swivel but also rotates longitudinally and transversely, so the spatial pose of the rotating structure varies within a certain range. Therefore, to guarantee an accurate swivel, in which the bridge precisely crosses obstacles and passes through designated clearances, monitoring of the bridge swivel construction process has become a key research problem.
At present, swivel rotation is mainly monitored with traditional contact methods such as the turntable pointer method and sensor technology. The turntable pointer method is the most widely used: pointer alignment monitors rotation about the vertical axis and, to some extent, indicates swivel progress and prevents over-rotation, but it suffers from high labor cost, poor real-time performance, and insufficient precision. Attitude sensors enable automatic monitoring with higher precision, but they are highly sensitive, easily disturbed by magnetic fields and environmental changes, and costly.
Compared with traditional contact methods, non-contact swivel monitoring methods such as satellite positioning and total-station surveying offer omnidirectional measurement and high accuracy. Satellite positioning (GPS) permits fully automatic measurement, but the equipment has strict requirements on operating conditions, and its signals are strongly affected by external interference and are unstable, which is unfavorable for a precise swivel. The total-station method monitors the coordinates of control points on the swivel bridge and infers the motion of the whole structure from local coordinate changes, indirectly obtaining the spatial pose; however, field measurement is strongly affected by beam rotation, lighting, personnel safety, and other factors, making real-time, efficient acquisition of the spatial pose difficult.
In general, existing bridge swivel construction monitoring methods suffer from insufficient real-time performance and low accuracy, and the whole monitoring process requires manual measurement; swivel monitoring has therefore become one of the focal points of bridge swivel construction research.
Computer-vision monitoring is a popular emerging high-technology monitoring approach. According to the number of cameras, it is divided into monocular and multi-camera vision; monocular vision is widely used because of its simple setup, low cost, and convenient arrangement. However, since monocular imaging loses depth information, accurate acquisition of the spatial pose of a moving object usually requires additional sensors or complex algorithms such as machine learning. Machine-learning approaches need large amounts of training data for pose estimation, demand high-quality data, incur heavy computational cost, and accumulate errors. For the bridge swivel problem, the beam body only rotates about its local coordinate system during the swivel; because the swivel bridge is centrally restrained, it does not translate. The rotation and translation of the bridge's spatial pose can therefore be separated, and the pose problem can be studied specifically for the bridge swivel. How to apply monocular vision to the bridge swivel field and realize real-time monitoring of the whole rotation process is thus one of the problems to be solved in this field.
Disclosure of Invention
The invention aims to overcome the shortcomings of existing bridge swivel construction monitoring methods, namely insufficient real-time performance, low accuracy, and the need for manual step-by-step measurement throughout monitoring, and provides a bridge swivel monitoring method and system based on a three-dimensional orthogonal cursor.
The invention obtains bridge swivel image information based on a three-dimensional orthogonal cursor to back-calculate the spatial attitude of the bridge swivel, then computes the relative rotation Euler angles during the swivel, realizing non-contact, high-precision, real-time rotation monitoring of the swivel bridge. The technology has significant engineering application potential in the field of swivel bridge construction monitoring.
In order to solve the technical problems, the invention adopts the following technical scheme:
in a first aspect, a bridge swivel monitoring method based on a three-dimensional orthogonal cursor includes the following steps:
S1, arranging at least one three-dimensional orthogonal cursor capable of emitting light beams on the swivel bridge, and arranging an image acquisition device outside the swivel range of the bridge; establishing a bridge local coordinate system on the swivel bridge and a camera coordinate system on the image acquisition device, and establishing a cursor coordinate system with the intersection point of the three orthogonal cursor beams as origin and the three beam directions as coordinate axes. The bridge local coordinate system takes the spherical hinge center of the turntable on the bridge as origin and the longitudinal, transverse, and vertical directions of the bridge as the X_B, Y_B, and Z_B axes; the camera coordinate system takes the optical center of the image acquisition device as origin and the device's horizontal-right, vertical-down, and optical-axis directions as the X_C, Y_C, and Z_C axes;
S2, before swivel construction, capturing with the image acquisition device an image containing the three-dimensional orthogonal cursor beam rays as the initial-attitude image, and determining from it the direction angles and intersection coordinates of the beam projections on the imaging plane at the initial attitude;
S3, establishing the transformation relation between the cursor coordinate system and the camera coordinate system by the rigid-body spatial transformation principle, and establishing the mapping relation between the three-dimensional orthogonal cursor beams and their projections by the imaging principle; solving the initial attitude of the cursor by the variational principle and numerical iteration from the mapping relation, the transformation relation, and the direction angles and intersection coordinates obtained in S2; then obtaining the transformation relation between the camera coordinate system and the bridge local coordinate system at the initial attitude from the initial cursor attitude and the transformation relation between the cursor coordinate system and the bridge local coordinate system, thereby determining the initial attitude of the bridge swivel;
S4, continuously acquiring attitude images of the bridge swivel at different moments with the image acquisition device during swivel construction; extracting from these images the direction angles and intersection coordinates of the cursor beam projections on the imaging plane at each moment; solving the spatial attitudes of the cursor at each moment by the variational principle and numerical iteration; and obtaining the transformation relations between the camera coordinate system and the bridge local coordinate system at each moment during the swivel, thereby determining the spatial attitudes of the bridge swivel at those moments;
S5, according to the initial attitude of the bridge swivel obtained in S3 and the attitudes at different moments during the swivel obtained in S4, obtaining the relative rotation matrix between the attitude at each moment and the initial attitude; setting the rotation order in which the bridge rotates relative to the bridge local coordinate system, and extracting the relative rotation Euler angles of the bridge swivel at each moment from the relative rotation matrix according to that order.
Further, in step S1, the three-dimensional orthogonal cursor is arranged as follows:
The three-dimensional orthogonal cursor is formed by three laser emitters connected mutually perpendicularly, with the beam origins converging at one point; each emitter projects a beam ray, forming three optical axes. The three emitters preferably emit lasers of different colors, preferably red, green, and blue. The emission distance of each emitter is preferably at least 5 m, so that the beam colors stand out and are easily observed and captured by computer vision. The cursor is preferably mounted on the bridge pier or the upper turntable.
Further, in step S1, the image acquisition device is arranged according to the following method:
The image acquisition device is arranged at a different height from the three-dimensional orthogonal cursor, so that the projections of the cursor beams on the imaging plane of the device do not overlap one another;
the spatial pose of the image acquisition device remains unchanged throughout swivel construction, and the three-dimensional orthogonal cursor remains within the device's field of view.
Further, in step S2, the direction angles and intersection coordinates of the orthogonal cursor beam projections on the imaging plane at the initial attitude (i.e. before the bridge rotates) are obtained from the initial-attitude image as follows:
Determine the RGB threshold range of each of the three beam colors and extract the corresponding pixel points from the initial-attitude image; fit a straight line to each of the three extracted pixel sets by iterative random sample consensus (the RANSAC algorithm) with a linear model. When the error between each pair of intersection coordinates of the three fitted lines is at most 0.1 pixel, the fitted-line expressions meet the accuracy requirement; the direction angles of the beam projections are then taken from the fitted lines, and the intersection coordinate of the beam projections is taken as the mean of the pairwise intersections of the fitted lines.
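The line-fitting step above can be sketched with a minimal RANSAC loop followed by a total-least-squares refinement. The following Python sketch is an illustration only, not the patent's implementation; the synthetic point set, the thresholds, and the `ransac_line` helper are assumptions:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Fit a dominant 2-D line to noisy pixel coordinates with RANSAC,
    then refine it by total least squares on the inlier set."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        diff = points - p
        # perpendicular distance of every point to the candidate line
        dist = np.abs(d[0] * diff[:, 1] - d[1] * diff[:, 0]) / norm
        inliers = points[dist <= tol]
        if best is None or len(inliers) > len(best):
            best = inliers
    # total-least-squares refinement: dominant direction of the inliers
    centered = best - best.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    angle = np.arctan2(direction[1], direction[0])
    return best.mean(axis=0), angle

# synthetic "beam projection": pixels along a 30-degree line plus outliers
rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 200)
line_pts = np.stack([t * np.cos(np.pi / 6), t * np.sin(np.pi / 6)], axis=1)
outliers = rng.uniform(0.0, 100.0, size=(20, 2))
points = np.vstack([line_pts, outliers])
center, theta = ransac_line(points)
angle_deg = np.degrees(theta) % 180.0   # undirected line angle in [0, 180)
```

Running three such fits, one per beam color, and intersecting the fitted lines pairwise would yield the projection-intersection estimate described above.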
Further, in step S3, the transformation relation between the cursor coordinate system and the camera coordinate system is established by the rigid-body spatial transformation principle as follows:
Assuming the rotation order of the transformation from the camera coordinate system to the cursor coordinate system is about the X, Y, and Z axes in turn, with rotation Euler angles α, β, γ and corresponding rotation matrices R_X, R_Y, R_Z, the total rotation transformation matrix R_c is given by equation (3.1):
R_c = R_X R_Y R_Z (3.1)
where α, β, and γ are the Euler angles of rotation of the camera coordinate system about the X, Y, and Z axes respectively; R_X, R_Y, and R_Z are the elementary rotation matrices for the angles α, β, and γ; and R_c is the total rotation transformation matrix from the camera coordinate system to the cursor coordinate system;
According to the rotation transformation matrix R_c, the transformation relation between the camera coordinate system and the cursor coordinate system is obtained as equation (3.2):
η_i = R_C E = [r_1i, r_2i, r_3i]^T (3.2)
where η_i is the direction vector of the i-th cursor coordinate axis at the current moment; R_C is the rotation transformation matrix from the camera coordinate system to the cursor coordinate system at that moment; E is the unit matrix of the camera coordinate axes, whose row vectors represent the direction vectors of the camera coordinate axes; and r_1i, r_2i, r_3i are the elements of the i-th column of R_C.
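A short sketch of the rotation composition described around equations (3.1) and (3.2). The composition order (X, then Y, then Z) is an assumed reading of the stated convention, and the function names are illustrative:

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def rot_z(g):
    return np.array([[np.cos(g), -np.sin(g), 0],
                     [np.sin(g),  np.cos(g), 0],
                     [0, 0, 1]])

def camera_to_cursor(alpha, beta, gamma):
    """Total rotation matrix R_c, composing rotations about X, Y, Z in turn."""
    return rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)

Rc = camera_to_cursor(0.1, -0.2, 0.3)
# the i-th column of R_c gives the cursor axis direction vector eta_i
eta_1 = Rc[:, 0]
```

Any such composition yields a proper rotation matrix (orthonormal, determinant 1), which is what the later inversions in equations (3.9)-(5.1) rely on.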
Further, in step S3, the three-dimensional orthogonal cursor beams are centrally projected toward the camera optical center, i.e. the origin of the camera coordinate system, and the mapping relation between each beam and its projection is obtained as follows:
The cursor is centrally projected toward the optical center of the image acquisition device. Assuming the coordinate of the cursor origin O_A in the camera coordinate system is (X_0, Y_0, Z_0), the image coordinate (x_0, y_0) of the cursor origin projected onto the imaging plane is given by equation (3.3):
x_0 = f X_0 / Z_0, y_0 = f Y_0 / Z_0 (3.3)
where f is the focal length of the camera;
Assuming the position vector of the cursor origin O_A in the camera coordinate system is t_0 = (X_0, Y_0, Z_0), i.e. the translation vector of the camera extrinsic matrix, then by the plane-intersection principle the projection ray l_i of cursor axis η_i on the imaging plane is the intersection line of the imaging plane with the spatial plane determined by t_0 and η_i. By spatial analytic geometry, the normal vector n_i of the plane determined by t_0 and η_i is obtained as equation (3.4):
n_i = t_0 × η_i (3.4)
The analytic equation of the spatial plane containing t_0 and the cursor coordinate axis η_i is equation (3.5):
n_i1 x + n_i2 y + n_i3 z = 0 (3.5)
where n_i1, n_i2, n_i3 are the components of the normal vector n_i along the camera coordinate axes X, Y, Z, i.e. the 1st, 2nd, and 3rd elements of n_i.
By spatial analytic geometry, the analytic equation of the beam projection l_i on the imaging plane is obtained by combining the equation of the spatial plane determined by t_0 and η_i with the equation of the imaging plane (z = f), giving equation (3.6):
n_i1 x + n_i2 y + n_i3 f = 0 (3.6)
where (x, y) are imaging-plane coordinates;
From the analytic equation of the beam projection l_i, its direction angle is solved, giving the mapping relation between each cursor beam and its projection as equation (3.7):
θ_i = atan2(f r_2i − y_0 r_3i, f r_1i − x_0 r_3i) (3.7)
where θ_i is the direction angle of beam projection l_i, and atan2(·) is the four-quadrant arctangent function.
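The direction-angle mapping can be checked numerically: project two points of one beam through the pinhole model and compare the resulting image-line angle with the closed form. In this Python sketch, `f`, `t0`, and the Euler angles are arbitrary test values, and the Euler parametrization is an assumed convention:

```python
import numpy as np

def euler_to_R(a, b, g):
    ca, sa, cb, sb, cg, sg = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(g), np.sin(g)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def beam_angles(Rc, t0, f):
    """Direction angles theta_i of the three beam projections, per the
    closed-form mapping theta_i = atan2(f r_2i - y0 r_3i, f r_1i - x0 r_3i)."""
    x0, y0 = f * t0[0] / t0[2], f * t0[1] / t0[2]
    return np.array([np.arctan2(f * Rc[1, i] - y0 * Rc[2, i],
                                f * Rc[0, i] - x0 * Rc[2, i]) for i in range(3)])

f, t0 = 1000.0, np.array([0.3, -0.2, 8.0])
Rc = euler_to_R(0.2, 0.1, -0.4)
theta = beam_angles(Rc, t0, f)

# numerical check: project two points of beam 0, P(s) = t0 + s * eta_0
eta0 = Rc[:, 0]
def project(P):
    return f * P[0] / P[2], f * P[1] / P[2]
xA, yA = project(t0)
xB, yB = project(t0 + 1e-4 * eta0)
theta_num = np.arctan2(yB - yA, xB - xA)
```

The finite-difference angle `theta_num` agrees with the closed-form `theta[0]` up to the discretization step, which supports the stated mapping.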
Further, in step S3, the simultaneous equations formed by the beam-projection mapping relation and the cursor-camera transformation relation are solved by numerical iteration, using the direction angles and intersection coordinates of the beam projections at the initial attitude obtained in S2, to give the rotation Euler angles α, β, γ of the cursor in the camera coordinate system at the initial attitude; the numerical iteration method is preferably the Newton-Raphson method;
The transformation relation between the camera coordinate system and the cursor coordinate system at the initial attitude is then obtained from the solved rotation Euler angles α, β, and γ, as equation (3.8):
η_i^0 = R_c0 E, i = 1, 2, 3 (3.8)
where R_c0 is the rotation matrix from the camera coordinate system to the cursor coordinate system at the initial attitude; η_i^0 (i = 1, 2, 3) are the coordinate-axis direction vectors of the cursor coordinate system at the initial attitude; and E is the unit matrix of the camera coordinate axes, whose row vectors represent the direction vectors of the camera coordinate axes.
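The numerical-iteration step can be illustrated with a plain Newton-Raphson solve of the three observed projection angles for the three Euler angles. This is a sketch under assumptions (the Euler parametrization, a known focal length `f`, and a known translation `t0` are all illustrative), not the patent's solver:

```python
import numpy as np

def euler_to_R(a, b, g):
    ca, sa, cb, sb, cg, sg = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(g), np.sin(g)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def predicted_angles(p, t0, f):
    """Predicted projection direction angles of the three cursor beams."""
    R = euler_to_R(*p)
    x0, y0 = f * t0[0] / t0[2], f * t0[1] / t0[2]
    return np.array([np.arctan2(f * R[1, i] - y0 * R[2, i],
                                f * R[0, i] - x0 * R[2, i]) for i in range(3)])

def wrap(d):
    # wrap angle differences into (-pi, pi]
    return np.angle(np.exp(1j * d))

def solve_euler(theta_obs, t0, f, p0=(0.0, 0.0, 0.0), n_iter=50, eps=1e-7):
    """Newton-Raphson with a numerical Jacobian: find Euler angles whose
    predicted projection angles match the observed ones."""
    p = np.array(p0, float)
    for _ in range(n_iter):
        r = wrap(predicted_angles(p, t0, f) - theta_obs)
        if np.max(np.abs(r)) < 1e-12:
            break
        J = np.zeros((3, 3))
        for j in range(3):
            dp = np.zeros(3); dp[j] = eps
            J[:, j] = wrap(predicted_angles(p + dp, t0, f)
                           - predicted_angles(p, t0, f)) / eps
        p = p - np.linalg.solve(J, r)   # Newton-Raphson update
    return p

f, t0 = 1000.0, np.array([0.3, -0.2, 8.0])
true = np.array([0.10, -0.20, 0.30])
theta_obs = predicted_angles(true, t0, f)   # synthetic "measurements"
est = solve_euler(theta_obs, t0, f)
```

Starting from a neutral guess, the iteration recovers the attitude that reproduces the observed beam-projection angles; in practice the previous frame's solution would serve as the starting point.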
Further, in step S3, from the transformation relation between the camera coordinate system and the cursor coordinate system and the transformation relation between the cursor coordinate system and the bridge local coordinate system given by equation (3.9), the transformation relation between the camera coordinate system and the bridge local coordinate system at the initial attitude is obtained as equation (3.10), from which the initial attitude of the bridge swivel (the initial rotation Euler angles relative to the camera coordinate system) is determined:
[η_1, η_2, η_3]^T = R_0 [ξ_1, ξ_2, ξ_3]^T (3.9)
[ξ_1, ξ_2, ξ_3]^T = R_0^-1 R_c0 E (3.10)
where ξ_i (i = 1, 2, 3) are the coordinate-axis direction vectors of the bridge local coordinate system; R_0 is the rotation transformation matrix from the bridge local coordinate axes to the cursor coordinate system; R_0^-1 is the inverse of R_0; η_1, η_2, η_3 are the cursor coordinate axis direction vectors at any moment; ξ_1, ξ_2, ξ_3 are the bridge local coordinate axis direction vectors at any moment; and R_0^-1 R_c0 is the rotation transformation matrix from the camera coordinate system to the bridge local coordinate system.
Further, in step S4, the transformation relation between the camera coordinate system and the bridge local coordinate system at different moments during the swivel is obtained from equation (4.1), and the spatial attitudes at those moments (the rotation Euler angles relative to the camera coordinate system during the swivel) are determined from it:
[ξ_1^k, ξ_2^k, ξ_3^k]^T = R_0^-1 R_ck E (4.1)
where ξ_i^k (i = 1, 2, 3; k = 1, 2, 3, …) is the direction vector of the i-th bridge local coordinate axis at the k-th shooting moment; R_ck is the rotation matrix from the camera coordinate system to the cursor coordinate system at the k-th shooting moment; and R_0^-1 R_ck is the rotation transformation matrix from the camera coordinate system to the bridge local coordinate system at the k-th shooting moment, from which the swivel spatial attitude (rotation Euler angles) of the bridge relative to the camera coordinate system at that moment can be extracted.
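The composition of transforms at a shooting moment is simply a product of rotation matrices. A short sketch (the angles for `R0` and `Rck` are arbitrary illustrative values) showing that the composed camera-to-bridge transform is itself a proper rotation, and that for rotation matrices the inverse equals the transpose:

```python
import numpy as np

def euler_to_R(a, b, g):
    ca, sa, cb, sb, cg, sg = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(g), np.sin(g)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

R0 = euler_to_R(0.05, 0.02, 1.2)     # bridge local -> cursor (illustrative)
Rck = euler_to_R(0.11, -0.19, 0.42)  # camera -> cursor at moment k (illustrative)

# camera -> bridge local at moment k: compose R0^-1 with Rck
cam_to_bridge_k = np.linalg.inv(R0) @ Rck
```

In an implementation, `R0.T` can replace `np.linalg.inv(R0)`, since a rotation matrix is orthonormal.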
Further, in step S5, from the initial attitude of the bridge swivel obtained in S3 and the spatial attitudes at different moments obtained in S4, the relative rotation matrix, referred to the bridge local coordinate system, between the attitude at each moment during the swivel and the initial attitude is obtained as equation (5.1):
ΔR_0k = R_0^-1 R_ck R_0 R_ck^-1 (5.1)
where R_ck is the rotation matrix from the camera coordinate system to the cursor coordinate system at the k-th shooting moment (k = 1, 2, 3, …), and ΔR_0k is the relative rotation matrix of the swivel bridge's local coordinate system from the initial attitude to the spatial attitude at the k-th shooting moment.
Further, in step S5, with the rotation order of the swivel bridge relative to the bridge local coordinate system set to Z-Y-X, and the rotation order of the camera-to-cursor transformation set to X-Y-Z, the relative rotation Euler angles Δα, Δβ, Δγ are extracted from the relative rotation matrix ΔR_0k as equation (5.2):
Δα = atan2(s_32, s_33), Δβ = −arcsin(s_31), Δγ = atan2(s_21, s_11) (5.2)
where Δα, Δβ, and Δγ are the Euler angles of rotation of the swivel bridge about the bridge local coordinate axes X, Y, Z respectively, and s_mn (m = 1, 2, 3; n = 1, 2, 3) is the element in row m, column n of the relative rotation matrix.
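A round-trip sketch of Euler-angle extraction: build a relative rotation from known Z-Y-X angles, then recover them from the matrix elements. The extraction formulas below follow one common Z-Y-X convention (an assumption, since Euler conventions vary between sources):

```python
import numpy as np

def rot_zyx(d_alpha, d_beta, d_gamma):
    """Relative rotation composed as R_Z(d_gamma) @ R_Y(d_beta) @ R_X(d_alpha)."""
    ca, sa = np.cos(d_alpha), np.sin(d_alpha)
    cb, sb = np.cos(d_beta), np.sin(d_beta)
    cg, sg = np.cos(d_gamma), np.sin(d_gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def euler_zyx_from_matrix(S):
    """Recover (d_alpha, d_beta, d_gamma) from S = Rz @ Ry @ Rx.
    Valid away from the gimbal-lock case |S[2, 0]| = 1."""
    d_beta = -np.arcsin(S[2, 0])
    d_alpha = np.arctan2(S[2, 1], S[2, 2])
    d_gamma = np.arctan2(S[1, 0], S[0, 0])
    return np.array([d_alpha, d_beta, d_gamma])

# small longitudinal/transverse deflections, large swivel about the vertical axis
true_angles = np.array([0.010, -0.004, 1.234])
dR = rot_zyx(*true_angles)
recovered = euler_zyx_from_matrix(dR)
```

The recovered triple matches the input angles, which is the separation of the relative rotation matrix into swivel angle and longitudinal/transverse deflections used for monitoring.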
In a second aspect, a monitoring system for the bridge swivel monitoring method based on a three-dimensional orthogonal cursor includes:
the cursor module comprises a three-dimensional orthogonal cursor fixedly arranged on the bridge through a base, wherein the three-dimensional orthogonal cursor is composed of three laser transmitters which are mutually and perpendicularly connected and emit light beams with different colors;
the image acquisition module, which continuously captures attitude images of the three-dimensional orthogonal cursor mounted on the swivel bridge using the image acquisition device, including the initial-attitude image and the attitude images at different moments during the swivel;
the image processing module, which extracts the direction angles and intersection coordinates of the three-dimensional orthogonal cursor beam projections from all attitude images captured by the image acquisition module;
the data processing module, which calculates the relative rotation Euler angles of the bridge swivel attitudes at different moments relative to the initial attitude from the direction angles and intersection coordinates obtained by the image processing module;
and the data display module, which displays the relative rotation Euler angles of the swivel bridge at different moments together with the target rotation Euler angles.
The technical principle of the invention is as follows. Because existing swivel construction monitoring technologies find it difficult to acquire the real-time spatial pose of the bridge accurately, the invention exploits the fact that the bridge is centrally restrained during the swivel and only rotates about its local coordinate system without translating, so rotation and translation are separated. However, the bridge local coordinate system is difficult to visualize with an image acquisition device and yields poor monitoring results; a three-dimensional orthogonal cursor is therefore introduced, converting the hard-to-observe bridge rotation problem into visible three-dimensional cursor motion, so that monitoring of the bridge swivel motion is equivalently replaced by monitoring of the cursor motion. This provides a new approach to tracking the attitude of the bridge swivel throughout the whole process. Based on the imaging principle and spatial analytic geometry, the invention establishes the mapping relation between the cursor beams and their projections; based on the rigid-body spatial transformation principle, it establishes the transformation relations among the cursor, the bridge, and the image acquisition device; and finally, from the cursor attitude images acquired by the image acquisition device, it obtains the relative rotation Euler angles of the bridge swivel at different moments relative to the initial attitude. Comparing the relative rotation Euler angles with the target rotation Euler angles allows the swivel action to be adjusted in real time, realizing real-time monitoring of the whole bridge swivel process.
Compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) Global monitoring: the invention realizes whole-process monitoring of the swivel, requires no on-site contact, and is not limited by viewing position.
(2) High precision: the invention can accurately calculate the space gesture of the swivel bridge in the rotating process, and has high precision for monitoring the swivel angle and longitudinal and transverse deflection.
(3) Real-time performance: according to the invention, the swivel image information is identified and acquired in real time, a swivel bridge space attitude monitoring algorithm is developed, programming modules such as data preprocessing, calculation analysis and parameter output are designed, and efficient real-time feedback of monitoring data and results is realized through module integration.
(4) The application range is wide: the invention is suitable for rotation monitoring of swivel bridges in the field of civil engineering, and is particularly suitable for bridge structures with limited geographic positions.
(5) The application potential is large: the invention can realize global, high-precision and real-time monitoring of the swivel bridge with extremely low cost and calculation consumption, and is important for developing a structural rotation monitoring method based on computer vision.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional orthogonal cursor;
FIG. 3 is a schematic layout of a three-dimensional orthogonal cursor;
FIG. 4 is a schematic diagram of the principle of relative rotation of a swivel bridge;
FIG. 5 is a cursor projection plan view;
fig. 6 is a schematic diagram of coordinate system conversion.
Detailed Description
In order to make the purpose and technical solutions of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present invention fall within the protection scope of the present invention.
Example 1;
See fig. 1-6;
The bridge swivel monitoring method based on the three-dimensional orthogonal cursor comprises the following steps:
S1, arranging at least one three-dimensional orthogonal cursor capable of emitting light beams on the swivel bridge, and arranging an image acquisition device outside the swivel range of the bridge; establishing a bridge local coordinate system with the spherical hinge center of the upper turntable as the origin and the longitudinal, transverse and vertical directions of the bridge as the X_B, Y_B and Z_B axes; establishing a cursor coordinate system with the intersection point of the three-dimensional orthogonal cursor beams as the origin and the three beam directions as the coordinate axes; and establishing a camera coordinate system with the optical center of the image acquisition device as the origin and the horizontal-right, vertical-down and optical-axis directions of the device as the X_C, Y_C and Z_C axes;
The three-dimensional orthogonal cursors are laid out as follows:
The three-dimensional orthogonal cursor is formed by three laser emitters connected perpendicularly to one another, with the beam starting points converging at one point (namely, the origin of the three-dimensional orthogonal cursor); the beam rays form three optical axes, and the beam starting point together with the three orthogonal optical axes constitutes the cursor coordinate system. The three laser emitters emit laser light of different colors, preferably red, green and blue; the emission distance of each laser emitter is preferably 5 m or more; the three-dimensional orthogonal cursor is preferably mounted on the bridge pier or the upper turntable. In a preferred embodiment, the cursor device uses a 4 W red laser, a 4 W green laser and a 12 W blue laser as the beam emitters.
In the absence of shielding, the light beams can extend over a long range, forming a large cursor coordinate system with prominent colors that the image acquisition device can readily observe and capture, which improves the shooting effect and the accuracy of image processing.
It should be noted that in addition to the laser emitters emitting light beams that may be used to form a three-dimensional orthogonal cursor, three-dimensional orthogonal physical targets with length information may also be used. According to the construction requirement of crossing railways, the swivel construction is usually carried out at night, in this case, a laser beam is preferably selected as a visual input, and if the site environment is bright and the visibility of the laser beam is not obviously advantageous, a three-dimensional orthogonal entity target with length information can be selected. When the latter is used as a visual input of an image acquisition device, it is necessary to ensure that the line of sight between the image acquisition device and the three-dimensional orthogonal target is unobstructed.
The image acquisition device is arranged according to the following method:
the arrangement position of the image acquisition device and the arrangement position of the three-dimensional orthogonal cursor have a height difference, so that projections of three-dimensional orthogonal cursor beams on an imaging plane of the image acquisition device are not overlapped with each other;
and keeping the space pose of the image acquisition device unchanged in the whole bridge swivel construction process, so that the three-dimensional orthogonal cursor is kept in the view finding range of the image acquisition device.
It should be understood that the image acquisition device may be a video camera or the like, preferably an industrial camera. Calibration must be performed before acquisition begins, in order to obtain the camera intrinsic parameters required when calculating the spatial attitude of the swivel bridge; the calibration method is preferably Zhang's calibration method. When a camera is used for shooting, its frame rate needs to be set in advance to realize continuous shooting. Throughout the bridge swivel, the line of sight between the image acquisition device and the orthogonal three-dimensional cursor should remain essentially unobstructed, avoiding large-area shielding; preferably, the distance between the image acquisition device and the three-dimensional orthogonal cursor is within 100 m.
S2, before bridge swivel construction, an image acquisition device shoots and acquires an image containing a three-dimensional orthogonal cursor beam as an initial posture image, and the direction angle and the intersection point coordinate of the three-dimensional orthogonal cursor beam projected on an imaging plane when the initial posture is determined according to the initial posture image;
According to an initial attitude image (namely before a bridge rotates), the direction angle of the orthogonal three-dimensional cursor beam ray projected on the imaging plane of the image acquisition device during the initial attitude is obtained by the following method:
In the camera coordinate system, pixel points in the image are traversed and identified based on a set RGB threshold range, and the beam pixel point information is extracted. The three extracted groups of pixel points are each iteratively fitted to a linear function using the random sample consensus method (RANSAC algorithm), giving analytical expressions for the beam projection fitting lines. When the error between the coordinates of each pairwise intersection of the three fitting lines is less than or equal to 0.1 pixel, the analytical expressions of the beam projection fitting lines meet the accuracy requirement; the direction angle of each beam projection on the imaging plane is then obtained using plane analytic geometry, and the mean of the pairwise intersection coordinates of the fitting lines is taken as the intersection point coordinate of the beam projections.
It should be noted that the RANSAC algorithm is a common mathematical method in the field of computer vision: it estimates the parameters of a mathematical model from a set of observed data containing outliers in an iterative manner. RANSAC is a non-deterministic algorithm; the more iterations performed, the more reasonable the resulting fitted model. In the invention, the basic idea of the RANSAC algorithm is as follows: taking one group of beam projection pixel points as an example, a small random sample of the pixel point data set is taken as the inlier set and a linear model is fitted to it; the fitted model is used to predict all pixel points in the data set, a distance threshold is set, data points whose agreement with the model exceeds the threshold are regarded as inliers, the inlier set is updated, and the model is fitted again; an inlier proportion threshold is set, and if the number of inliers reaches this threshold, the current model is considered to satisfy the convergence condition and iteration stops, otherwise the previous step is repeated. Once the model satisfies the convergence condition, the model parameters are estimated by least squares to obtain the analytical expression of the cursor projection fitting line.
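For illustration, the RANSAC line-fitting step described above may be sketched as follows. This is a minimal sketch on synthetic pixel data; the function name, thresholds and iteration counts are assumed example values, not values prescribed by the invention:

```python
import numpy as np

def ransac_line_fit(points, n_iters=200, dist_thresh=0.5, inlier_ratio=0.8, seed=0):
    """Fit y = a*x + b to 2-D pixel points with a simple RANSAC loop.

    points: (N, 2) array of (x, y) pixel coordinates.
    Returns (a, b) estimated by least squares over the final inlier set.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = None
    for _ in range(n_iters):
        # sample a minimal subset (2 points) and fit a candidate line
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-9:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # perpendicular distance of every point to the candidate line
        d = np.abs(a * pts[:, 0] - pts[:, 1] + b) / np.hypot(a, 1.0)
        inliers = d <= dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
        # stop once the inlier proportion threshold is reached
        if inliers.sum() >= inlier_ratio * len(pts):
            break
    # final least-squares estimate over the inlier set
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], deg=1)
    return a, b

# synthetic beam projection: y = 2x + 1 plus a few gross outliers
x = np.linspace(0, 50, 60)
pts = np.column_stack([x, 2 * x + 1])
pts[::15, 1] += 40.0          # inject outliers
slope, intercept = ransac_line_fit(pts)
```

In the invention the same fit is run on each of the three color-segmented pixel groups, and the 0.1-pixel pairwise-intersection check provides the additional orthogonality-based accuracy test.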
After the image acquisition device acquires the images/photos of the bridge swivel at different moments, the images/photos generally need to be preprocessed, for example, mean filtering is selected for noise reduction processing, and morphological open operation is used for smoothing the photos; image preprocessing is a conventional technical means in the art, and is not described herein.
In step S2, the bridge swivel attitude images are processed and the pixel information is iteratively fitted using the RANSAC algorithm, with constraint conditions added to improve the fitting accuracy, so that the two-dimensional image information of the three-dimensional orthogonal cursor on the swivel bridge is obtained accurately.
S3, establishing the transformation relation between the cursor coordinate system and the camera coordinate system using the rigid-body spatial transformation principle, and establishing the mapping relation between the three-dimensional orthogonal cursor beams and the beam projections according to the imaging principle; solving the initial posture of the three-dimensional orthogonal cursor by the variational principle and a numerical iteration method, using the mapping relation, the transformation relation, and the direction angles and intersection point coordinates of the beam projections in the initial posture obtained in S2; and obtaining the transformation relation between the camera coordinate system and the bridge local coordinate system in the initial posture from the initial posture of the three-dimensional orthogonal cursor and the transformation relation between the cursor coordinate system and the bridge local coordinate system, thereby determining the initial posture of the bridge swivel;
According to the rigid body space transformation principle and the transformation characteristics of the swivel bridge, the transformation relation between the cursor coordinate system and the camera coordinate system is established according to the following method:
Describing the posture transformation of the swivel bridge by Euler angles, the rotation process of the beam body is decomposed into three independent rotations about the X, Y and Z axes, each rotation being an intrinsic rotation referenced to the bridge local coordinate system. Assuming the rotation order for transforming from the camera coordinate system to the bridge local coordinate system is about the X-Y-Z axes, with corresponding rotation Euler angles α, β, γ and corresponding rotation matrices R_X, R_Y, R_Z, the total rotation matrix R_C is expressed by equation (1) (intrinsic rotations compose by right multiplication):

R_C = R_X(α) R_Y(β) R_Z(γ) (1)
The three coordinate axis direction vectors of the camera coordinate system are represented by the identity matrix E. Taking the camera coordinate system as the observation reference, the coordinate axis direction vectors of the camera coordinate system and the coordinate axis direction vectors η_i (i = 1, 2, 3) of the cursor coordinate system satisfy formula (2):

η_i = R_CA E = [r_1i, r_2i, r_3i]^T (2)

where R_CA is the rotation transformation matrix transforming the camera coordinate system into the cursor coordinate system; r_1i, r_2i and r_3i are the elements in rows 1, 2 and 3 of column i of the rotation matrix R_CA; E is the camera coordinate system coordinate axis matrix, whose row vectors represent the unit direction vectors of the camera coordinate axes.
It should be understood that, according to the principle of rigid space transformation, the camera coordinate system may be transformed to the bridge local coordinate system in a different rotation order, for example, around the X-Z-Y axis, where the different rotation order corresponds to different rotation matrices and rotation angles, but the spatial pose of the bridge itself before or after rotation is not changed by the change of the rotation order. The order of rotation about the X-Y-Z axis is not to be construed as limiting the invention.
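The composition of the total rotation matrix from the three Euler angles may be sketched as follows. This is an illustrative sketch assuming the intrinsic X-Y-Z order discussed above; the helper names are not from the patent:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def total_rotation(alpha, beta, gamma):
    # intrinsic X-Y-Z sequence: each rotation is about the already-rotated
    # frame, so successive rotations compose by right-multiplication
    return rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)

R_C = total_rotation(0.10, 0.20, 0.30)
```

A different rotation order (e.g. X-Z-Y) simply changes the multiplication order; the resulting spatial pose of the bridge is unaffected, as noted above.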
The bridge is regarded as a rigid body, and rigid-body motion consists of rotation and translation. Because existing monitoring techniques struggle to acquire the spatial pose of this motion accurately, the inventors observed that the center restraint during the swivel means the bridge only rotates about its local coordinate system without translating, which separates the rotation of the bridge from its translation. However, because the bridge local coordinate system is difficult to visualize in the image acquisition device, the attitude change during the rotation is determined indirectly by means of the three-dimensional orthogonal cursor; establishing the mapping relation between the three-dimensional orthogonal cursor beams and their projections on the imaging plane of the image acquisition device is the key step in this process.
taking camera imaging as an example, the imaging principle is briefly described as follows:
Under central projection of any point in space, and without considering lens distortion of camera imaging, the conversion relation among the world coordinates (X_W, Y_W, Z_W), camera coordinates (X_C, Y_C, Z_C), image coordinates (x, y) and pixel coordinates (u, v) is given by formula (3):

Z_C [u, v, 1]^T = K_m K_T [X_W, Y_W, Z_W, 1]^T (3)
wherein (u_0, v_0) are the coordinates of the center of the imaging plane of the image acquisition device in the pixel coordinate system; dx and dy represent the physical lengths of a pixel along the two coordinate axes of the pixel coordinate system; K_m is the camera intrinsic matrix; K_T is the camera extrinsic matrix;
K_m is represented by formula (4):

K_m = [ f/dx   0      u_0 ]
      [ 0      f/dy   v_0 ]
      [ 0      0      1   ]   (4)

where f is the focal length of the camera; the camera intrinsic parameters can be obtained through camera calibration.
The camera extrinsic matrix K_T is composed of the rotation matrix R transforming the camera coordinate system to the world coordinate system and the corresponding translation vector t, expressed as formula (5):

K_T = [R | t] (5)

where R is 3 × 3 and t is 3 × 1, so K_T is a 3 × 4 matrix.
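The central-projection relation and the intrinsic matrix K_m may be illustrated numerically as follows. The focal length, pixel pitch and principal point below are assumed example values, not parameters of the invention:

```python
import numpy as np

# assumed intrinsic parameters for illustration (obtained in practice by
# Zhang's calibration method)
f, dx, dy = 0.008, 2e-6, 2e-6      # focal length and pixel pitch, metres
u0, v0 = 960.0, 540.0              # principal point, pixels

K_m = np.array([[f / dx, 0.0, u0],
                [0.0, f / dy, v0],
                [0.0, 0.0, 1.0]])

def project(point_cam):
    """Central projection of a camera-frame point to pixel coordinates."""
    X, Y, Z = point_cam
    uvw = K_m @ np.array([X, Y, Z])
    return uvw[:2] / uvw[2]          # divide by the depth Z_C

uv = project((0.0, 0.0, 10.0))       # a point on the optical axis
```

A point on the optical axis lands on the principal point (u_0, v_0); off-axis points are displaced by f/dx and f/dy scaled parallax, as formula (3) prescribes.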
In this embodiment, the three-dimensional orthogonal cursor is centrally projected toward the optical center (origin of the camera coordinate system) of the image acquisition device. Assuming the coordinates of the three-dimensional orthogonal cursor origin O_A in the camera coordinate system are (X_0, Y_0, Z_0), and taking the camera coordinate system as the observation reference, the image coordinates (x_0, y_0) of the cursor origin projected onto the imaging plane can be expressed as formula (6):

x_0 = f X_0 / Z_0,  y_0 = f Y_0 / Z_0 (6)
Assuming the position vector of the origin O_A of the cursor coordinate system in the camera coordinate system is t_0 = (X_0, Y_0, Z_0), where t_0 is the translation vector in the camera extrinsic matrix, then according to the plane intersection principle the projection ray l_i of cursor coordinate axis η_i on the imaging plane is the intersection line of the imaging plane with the spatial plane determined by the position vector t_0 and the cursor coordinate axis η_i. According to spatial analytic geometry, the normal vector n_i of the spatial plane determined by t_0 and η_i is obtained by formula (7):

n_i = t_0 × η_i (7)
According to the normal vector n_i, the analytical expression of the spatial plane containing the position vector t_0 and the cursor coordinate axis η_i is given by formula (8):

n_i1 x + n_i2 y + n_i3 z = 0 (8)

where n_i1, n_i2 and n_i3 are the components of the normal vector n_i of the plane containing η_i along the camera coordinate system X, Y and Z directions, i.e. the 1st, 2nd and 3rd elements of n_i, respectively.
According to the plane intersection principle, the mapping information of the central projection of the cursor coordinate system onto the imaging plane is calculated: intersecting the plane of formula (8) with the imaging plane z = f gives the analytical expression of the cursor ray projection l_i on the imaging plane, formula (9):

l_i: n_i1 x + n_i2 y + n_i3 f = 0 (9)
Because the cursor coordinate system has directivity, the projection information is expressed in the form of rays, so the direction angle describes the mapping information precisely. From the analytical expression of the cursor projection l_i, the mapping relation between a three-dimensional orthogonal cursor beam and its projection is calculated as formula (10):

θ_i = atan2(f r_2i − y_0 r_3i, f r_1i − x_0 r_3i) (10)

where θ_i is the direction angle of the cursor ray projection l_i, and atan2(·) is the four-quadrant arctangent function.
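The direction-angle mapping of formula (10) may be sketched as follows. This is an illustrative sketch; the identity pose and cursor position below are assumed test values:

```python
import numpy as np

def beam_projection_angles(R_CA, t0, f):
    """Direction angles theta_i of the cursor-beam projections, per eq. (10).

    R_CA : 3x3 rotation from camera frame to cursor frame; its columns
           [r_1i, r_2i, r_3i]^T are the beam directions eta_i (eq. (2)).
    t0   : camera-frame coordinates (X_0, Y_0, Z_0) of the cursor origin.
    f    : focal length in image-coordinate units.
    """
    X0, Y0, Z0 = t0
    x0, y0 = f * X0 / Z0, f * Y0 / Z0        # projected cursor origin, eq. (6)
    theta = []
    for i in range(3):
        r1, r2, r3 = R_CA[:, i]
        theta.append(np.arctan2(f * r2 - y0 * r3, f * r1 - x0 * r3))
    return np.array(theta)

# identity pose with the cursor origin on the optical axis: the X and Y
# beams project at 0 and 90 degrees; the Z beam then points straight at
# the camera and its projection degenerates to a point
theta = beam_projection_angles(np.eye(3), (0.0, 0.0, 5.0), f=1.0)
```

The degenerate third beam in this on-axis configuration is precisely why the layout requires a height difference between the image acquisition device and the cursor, so that the three beam projections do not overlap.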
The mapping relation between the three-dimensional orthogonal cursor beams and the beam projections and the transformation relation between the cursor coordinate system and the camera coordinate system are combined as simultaneous equations (11):

θ_i = atan2(f r_2i − y_0 r_3i, f r_1i − x_0 r_3i), with [r_1i, r_2i, r_3i]^T = η_i = R_CA E, i = 1, 2, 3 (11)

Using the direction angles and intersection point coordinates of the cursor projections obtained in step S2, equations (11) are solved iteratively by the variational principle and a numerical iteration method, yielding the rotation Euler angles α, β, γ corresponding to the spatial postures of the three-dimensional orthogonal cursor in the camera coordinate system at the different moments; the numerical iteration method is preferably the Newton-Raphson method;
From the solved rotation Euler angles α, β, γ, the rotation matrix R_c0 transforming the camera coordinate system to the cursor coordinate system in the initial posture of the swivel bridge is determined, and the transformation relation between the camera coordinate system and the cursor coordinate system in the initial posture is obtained as formula (12):

[η_1^0, η_2^0, η_3^0]^T = R_c0 E (12)

where R_c0 is the rotation matrix transforming the camera coordinate system to the cursor coordinate system in the initial posture; η_i^0 (i = 1, 2, 3) are the coordinate axis direction vectors of the cursor coordinate system in the initial posture; E is the camera coordinate system coordinate axis matrix, whose row vectors represent the unit direction vectors of the camera coordinate axes.
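The Newton-Raphson solution of the simultaneous direction-angle equations may be sketched as follows, as a synthetic round-trip: angles are generated from a known pose and then recovered. The pose, cursor position and step sizes are assumed test values; this illustrates the iteration, not the patent's program:

```python
import numpy as np

def rot_xyz(a, b, g):
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cg, sg = np.cos(g), np.sin(g)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def predicted_angles(euler, t0, f):
    # direction angles of the three beam projections for a trial pose (eq. (10))
    R = rot_xyz(*euler)
    X0, Y0, Z0 = t0
    x0, y0 = f * X0 / Z0, f * Y0 / Z0
    return np.arctan2(f * R[1, :] - y0 * R[2, :], f * R[0, :] - x0 * R[2, :])

def residual(euler, observed, t0, f):
    d = predicted_angles(euler, t0, f) - observed
    return np.arctan2(np.sin(d), np.cos(d))     # wrap differences to (-pi, pi]

def solve_euler(observed, t0, f, guess=(0.0, 0.0, 0.0), iters=50, h=1e-6):
    # Newton-Raphson with a central-difference numerical Jacobian
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        r = residual(x, observed, t0, f)
        if np.max(np.abs(r)) < 1e-12:
            break
        J = np.zeros((3, 3))
        for j in range(3):
            e = np.zeros(3)
            e[j] = h
            J[:, j] = (residual(x + e, observed, t0, f)
                       - residual(x - e, observed, t0, f)) / (2 * h)
        x = x - np.linalg.solve(J, r)
    return x

# synthetic check: generate projection angles from a known pose, then recover it
true_euler = np.array([0.05, -0.10, 0.30])
t0, f = (1.0, 0.8, 6.0), 1.0
observed = predicted_angles(true_euler, t0, f)
est = solve_euler(observed, t0, f)
```

Three observed direction angles determine the three unknown Euler angles; in practice the observed angles come from the RANSAC-fitted beam projections of step S2 rather than a synthetic pose.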
The three-dimensional orthogonal cursor is used as visual input, and a conversion relation between a cursor coordinate system and a bridge local coordinate system is established as shown in formula (13):
[η_1, η_2, η_3]^T = R_0 [ξ_1, ξ_2, ξ_3]^T (13)

wherein η_i and ξ_i (i = 1, 2, 3) represent the direction vectors of the cursor coordinate axes and the bridge local coordinate axes in the same coordinate system, respectively; R_0 is the rotation matrix transforming the bridge local coordinate system to the cursor coordinate system.
According to the transformation relation between the camera coordinate system and the cursor coordinate system and the transformation relation between the cursor coordinate system and the bridge local coordinate system, the transformation relation between the camera coordinate system and the bridge local coordinate system in the initial posture is obtained as formula (14):

[ξ_1^0, ξ_2^0, ξ_3^0]^T = R_0^{-1} R_c0 E (14)

where ξ_i^0 (i = 1, 2, 3) are the coordinate axis direction vectors of the bridge local coordinate system in the initial posture; R_0 is the rotation transformation matrix transforming the bridge local coordinate axes to the cursor coordinate system in the initial posture, satisfying [η_1, η_2, η_3]^T = R_0 [ξ_1, ξ_2, ξ_3]^T; R_0^{-1} is the inverse matrix of R_0; R_0^{-1} R_c0 represents the rotation transformation matrix from the camera coordinate system to the bridge local coordinate system, from which the initial spatial pose of the swivel bridge based on the camera coordinate system can be extracted.
In step S3, an effective mapping relation resolving method is provided, a mathematical relation between the three-dimensional orthogonal cursor beam and the image information projected at the center of the three-dimensional orthogonal cursor beam is established, and a theoretical basis is provided for realizing the subsequent bridge swivel space attitude reconstruction by taking the three-dimensional orthogonal cursor as a medium.
S4, during the bridge swivel construction, the image acquisition device continuously acquires attitude images of the bridge swivel at different moments; the direction angles and intersection point coordinates of the three-dimensional orthogonal cursor beam projections on the imaging plane are extracted from the acquired attitude images, the spatial postures of the three-dimensional orthogonal cursor at the different moments are solved by the variational principle and the numerical iteration method, the transformation relation between the camera coordinate system and the bridge local coordinate system at the different moments of the bridge swivel process is obtained, and the spatial postures of the bridge swivel at the different moments (rotation Euler angles based on the camera coordinate system) are thereby determined;
The transformation relation between the camera coordinate system and the bridge local coordinate system at the posture of each moment is obtained according to formula (15):

[ξ_1^k, ξ_2^k, ξ_3^k]^T = R_0^{-1} R_ck E (15)

where ξ_i^k (i = 1, 2, 3; k = 1, 2, 3, …) are the coordinate axis direction vectors of the bridge local coordinate system at the kth shooting moment; R_ck is the rotation matrix transforming the camera coordinate system to the cursor coordinate system at the kth shooting moment; R_0^{-1} R_ck represents the rotation transformation matrix from the camera coordinate system to the bridge local coordinate system at the kth shooting moment, from which the swivel spatial attitude (rotation Euler angles) of the swivel bridge based on the camera coordinate system at the kth shooting moment can be extracted.
According to the method of step S3, R_ck is obtained as follows:
The direction angles and intersection point coordinates of the three-dimensional orthogonal cursor beam projections on the imaging plane at each posture are determined from the attitude images of the swivel bridge at the different moments; using the mapping relation, the transformation relation, and these direction angles and intersection point coordinates, the postures of the orthogonal three-dimensional cursor at the different moments are solved by the variational principle and the numerical iteration method, and the transformation relation between the camera coordinate system and the cursor coordinate system at the kth shooting moment is determined as formula (16):

[η_1^k, η_2^k, η_3^k]^T = R_ck E (16)

where R_ck is the rotation transformation matrix transforming the camera coordinate system into the cursor coordinate system at the kth shooting moment; η_i^k (i = 1, 2, 3; k = 1, 2, 3, …) are the coordinate axis direction vectors of the cursor coordinate system at the kth shooting moment; E is the camera coordinate system coordinate axis identity matrix, whose row vectors represent the coordinate axis direction vectors of the camera coordinate system.
S5, from the initial posture of the bridge swivel obtained in S3 and the postures at the different moments of the swivel process obtained in S4, obtaining the relative rotation matrix between the posture at each moment of the swivel process and the initial posture; setting a rotation order of the swivel bridge relative to the bridge local coordinate system, and extracting the relative rotation Euler angles of the bridge swivel at the different moments from the relative rotation matrix according to that rotation order;
Defining ΔR_0k as the relative rotation transformation matrix of the bridge local coordinate system from the initial posture to the spatial posture corresponding to the kth shooting moment (the kth image), the relation between ξ_i^0 and ξ_i^k is expressed as formula (17):

[ξ_1^k, ξ_2^k, ξ_3^k]^T = ΔR_0k [ξ_1^0, ξ_2^0, ξ_3^0]^T (17)

where ξ_i^0 and ξ_i^k (i = 1, 2, 3) are the coordinate axis direction vectors of the bridge local coordinate system at the initial posture and at the spatial posture corresponding to the kth shooting moment, respectively; ΔR_0k is the relative rotation matrix of the bridge local coordinate system transformed from the initial posture to the spatial posture corresponding to the kth shooting moment.
Based on the transformation relations between the camera coordinate system and the cursor coordinate system constructed in steps S3-S4 and the transformation relation between the bridge local coordinate system and the cursor coordinate system, the relative rotation transformation matrix ΔR_0k of the swivel bridge from the initial posture to the posture at the kth shooting moment is determined according to formula (19):

ΔR_0k = R_0^{-1} R_ck R_0 R_c0^{-1} (19)
When, according to the characteristics of the swivel bridge, the rotation order of the swivel bridge about its local coordinate system is set to be about the Z-Y-X axes, while the rotation order from the camera coordinate system to the cursor coordinate system is about the X-Y-Z axes, the unique relative rotation Euler angles Δα, Δβ and Δγ can be extracted from the relative rotation matrix ΔR_0k as formula (20):

Δα = atan2(R_32, R_33),  Δβ = −arcsin(R_31),  Δγ = atan2(R_21, R_11) (20)

where R_ij (i = 1, 2, 3; j = 1, 2, 3) represents the element in row i and column j of the rotation matrix ΔR_0k.
In step S5, by introducing a specific rotation mode and rotation order suited to the swivel bridge, a calculation formula for the relative rotation transformation matrix between the bridge postures at different moments and the initial posture is derived, and the relative rotation Euler angles between these postures are separated from it, realizing accurate monitoring of the swivel bridge structure and providing a reliable mathematical basis for rotation monitoring of swivel bridge structures.
The target rotation Euler angle of the swivel bridge is calculated by the above method from the design position and the initial attitude position of the swivel bridge; monitoring is complete when the monitored rotation Euler angle of the swivel bridge about the vertical axis reaches the target rotation Euler angle. The target rotation Euler angle is the Euler angle through which the bridge local coordinate system must rotate about the vertical axis for the bridge to reach the bridge swivel design position from the initial posture in a single swivel. During the bridge swivel construction, the swivel is commanded according to the difference between the monitored swivel data and the target rotation angle, while the attitude of the bridge is adjusted in time according to the monitored rotation Euler angles of the swivel bridge about the longitudinal and transverse directions, so that the bridge is rotated accurately to the design position.
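The comparison of the monitored vertical-axis angle with the target rotation Euler angle may be sketched as follows; the tolerance is an assumed value, not a figure from the patent:

```python
def swivel_status(monitored_gamma_deg, target_gamma_deg, tol_deg=0.05):
    """Compare the monitored vertical-axis rotation with the target angle.

    Returns the remaining angle (positive = keep rotating) and a done flag.
    tol_deg is an assumed completion tolerance for illustration only.
    """
    remaining = target_gamma_deg - monitored_gamma_deg
    return remaining, abs(remaining) <= tol_deg

remaining, done = swivel_status(monitored_gamma_deg=88.7, target_gamma_deg=90.0)
```

The remaining angle is the quantity by which the swivel is commanded; the longitudinal and transverse Euler angles would be checked against separate deflection limits in the same manner.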
Example 2;
the monitoring system of the bridge swivel monitoring method based on the three-dimensional orthogonal cursor comprises the following components:
the cursor module comprises a three-dimensional orthogonal cursor arranged on the swivel bridge through a base, wherein the three-dimensional orthogonal cursor is composed of three laser transmitters which are mutually and perpendicularly connected and emit light beams with different colors;
The image acquisition module continuously shoots and acquires gesture images of three-dimensional orthogonal cursors mounted on the swivel bridge by utilizing the image acquisition device, wherein the gesture images comprise initial gesture images and gesture images at different times in the swivel process;
More specifically, the spatial pose of the image acquisition device is adjusted to a suitable position outside the bridge swivel range to ensure that the three-dimensional orthogonal cursor does not move out of the camera frame during the bridge swivel; the internal parameters of the image acquisition device are calibrated; after adjustment the spatial pose of the image acquisition device is kept fixed, and the device is controlled to continuously acquire posture images of the three-dimensional orthogonal cursor.
The image processing module is used for extracting and obtaining a direction angle and an intersection point coordinate corresponding to the three-dimensional orthogonal cursor beam projection from all the gesture images shot by the image acquisition module;
More specifically, the image is smoothed and denoised by mean filtering and morphological opening; pixel points corresponding to the three differently colored beam projections are then extracted based on an RGB threshold range; the three groups of pixel points are iteratively fitted to linear functions by the random sample consensus method to obtain fitting lines; the fitting expressions are then checked against the orthogonality constraint of the three-dimensional cursor, and when the error between the coordinates of each pairwise intersection of the three fitting lines is less than or equal to 0.1 pixel, the fitting accuracy requirement is met and the final analytical expressions of the beam projections are obtained, giving the direction angles of the beam projections, with the mean of the intersection coordinates of the fitting functions taken as the intersection point coordinate of the three beam rays. The MSAC algorithm, PROSAC algorithm and the like may be employed in other embodiments.
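The RGB-threshold extraction of beam pixels may be sketched as follows on a toy frame; the threshold bounds and image contents are assumed values for illustration:

```python
import numpy as np

def extract_beam_pixels(img, lo, hi):
    """Return (x, y) pixel coordinates whose RGB values fall inside [lo, hi].

    img: (H, W, 3) uint8 array; lo, hi: length-3 RGB bounds.
    """
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    return np.column_stack([xs, ys])

# toy frame: black background with a short horizontal "red beam" segment
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2, 1:5] = (230, 20, 20)
red_pts = extract_beam_pixels(img, lo=(180, 0, 0), hi=(255, 80, 80))
```

The same call with green and blue bounds yields the other two pixel groups, which then feed the RANSAC line fitting described above.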
The data processing module calculates the relative rotation Euler angles of the bridge swivel attitudes at different times with respect to the initial attitude, according to the direction angles and intersection-point coordinates of the beam projections obtained by the image processing module;
It should be noted that the image processing and data processing are implemented on an edge computing platform, which may be a microcomputer, a notebook computer, or another computing platform with computation capability, with processing software such as MATLAB installed on it. In a particular embodiment, the CPU of the computing platform is an i7-12700 @ 2.10 GHz. Based on the bridge swivel monitoring method using an orthogonal three-dimensional cursor, a person skilled in the art can write the corresponding calculation program in MATLAB, and details are omitted here. In a simulation test of this embodiment, the time from capturing an image to obtaining the corresponding rotation parameters is about 0.15 s, which meets the engineering real-time and specification requirements of swivel measurement.
The data display module displays the relative rotation Euler angles of the swivel bridge at different time attitudes together with the target rotation Euler angles; comparing the relative rotation Euler angles with the target rotation Euler angles guides the bridge swivel construction.
Compared with the prior art, the capture and acquisition process of the image acquisition device is continuous and real-time; the image information containing the three-dimensional orthogonal cursor is then processed on the edge computing platform by the above method, the computation time is short, the relative rotation Euler angles are obtained quickly, and the engineering real-time requirement is met.
Example 3
In order to verify the real-time monitoring of the bridge swivel attitude, a simplified bridge swivel model is constructed in CAD software for simulation; the model is a swivel T-shaped bridge.
The pier of the T-shaped swivel bridge is 16 m high and the upper turntable is a 9 m × 12 m rectangle; the three-dimensional orthogonal cursor is mounted on the pier base. To simplify calculation, the cursor coordinate system is set exactly parallel to the bridge local coordinate system, so that R0 in the transformation relation [η1,η2,η3]T = R0[ξ1,ξ2,ξ3]T between the bridge local coordinate system and the cursor coordinate system is the identity matrix. The image acquisition device is a camera whose center coincides with the origin of the CAD workspace coordinate system, and the camera focal length f is set to 0.13 m. In the simulation, the spatial position of the swivel bridge in its initial state, expressed in the camera coordinate system, is assigned random values: the random ranges of the XC and YC coordinates are (−100, 100) and that of the ZC coordinate is (0.13, 100.13); the spatial attitude of the swivel bridge in the initial state relative to the camera coordinate system is randomly assigned within ±90° (the spatial position parameters of the swivel bridge do not enter the attitude and relative-rotation calculation and only simulate the camera layout position). To simulate actual bridge swivel construction monitoring, successive relative rotations are applied from the initial attitude of the swivel bridge: the rotation Euler angles Δα and Δβ vary randomly within ±1°, while Δγ increases by 0.5° each time with an added random value within ±0.5°.
For distinction, three different working conditions are set in Table 1 in this example. Taking working condition 1 as an example: in the initial state, the coordinates of the origin of the swivel bridge local coordinate system in the camera coordinate system are (57.54 m, 9.97 m, 96.45 m), and the spatial attitude relative to the camera coordinate system is α = 0.58°, β = −0.40°, γ = 0.25°. The randomly assigned relative rotation of the swivel bridge with respect to the initial state is Δα = −0.0162°, Δβ = −0.0017°, Δγ = 0.4366°. According to the set working conditions, the direction angles and intersection-point coordinates of the orthogonal three-dimensional cursor beam projections are obtained from the CAD simulation software. Based on steps S3–S5, the relative rotation matrix ΔR0k is calculated, the computed rotation Euler angles are extracted from ΔR0k, and the computed values are compared with the actual design values to analyze the errors. The computed relative rotation angles under the different working conditions are shown in Table 1.
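To make the simulation concrete, a minimal Python sketch of the forward model is given below: it builds a camera-to-cursor rotation from Euler angles and evaluates the beam-projection direction angles θi = atan2(f·r2i − y0·r3i, f·r1i − x0·r3i) of the mapping relation in equation (3.5). The X-Y-Z composition order, function names, and parameters are illustrative assumptions, not the patent's fixed implementation:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def beam_direction_angles(alpha, beta, gamma, t0, f=0.13):
    """Direction angles of the three cursor-beam projections on the
    imaging plane, per theta_i = atan2(f*r2i - y0*r3i, f*r1i - x0*r3i).
    Column i of R is the cursor axis eta_i expressed in camera coordinates."""
    R = rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)  # assumed composition order
    X0, Y0, Z0 = t0
    x0, y0 = f * X0 / Z0, f * Y0 / Z0              # projected cursor origin
    return np.array([np.arctan2(f * R[1, i] - y0 * R[2, i],
                                f * R[0, i] - x0 * R[2, i])
                     for i in range(3)])
```

For a cursor placed on the optical axis and rotated only about the viewing direction, the first projected beam angle equals the rotation angle itself, which gives a quick sanity check of the mapping.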
TABLE 1 spatial attitude of bridge swivel under camera coordinate system
As can be seen from Table 1, under the different working conditions the computed relative rotation Euler angles agree with the design values, with errors within 10⁻¹⁵, so the rotation monitoring of the swivel bridge meets the accuracy requirement. The design and computed rotation angles actually carry many decimal places, and space does not permit listing all the data; for example, for working condition 1, when the rotation angle α about the X axis at the first rotation is designed as −0.0162 (rad), the error between the computed value and the design value is 3.6705 × 10⁻¹⁵. Based on the projection information of the three-dimensional orthogonal cursor beams in the initial attitude and in any attitude during rotation, the test time for resolving the relative rotation Euler angles from the initial attitude to any attitude is about 0.15 s, which meets the real-time monitoring requirement for the spatial attitude of the swivel bridge.
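The near-machine-precision agreement reported above can be reproduced in spirit with a short round-trip check: compose a rotation matrix from known Euler angles and extract them back. The Z-Y-X extraction formulas used here are the standard ones and are an assumption for illustration, not necessarily the patent's exact equation (5.2):

```python
import numpy as np

def euler_zyx_to_matrix(da, db, dg):
    """Compose R = Rz(dg) @ Ry(db) @ Rx(da) (Z-Y-X rotation order)."""
    ca, sa = np.cos(da), np.sin(da)
    cb, sb = np.cos(db), np.sin(db)
    cg, sg = np.cos(dg), np.sin(dg)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def matrix_to_euler_zyx(R):
    """Standard Z-Y-X Euler extraction (away from the beta = ±90° singularity)."""
    da = np.arctan2(R[2, 1], R[2, 2])
    db = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    dg = np.arctan2(R[1, 0], R[0, 0])
    return da, db, dg
```

With the design angles of working condition 1, the round-trip error of this check sits at the 10⁻¹⁵ level or below in double precision, consistent with the order of magnitude of the errors in Table 1.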
Besides spatial rotation monitoring of swivel bridges, the monitoring method provided by this patent can be widely applied in the technical field of spatial attitude transformation of monitored targets: for example, the attitude transformation of a camera can be deduced from the rotation, pitch and yaw angles of a monitoring camera to realize a monocular visual odometry function, and the method can be extended to aircraft attitude estimation, attitude tracking in virtual reality, and other fields, showing great application potential.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (10)
1. A bridge swivel monitoring method based on a three-dimensional orthogonal cursor, characterized by comprising the following steps:
S1, arranging at least one three-dimensional orthogonal cursor capable of emitting light beams on a swivel bridge, and arranging an image acquisition device outside the swivel range of the bridge; respectively establishing a bridge local coordinate system and a camera coordinate system on a bridge and an image acquisition device, and simultaneously establishing a cursor coordinate system by taking the intersection point of three-dimensional orthogonal cursor beams as an origin and three beam directions as coordinate axes;
S2, before bridge swivel construction, an image acquisition device shoots and acquires an image containing three-dimensional orthogonal cursor beam rays as an initial posture image, and the direction angle and intersection point coordinates of the three-dimensional orthogonal cursor beam projection on an imaging plane when the initial posture is extracted from the initial posture image;
S3, establishing a transformation relation between a cursor coordinate system and a camera coordinate system by adopting a rigid body space transformation principle, establishing a mapping relation between a three-dimensional orthogonal cursor light beam and a light beam projection according to an imaging principle, solving an initial posture of the three-dimensional orthogonal cursor by utilizing a variation principle and a numerical iteration method according to the transformation relation, the mapping relation and a direction angle and an intersection point coordinate of the light beam projection when the initial posture is obtained in the S2, and obtaining a transformation relation between the camera coordinate system and a bridge local coordinate system when the initial posture is obtained according to the initial posture of the three-dimensional orthogonal cursor and a transformation relation between the cursor coordinate system and the bridge local coordinate system, so as to further determine the initial posture of a bridge swivel;
S4, continuously acquiring attitude images of the bridge swivel at different moments by an image acquisition device in the construction process of the bridge swivel, extracting the direction angles and intersection coordinates of the three-dimensional orthogonal cursor light beams projected on an imaging plane at different moments from the acquired attitude images at different moments, solving the spatial attitudes of the three-dimensional orthogonal cursor at different moments by adopting a variation principle and a numerical iteration method, and acquiring the transformation relations between a camera coordinate system and a local coordinate system of the bridge swivel at different moments in the bridge swivel process so as to determine the spatial attitudes of the bridge swivel at different moments;
S5, according to the initial posture of the bridge turning body obtained in the S3-S4 and the spatial postures at different moments in the turning process, obtaining a relative rotation matrix of the different time postures and the initial posture in the turning process of the bridge; setting a rotation sequence of the bridge around a local coordinate system of the bridge, and extracting relative rotation Euler angles of the bridge swivel at different moments from the relative rotation matrix according to the rotation sequence.
2. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in the step S1, a three-dimensional orthogonal cursor is formed by mutually and perpendicularly connecting three laser transmitters; the three laser transmitters respectively emit laser with different colors;
The arrangement position of the image acquisition device and the arrangement position of the three-dimensional orthogonal cursor have a height difference; and keeping the space pose of the image acquisition device unchanged in the whole bridge swivel construction process, so that the three-dimensional orthogonal cursor is kept in the view finding range of the image acquisition device.
3. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: the direction angle and the intersection point coordinate of the orthogonal three-dimensional cursor beam projected on the imaging plane are obtained by the following method:
Determining RGB threshold ranges of the three color light beams, and extracting pixel points corresponding to the three color light beams from the gesture image according to the corresponding RGB threshold ranges; and respectively carrying out iterative fitting on the three extracted groups of pixel points according to a linear function by adopting a random sampling consistency algorithm to obtain a plurality of analytical expressions corresponding to the beam projection fitting straight lines, and obtaining the analytical expressions of the beam projection fitting straight lines meeting the precision requirement when the error of coordinates of every two intersection points of the three fitting straight lines is less than or equal to 0.1 pixel, so as to obtain the direction angle of the beam projection on an imaging plane, wherein the average value of coordinates of every two intersection points of the fitting straight lines is used as the intersection point coordinates of the beam projection.
4. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in step S3, according to the imaging principle, the mapping relationship between the beam and the beam projection is obtained by the following method:
The three-dimensional orthogonal cursor is centrally projected toward the center of the camera coordinate system. Assuming that the coordinates of the three-dimensional orthogonal cursor origin OA in the camera coordinate system are (X0, Y0, Z0) and the vector t0 = (X0, Y0, Z0), the image coordinates (x0, y0) of the cursor origin projected onto the imaging plane are expressed by equation (3.1):
x0 = fX0/Z0, y0 = fY0/Z0 (3.1)
Wherein f is the focal length of the camera;
According to the principles of spatial analytic geometry, the normal vector ni of the spatial plane determined by the vector t0 and the cursor coordinate axis ηi is obtained from equation (3.2):
ni = t0 × ηi (3.2)
Wherein r1i, r2i, r3i are the elements in the i-th column of rows 1, 2 and 3, respectively, of the rotation transformation matrix from the camera coordinate system to the cursor coordinate system;
The analytical expression of the spatial plane in which the radius vector t0 and the cursor coordinate axis ηi lie is given by equation (3.3):
ni1x+ni2y+ni3z=0 (3.3)
wherein n i1、ni2、ni3 is the 1 st, 2 nd and 3 rd elements in the normal vector n i respectively;
The analytical expression of the beam projection li on the imaging plane is obtained by combining the analytical expression of the spatial plane determined by the radius vector t0 and the cursor coordinate axis ηi with that of the imaging plane, as in equation (3.4):
ni1x + ni2y + ni3f = 0 (3.4)
Wherein (x, y) are the imaging plane coordinates;
According to the analytical expression of the beam projection li, the direction angle of the beam projection li is obtained, and the mapping relation between the three-dimensional orthogonal cursor beam and the beam projection is given by equation (3.5):
θi=atan2(fr2i-y0r3i,fr1i-x0r3i) (3.5)
Wherein θi is the direction angle of the beam projection li; atan2(·) is the four-quadrant arctangent function.
5. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in step S3, the simultaneous equations of the mapping relation between the three-dimensional orthogonal cursor light beam and the light beam projection and the transformation relation between the cursor coordinate system and the camera coordinate system are solved by adopting a numerical iteration method according to the direction angle and the intersection point coordinate of the light beam projection in the initial posture obtained in the step S2, and the corresponding rotation Euler angles alpha, beta and gamma of the three-dimensional orthogonal cursor in the camera coordinate system in the initial posture are obtained;
A transformation relation between the camera coordinate system and the cursor coordinate system in the initial attitude is then obtained from the solved rotation Euler angles α, β and γ, as shown in equation (3.6):
[η1(0), η2(0), η3(0)]T = Rc0E (3.6)
Wherein Rc0 is the rotation matrix transforming the camera coordinate system to the cursor coordinate system in the initial attitude; ηi(0), i = 1, 2, 3, are the coordinate axis direction vectors of the cursor coordinate system in the initial attitude; E is the camera coordinate axis unit matrix, whose row vectors represent the direction vectors of the camera coordinate axes.
6. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in step S3, according to the transformation relation between the camera coordinate system and the cursor coordinate system and the transformation relation between the cursor coordinate system and the bridge local coordinate system as shown in formula (3.7), the transformation relation between the camera coordinate system and the bridge local coordinate system during the initial posture is obtained according to formula (3.8), and the initial posture of the bridge swivel is determined according to the transformation relation:
[η1,η2,η3]T=R0[ξ1,ξ2,ξ3]T (3.7)
[ξ1(0), ξ2(0), ξ3(0)]T = R0⁻¹[η1(0), η2(0), η3(0)]T = R0⁻¹Rc0E (3.8)
Wherein ξi(0), i = 1, 2, 3, are the coordinate axis direction vectors of the bridge local coordinate system in the initial attitude; R0 is the rotation transformation matrix transforming the bridge local coordinate axes to the cursor coordinate system; R0⁻¹ is the inverse of R0; η1, η2, η3 are the cursor coordinate axis direction vectors at any moment; ξ1, ξ2, ξ3 are the coordinate axis direction vectors of the bridge local coordinate system at any moment; R0⁻¹Rc0 is the rotation transformation matrix transforming the camera coordinate system to the bridge local coordinate system; E is the camera coordinate axis unit matrix, whose row vectors represent the direction vectors of the camera coordinate axes.
7. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in step S4, the transformation relationship between the camera coordinate system and the local coordinate system of the bridge at different moments in the bridge rotation process is obtained according to equation (4.1), and the spatial attitudes at different moments in the bridge rotation process are determined according to the transformation relationship:
[ξ1(k), ξ2(k), ξ3(k)]T = R0⁻¹RckE (4.1)
Wherein ξi(k), i = 1, 2, 3, k = 1, 2, 3, …, are the coordinate axis direction vectors of the bridge local coordinate system at the k-th shooting moment; R0 is the rotation transformation matrix transforming the bridge local coordinate axes to the cursor coordinate system; R0⁻¹ is the inverse of R0; Rck is the rotation matrix transforming the camera coordinate system to the cursor coordinate system at the k-th shooting moment; R0⁻¹Rck is the rotation transformation matrix transforming the camera coordinate system to the bridge local coordinate system at the k-th shooting moment; E is the camera coordinate axis unit matrix, whose row vectors represent the direction vectors of the camera coordinate axes.
8. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in step S5, according to the initial attitude of the bridge swivel obtained in steps S3–S4 and the spatial attitudes at different moments during the swivel process, the relative rotation matrix of the bridge local coordinate system between the attitudes at different moments and the initial attitude is obtained according to equation (5.1):
ΔR0k = R0⁻¹RckR0Rck⁻¹ (5.1)
Wherein R0 is the rotation transformation matrix transforming the bridge local coordinate axes to the cursor coordinate system; R0⁻¹ is the inverse of R0; Rck is the rotation matrix transforming the camera coordinate system to the cursor coordinate system at the k-th shooting moment, k = 1, 2, 3, …; ΔR0k is the relative rotation matrix of the swivel bridge local coordinate system transformed from the initial attitude to the spatial attitude at the k-th shooting moment.
9. The bridge swivel monitoring method based on the three-dimensional orthogonal cursor as claimed in claim 1, wherein: in step S5, when the rotation order of the swivel bridge relative to the bridge local coordinate system is set to be about the Z-Y-X axes, and the rotation order of the transformation from the camera coordinate system to the cursor coordinate system is set to be about the X-Y-Z axes, the relative rotation Euler angles Δα, Δβ, Δγ are extracted from the relative rotation matrix according to equation (5.2):
Δα = atan2(s32, s33), Δβ = atan2(−s31, √(s32² + s33²)), Δγ = atan2(s21, s11) (5.2)
Wherein Δα, Δβ and Δγ are the Euler angles of rotation of the swivel bridge about the X, Y and Z axes of the bridge local coordinate system, respectively; smn is the element in row m, column n of the relative rotation matrix, m = 1, 2, 3; n = 1, 2, 3.
10. A monitoring system for the bridge swivel monitoring method based on a three-dimensional orthogonal cursor according to any one of claims 1 to 9, characterized by comprising:
the cursor module, comprising a three-dimensional orthogonal cursor fixedly mounted on the bridge through a base, the three-dimensional orthogonal cursor being composed of three mutually perpendicularly connected laser transmitters that emit beams of different colors;
the image acquisition module, which uses the image acquisition device to continuously capture attitude images of the three-dimensional orthogonal cursor mounted on the swivel bridge, the attitude images including the initial attitude image and attitude images at different times during the swivel process;
the image processing module, which extracts the direction angles and intersection-point coordinates of the three-dimensional orthogonal cursor beam projections from all attitude images captured by the image acquisition module;
the data processing module, which calculates the relative rotation Euler angles of the bridge swivel attitudes at different times with respect to the initial attitude from the direction angles and intersection coordinates obtained by the image processing module;
and the data display module, which displays the relative rotation Euler angles of the swivel bridge at different time attitudes and the target rotation Euler angles.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410365220.8A CN118258377A (en) | 2024-03-28 | 2024-03-28 | Bridge swivel monitoring method and system based on three-dimensional orthogonal cursor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410365220.8A CN118258377A (en) | 2024-03-28 | 2024-03-28 | Bridge swivel monitoring method and system based on three-dimensional orthogonal cursor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118258377A true CN118258377A (en) | 2024-06-28 |
Family
ID=91604859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410365220.8A Pending CN118258377A (en) | 2024-03-28 | 2024-03-28 | Bridge swivel monitoring method and system based on three-dimensional orthogonal cursor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118258377A (en) |
- 2024-03-28: CN CN202410365220.8A patent/CN118258377A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7260269B2 (en) | Positioning system for aeronautical non-destructive inspection | |
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
CN104482934B (en) | The super close distance autonomous navigation device of a kind of Multi-sensor Fusion and method | |
CN104330074B (en) | Intelligent surveying and mapping platform and realizing method thereof | |
CN102927908B (en) | Robot eye-on-hand system structured light plane parameter calibration device and method | |
CN112629431B (en) | Civil structure deformation monitoring method and related equipment | |
KR102295809B1 (en) | Apparatus for acquisition distance for all directions of vehicle | |
CN110849331B (en) | Monocular vision measurement and ground test method based on three-dimensional point cloud database model | |
Yang et al. | A calibration method for binocular stereo vision sensor with short-baseline based on 3D flexible control field | |
CN103226838A (en) | Real-time spatial positioning method for mobile monitoring target in geographical scene | |
CN104200086A (en) | Wide-baseline visible light camera pose estimation method | |
Özaslan et al. | Towards fully autonomous visual inspection of dark featureless dam penstocks using MAVs | |
WO2022126339A1 (en) | Method for monitoring deformation of civil structure, and related device | |
CN112254663B (en) | Plane deformation monitoring and measuring method and system based on image recognition | |
CN110470226A (en) | A kind of bridge structure displacement measurement method based on UAV system | |
CN114743021A (en) | Fusion method and system of power transmission line image and point cloud data | |
CN105374067A (en) | Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof | |
Kim et al. | As-is geometric data collection and 3D visualization through the collaboration between UAV and UGV | |
Liu et al. | A high-accuracy pose measurement system for robotic automated assembly in large-scale space | |
CN113902809A (en) | Method for jointly calibrating infrared camera and laser radar | |
CN106169076A (en) | A kind of angle license plate image storehouse based on perspective transform building method | |
CN114659523B (en) | Large-range high-precision attitude measurement method and device | |
CN114663473A (en) | Personnel target positioning and tracking method and system based on multi-view information fusion | |
CN108225276A (en) | A kind of list star imageable target kinetic characteristic inversion method and system | |
CN113096058B (en) | Spatial target multi-source data parametric simulation and MixCenterNet fusion detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||