CN111473739B - Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area - Google Patents
- Publication number
- CN111473739B CN111473739B CN202010332626.8A CN202010332626A CN111473739B CN 111473739 B CN111473739 B CN 111473739B CN 202010332626 A CN202010332626 A CN 202010332626A CN 111473739 B CN111473739 B CN 111473739B
- Authority
- CN
- China
- Prior art keywords
- camera
- dimensional
- point
- points
- coordinates
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/16—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
- G01B11/167—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge by projecting a pattern on the object
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Abstract
The invention discloses a video monitoring-based method for real-time monitoring of surrounding rock deformation in a tunnel collapse area. The method first obtains a three-dimensional virtual real-scene model of the tunnel collapse area by ground laser scanning, the model comprising point cloud data and three-dimensional image data. It then constructs the perspective projection relationship between each camera position and the three-dimensional virtual real-scene model of the collapse area according to the camera's position and the video data of the collapse area it monitors. Next, using trifocal tensor and epipolar constraint methods, it locates the target monitoring points of the collapse area in the multi-view video relative to the reference control points in a spatial three-dimensional coordinate system, and finally calculates the displacement values of the target monitoring points from the perspective projection relationship. The method achieves non-contact, real-time, full-time automatic deformation monitoring, avoids the high risk of monitoring a tunnel collapse area in person, and overcomes the limited range and high cost of contact measurement and the inability of non-full-time monitoring to give timely safety warnings.
Description
Technical Field
The invention relates to a safety monitoring technology for surrounding rock deformation of collapse areas during tunnel construction, and belongs to the technical field of geotechnical engineering safety monitoring. The method is suitable for non-contact, full-time, real-time safety monitoring of high-risk areas and areas where conventional instruments are difficult to deploy.
Background
With the development of production, civil engineering projects have sprung up rapidly and are trending toward taller, deeper and larger structures. Geotechnical testing technology is an essential means of fundamentally ensuring the accuracy, reliability and economic rationality of geotechnical investigation, design, treatment and supervision, so accurate testing of geotechnical properties is increasingly important.
To solve various complex geotechnical engineering problems, many new theories and design methods have appeared, and geotechnical engineering theory has developed against the background of geotechnical testing technology and the corresponding experimental basis. Without correspondingly developed testing technology, the parameters on which a design is based cannot be measured and the design result cannot be verified, so advances in geotechnical theory and design cannot be realized. Geotechnical testing is therefore not only vital in civil engineering practice but also plays a key role in the theoretical development of the discipline.
Table 1: Tunnel monitoring items
Disclosure of Invention
The technical problem is as follows: the invention discloses a video monitoring-based method for real-time monitoring of surrounding rock deformation in a tunnel collapse area. The invention achieves non-contact deformation monitoring and avoids the high risk of the tunnel collapse area, while also providing real-time monitoring; it resolves the dilemma that other non-contact deformation monitoring technologies cannot monitor full-time, and automates the digital data processing, thereby meeting the requirement of automatic early warning.
The technical scheme is as follows: the video monitoring-based method of the invention for real-time monitoring of surrounding rock deformation in a tunnel collapse area first obtains a three-dimensional virtual real-scene model of the tunnel collapse area by ground laser scanning, the model comprising point cloud data and three-dimensional image data; it then constructs the perspective projection relationship between each camera position and the three-dimensional virtual real-scene model of the collapse area according to the camera's position and the video data of the collapse area it monitors; next, using trifocal tensor and epipolar constraint methods, it locates the target monitoring points of the collapse area in the multi-view video relative to the reference control points in a spatial three-dimensional coordinate system; finally, it calculates the displacement values of the target monitoring points from the perspective projection relationship.
The method comprises the following specific steps:
the first step: arranging the target monitoring points and the reference control points; a plurality of target monitoring points are arranged uniformly within the collapse area and a plurality of reference control points are arranged uniformly around its periphery, ensuring that the video images of every two cameras share more than 7 common reference points; the target monitoring points and the reference control points are marked with fluorescent pens of different colors, each marked point being clearly distinguishable for ease of computer identification;
the second step: arranging the cameras; each target monitoring point and each reference control point must lie within the view of at least three cameras, with the included angles between the three camera views kept between 45° and 60°; according to the camera positions, the conversion relationship between the video image coordinate systems of the multiple cameras is constructed using the epipolar constraint;
the third step: measuring with a laser scanner; suitable instrument stations are selected 5-10 m from the collapse area so that the laser scanning area covers all target monitoring points and reference control points, and the number of three-dimensional laser scanner stations, their specific positions, the instrument erection height and the scanning angle are determined; if multi-station measurement is needed, adjacent scanning areas must share 3 splicing identification points so that the point cloud data of different stations can be spliced together; after the scanning measurement is finished, point cloud data describing the surface coordinates of the scanned area and image data recording the marks of the target monitoring points and the reference control points are obtained;
the fourth step: building the three-dimensional virtual real-scene model; according to the internal calibration parameters of the laser scanner and the intrinsic parameters of the scanner camera, a Mount matrix (the coordinate conversion parameter matrix between the camera coordinate system and the scanner coordinate system), a COP matrix (the rotation matrix between the camera coordinate system at the moment of exposure and the initial camera coordinate system) and an SOP matrix (the rotation-translation matrix of the scanning station coordinate system relative to the engineering coordinate system) are constructed, and the point cloud data and the image data are registered to build the three-dimensional virtual real-scene model;
the fifth step: calibrating the initial values of the reference control points and the target monitoring points; the point cloud data are converted into a triangular mesh by Delaunay triangulation, and the reference control points and the target monitoring points are marked in the image data; the three-dimensional triangular mesh is projected onto the plane coordinates of the image data, and from the pixel coordinates of the reference control points and target monitoring points and the pixel coordinates of the projected mesh corner points, the triangle containing each point and its distances to the triangle corner points are found; the three-dimensional coordinates before projection are then restored to obtain the actual initial three-dimensional coordinates of the reference control points and the target monitoring points;
the sixth step: constructing the perspective projection relationship; for each camera, the perspective projection relationship is constructed from the camera position, the focal-length parameters and the initial real-scene coordinates of the reference control points and the target monitoring points, establishing the relationship between video image pixel coordinates and real-scene coordinates;
the seventh step: locating the target monitoring points during video acquisition; video images are captured every n frames and registered with the SIFT algorithm according to the corresponding reference control points in different frames, yielding a series of video images accurately positioned in their respective camera perspective projection coordinate systems; the pixel coordinates of the target monitoring points and the reference control points in the video images are recorded, and, according to the perspective projection relationship of the corresponding camera, the relative positions of the target monitoring points and the reference control points in the camera three-dimensional coordinate system are calculated using trifocal tensor point transfer and the conversion relationship between the video image coordinate systems of the multiple cameras;
the eighth step: calculating the deformation displacement value; using the coordinates of the reference control points in the real three-dimensional coordinate system, the coordinates of the target monitoring points in that system are obtained and compared with the initial coordinates of the target monitoring points to give the deformation vectors of the target monitoring points.
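The triangle lookup and coordinate restoration of the fifth step can be sketched as follows (a minimal illustration using SciPy's Delaunay triangulation; the planar depth field and all point values are hypothetical, not data from the patent):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical mesh corner points: (x, y) are projected image-plane
# coordinates, z is the pre-projection coordinate we want to restore.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(50, 2))          # projected corner points
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 3.0      # depth at each corner

tri = Delaunay(xy)                              # triangular mesh

def restore_depth(p):
    """Find the triangle containing point p and interpolate its depth
    from the triangle's corner points via barycentric weights."""
    s = tri.find_simplex(p)
    if s == -1:
        return None                             # outside the mesh
    corners = tri.simplices[s]
    T = tri.transform[s]                        # barycentric transform
    b = T[:2].dot(np.asarray(p) - T[2])
    w = np.append(b, 1.0 - b.sum())             # weights sum to 1
    return float(w.dot(z[corners]))

# Since z is linear in (x, y), interpolation recovers it exactly.
print(restore_depth((5.0, 5.0)))  # ≈ 0.5*5 + 0.2*5 + 3.0 = 6.5
```

In the method itself the interpolated quantity would be the full pre-projection three-dimensional coordinate rather than a single scalar, but the mesh search and barycentric restoration are the same.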
Advantageous effects: the invention achieves non-contact deformation monitoring and avoids the high risk of the tunnel collapse area, while also providing real-time monitoring; it resolves the dilemma that other non-contact deformation monitoring technologies cannot monitor full-time, and automates the digital data processing, thereby meeting the requirement of automatic early warning.
Detailed Description
The technical solution of the present invention is described in detail below by way of example:
the invention discloses a video monitoring-based method for monitoring surrounding rock deformation of a tunnel collapse area in real time. The method integrates a ground laser scanning technology and a video monitoring technology to construct a transmission projection relation between a real engineering scene and a video virtual scene under a camera, integrates a trifocal tensor and an epipolar constraint method to realize the positioning of a target monitoring point under the monitoring of the multi-view camera relative to a reference control point under a space three-dimensional coordinate system, and finally restores the transmission projection relation between the real engineering scene and the video virtual scene to obtain a deformation displacement value of the target monitoring point. The specific step 8 comprises the following steps: target monitoring point and reference control point arrangement, camera position setting, laser scanner measurement, three-dimensional virtual live-action model construction, initial value calibration of reference control points and target monitoring points, transmission projection relation construction, target monitoring point positioning in video acquisition and target point displacement value calculation.
The method comprises the following concrete steps: arranging target monitoring points and reference control points, setting the position of a camera, acquiring a three-dimensional live-action model, constructing a transmission projection relationship, acquiring a video, positioning the target monitoring points, and calculating a deformation displacement value.
Wherein, in the second step, the conversion relationship between the video image coordinate systems of the multiple cameras is constructed using the epipolar constraint, specifically:
the epipolar geometry is intrinsic projective geometry between two views, is independent of a scene structure, only depends on internal parameters and relative postures of the camera, is the intersection geometry of an image plane and a plane beam taking a base line as an axis, and is essentially the ray coplanarity reversely projected from a corresponding point, and a basic matrix F is used for describing the conversion relation between video image coordinate systems of the multiple cameras;
Calculation of the fundamental matrix F:
(1) Using the RANSAC algorithm, randomly extract 7 groups of corresponding reference points from the two cameras' video images, denoted X_n = (x_n, y_n, 1)^T and X'_n = (x'_n, y'_n, 1)^T (n = 1, 2, …, 7), where x_n, y_n and x'_n, y'_n are the pixel coordinates of the 7 groups of corresponding reference points in the two video images.
(2) Let f denote the 9-dimensional vector formed by the elements of the fundamental matrix F arranged in row-major order. Each pair of corresponding points satisfies the epipolar constraint X'_n^T F X_n = 0, which is linear in the elements of F; the 7 groups of corresponding points therefore yield a homogeneous system of equations A f = 0, from which the vector f is solved.
(3) The fundamental matrix F is recovered from the 9-dimensional vector f (with 7 correspondences the linear system has a two-dimensional null space, and the rank-2 constraint det F = 0 then restricts the admissible solutions). This yields the epipolar geometric relationship converting between the video image coordinate systems of the multiple cameras.
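The linear solve above can be illustrated with a hedged sketch: the code below builds the homogeneous system A f = 0 from synthetic correspondences and extracts f by SVD, using eight points and omitting both RANSAC and the seven-point rank-2 step described in the text; the geometry and all data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth for the synthetic data: a pure camera translation
# t = (1, 0, 0), for which F = [t]_x up to scale.
F_true = np.array([[0., 0., 0.],
                   [0., 0., -1.],
                   [0., 1., 0.]])

# Correspondences satisfying X'^T F X = 0: for this F, matching
# points share the same image row.
X1 = np.column_stack([rng.uniform(-1, 1, 8),
                      rng.uniform(-1, 1, 8),
                      np.ones(8)])
X2 = X1.copy()
X2[:, 0] += rng.uniform(0.1, 0.5, 8)   # shift along the epipolar lines

# Each correspondence gives one row of A in A f = 0, where f is the
# row-major 9-vector of F: X'^T F X = (X' ⊗ X) · f.
A = np.array([np.kron(x2, x1) for x1, x2 in zip(X1, X2)])

# f is the null vector of A: the right singular vector with the
# smallest singular value.
f = np.linalg.svd(A)[2][-1]
F = f.reshape(3, 3)
F /= np.linalg.norm(F)

# All epipolar residuals X'^T F X should be numerically zero.
residual = np.max(np.abs(np.einsum('ij,jk,ik->i', X2, F, X1)))
print(residual < 1e-8)  # True
```

With only seven correspondences, as in the text, the null space of A is two-dimensional and the cubic constraint det F = 0 must be solved to pick the fundamental matrix.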
In the sixth step, the relationship between the video image pixel coordinates and the real-scene coordinates is established as follows:
(1) obtaining a three-dimensional scene depth map
A virtual depth camera is set according to the position of the real camera; its parameters, namely the coordinates, orientation, near and far clipping plane distances, field-of-view angle and image aspect ratio, are the same as the settings of the actual camera. The depth coordinates of the virtual depth camera's scene depth image are then obtained from the virtual depth camera position and the three-dimensional real-scene model.
(2) obtaining a transformation matrix
According to the position of the virtual camera, the camera's internal parameters and the depth coordinates of the scene depth image, the ray correspondence between the virtual camera and the video image and between the virtual camera and the three-dimensional scene model is maintained, and the conversion matrix between video image pixel coordinates and real-scene coordinates is:
M_w = M_r · P_inv · Trans(s, t, r_n, 1.0)
where Trans(s, t, r_n, 1.0) is the translation matrix of the real-scene three-dimensional model in the virtual camera, P_inv is the inverse of the internal parameter matrix of the virtual camera, and M_r is the rotation matrix of the real-scene three-dimensional model in the virtual camera.
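The composition of the conversion matrix can be sketched numerically as follows (a minimal illustration of the matrix bookkeeping only; the intrinsic matrix, rotation and translation values are hypothetical):

```python
import numpy as np

def trans(s, t, r_n, w=1.0):
    """Homogeneous 4x4 translation matrix Trans(s, t, r_n, 1.0) of the
    real-scene model in the virtual camera."""
    T = np.eye(4)
    T[:3, 3] = [s, t, r_n]
    T[3, 3] = w
    return T

# Hypothetical virtual-camera internal parameter matrix (4x4 form).
P = np.array([[800.,   0., 320., 0.],
              [  0., 800., 240., 0.],
              [  0.,   0.,   1., 0.],
              [  0.,   0.,   0., 1.]])
P_inv = np.linalg.inv(P)

# Hypothetical rotation of the real-scene model in the virtual camera:
# 90 degrees about the z axis, embedded in a 4x4 matrix.
M_r = np.eye(4)
M_r[:2, :2] = [[0., -1.], [1., 0.]]

# M_w = M_r * P_inv * Trans(s, t, r_n, 1.0)
M_w = M_r @ P_inv @ trans(0.5, 0.5, 2.0)
print(M_w.shape)  # (4, 4)
```

The sketch only demonstrates the order of composition; in the method the actual parameter values come from the camera setup and the registered three-dimensional real-scene model.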
In the seventh step, the relative positions of the target monitoring points and the reference control points in the camera three-dimensional coordinate system are calculated as follows:
(1) Matching video image positions with the SIFT algorithm
The SIFT algorithm comprises: constructing the scale space, detecting extreme points, locating the feature points, assigning the dominant orientation of the key points, and generating the feature descriptors. First, a DoG (difference-of-Gaussians) filter is used to construct the scale space, and the reference control points are found in the scale space as candidate feature points; the candidate feature points are then used, by the following method, to construct SIFT features that are invariant under projective transformation.
(2) calculating two-dimensional coordinates of target monitoring points in video images
The target monitoring point is detected, the pixel coordinates of the midpoint of the bottom edge of the bounding rectangle of the target's region are recorded, and the pixel coordinates are normalized to the range [0, 1] using the pixel resolution of the original video;
(3) calculating three-dimensional depth coordinates of bottom center point of target
The depth image obtained by the virtual projector in the three-dimensional scene has the same size as the video image, so the depth coordinates on the corresponding depth image are computed from the two-dimensional position coordinates of the target in the video image;
Assume the normalized pixel coordinates of the target bottom are (s, t) and the corresponding values in the depth image are (sd, td, r, q), where (sd, td) are the two-dimensional texture coordinates, r is the depth component and q is a scaling factor. Taking the three-dimensional depth value of the target bottom center point as r_n = r/q, its coordinates in the virtual camera coordinate system are (s, t, r_n, 1.0);
(4) The three-dimensional coordinates of the target monitoring point are calculated by the trifocal tensor point transfer method.
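Items (2)-(3) above can be sketched as follows (a minimal illustration; the (sd, td, r, q) depth-image layout follows the text, while the resolution and all stored values are hypothetical):

```python
import numpy as np

def normalize_pixel(px, py, width, height):
    """Normalize the bottom-edge midpoint pixel of the target's
    bounding rectangle to [0, 1] using the video resolution."""
    return px / width, py / height

def camera_coords(s, t, depth_image):
    """Sample the virtual-projector depth image at (s, t) and build the
    homogeneous coordinates (s, t, r_n, 1.0) in the virtual camera
    frame, where r_n = r / q."""
    h, w, _ = depth_image.shape
    # nearest texel of the (same-sized) depth image
    iy = min(int(t * h), h - 1)
    ix = min(int(s * w), w - 1)
    sd, td, r, q = depth_image[iy, ix]
    r_n = r / q
    return np.array([s, t, r_n, 1.0])

# Hypothetical 640x480 depth image storing (sd, td, r, q) per texel,
# here with constant depth component r = 8 and scaling factor q = 2.
depth = np.zeros((480, 640, 4))
depth[..., 2] = 8.0
depth[..., 3] = 2.0

s, t = normalize_pixel(320, 240, 640, 480)
print(camera_coords(s, t, depth))  # → (0.5, 0.5, 4.0, 1.0)
```

The resulting camera-frame coordinates from at least three views are what the trifocal tensor point transfer of item (4) combines into the three-dimensional position of the target monitoring point.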
Claims (4)
1. A video monitoring-based method for real-time monitoring of surrounding rock deformation in a tunnel collapse area, characterized in that a three-dimensional virtual real-scene model of the tunnel collapse area is obtained by ground laser scanning, the model comprising point cloud data and three-dimensional image data; the perspective projection relationship between each camera position and the three-dimensional virtual real-scene model of the collapse area is then constructed according to the camera's position and the video data of the collapse area it monitors; trifocal tensor and epipolar constraint methods are then used to locate the target monitoring points of the collapse area in the multi-view video relative to the reference control points in a spatial three-dimensional coordinate system; and the displacement values of the target monitoring points are finally calculated from the perspective projection relationship;
the method comprises the following steps:
the first step: arranging the target monitoring points and the reference control points; a plurality of target monitoring points are arranged uniformly within the collapse area and a plurality of reference control points are arranged uniformly around its periphery, ensuring that the video images of every two cameras share more than 7 common reference points; the target monitoring points and the reference control points are marked with fluorescent pens of different colors, each marked point being clearly distinguishable for ease of computer identification;
the second step: arranging the cameras; each target monitoring point and each reference control point must lie within the view of at least three cameras, with the included angles between the three camera views kept between 45° and 60°; according to the camera positions, the conversion relationship between the video image coordinate systems of the multiple cameras is constructed using the epipolar constraint;
the third step: measuring with a laser scanner; suitable instrument stations are selected 5-10 m from the collapse area so that the laser scanning area covers all target monitoring points and reference control points, and the number of three-dimensional laser scanner stations, their specific positions, the instrument erection height and the scanning angle are determined; if multi-station measurement is needed, adjacent scanning areas must share 3 splicing identification points so that the point cloud data of different stations can be spliced together; after the scanning measurement is finished, point cloud data describing the surface coordinates of the scanned area and image data recording the marks of the target monitoring points and the reference control points are obtained;
the fourth step: building the three-dimensional virtual real-scene model; according to the internal calibration parameters of the laser scanner and the intrinsic parameters of the scanner camera, a Mount matrix, namely the coordinate conversion parameter matrix between the camera coordinate system and the scanner coordinate system, a COP matrix, namely the rotation matrix between the camera coordinate system at the moment of exposure and the initial camera coordinate system, and an SOP matrix, namely the rotation-translation matrix of the scanning station coordinate system relative to the engineering coordinate system, are constructed, and the point cloud data and the image data are registered to build the three-dimensional virtual real-scene model;
the fifth step: calibrating the initial values of the reference control points and the target monitoring points; the point cloud data are converted into a triangular mesh by Delaunay triangulation, and the reference control points and the target monitoring points are marked in the image data; the three-dimensional triangular mesh is projected onto the plane coordinates of the image data, and from the pixel coordinates of the reference control points and target monitoring points and the pixel coordinates of the projected mesh corner points, the triangle containing each point and its distances to the triangle corner points are found; the three-dimensional coordinates before projection are then restored to obtain the actual initial three-dimensional coordinates of the reference control points and the target monitoring points;
the sixth step: constructing the perspective projection relationship; for each camera, the perspective projection relationship is constructed from the camera position, the focal-length parameters and the initial real-scene coordinates of the reference control points and the target monitoring points, establishing the relationship between video image pixel coordinates and real-scene coordinates;
the seventh step: locating the target monitoring points during video acquisition; video images are captured every n frames and registered with the SIFT algorithm according to the corresponding reference control points in different frames, yielding a series of video images accurately positioned in their respective camera perspective projection coordinate systems; the pixel coordinates of the target monitoring points and the reference control points in the video images are recorded, and, according to the perspective projection relationship of the corresponding camera, the relative positions of the target monitoring points and the reference control points in the camera three-dimensional coordinate system are calculated using trifocal tensor point transfer and the conversion relationship between the video image coordinate systems of the multiple cameras;
the eighth step: calculating the deformation displacement value; using the coordinates of the reference control points in the real three-dimensional coordinate system, the coordinates of the target monitoring points in that system are obtained and compared with the initial coordinates of the target monitoring points to give the deformation vectors of the target monitoring points.
2. The method for monitoring the deformation of the surrounding rock in the tunnel collapse area in real time based on video monitoring as claimed in claim 1, wherein in the second step, the transformation relationship between the video image coordinate systems of the multiple cameras is constructed by adopting an epipolar constraint relationship, specifically:
epipolar geometry is the intrinsic projective geometry between two views; it is independent of the scene structure and depends only on the cameras' internal parameters and relative poses; geometrically it is the intersection of the image planes with the pencil of planes having the baseline as its axis, and in essence it expresses the coplanarity of the rays back-projected from corresponding points; the fundamental matrix F is used to describe the transformation relation between the video image coordinate systems of the multiple cameras;
calculation of the fundamental matrix F:
(1) 7 groups of corresponding reference points are randomly extracted from the two video images of the cameras using the RANSAC algorithm, denoted Xn = (xn, yn, 1)^T and X′n = (x′n, y′n, 1)^T (n = 1, 2, …, 7), where xn, yn, x′n, y′n are the pixel coordinates of the 7 groups of corresponding reference points in the two video images;
(2) the vector f is the 9-dimensional vector formed by the elements of the fundamental matrix F arranged in row-major order; each pair of corresponding points satisfies the epipolar constraint X′n^T · F · Xn = 0, which expands to the linear equation

x′n·xn·f1 + x′n·yn·f2 + x′n·f3 + y′n·xn·f4 + y′n·yn·f5 + y′n·f6 + xn·f7 + yn·f8 + f9 = 0;

stacking the equations from the 7 groups of corresponding points yields a homogeneous system A·f = 0, from which the vector f is solved;
(3) the fundamental matrix F is recovered from the 9-dimensional vector f, yielding the epipolar-geometric transformation relation between the video image coordinate systems of the multiple cameras.
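The linear solve for f described in the claim can be sketched as follows. This is a minimal numpy illustration using the simpler eight-point variant (eight or more correspondences and an SVD null-space solve) rather than the claim's seven-point RANSAC hypotheses; function names are illustrative:

```python
import numpy as np

def estimate_fundamental(pts1, pts2):
    """Linear estimate of the fundamental matrix from >= 8 correspondences.

    Each correspondence (x, y) <-> (x', y') contributes one row of the
    design matrix A so that A @ f = 0, with f the row-major stacking of F.
    """
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    x, y = pts1[:, 0], pts1[:, 1]
    xp, yp = pts2[:, 0], pts2[:, 1]
    A = np.column_stack([xp * x, xp * y, xp,
                         yp * x, yp * y, yp,
                         x, y, np.ones_like(x)])
    # f is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # enforce the rank-2 property of a true fundamental matrix
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt2
```

For noiseless synthetic correspondences generated from two known camera matrices, the recovered F satisfies X′^T F X ≈ 0 for every pair.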
3. The method for monitoring the deformation of the surrounding rock in the tunnel collapse area in real time based on video monitoring as claimed in claim 1, wherein in the sixth step the relationship between the pixel coordinates of the video image and the real-scene coordinates is established as follows:
(1) obtaining a three-dimensional scene depth map
A virtual depth camera is set according to the position of the real camera, with parameters identical to those of the real camera: coordinates, orientation, near and far clipping plane distances, field-of-view angle, and image aspect ratio; the depth coordinates of the scene depth image of the virtual depth camera are then obtained from the virtual depth camera position and the three-dimensional real-scene model;
(2) obtaining a transformation matrix
According to the position of the virtual camera, the camera's internal parameters, and the depth coordinates of the scene depth image, the ray correspondence between the virtual camera and the video image and between the virtual camera and the three-dimensional scene model is preserved, and the transformation matrix from video image pixel coordinates to real-scene coordinates is:
Mw = Mr · Pinv · Trans(s, t, rn, 1.0)
wherein Trans(s, t, rn, 1.0) is the translation matrix of the real-scene three-dimensional model in the virtual camera; Pinv is the inverse of the virtual camera's internal parameter matrix; Mr is the rotation matrix of the real-scene three-dimensional model in the virtual camera.
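The composition Mw = Mr · Pinv · Trans(s, t, rn, 1.0) can be illustrated with homogeneous 4×4 matrices. A minimal sketch under the column-vector convention; the names are illustrative, and the actual Mr and Pinv would come from the virtual camera configuration:

```python
import numpy as np

def trans(s, t, r_n):
    """Homogeneous translation matrix Trans(s, t, r_n, 1.0)."""
    T = np.eye(4)
    T[:3, 3] = [s, t, r_n]
    return T

def pixel_to_world(M_r, P_inv, s, t, r_n):
    """Compose Mw = Mr @ Pinv @ Trans(s, t, r_n, 1.0)."""
    return M_r @ P_inv @ trans(s, t, r_n)

# with identity rotation and intrinsics, the origin maps to (s, t, r_n):
Mw = pixel_to_world(np.eye(4), np.eye(4), 0.25, 0.5, 0.8)
p = Mw @ np.array([0.0, 0.0, 0.0, 1.0])
```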
4. The method for monitoring the deformation of the surrounding rock in the tunnel collapse area in real time based on video monitoring as claimed in claim 1, wherein in the seventh step the relative positions of the target monitoring point and the reference control point in the camera's three-dimensional coordinate system are calculated as follows:
(1) matching video image positions with the SIFT algorithm
The SIFT algorithm comprises: constructing a scale space, detecting extreme points, locating feature-point positions, assigning the dominant orientation of key points, and generating feature descriptors; first, a DoG (difference of Gaussians) filter is used to construct the scale space, and the reference control points are found in the scale space as candidate feature points; from the candidate feature points, SIFT features that are invariant under projective transformation are then constructed;
(2) calculating two-dimensional coordinates of target monitoring points in video images
The target monitoring point is detected, the pixel coordinate of the midpoint of the bottom edge of the bounding rectangle of the region where the target is located is recorded, and this pixel coordinate is normalized to the range [0, 1] using the pixel resolution of the original video;
(3) calculating the three-dimensional depth coordinate of the target's bottom center point
The depth image obtained by the virtual projector in the three-dimensional scene has the same size as the video image, so the depth coordinate on the corresponding depth image is computed from the target's two-dimensional position coordinate in the video image;
assuming the normalized pixel coordinates of the target bottom are (s, t) and the corresponding values in the depth image are (sd, td, r, q), where (sd, td) are the two-dimensional texture coordinates, r is the depth component, and q is a scaling factor, the three-dimensional depth value of the target bottom center point is rn = r/q, and its coordinates in the virtual camera coordinate system are (s, t, rn, 1.0);
(4) the three-dimensional coordinates of the target monitoring point are calculated by the trifocal tensor point-transfer method.
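The DoG scale-space construction in step (1) can be sketched with a separable Gaussian blur in numpy; the σ schedule and number of levels below are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with radius ~3*sigma."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_stack(img, sigma0=1.6, k=2**0.5, levels=4):
    """Difference-of-Gaussians stack: adjacent blur levels subtracted."""
    blurred = [blur(img, sigma0 * k**i) for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]
```

An isolated bright spot produces a DoG extremum at its own location, which is what the extreme-point detection stage then searches for across scales.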
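Step (2)'s bottom-edge midpoint and [0, 1] normalization can be sketched as follows, assuming an (x_min, y_min, x_max, y_max) bounding-box format with the image y axis pointing downward (the patent does not fix a convention):

```python
def normalized_bottom_midpoint(bbox, width, height):
    """Midpoint of the bounding box's bottom edge, normalized to [0, 1].

    bbox = (x_min, y_min, x_max, y_max) in pixels; the bottom edge is
    y = y_max when the image y axis points downward.
    """
    x_min, _, x_max, y_max = bbox
    s = (x_min + x_max) / 2.0 / width
    t = y_max / height
    return s, t

# a 200-px-wide target box in a 640x480 frame:
s, t = normalized_bottom_midpoint((100, 50, 300, 250), 640, 480)
```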
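Step (3)'s depth lookup can be sketched as indexing a four-channel depth image at the normalized position and recovering the depth value by dividing the depth component by the scaling factor; the array layout and the nearest-pixel sampling are assumptions for illustration:

```python
import numpy as np

def camera_space_point(depth_image, s, t):
    """Look up (sd, td, r, q) at normalized (s, t); return (s, t, r_n, 1.0).

    depth_image : H x W x 4 array holding (sd, td, r, q) per pixel
    """
    h, w, _ = depth_image.shape
    # nearest-pixel lookup at the normalized coordinates
    col = min(int(s * w), w - 1)
    row = min(int(t * h), h - 1)
    sd, td, r, q = depth_image[row, col]
    r_n = r / q  # depth component divided by the scaling factor
    return np.array([s, t, r_n, 1.0])
```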
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010332626.8A CN111473739B (en) | 2020-04-24 | 2020-04-24 | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111473739A CN111473739A (en) | 2020-07-31 |
CN111473739B true CN111473739B (en) | 2022-02-08 |
Family
ID=71763940
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010332626.8A Active CN111473739B (en) | 2020-04-24 | 2020-04-24 | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111473739B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114444158B (en) * | 2020-11-04 | 2023-06-06 | 北京瓦特曼科技有限公司 | Underground roadway deformation early warning method and system based on three-dimensional reconstruction |
CN112637553B (en) * | 2020-11-25 | 2021-11-23 | 浙江大学 | Bridge structure modal analysis method based on monitoring video |
CN112330745B (en) * | 2020-11-25 | 2023-06-02 | 招商局重庆交通科研设计院有限公司 | Tunnel portal side elevation slope stability monitoring and early warning system and method based on binocular vision |
CN112488037A (en) * | 2020-12-15 | 2021-03-12 | 上海有个机器人有限公司 | Method for identifying dangerous area in image recognition |
CN112325789B (en) * | 2021-01-04 | 2021-03-23 | 中南大学 | Method for measuring deformation and displacement in model test based on image processing |
CN112833338A (en) * | 2021-01-26 | 2021-05-25 | 武汉阿卡瑞思光电自控有限公司 | Oil and gas station leakage monitoring method and system based on live-action three-dimension |
CN112862879B (en) * | 2021-02-18 | 2023-07-07 | 中国矿业大学(北京) | Subway tunnel three-dimensional model construction method based on TIN model |
CN113012222A (en) * | 2021-02-26 | 2021-06-22 | 中南大学 | Deformation monitoring system of tunnel portal |
CN112901207B (en) * | 2021-04-23 | 2023-04-07 | 中交路桥南方工程有限公司 | Treatment method for collapse in tunnel during arch change of secondary lining in sulfate corrosion section of operation tunnel |
CN113240887A (en) * | 2021-05-14 | 2021-08-10 | 深圳市安泰数据监测科技有限公司 | Slope rockfall collapse monitoring method, device and equipment |
CN113340405B (en) * | 2021-07-09 | 2023-03-17 | 中铁七局集团有限公司 | Bridge vibration mode measuring method, device and system |
CN113916192B (en) * | 2021-11-05 | 2023-06-23 | 郑州信息科技职业学院 | Karst cave image acquisition monitoring and imaging system |
CN115388769A (en) * | 2022-08-17 | 2022-11-25 | 成都飞机工业(集团)有限责任公司 | Method, device, equipment and medium for monitoring tool state in real time |
CN115294748A (en) * | 2022-09-08 | 2022-11-04 | 广东中科凯泽信息科技有限公司 | Fixed target disappearance early warning method based on visual data analysis |
CN115375779B (en) * | 2022-10-27 | 2023-01-10 | 智广海联(天津)大数据技术有限公司 | Method and system for camera AR live-action annotation |
CN116778094B (en) * | 2023-08-15 | 2023-11-24 | 深圳眸瞳科技有限公司 | Building deformation monitoring method and device based on optimal viewing angle shooting |
CN116858098B (en) * | 2023-08-15 | 2024-02-09 | 中国铁路经济规划研究院有限公司 | Automatic acquisition method and system for multi-element information of tunnel in construction period |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106323176A (en) * | 2016-08-09 | 2017-01-11 | 鞍钢集团矿业有限公司 | Three dimensional displacement monitoring method for strip mine side slope |
CN108844478A (en) * | 2018-06-15 | 2018-11-20 | 上海交通大学 | Shield subway work surface deformation monitoring method based on close-range photogrammetry |
CN109978761A (en) * | 2017-12-28 | 2019-07-05 | 杭州海康威视系统技术有限公司 | A kind of method, apparatus and electronic equipment generating panoramic pictures |
KR20190124026A (en) * | 2018-04-25 | 2019-11-04 | 경북대학교 산학협력단 | Apparatus and method for measuring deformation of structural member, and recording medium |
CN110503080A (en) * | 2019-08-30 | 2019-11-26 | 中国电建集团西北勘测设计研究院有限公司 | Investigation method based on unmanned plane oblique photograph auxiliary sewage draining exit |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10621739B2 (en) * | 2018-03-27 | 2020-04-14 | Applejack 199 L.P. | Characterization of specular surfaces |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
CN112894832B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium | |
CN102003938B (en) | Thermal state on-site detection method for large high-temperature forging | |
EP2111530B1 (en) | Automatic stereo measurement of a point of interest in a scene | |
Xie et al. | Study on construction of 3D building based on UAV images | |
CN103226838A (en) | Real-time spatial positioning method for mobile monitoring target in geographical scene | |
CN102364299A (en) | Calibration technology for multiple structured light projected three-dimensional profile measuring heads | |
CN110827392B (en) | Monocular image three-dimensional reconstruction method, system and device | |
KR101759798B1 (en) | Method, device and system for generating an indoor two dimensional plan view image | |
CN109141226A (en) | The spatial point coordinate measuring method of one camera multi-angle | |
CN113205603A (en) | Three-dimensional point cloud splicing reconstruction method based on rotating platform | |
CN112254670B (en) | 3D information acquisition equipment based on optical scanning and intelligent vision integration | |
CN105374067A (en) | Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof | |
CN110766170A (en) | Image processing-based multi-sensor fusion and personnel positioning method | |
Wenzel et al. | High-resolution surface reconstruction from imagery for close range cultural Heritage applications | |
CN114543787B (en) | Millimeter-scale indoor map positioning method based on fringe projection profilometry | |
Altuntas et al. | Image based methods for surveying heritage of masonry arch bridge with the example of Dokuzunhan in Konya, Turkey | |
CN113034678A (en) | Three-dimensional rapid modeling method for dam face of extra-high arch dam based on group intelligence | |
Barazzetti et al. | Automated and accurate orientation of complex image sequences | |
CN113345084B (en) | Three-dimensional modeling system and three-dimensional modeling method | |
CN116883604A (en) | Three-dimensional modeling technical method based on space, air and ground images | |
Fritsch et al. | Photogrammetric point cloud collection with multi-camera systems | |
CN112304250B (en) | Three-dimensional matching equipment and method between moving objects | |
WO2022078437A1 (en) | Three-dimensional processing apparatus and method between moving objects | |
CN113012238B (en) | Method for quick calibration and data fusion of multi-depth camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||