CN117488887A - Foundation pit multi-measuring-point integrated monitoring method based on monocular vision - Google Patents

Foundation pit multi-measuring-point integrated monitoring method based on monocular vision

Info

Publication number
CN117488887A
CN117488887A (application number CN202311517458.XA)
Authority
CN
China
Prior art keywords
target
point
camera
foundation pit
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311517458.XA
Other languages
Chinese (zh)
Inventor
周华飞 (Zhou Huafei)
胡天奕 (Hu Tianyi)
应鹏飞 (Ying Pengfei)
王思远 (Wang Siyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202311517458.XA priority Critical patent/CN117488887A/en
Publication of CN117488887A publication Critical patent/CN117488887A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/16 - Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • E - FIXED CONSTRUCTIONS
    • E02 - HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02D - FOUNDATIONS; EXCAVATIONS; EMBANKMENTS; UNDERGROUND OR UNDERWATER STRUCTURES
    • E02D17/00 - Excavations; Bordering of excavations; Making embankments
    • E02D17/02 - Foundation pits
    • E - FIXED CONSTRUCTIONS
    • E02 - HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02D - FOUNDATIONS; EXCAVATIONS; EMBANKMENTS; UNDERGROUND OR UNDERWATER STRUCTURES
    • E02D33/00 - Testing foundations or foundation structures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Mining & Mineral Resources (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Structural Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Paleontology (AREA)
  • Civil Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A foundation pit multi-measuring-point integrated monitoring method based on monocular vision comprises the following steps: (1) surveying the foundation pit and selecting the positions of the monitoring points and working base points; (2) selecting camera and lens parameters and constructing the synchronous monitoring system; (3) arranging the measuring-point targets, casting the camera monitoring pier at the working base point, installing the camera set and supplementary lighting equipment, determining the background marker and arranging the calibration target; (4) calibrating an inertial measurement unit (IMU) and using it to adjust the poses of the targets and the camera; (5) preheating the camera, setting the shooting frequency, and continuously monitoring the measuring-point targets in real time; (6) dividing the image into regions of interest, locating the target pixels, and solving the grid-target corner points in real time; (7) for the initial frame image, calculating the scale factor corresponding to each target and the initial coordinates of each measuring point and calibration point; (8) for each subsequent frame, solving the coordinates of each measuring point and calibration point to obtain the noisy displacement of the measuring point, and solving the displacement of the stationary background marker to obtain the denoised displacement of the measuring point.

Description

Foundation pit multi-measuring-point integrated monitoring method based on monocular vision
Technical Field
The invention relates to the field of foundation pit deformation monitoring, in particular to a foundation pit multi-measuring-point integrated monitoring method based on monocular vision.
Background
With the development of urban construction, underground rail transit, underground shopping complexes and high-rise buildings place ever higher demands on foundation pit engineering. Because of the uncertainty, complexity and variability of geological conditions, foundation pit engineering carries considerable risk, and its accident rate is much higher than that of other types of engineering. To ensure the safety and stability of foundation pit construction, a strict monitoring system must run alongside the construction process and monitor the pit in real time throughout, so that hidden dangers caused by design defects, poor construction management, environmental changes and other factors can be discovered in time. Foundation pit deformation monitoring has therefore become a vital link in guaranteeing foundation pit construction safety.
At present, foundation pit deformation is monitored mainly by manual methods using total stations and levels. These methods rely on manual operation, and because the measuring procedures are cumbersome they suffer from large observation errors, low reliability, poor timeliness and long monitoring cycles; in particular, the long interval between manual surveys makes the monitoring data discontinuous, so data analysis on site often stops at simple comparison, i.e. the safety of the pit is judged only against the threshold values required by the specifications. In addition, because manual monitoring cannot feed back data in real time, data analysis lags behind, and safety hazards in foundation pit engineering may not be warned of in time. To improve the early warning of foundation pit deformation and better guide construction, researchers at home and abroad have tried to predict pit deformation with techniques such as convolutional neural networks and finite element analysis, which greatly improves the efficiency and safety of construction; however, these techniques need comprehensive and continuous monitoring data as the basis of analysis, which manually collected data clearly cannot provide, so continuous real-time monitoring data has become one of the most urgent needs of current foundation pit engineering. Driven by this need, conventional monitoring equipment has been improved and continuous real-time measurement techniques from other fields have been tried for foundation pit monitoring, but all of them have limitations. Hydrostatic levels can be laid at multiple points in the pit and feed back data continuously, but they measure only vertical displacement; three-dimensional laser scanning is feasible for pit deformation monitoring but does not reach the required accuracy and is easily affected by weather; the BeiDou satellite system has demanding application conditions, requiring a relatively open space without overhead obstruction for the reference station, and its accuracy is limited and cannot meet the deformation-monitoring specifications; robotic total stations are accurate but expensive to purchase and maintain, and most foundation pit projects cannot afford to use them extensively. In short, existing continuous real-time monitoring techniques cannot meet the needs of foundation pit engineering, which calls for a continuous real-time monitoring technique that combines efficiency, accuracy and cost-effectiveness.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a foundation pit multi-measuring-point integrated monitoring method based on monocular vision, which realizes continuous real-time monitoring of foundation pit deformation, improves monitoring efficiency and reduces monitoring cost.
In order to solve the technical problems, the invention adopts the following technical scheme:
a foundation pit multi-measuring-point integrated monitoring method based on monocular vision comprises the following steps:
(1) Surveying the foundation pit and selecting the positions of the monitoring points and working base points.

Displacement monitoring points for the foundation pit retaining wall are selected according to the technical standard for monitoring of building foundation pit engineering and the standard for building deformation measurement: the monitoring points are arranged on the crown beam of the foundation pit at a spacing of 10-15 m, in the middle of each side of the pit, at external corners, and at locations adjacent to protected structures.

The camera is arranged at a fixed position in the foundation pit environment to reduce measurement errors caused by changes of the camera position. According to the building deformation measurement specification, the displacement of the area farther from the edge of the building foundation pit than twice the pit depth is negligible, so this area is taken as the placement range for the vision-measurement working base point.
(2) Determining the camera type (an infrared camera), selecting camera and lens parameters, and constructing a synchronous monitoring system integrating horizontal and vertical displacement of multiple measuring points.

An infrared camera with better environmental adaptability is used for image acquisition: under good illumination the infrared camera directly receives infrared-band light to form an image, and when the illuminance is insufficient it generates active infrared light through its infrared-emitting diodes to supplement the lighting.
Selecting camera parameters: based on the requirement of foundation pit monitoring, the infrared camera with low frame frequency, high signal-to-noise ratio and large dynamic range is selected, and the resolution is determined according to the monitored foundation pit size and the arrangement condition of the measuring points.
Selecting the camera lens: since the monitoring points on the foundation pit enclosure wall are usually distributed along a straight or nearly straight line, a lens with a small angle of view and a large depth of field is used for shooting (an angle of view of 5-10 degrees is recommended). In practical engineering applications, a suitable lens is determined from the accuracy requirement using the relationships among working distance, field of view, camera sensor size, focal length and depth of field, with cost-effectiveness taken into account.
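For orientation only, the relationship among working distance, field of view, sensor size and focal length mentioned above can be sketched with a thin-lens approximation; the sensor width, working distance and field-of-view values in the example are illustrative assumptions, not parameters prescribed by this method.

```python
# Thin-lens sketch of the lens-selection arithmetic: given working distance,
# sensor width and required field of view, estimate focal length, angle of view
# and the object-space size of one pixel. All example numbers are assumptions.
import math

def lens_parameters(working_distance_m, sensor_width_mm, fov_width_m, h_pixels):
    focal_length_mm = sensor_width_mm * working_distance_m / fov_width_m
    angle_of_view_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
    mm_per_pixel = fov_width_m * 1000.0 / h_pixels  # object-space pixel size
    return focal_length_mm, angle_of_view_deg, mm_per_pixel

# Example: a 60 m working distance, a sensor about 7.2 mm wide, a 6 m wide field
# of view and a 1920-pixel-wide image give roughly a 72 mm focal length, a 5.7
# degree angle of view (within the 5-10 degree range suggested above) and about
# 3.1 mm of object space per pixel.
print(lens_parameters(60.0, 7.2, 6.0, 1920))
```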
(3) Arranging the measuring-point targets, casting the camera monitoring pier at the working base point, installing the camera set and supplementary lighting equipment, determining the background marker and arranging the calibration target.

Installing the measuring-point targets: a target base is installed at each monitoring point and fixed with a rivet gun. Each target is mounted on its base through a universal joint and a mechanical lifting rod so that its angle and height can be adjusted flexibly; the targets are denoted T_1, T_2, T_3, ..., T_n in sequence.

Making the base and arranging the camera unit: a camera monitoring pier is cast at the working base point from cast-in-situ reinforced concrete and a forced-centering chassis is installed on it. The camera is mounted on the forced-centering chassis and an infrared fill light is arranged.

Selecting the background marker and calibration target: for each camera, one or more stationary background markers must be identified in the captured image so that the camera's offset error can be corrected later; the camera monitoring pier is used as the background marker, and a grid target arranged on the surface of the monitoring pier serves as the calibration target.
(4) Calibrating an inertial measurement unit (IMU) and adjusting the poses of the targets and the camera with the IMU so that each target plane is parallel to the corresponding camera plane.

The IMU is placed flat on the working base point and on the target platform in turn to obtain two magnetometer readings; the target's universal joint and the centering chassis at the working base point are then adjusted until the magnetometer readings at the two positions agree, which completes a first calibration of object-plane/image-plane parallelism. The result can then be checked with the gyroscope: place the IMU on the forced-centering chassis of the camera monitoring pier and, after it has settled, move it as steadily as possible to the target platform corresponding to that working base point; once the sensor is stable again, integrate the gyroscope and accelerometer values recorded during the move to obtain the translation and the Euler-angle rotation, and hence the relative pose of the IMU at the target position and at the monitoring pier. If the change is not within the allowed error band, the target's universal joint is adjusted accordingly and the procedure is repeated until the target plane is parallel to the corresponding camera plane.
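A minimal sketch of the gyroscope-based check described above is given below, assuming gyroscope samples are recorded at a fixed rate during the move; the tolerance value and function names are illustrative, and a real check would also use the accelerometer and magnetometer readings as described.

```python
# Illustrative sketch (not the exact patented procedure): sum gyroscope samples
# recorded while the IMU is carried from the monitoring pier to the target platform
# and check whether the accumulated rotation stays within a tolerance.
import numpy as np

def accumulated_rotation_deg(gyro_samples_dps, dt):
    """gyro_samples_dps: (N, 3) angular rates in deg/s; dt: sample period in s.
    Returns the accumulated rotation about each axis in degrees (small-angle sum)."""
    return np.sum(np.asarray(gyro_samples_dps, dtype=float) * dt, axis=0)

def planes_parallel(gyro_samples_dps, dt, tol_deg=0.5):
    """True if the net rotation between the two poses is within tol_deg on every axis."""
    rotation = accumulated_rotation_deg(gyro_samples_dps, dt)
    return bool(np.all(np.abs(rotation) <= tol_deg))
```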
(5) The camera is preheated, shooting frequency is set, and a measuring point target is continuously monitored in real time.
The camera set is started and allowed to warm up to its normal operating temperature, and the shooting frequency is set; the specific frequency should take into account the equipment, the size of the foundation pit and the working environment, and for foundation pit monitoring a frequency of not less than 10 shots per minute is recommended. Real-time monitoring images p_c^t are then acquired, where c denotes the number of the shooting camera and t is the time-series number of the image, representing its shooting order.
(6) Dividing the image into regions of interest, locating the target pixels, and solving the grid-target corner points in real time by edge computing.

The images are solved at the camera end in real time by edge computing, i.e. image processing is performed by a processor integrated with the camera; the subsequent measuring-point displacement and background-marker displacement calculations are likewise performed directly at the camera end.
To locate the region of interest containing each target image, BLOB analysis is used to find connected domains of identical or similar grey pixels in the image, the grey-level abrupt-change extreme points of each connected domain are extracted, these extreme-point pixels are fitted to obtain an edge image, and the edge image is compared with the target shape to obtain the region of interest representing each target in the image.
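The ROI step can be sketched with standard connected-component (BLOB) analysis, for example with OpenCV; the threshold, the morphological kernel size and the rectangularity test below are illustrative assumptions rather than values fixed by this method.

```python
# Minimal ROI sketch: binarise the grey image, close small gaps so a grid target
# forms one blob, label connected components, and keep components whose area fills
# most of their bounding box (a rough stand-in for the shape comparison above).
import cv2
import numpy as np

def locate_target_rois(gray, min_area=500, rectangularity=0.8):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    rois = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        if area / float(w * h) >= rectangularity:
            rois.append((x, y, w, h))           # bounding box of a candidate target
    return rois
```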
Target corner points are then solved in each region of interest with the SUSAN corner detection operator. The SUSAN algorithm uses a circular template whose centre serves as the nucleus; the grey value of every pixel in the circular area is compared with that of the nucleus, and the pixels whose grey values are close to the nucleus form the USAN (Univalue Segment Assimilating Nucleus) area.
The grey value of each point in the template circle is compared with that of the nucleus by

c(x, y) = 1 if |f(x, y) - f(x_0, y_0)| ≤ t, and c(x, y) = 0 otherwise.

The size of the USAN area is then calculated as

S(x_0, y_0) = Σ_{(x, y) ∈ M} c(x, y)

where (x_0, y_0) is the position of the nucleus (core point); (x, y) is the position of any other pixel in the template M(x, y); f(x_0, y_0) and f(x, y) are the grey values of the pixels at (x_0, y_0) and (x, y); t is the grey-difference threshold; and the function c is the comparison output, accumulated over all pixels in the template taking part in the operation. Taking the geometric threshold g = S_max/2, where S_max is the maximum S value that a template circle can attain, the corner response function is

R(x_0, y_0) = g - S(x_0, y_0) if S(x_0, y_0) < g, and R(x_0, y_0) = 0 otherwise.

Local non-maximum suppression is then applied to obtain the target grid corner points (x_ij^t, y_ij^t), where i is the number of the target in whose region of interest the grid corner lies, j is the number of the grid corner within that target along the coordinate-axis direction, and t is the time-series number of the image.
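A straightforward, unoptimised sketch of the SUSAN response and non-maximum suppression defined above follows; the template radius, grey-difference threshold and suppression window are conventional illustrative choices, not values prescribed by this document.

```python
# SUSAN corner sketch implementing c, S, g = S_max/2 and R as written above,
# followed by local non-maximum suppression on the response map.
import numpy as np

def susan_corners(gray, t=25, radius=3, nms_size=5):
    gray = np.asarray(gray, dtype=np.float64)
    h, w = gray.shape
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)
                     if dy * dy + dx * dx <= radius * radius]   # circular template M
    g = len(offs) / 2.0                          # geometric threshold g = S_max / 2
    response = np.zeros_like(gray)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            f0 = gray[y, x]
            s = sum(1 for dy, dx in offs if abs(gray[y + dy, x + dx] - f0) <= t)
            if s < g:                            # R = g - S when S < g, else 0
                response[y, x] = g - s
    corners, k = [], nms_size // 2
    for y in range(k, h - k):
        for x in range(k, w - k):
            r = response[y, x]
            if r > 0 and r == response[y - k:y + k + 1, x - k:x + k + 1].max():
                corners.append((x, y))           # pixel coordinates of a grid corner
    return corners
```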
(7) For the initial frame image, calculating the scale factor μ_i corresponding to each target and the initial coordinates of each measuring point and calibration point.
The coordinates of the target grid corners in each region of interest of the image are obtained from step (6). Using a template-matching-type algorithm, the preset physical dimensions of the target are compared with the actually recognised pixel spacing of its grid corners to obtain the scale factor μ_i corresponding to each target. The feature-corner coordinates (x_ij^0, y_ij^0) in the image coordinate system are then converted to the initial coordinates of each target grid corner in the camera coordinate system,

(X_ij^0, Y_ij^0) = μ_i (x_ij^0, y_ij^0)

where X_ic represents the coordinate value of the measuring point in the direction orthogonal to the enclosure wall and Y_ic represents the coordinate value in the vertical direction. These are the real coordinates of the pixel points corresponding to the measuring-point targets and the calibration target.
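The scale-factor step can be sketched as below, assuming the physical pitch of the grid target is known and that one row of recognised corners is available; the ordering by image x and the variable names are illustrative simplifications.

```python
# Sketch of mu_i: known grid pitch (mm) divided by the mean pixel spacing of
# adjacent recognised corners, then pixel coordinates scaled into physical units.
import numpy as np

def scale_factor(row_corner_pixels, grid_pitch_mm):
    """row_corner_pixels: (N, 2) pixel coordinates of one row of grid corners."""
    pts = np.asarray(row_corner_pixels, dtype=np.float64)
    pts = pts[np.argsort(pts[:, 0])]                  # order the corners along the row
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return grid_pitch_mm / gaps.mean()                # mu_i in mm per pixel

def to_camera_coords(corner_pixels, mu):
    """Convert pixel coordinates to physical X (horizontal) / Y (vertical) coordinates."""
    return np.asarray(corner_pixels, dtype=np.float64) * mu
```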
(8) For each subsequent frame, solving the coordinates of each measuring point and calibration point and subtracting the initial frame to obtain the noisy displacement (ΔX'_i^t, ΔY'_i^t) of the measuring point, then solving the displacement of the stationary background marker and correcting with it to obtain the denoised displacement (ΔX_i^t, ΔY_i^t) of the measuring point.

For each subsequent image, the time-series coordinates (X_ij^t, Y_ij^t) of the target grid corners in the camera coordinate system are calculated with the scale factors obtained in step (7) for the different targets, and the corresponding initial coordinates (X_ij^0, Y_ij^0) are subtracted to obtain the noisy displacement of the measuring point,

ΔX'_i^t = (1/m) Σ_{j=1..m} (X_ij^t - X_ij^0),  ΔY'_i^t = (1/m) Σ_{j=1..m} (Y_ij^t - Y_ij^0)

where m is the smaller number of grid corners for the single region of interest, i.e. the smaller of the number of grid corners recognised at time t and the number in the initial frame, so that a missed grid corner does not affect the measuring-point displacement.

After the measuring-point displacement is calculated, the error caused by displacement of the camera itself is corrected by calculating the displacement of the stationary background marker. The calibration target on the camera monitoring pier is selected as the stationary background object, and its displacement is obtained in the same way as for the measuring-point targets; the calibration-target displacement of the same image frame is then subtracted from the noisy measuring-point displacement to obtain the denoised measuring-point displacement (ΔX_i^t, ΔY_i^t). The above steps are repeated for the subsequently acquired image sequence, so that the measuring-point displacement is accurately monitored in real time.
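Step (8) can be sketched as follows, assuming the corner lists of the initial and current frames are ordered consistently; the averaging over the m shared corners and the subtraction of the calibration-target displacement follow the description above, while the function names are illustrative.

```python
# Sketch of the displacement step: average the per-corner displacement over the
# m corners shared with the initial frame, then subtract the apparent displacement
# of the stationary calibration target measured in the same frame.
import numpy as np

def noisy_displacement(initial_coords, current_coords):
    """initial_coords, current_coords: (N, 2) physical corner coordinates,
    assumed to be ordered consistently between frames."""
    m = min(len(initial_coords), len(current_coords))   # tolerate missed corners
    diff = np.asarray(current_coords[:m], dtype=float) - np.asarray(initial_coords[:m], dtype=float)
    return diff.mean(axis=0)                            # (dX, dY) including camera motion

def denoised_displacement(point_disp, calibration_disp):
    """Correct camera self-motion using the stationary calibration target's motion."""
    return np.asarray(point_disp, dtype=float) - np.asarray(calibration_disp, dtype=float)
```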
Further, in S1, since the pit edge line is generally a straight line or approximately a straight line, the arrangement of the monitoring points is also approximately collinear, and in order to ensure that the targets arranged on the monitoring points are all within the field of view of the camera, the working base point should be arranged near the pit edge extension line, and the vertical distance between the working base point and the pit edge extension line should be adjusted according to the camera field angle and the measuring point distance.
Further, in S3, when the camera set detects that the ambient infrared illuminance is insufficient, it automatically switches on the infrared lamp dot matrix surrounding the lens; the lamps emit infrared light that illuminates the targets and background objects and, after diffuse reflection, is received by the monitoring camera to form a distinguishable grey image.
Further, in S4, each IMU sensor must be calibrated before use. Calibrating the IMU magnetometer: rotate the sensor one full turn with its front face up and one full turn with its back face up, and use the difference between the extreme values in the output data as the amplitude to correct the magnetometer.

Calibrating the IMU accelerometer: place each face of the sensor downwards in turn, record the accelerometer reading under gravity, and compute the corrected scale factor.

Calibrating the IMU gyroscope: the dominant gyroscope error is temperature drift, i.e. error caused by the rise of the operating temperature; power consumption and heat dissipation should therefore be weighed carefully when selecting the IMU, the actual operating temperature should be monitored, and measurement should be suspended when the IMU core temperature is too high, so as to reduce temperature-drift error.
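The magnetometer correction described above can be sketched as follows; treating the midpoint of the extremes as the offset and the half-amplitude as the scale is an assumption consistent with the text, not a formula quoted from it.

```python
# Magnetometer correction sketch: after one full face-up and one full face-down
# rotation, take the per-axis extremes of the raw readings, remove the midpoint
# (offset) and rescale by the half-amplitude.
import numpy as np

def magnetometer_correction(samples):
    """samples: (N, 3) raw magnetometer readings collected during the two rotations.
    Returns (offset, scale) such that corrected = (raw - offset) / scale."""
    samples = np.asarray(samples, dtype=np.float64)
    high, low = samples.max(axis=0), samples.min(axis=0)
    offset = (high + low) / 2.0        # per-axis offset
    scale = (high - low) / 2.0         # per-axis half-amplitude
    return offset, scale
```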
Further, in S6, a planar grid feature target is selected so that the target corner points can be located accurately later, which improves the robustness of the measuring-point displacement result. Because the invention integrates displacement monitoring of multiple measuring points, a single camera shoots several targets, i.e. several regions of interest must be located in the image; these correspond to each measuring point within the camera's depth-of-field range and to the calibration target used to correct camera offset.
Compared with the prior art, the invention has the following beneficial effects:
1. Based on monocular vision measurement, the invention realizes real-time continuous monitoring of the integrated horizontal and vertical displacements of multiple foundation pit measuring points with a single camera, improving monitoring efficiency and economy.
2. The invention applies active infrared technology so that the equipment performs well under complex working conditions, giving better environmental adaptability and user friendliness.
3. The algorithm based on the stationary background marker provides adaptive correction of the camera's own measurement error and effectively improves measurement accuracy.
4. Edge computing is used to process the images at the camera end in real time, with the camera unit calculating the target coordinates, so that continuous real-time monitoring of foundation pit deformation is achieved.
The foregoing description is only an overview of the present invention, and is intended to provide a better understanding of the present invention, as it is embodied in the following description, with reference to the preferred embodiments of the present invention and the accompanying drawings. Specific embodiments of the present invention are given in detail by the following examples and the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic view of a camera monitoring pier of the present invention.
Fig. 3 is a schematic diagram of the corner point recognition of the square grid of the present invention.
Fig. 4 is a schematic diagram of the SUSAN operator of the present invention.
Fig. 5 is a general schematic of the monitoring system of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The features of the following examples and embodiments may be combined with each other without any conflict.
Referring to fig. 1-5, a foundation pit multi-measuring-point integrated monitoring method based on monocular vision includes:
(1) Surveying the foundation pit and selecting the positions of the monitoring points and working base points. Displacement monitoring points for the foundation pit retaining wall are selected according to the technical standard for monitoring of building foundation pit engineering and the standard for building deformation measurement: the monitoring points are arranged on the crown beam of the foundation pit at a spacing of 10-15 m, in the middle of each side of the pit, at external corners, and at locations adjacent to protected structures.

The camera is arranged at a fixed position in the foundation pit environment to reduce measurement errors caused by changes of the camera position. According to the building deformation measurement specification, the displacement of the area farther from the edge of the building foundation pit than twice the pit depth is negligible, so this area is taken as the placement range for the vision-measurement working base point.
Further, since the pit edge line is generally straight or nearly straight, the monitoring points are also arranged approximately collinearly. To ensure that the targets on the monitoring points all fall within the camera's field of view, the working base point should be placed near the extension line of the pit edge, and its perpendicular distance from that extension line should be adjusted according to the camera's angle of view and the spacing of the measuring points. In particular, when a pit edge is very long and the monitoring points on one side exceed the maximum depth of field of a single camera, two cameras shooting toward each other synchronously are used, with working base points arranged at the two ends of that edge, so that the monitoring points along the long edge can be monitored over the full distance.
(2) Determining the camera type (an infrared camera), selecting camera and lens parameters, and constructing a synchronous monitoring system integrating horizontal and vertical displacement of multiple measuring points.

To make the monitoring method as economical as possible, the invention provides a synchronous monitoring scheme that integrates the horizontal and vertical displacements of multiple measuring points: camera and lens parameters are determined from the monitoring requirements, several targets are captured in one image by arranging the working base point and the targets within the depth of field of the lens, and the real displacement of each target is calculated separately, so that a single camera monitors multiple measuring points synchronously.

Because the foundation pit must be monitored over a long period, the working camera set has to obtain clear target images under all kinds of conditions; compared with a visible-light camera, an infrared camera with better environmental adaptability is therefore used for image acquisition, and since the infrared camera outputs grey-scale images the subsequent processing is also more convenient. Under good illumination the infrared camera directly receives infrared-band light to form an image; when the illuminance is insufficient, it generates active infrared light through its infrared-emitting diodes to supplement the lighting.
Selecting camera parameters: based on the requirement of foundation pit monitoring, the infrared camera with low frame frequency, high signal-to-noise ratio and large dynamic range is selected, and the resolution is determined according to the monitored foundation pit size and the arrangement condition of the measuring points.
Selecting the camera lens: since the monitoring points on the foundation pit enclosure wall are usually distributed along a straight or nearly straight line, a lens with a small angle of view and a large depth of field is used for shooting (an angle of view of 5-10 degrees is recommended). In practical engineering applications, a suitable lens is determined from the accuracy requirement using the relationships among working distance, field of view, camera sensor size, focal length and depth of field, with cost-effectiveness taken into account.
(3) Arranging the measuring-point targets, casting the camera monitoring pier at the working base point, installing the camera set and supplementary lighting equipment, determining the background marker and arranging the calibration target.

Installing the measuring-point targets: a target base is installed at each monitoring point and fixed with a rivet gun. Each target is mounted on its base through a universal joint and a mechanical lifting rod so that its angle and height can be adjusted flexibly; the targets are denoted T_1, T_2, T_3, ..., T_n in sequence.

Making the base and arranging the camera unit: a camera monitoring pier is cast at the working base point from cast-in-situ reinforced concrete and a forced-centering chassis is installed on it. The camera is mounted on the forced-centering chassis and an infrared fill light is arranged. When the camera unit detects that the ambient infrared illuminance is insufficient, it automatically switches on the infrared lamp dot matrix surrounding the lens; the lamps emit infrared light that illuminates the targets and background objects and, after diffuse reflection, is received by the monitoring camera to form a distinguishable grey image.

Selecting the background marker and calibration target: for each camera, one or more stationary background markers must be identified in the captured image so that the camera's offset error can be corrected later. Since the working base points can be arranged so that two cameras on the same side of the foundation pit are mutually visible, and the monitoring piers are stationary points stably visible in the environment, the camera monitoring pier is used as the background marker and a grid target arranged on the surface of the monitoring pier serves as the calibration target.
(4) Calibrating an inertial measurement unit (IMU) and adjusting the poses of the targets and the camera with the IMU so that each target plane is parallel to the corresponding camera plane.

The IMU is placed flat on the working base point and on the target platform in turn to obtain two magnetometer readings; the target's universal joint and the centering chassis at the working base point are then adjusted until the magnetometer readings at the two positions agree, which completes a first calibration of object-plane/image-plane parallelism. The result can then be checked with the gyroscope: place the IMU on the forced-centering chassis of the camera monitoring pier and, after it has settled, move it as steadily as possible to the target platform corresponding to that working base point; once the sensor is stable again, integrate the gyroscope and accelerometer values recorded during the move to obtain the translation and the Euler-angle rotation, and hence the relative pose of the IMU at the target position and at the monitoring pier. If the change is not within the allowed error band, the target's universal joint is adjusted accordingly and the procedure is repeated until the target plane is parallel to the corresponding camera plane.
Further, each IMU sensor must be calibrated before use:
Calibrating the IMU magnetometer: rotate the sensor one full turn with its front face up and one full turn with its back face up, and use the difference between the extreme values in the output data as the amplitude to correct the magnetometer;
Calibrating the IMU accelerometer: place each face of the sensor downwards in turn, record the accelerometer reading under gravity, and compute the corrected scale factor;
Calibrating the IMU gyroscope: the dominant gyroscope error is temperature drift, i.e. error caused by the rise of the operating temperature, so power consumption and heat dissipation should be weighed carefully when selecting the IMU, the actual operating temperature should be monitored, and measurement should be suspended when the IMU core temperature is too high, so as to reduce temperature-drift error.
(5) The camera is preheated, shooting frequency is set, and a measuring point target is continuously monitored in real time.
The camera set is started and allowed to warm up to its normal operating temperature, and the shooting frequency is set; the specific frequency should take into account the equipment, the size of the foundation pit and the working environment, and for foundation pit monitoring a frequency of not less than 10 shots per minute is recommended. Real-time monitoring images p_c^t are then acquired, where c denotes the number of the shooting camera and t is the time-series number of the image, representing its shooting order.
(6) Dividing the image into regions of interest, locating the target pixels, and solving the grid-target corner points in real time by edge computing.

The invention selects a planar grid feature target, so that the target corner points can be located accurately later and the robustness of the measuring-point displacement result is improved. To speed up the solution of the target corner points and simplify the processing steps, the images are solved at the camera end in real time by edge computing, i.e. image processing runs on a processor integrated with the camera, and the subsequent measuring-point displacement and background-marker displacement calculations are likewise performed directly at the camera end.

To locate the region of interest containing each target image, BLOB analysis is used to find connected domains of identical or similar grey pixels in the image, the grey-level abrupt-change extreme points of each connected domain are extracted, these extreme-point pixels are fitted to obtain an edge image, and the edge image is compared with the target shape to obtain the region of interest representing each target in the image.

Further, because the invention integrates displacement monitoring of multiple measuring points, a single camera shoots several targets, i.e. several regions of interest must be located in the image; these correspond to each measuring point within the camera's depth-of-field range and to the calibration target used to correct camera offset.

Target corner points are then solved in each region of interest with the SUSAN corner detection operator (Fig. 4). The SUSAN algorithm uses a circular template whose centre serves as the nucleus; the grey value of every pixel in the circular area is compared with that of the nucleus, and the pixels whose grey values are close to the nucleus form the USAN (Univalue Segment Assimilating Nucleus) area.
The grey value of each point in the template circle is compared with that of the nucleus by

c(x, y) = 1 if |f(x, y) - f(x_0, y_0)| ≤ t, and c(x, y) = 0 otherwise.

The size of the USAN area is then calculated as

S(x_0, y_0) = Σ_{(x, y) ∈ M} c(x, y)

where (x_0, y_0) is the position of the nucleus (core point); (x, y) is the position of any other pixel in the template M(x, y); f(x_0, y_0) and f(x, y) are the grey values of the pixels at (x_0, y_0) and (x, y); t is the grey-difference threshold; and the function c is the comparison output, accumulated over all pixels in the template taking part in the operation. Taking the geometric threshold g = S_max/2, where S_max is the maximum S value that a template circle can attain, the corner response function is

R(x_0, y_0) = g - S(x_0, y_0) if S(x_0, y_0) < g, and R(x_0, y_0) = 0 otherwise.

Local non-maximum suppression is then applied to obtain the target grid corner points (x_ij^t, y_ij^t), where i is the number of the target in whose region of interest the grid corner lies, j is the number of the grid corner within that target along the coordinate-axis direction, and t is the time-series number of the image.
(7) For the initial frame image, calculating the scale factor μ_i corresponding to each target and the initial coordinates of each measuring point and calibration point.

The coordinates of the target grid corners in each region of interest of the image are obtained from step (6). Using a template-matching-type algorithm, the preset physical dimensions of the target are compared with the actually recognised pixel spacing of its grid corners to obtain the scale factor μ_i corresponding to each target. The feature-corner coordinates (x_ij^0, y_ij^0) in the image coordinate system are then converted to the initial coordinates of each target grid corner in the camera coordinate system,

(X_ij^0, Y_ij^0) = μ_i (x_ij^0, y_ij^0)

where X_ic represents the coordinate value of the measuring point in the direction orthogonal to the enclosure wall and Y_ic represents the coordinate value in the vertical direction. These are the real coordinates of the pixel points corresponding to the measuring-point targets and the calibration target.
(8) For each subsequent frame, solving the coordinates of each measuring point and calibration point and subtracting the initial frame to obtain the noisy displacement (ΔX'_i^t, ΔY'_i^t) of the measuring point, then solving the displacement of the stationary background marker and correcting with it to obtain the denoised displacement (ΔX_i^t, ΔY_i^t) of the measuring point.

For each subsequent image, the time-series coordinates (X_ij^t, Y_ij^t) of the target grid corners in the camera coordinate system are calculated with the scale factors obtained in step (7) for the different targets, and the corresponding initial coordinates (X_ij^0, Y_ij^0) are subtracted to obtain the noisy displacement of the measuring point,

ΔX'_i^t = (1/m) Σ_{j=1..m} (X_ij^t - X_ij^0),  ΔY'_i^t = (1/m) Σ_{j=1..m} (Y_ij^t - Y_ij^0)

where m is the smaller number of grid corners for the single region of interest, i.e. the smaller of the number of grid corners recognised at time t and the number in the initial frame, so that a missed grid corner does not affect the measuring-point displacement.

After the measuring-point displacement is calculated, the error caused by displacement of the camera itself is corrected by calculating the displacement of the stationary background marker. The calibration target on the camera monitoring pier is selected as the stationary background object, and its displacement is obtained in the same way as for the measuring-point targets; the calibration-target displacement of the same image frame is then subtracted from the noisy measuring-point displacement to obtain the denoised measuring-point displacement (ΔX_i^t, ΔY_i^t). The above steps are repeated for the subsequently acquired image sequence, so that the measuring-point displacement is accurately monitored in real time.
The foregoing embodiments are provided for further explanation of the present invention and are not to be construed as limiting the scope of the present invention, and some insubstantial modifications and variations of the present invention, which are within the scope of the invention, will be suggested to those skilled in the art in light of the foregoing teachings.

Claims (9)

1. A foundation pit multi-measuring-point integrated monitoring method based on monocular vision comprises the following steps:
(1) The foundation pit is surveyed, and positions of monitoring points and working base points are selected;
(2) Determining the types of cameras, selecting parameters of the cameras and the lenses, and constructing a synchronous monitoring system integrating horizontal displacement and vertical displacement of multiple measuring points;
(3) Arranging a measuring point target, pouring a working foundation point camera monitoring pier, installing a camera set and light supplementing equipment, determining a background marker and arranging a calibration target;
(4) Calibrating an Inertial Measurement Unit (IMU), and adjusting the pose of the target and the camera by using the IMU so as to realize that the plane of the target is parallel to the corresponding plane of the camera;
(5) Preheating the camera, setting the shooting frequency, and continuously monitoring the measuring-point targets in real time;
(6) Dividing the image into regions of interest by edge computing, locating the target pixels, and solving the grid-target corner points in real time;
(7) For the initial frame image, calculating the scale factor μ_i corresponding to each target and the initial coordinates of each measuring point and calibration point;
(8) For each subsequent frame, solving the coordinates of each measuring point and calibration point, subtracting the initial frame to obtain the noisy displacement of the measuring point, solving the displacement of the stationary background marker, and correcting with it to obtain the denoised displacement of the measuring point, so that the measuring-point displacement is accurately monitored in real time.
2. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (1) specifically includes:
selecting displacement monitoring points for the foundation pit retaining wall according to the technical standard for monitoring of building foundation pit engineering and the standard for building deformation measurement, wherein the monitoring points are arranged on the crown beam of the foundation pit at a spacing of 10-15 m, in the middle of each side of the pit, at external corners, and at locations adjacent to protected structures;
arranging the camera at a fixed position in the foundation pit environment to reduce measurement errors caused by changes of the camera position; according to the building deformation measurement specification, the displacement of the area farther from the edge of the building foundation pit than twice the pit depth is negligible, so this area is taken as the placement range for the vision-measurement working base point.
3. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (2) specifically includes:
an infrared camera with better environmental adaptability is used for image acquisition; under good illumination the infrared camera directly receives infrared-band light to form an image, and when the illuminance is insufficient it generates active infrared light through its infrared-emitting diodes to supplement the lighting;
selecting camera parameters: based on the requirement of foundation pit monitoring, selecting an infrared camera with low frame frequency, high signal-to-noise ratio and large dynamic range, wherein the resolution is determined according to the size of the monitored foundation pit and the arrangement condition of measuring points;
selecting a camera lens: aiming at the characteristic that monitoring points of the foundation pit enclosure wall are often in linear or nearly linear distribution, a lens with a small angle of view and a large depth of field is used for shooting, and the angle of view is selected to be 5-10 degrees.
4. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (3) specifically includes:
installing the measuring-point targets: a target base is installed at each monitoring point and fixed with a rivet gun; each target is mounted on its base through a universal joint and a mechanical lifting rod so that its angle and height can be adjusted flexibly, and the targets are denoted T_1, T_2, T_3, ..., T_n in sequence;
Manufacturing a base and arranging a camera unit: manufacturing a camera monitoring pier at a working base point by using a cast-in-situ reinforced concrete method, and installing a forced centering chassis; mounting the camera on the forced centering chassis, and arranging an infrared light supplementing lamp;
selecting the background marker and calibration target: for each camera, one or more stationary background markers must be identified in the captured image so that the camera's offset error can be corrected later; the camera monitoring pier is used as the background marker, and a grid target arranged on the surface of the monitoring pier serves as the calibration target.
5. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (4) specifically includes:
the IMU is placed flat on the working base point and on the target platform in turn to obtain two magnetometer readings; the target's universal joint and the centering chassis at the working base point are then adjusted until the magnetometer readings at the two positions agree, which completes a first calibration of object-plane/image-plane parallelism; the result can then be checked with the gyroscope: place the IMU on the forced-centering chassis of the camera monitoring pier and, after it has settled, move it as steadily as possible to the target platform corresponding to that working base point; once the sensor is stable again, integrate the gyroscope and accelerometer values recorded during the move to obtain the translation and the Euler-angle rotation, and hence the relative pose of the IMU at the target position and at the monitoring pier; if the change is not within the allowed error band, the target's universal joint is adjusted accordingly and the procedure is repeated until the target plane is parallel to the corresponding camera plane.
6. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (5) specifically includes:
starting the camera set, waiting for the camera to warm up to its normal operating temperature, and setting the shooting frequency, the specific frequency taking into account the equipment, the size of the foundation pit and the working environment, with a frequency of not less than 10 shots per minute recommended for foundation pit monitoring; real-time monitoring images p_c^t are then acquired, where c denotes the number of the shooting camera and t is the time-series number of the image, representing its shooting order.
7. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (6) specifically includes:
the images are solved at the camera end in real time by edge computing, i.e. image processing is performed by a processor integrated with the camera, and the subsequent measuring-point displacement and background-marker displacement calculations are likewise performed directly at the camera end;
the region of interest containing each target image is located: BLOB analysis is used to find connected domains of identical or similar grey pixels in the image, the grey-level abrupt-change extreme points of each connected domain are extracted, the extreme-point pixels are fitted to obtain an edge image, and the edge image is compared with the target shape to obtain the region of interest representing each target in the image;
target corner points are solved in each region of interest with the SUSAN corner detection operator: the SUSAN algorithm uses a circular template whose centre serves as the nucleus, the grey value of every pixel in the circular area is compared with that of the nucleus, and the pixels whose grey values are close to the nucleus form the USAN (Univalue Segment Assimilating Nucleus) area;
the grey value of each point in the template circle is compared with that of the nucleus by

c(x, y) = 1 if |f(x, y) - f(x_0, y_0)| ≤ t, and c(x, y) = 0 otherwise;

the size of the USAN area is then calculated as

S(x_0, y_0) = Σ_{(x, y) ∈ M} c(x, y)

where (x_0, y_0) is the position of the nucleus (core point); (x, y) is the position of any other pixel in the template M(x, y); f(x_0, y_0) and f(x, y) are the grey values of the pixels at (x_0, y_0) and (x, y); t is the grey-difference threshold; and the function c is the comparison output, accumulated over all pixels in the template taking part in the operation;
taking the geometric threshold g = S_max/2, where S_max is the maximum S value that a template circle can attain, the corner response function is

R(x_0, y_0) = g - S(x_0, y_0) if S(x_0, y_0) < g, and R(x_0, y_0) = 0 otherwise;

local non-maximum suppression is then applied to obtain the target grid corner points (x_ij^t, y_ij^t), where i is the number of the target in whose region of interest the grid corner lies, j is the number of the grid corner within that target along the coordinate-axis direction, and t is the time-series number of the image.
8. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (7) specifically includes:
for the initial frame image, the scale factor μ_i corresponding to each target and the initial coordinates of each measuring point and calibration point are calculated;
the coordinates of the target grid corners in each region of interest of the image are obtained from step (6); using a template-matching-type algorithm, the preset physical dimensions of the target are compared with the actually recognised pixel spacing of its grid corners to obtain the scale factor μ_i of each target, and the feature-corner coordinates (x_ij^0, y_ij^0) in the image coordinate system are converted to the initial coordinates of each target grid corner in the camera coordinate system, (X_ij^0, Y_ij^0) = μ_i (x_ij^0, y_ij^0),
where X_ic represents the coordinate value of the measuring point in the direction orthogonal to the enclosure wall and Y_ic represents the coordinate value in the vertical direction; these are the real coordinates of the pixel points corresponding to the measuring-point targets and the calibration target.
9. The foundation pit multi-measuring-point integrated monitoring method based on monocular vision as set forth in claim 1, wherein the step (8) specifically includes:
for each subsequent image, the time-series coordinates (X_ij^t, Y_ij^t) of the target grid corners in the camera coordinate system are calculated with the scale factors obtained in step (7) for the different targets, and the corresponding initial coordinates (X_ij^0, Y_ij^0) are subtracted to obtain the noisy displacement of the measuring point, ΔX'_i^t = (1/m) Σ_{j=1..m} (X_ij^t - X_ij^0), ΔY'_i^t = (1/m) Σ_{j=1..m} (Y_ij^t - Y_ij^0),
where m is the smaller number of grid corners for the single region of interest, i.e. the smaller of the number of grid corners recognised at time t and the number in the initial frame, so as to avoid the influence of missed grid corners on the measuring-point displacement;
after the measuring-point displacement is calculated, the error caused by displacement of the camera itself is corrected by calculating the displacement of the stationary background marker; the calibration target on the camera monitoring pier is selected as the stationary background object and its displacement is obtained in the same way as for the measuring-point targets, and the calibration-target displacement of the same image frame is subtracted from the noisy measuring-point displacement to obtain the denoised measuring-point displacement (ΔX_i^t, ΔY_i^t); the above steps are repeated for the subsequently acquired image sequence, so that the measuring-point displacement is accurately monitored in real time.
CN202311517458.XA 2023-11-15 2023-11-15 Foundation pit multi-measuring-point integrated monitoring method based on monocular vision Pending CN117488887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311517458.XA CN117488887A (en) 2023-11-15 2023-11-15 Foundation pit multi-measuring-point integrated monitoring method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311517458.XA CN117488887A (en) 2023-11-15 2023-11-15 Foundation pit multi-measuring-point integrated monitoring method based on monocular vision

Publications (1)

Publication Number Publication Date
CN117488887A true CN117488887A (en) 2024-02-02

Family

ID=89670540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311517458.XA Pending CN117488887A (en) 2023-11-15 2023-11-15 Foundation pit multi-measuring-point integrated monitoring method based on monocular vision

Country Status (1)

Country Link
CN (1) CN117488887A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination