CN117490579A - Foundation pit displacement monitoring system based on image vision processing - Google Patents

Foundation pit displacement monitoring system based on image vision processing

Info

Publication number
CN117490579A
Authority
CN
China
Prior art keywords
image
displacement
module
distortion
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410005871.6A
Other languages
Chinese (zh)
Inventor
徐向阳
杨浩
李秋乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN202410005871.6A
Publication of CN117490579A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/16 - Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 - Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 - Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a foundation pit displacement monitoring system based on image vision processing, which comprises an image sensing module, a data processing module, a monitoring module and an intelligent early warning module. The data processing module comprises sub-modules for image distortion calibration, intelligent division and extraction of image pixels, and high-precision calculation of image pixel displacement. The distortion-calibrated image is processed by a deep-learning-based machine vision algorithm to realize pixel point extraction, region division, and calculation of the time-varying average displacement of each pixel region. The system adopts an integrated design combining image acquisition, data processing, monitoring visualization, and intelligent detection and early warning; it is simple to operate, low in cost, and able to intelligently monitor displacement inside the foundation pit in real time, providing an efficient and reliable foundation pit monitoring solution.

Description

Foundation pit displacement monitoring system based on image vision processing
Technical Field
The invention belongs to the field of machine vision, and particularly relates to a foundation pit displacement monitoring system based on image vision processing.
Background
Geometric deformation monitoring is an important method for detecting abnormal behavior of a building. In past engineering projects, monitoring relied mainly on manual observation by engineering personnel using geodetic instruments; currently, engineering personnel commonly monitor foundation pit displacement with professional measuring instruments such as theodolites and total stations. However, although manual monitoring is widely applied, in practice it suffers from complicated operation, high labor cost, data errors caused by human factors that cannot be eliminated, and the inability to monitor and acquire data in real time.
As engineering sites grow more complex and construction requirements increase, engineering projects demand ever better monitoring to ensure the safety of the construction site and its surroundings. With the continuous development of machine vision, improving monitoring accuracy and efficiency while reducing monitoring cost has become the main development direction for practical engineering monitoring of complex environments, which calls for a highly intelligent, low-cost monitoring system.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior related monitoring technology, the invention provides a foundation pit displacement monitoring system based on image vision processing.
The technical scheme of the invention is as follows:
The invention discloses a foundation pit displacement monitoring system based on image vision processing, which comprises an image sensing module, a data processing module, a monitoring module and an intelligent early warning module. The image sensing module illuminates the inner surface of the target pit with LED illumination lamps and captures images of the pit interior with a high-precision industrial camera; the images are then transmitted to the data processing module by wired or wireless means. The data processing module comprises sub-modules for image distortion calibration, intelligent division and extraction of image pixels, and high-precision calculation of image pixel displacement. The distortion-calibrated image is processed by a deep-learning-based machine vision algorithm to realize pixel point extraction, region division, and calculation of the time-varying average displacement of each pixel region. The average displacement value of the foundation pit can thus be obtained accurately and is transmitted to the monitoring module by wired or wireless means for operators to check.
The monitoring module is composed of a computer host terminal and a mobile device terminal. Operators can analyze visualized data through the computer host terminal and can also check the data in real time on mobile devices via 5G wireless transmission. The intelligent early warning module monitors the average pixel displacement value obtained by the data processing module against a set pixel displacement threshold range: once the displacement exceeds the threshold, an early warning is triggered, and if the monitoring data is abnormal, the error reporting module is triggered. The intelligent early warning module provides an embedded visual window in the monitoring module.
The foundation pit displacement monitoring system based on image vision processing provided by the invention adopts an integrated design, and integrates image acquisition, data processing, monitoring visualization and intelligent detection and early warning. The foundation pit monitoring system is simple and convenient to operate and low in cost, can intelligently monitor the displacement condition inside the foundation pit in real time, and is an efficient and reliable foundation pit monitoring solution.
Preferably, the image sensing module illuminates the inner surface of the target pit and the high-contrast black-and-white bi-color checkerboard used for target recognition with LED illumination lamps, captures images of the pit interior with an industrial camera, and transmits the images to the data processing module by wired or wireless means. The image sensing module specifically comprises a solar power supply sub-module, an LED illumination lamp sub-module and an industrial camera shooting sub-module. The solar power supply sub-module consists of a solar panel and a storage battery and provides continuous, environmentally friendly power for the whole monitoring system: the solar panel efficiently converts sunlight into electric energy, while the storage battery ensures stable operation even without sunlight. The LED illumination lamp sub-module, with its high brightness and low energy consumption, provides uniform and sufficient illumination of the pit interior and the target checkerboard marker under various lighting conditions, ensuring the clarity and accuracy of the captured images. The industrial camera shooting sub-module comprises an industrial camera dedicated to capturing high-quality images of the pit interior. These images are the main objects of analysis for the data processing module in subsequent displacement monitoring; the use of an industrial camera preserves the detail and accuracy of the image data and provides a reliable basis for subsequent processing.
Further preferably, the data processing module comprises an image distortion calibration module, an image pixel point intelligent dividing and extracting and image pixel point displacement calculating module.
Further preferably, the image distortion calibration module aims to correct distortion in the captured images of the pit's inner surface. The module performs distortion calibration based on the LED illumination lamp placement combined with Zhang Zhengyou's checkerboard calibration method, and the industrial camera is calibrated based on its own intrinsic parameters.
Further preferably, to meet the special requirements of foundation pit scenes, a high-contrast black-and-white bi-color checkerboard calibration plate is adopted in accordance with the requirements of Zhang Zhengyou's checkerboard calibration method, ensuring the clarity and recognition rate of the calibration plate during industrial camera capture. A 5 x 5 checkerboard is chosen for this purpose, which aims to maximize the accuracy of image capture.
Further preferably, the LED illumination lamp placement scheme is designed around the checkerboard layout. A total of 9 LED lamps are provided to ensure that the calibration plate is fully and uniformly illuminated. This arrangement balances illumination uniformity and effectiveness, takes full account of the particularities of the foundation pit environment, guarantees image capture quality, and keeps the system economical and practical, as the layout sketch below illustrates.
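For concreteness, the 5 x 5 board and nine-lamp layout can be expressed as a small grid computation. A minimal sketch, assuming the lamps sit on a uniform 3 x 3 grid over the board area and an illustrative cell size (the actual placement follows the schematic of Fig. 2, so both values are assumptions):

```python
import numpy as np

# Assumed geometry: a 5 x 5 checkerboard with cells of side `square` (mm)
# and 9 LED lamps on a uniform 3 x 3 grid over the board area. The exact
# lamp placement in the patent follows its Fig. 2; this layout is only an
# illustrative assumption.
square = 40.0                  # side of one checker cell, mm (assumed)
board = 5 * square             # full board extent, mm

ticks = np.array([1, 3, 5]) / 6 * board      # lamp rows/columns at 1/6, 1/2, 5/6
led_points = np.array([(x, y) for y in ticks for x in ticks])
print(led_points)              # 9 (x, y) lamp positions in board coordinates, mm
```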
Further preferably, the relevant implementation process and calculation process specifically for image distortion calibration are as follows:
step one: an industrial camera is used for capturing a black-and-white bi-color checkerboard calibration plate image with an LED illuminating lamp from a fixed angle, and the key of the method is feature point identification and tracking in an image processing stage. In particular, the emphasis is placed on the corner points of the checkerboard with the LED lighting lamps. The LED illuminating lamp provides a clear visual marking environment, so that accurate identification of characteristic points is greatly promoted, and accuracy of identifying marking points is improved.
Step two: the characteristic point coordinates of the LED illumination lamps are captured and substituted into the calculation of the industrial camera's distortion parameters. These parameters are then used to calibrate the image and correct any distortion that may be present. To verify the effect of the calibration, other feature points are chosen for further validation based on the calibration results.
The image distortion correction step includes:
step 1: identifying the angular point coordinates of the checkerboard with the LED illuminating lamp: the LED illuminating lamp provides more obvious visual marks, which is helpful for more accurately identifying characteristic points;
step 2: calculating distortion parameters of the industrial camera: calculating parameters of the industrial camera, including an internal reference matrix and distortion coefficients, by using the identified LED illuminating lamp characteristic point coordinates;
step 3: and correcting distortion of the photographed checkerboard image pixels based on the calculated relevant parameters. And carrying out accurate distortion correction on the image pixel points by applying the internal reference matrix and the distortion parameters which are obtained through calculation.
For step 1, the coordinates of the LED illumination lamp mark points in the captured image must be determined from their known nominal positions. Let I(x, y) denote the brightness value of the image at point (x, y) and let the threshold be T. For each detected corner point (x_i, y_i) the following calculation is performed:

calculating the corner brightness: $L = \frac{1}{N} \sum_{(x,y) \in \mathcal{N}(x_i, y_i)} I(x, y)$, where N is the number of pixels in the neighborhood $\mathcal{N}(x_i, y_i)$ of the corner point;

if L > T, then (x_i, y_i) is taken as the coordinates of an LED illumination lamp;
note that: the threshold T needs to be adjusted according to the lighting conditions of the specific environment.
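The brightness test above translates directly into code. A minimal sketch, assuming a grayscale image, OpenCV's checkerboard corner detector, and a hand-tuned threshold T (the detector choice, neighborhood size and threshold are all illustrative; the patent fixes none of them):

```python
import cv2

def led_corners(gray, pattern=(4, 4), T=200, win=5):
    """Return checkerboard corners whose neighborhood brightness L exceeds T.

    gray    -- 8-bit grayscale image I(x, y)
    pattern -- inner-corner count of the 5 x 5 board (4 x 4 inner corners)
    T       -- brightness threshold, tuned to the site lighting
    win     -- half-width of the neighborhood around each corner
    """
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return []
    hits = []
    for (x, y) in corners.reshape(-1, 2):
        xi, yi = int(round(x)), int(round(y))
        patch = gray[max(yi - win, 0):yi + win + 1,
                     max(xi - win, 0):xi + win + 1]
        L = patch.mean()       # L = (1/N) * sum of I(x, y) over the neighborhood
        if L > T:              # corner lit by an LED lamp
            hits.append((x, y))
    return hits
```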
For step 2, once the coordinates of the LED illumination lamp mark points in the captured image are determined, the correspondence between the physical coordinate system and the image coordinate system is established using the checkerboard image captured by the industrial camera. First, the position of each point in physical space and its corresponding position in the image are determined by analyzing the checkerboard image. From this coordinate correspondence, the distortion coefficients of the industrial camera can be calculated. The distortion coefficients are a quantized representation of the inherent distortion of the camera lens, covering radial distortion, tangential distortion, and so on. Using the calculated coefficients, distortion correction is applied to the captured image.

The camera intrinsic matrix A is

$$A = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

where f_x, f_y are the focal lengths and (c_x, c_y) are the image center coordinates;

The distortion coefficients include the radial distortion coefficients k_1, k_2, … and the tangential distortion coefficients p_1, p_2.
For the distortion coefficients, an optimization algorithm is generally used during camera calibration; with the least-squares method, the calculation is

$$\min_{A,\,k_1,k_2,p_1,p_2,\,\{R_i,t_i\}} \; \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| x_{ij} - \hat{x}\!\left(A, R_i, t_i, X_{ij}, k_1, k_2, p_1, p_2\right) \right\|^2$$

where x_ij is the image coordinate of the j-th corner in the i-th image; n represents the number of images and m the number of corner points; X_ij is the corresponding physical coordinate; A is the intrinsic matrix of the camera; k_1, k_2 are the radial distortion coefficients and p_1, p_2 the tangential distortion coefficients; R, t are the rotation and translation matrices of the camera; and x̂ denotes the projection of X_ij through the distorted camera model.
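In practice this minimization is what standard calibration routines solve internally. A hedged sketch using OpenCV's calibrateCamera, with the board corners as the physical points X_ij (the 5 x 5 board, its cell size and the image size are assumptions):

```python
import cv2
import numpy as np

def calibrate(image_points_per_view, square=40.0, pattern=(4, 4), size=(1920, 1080)):
    """Estimate the intrinsic matrix A and distortion coefficients.

    image_points_per_view -- list of float32 corner arrays, one per image
    square                -- checker cell size in mm (assumed value)
    Returns A and dist = [k1, k2, p1, p2, k3].
    """
    # Physical corner coordinates X_ij on the board plane (Z = 0)
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_points = [obj] * len(image_points_per_view)

    # Least-squares minimization of the reprojection error over A, dist, R_i, t_i
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, image_points_per_view, size, None, None)
    return A, dist
```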
Checkerboard image correction based on the distortion coefficients, for each pixel point (x, y) in the image:

Step 1: radial distortion correction

x_RD = x(1 + k_1 r^2 + k_2 r^4 + …)

y_RD = y(1 + k_1 r^2 + k_2 r^4 + …)

where r^2 = x^2 + y^2; (x, y) are the pre-correction image pixel coordinates and (x_RD, y_RD) the radially corrected pixel coordinates, RD being an abbreviation of Radial Distortion.

Step 2: tangential distortion correction

x_TD = x_RD + [2 p_1 x_RD y_RD + p_2 (r^2 + 2 x_RD^2)]

y_TD = y_RD + [p_1 (r^2 + 2 y_RD^2) + 2 p_2 x_RD y_RD]

where (x_TD, y_TD) are the tangentially corrected image pixel coordinates, TD being an abbreviation of Tangential Distortion.
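Written as code, the two correction steps for a single point look as follows; a minimal sketch operating on normalized image coordinates, matching the formulas above:

```python
def correct_point(x, y, k1, k2, p1, p2):
    """Apply radial then tangential correction to one normalized point (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2            # 1 + k1*r^2 + k2*r^4
    x_rd, y_rd = x * radial, y * radial            # step 1: radial correction
    x_td = x_rd + 2 * p1 * x_rd * y_rd + p2 * (r2 + 2 * x_rd * x_rd)
    y_td = y_rd + p1 * (r2 + 2 * y_rd * y_rd) + 2 * p2 * x_rd * y_rd
    return x_td, y_td                              # step 2: tangential correction
```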
Further preferably, the image pixel point intelligent dividing and extracting module may operate according to the following principle and calculation steps:
Intelligent partitioning and extraction of pixel points:
step one: and (5) dividing areas. The calibrated checkerboard image is still divided into 25 areas according to the previous black and white double colors, and each area represents one small checkerboard.
Step two: region size calculation. Assuming the pixel resolution of the image is W×H (width × height), the pixel size of each region is (W/5) × (H/5).
Step three: region numbering. The 25 regions are numbered from left to right, top to bottom. For the i-th region, all pixels are extracted whose coordinates satisfy x ∈ [((i−1) mod 5)·(W/5), ((i−1) mod 5 + 1)·(W/5)) and y ∈ [⌊(i−1)/5⌋·(H/5), (⌊(i−1)/5⌋ + 1)·(H/5)), where mod is the modulo operator and ⌊·⌋ denotes integer division.
Step four: image pixel point extraction. For each region, all pixels within the region are extracted using a deep-learning-based machine vision method.
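Before turning to the learning-based extraction in step four, the bookkeeping of steps one to three reduces to simple index arithmetic. A minimal sketch, assuming the calibrated image is an H x W array:

```python
def region_slices(W, H, n=5):
    """Number the n*n regions left-to-right, top-to-bottom; return their pixel slices."""
    w, h = W // n, H // n                 # pixel size of each region: (W/5) x (H/5)
    slices = {}
    for i in range(1, n * n + 1):         # regions numbered 1..25
        col, row = (i - 1) % n, (i - 1) // n
        slices[i] = (slice(row * h, (row + 1) * h),    # y range
                     slice(col * w, (col + 1) * w))    # x range
    return slices

# usage: pixels of region 7 of an image `img` (shape H x W)
# ys, xs = region_slices(img.shape[1], img.shape[0])[7]; patch = img[ys, xs]
```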
For step four, the pixel points of each checkerboard region are extracted with a deep-learning-based machine vision algorithm. This process divides into several steps: feature learning, region identification and pixel point extraction. The related concepts and extraction formulas are as follows:
Feature learning: a CNN is used to learn features from the image, with the formula

$$F_{conv} = \mathrm{ReLU}(W * x + b)$$

where x is the input image, W is the weight of the convolution kernel, b is the bias term, * represents the convolution operation, and ReLU is the activation function;

Region identification: used for locating each region of the checkerboard, with the formula

$$F = \mathrm{RegionProposalNetwork}(F_{conv})$$

where F represents the proposed regions and F_conv is the output of the convolutional layer; RegionProposalNetwork represents the candidate region generation network;

Pixel point extraction: pixel-level processing is carried out on each region to extract the pixel points within it, using the segmentation network:

$$P = \mathrm{SegmentationNetwork}(F)$$

where P represents the extracted pixel points and F is the identified region.

Integrating the above step formulas:

$$P = \mathrm{SegmentationNetwork}(\mathrm{RegionProposalNetwork}(\mathrm{ReLU}(W * x + b)))$$
further preferably, in order to accurately calculate the foundation pit displacement value, the image pixel displacement calculation module provides a pixel displacement calculation method based on two images: one is an image photographed for the first time after distortion correction, and the other is an image after a lapse of time after the first photographing. By analyzing the two images, we can measure the displacement of the foundation pit. The operation can be performed according to the following principle and calculation steps:
step one: and calculating the pixel displacement. For each pixel we calculate its displacement between the two images, denoted (Δx, Δy). Δx and Δy are the moving distances of the pixel point on the X-axis and the Y-axis, respectively.
Step two: and calculating the average displacement in the area. For each region in the checkerboard we calculate the average of the displacements of all pixels in that region. This average value will represent the overall pixel displacement for that region.
For the average displacement of the i-th region, the calculation formula is

$$\overline{\Delta x}_i = \frac{1}{n} \sum_{j=1}^{n} \Delta x_j, \qquad \overline{\Delta y}_i = \frac{1}{n} \sum_{j=1}^{n} \Delta y_j$$

where n is the total number of pixels in the i-th region, and Δx_j and Δy_j are the displacements of each pixel point in the i-th region.
Step three: based on the camera calibration and the internal reference matrix, a geometric relationship between the camera and the monitored target, including distance and angle, is determined to establish a relationship between the world coordinate system (actual physical space) and the image coordinate system (pixel space).
Let P_world = (X, Y, Z) be a point in the world coordinate system and P_image = (x, y) the corresponding pixel point in the image coordinate system; the conversion relationship may be expressed as

$$P_{image} = A \cdot [R \mid t] \cdot P_{world}$$

where A is the camera's intrinsic matrix and [R|t] is the rotation-translation matrix from the world coordinate system to the camera coordinate system, the product being taken in homogeneous coordinates.
Step four: judging the foundation pit displacement. The average displacement of each region represents the overall displacement of that region, and the overall displacement condition of the foundation pit is judged from the average displacements together with the relationship between the world and image coordinate systems.
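Steps two to four can be sketched together: averaging the per-pixel displacement field over each region, and projecting world points through A·[R|t] to tie pixel space to physical space. A minimal numpy sketch, assuming dense displacement fields Δx and Δy are already available (one way to estimate them is sketched later in the detailed description):

```python
import numpy as np

def region_mean_displacement(dx, dy, n=5):
    """Average per-pixel displacements (dx, dy) over each of the n*n board regions."""
    H, W = dx.shape
    h, w = H // n, W // n
    means = {}
    for i in range(1, n * n + 1):                  # regions numbered 1..25
        col, row = (i - 1) % n, (i - 1) // n
        ys = slice(row * h, (row + 1) * h)
        xs = slice(col * w, (col + 1) * w)
        means[i] = (dx[ys, xs].mean(), dy[ys, xs].mean())
    return means

def project(A, R, t, P_world):
    """Pinhole projection P_image ~ A [R|t] P_world, in homogeneous coordinates."""
    Rt = np.hstack([R, t.reshape(3, 1)])           # 3 x 4 extrinsic matrix [R|t]
    p = A @ Rt @ np.append(P_world, 1.0)           # (u, v, s)
    return p[:2] / p[2]                            # pixel coordinates (x, y)
```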
Further preferably, the monitoring module comprises a computer host terminal module and a mobile device terminal module. The computer host terminal module consists of wired transmission equipment, a computer host and a display sub-module: specific physical displacement data and analysis results are transmitted directly through the wired transmission equipment and stored on the computer host's disk, and the connected display presents the data and analysis results clearly and intuitively to the relevant personnel. The mobile device terminal module, based on 5G and other high-speed wireless transmission technologies, allows personnel from multiple parties to view the relevant data and analysis results on mobile devices anytime and anywhere; this wireless connection increases the flexibility of the monitoring system and improves the convenience of data access.
Further preferably, the intelligent early warning module comprises an image pixel displacement threshold setting module, an over-threshold early warning module and a monitoring data mutation error reporting module. The image pixel displacement threshold setting module compares and screens the specific physical displacement data against a preset threshold; setting a reasonable displacement threshold makes it possible to identify abnormal displacement conditions that could pose structural risks. The over-threshold early warning module triggers an early warning prompt when the monitored displacement exceeds the set threshold, based on the screening result of the threshold setting module. The monitoring data mutation error reporting module is responsible for error-reporting on abnormal over-threshold warning data, facilitating foundation pit monitoring and maintenance by staff from all parties.
Further preferably, this highly intelligent foundation pit displacement acquisition equipment, which integrates image acquisition, data processing, monitoring visualization, and intelligent detection and early warning, realizes real-time intelligent monitoring of the displacement conditions inside the foundation pit.
The invention has the advantages that:
1. The foundation pit displacement monitoring system based on image vision processing is highly integrated, provides intelligent and convenient monitoring operation for relevant staff at every level, is lightweight, and makes it convenient to check relevant data anytime and anywhere, follow the progress of foundation pit operations, and track the changing displacement conditions inside the pit.
2. Through multi-point displacement monitoring, the foundation pit displacement monitoring system based on image vision processing can eliminate displacement errors introduced into the high-definition industrial camera by the external environment.
3. The related supporting equipment adopted by the foundation pit displacement monitoring system based on image vision processing is low in cost, offers a labor-cost advantage over traditional techniques, greatly improves monitoring efficiency, and delivers high overall value.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of a foundation pit displacement monitoring system based on image vision processing according to the present invention;
FIG. 2 is a schematic view of a black and white bi-color checkerboard calibration plate according to the present invention;
FIG. 3 is a flow chart of an image distortion calibration module according to the present invention;
FIG. 4 is a flowchart of a pixel extraction module according to the present invention;
FIG. 5 is a flowchart of the pixel displacement calculation according to the present invention;
FIG. 6 is a flow chart of an intelligent early warning module according to the present invention;
fig. 7 is a schematic diagram of the working process of the foundation pit displacement monitoring system based on image vision processing.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in FIG. 1, the foundation pit displacement monitoring system based on image vision processing disclosed by the invention comprises an image sensing module, a data processing module, a monitoring module and an intelligent early warning module, and has the advantages of high intelligent degree, low overall cost, simplicity in operation and real-time supervision.
The invention provides a foundation pit displacement monitoring system based on image vision processing, which consists of an image sensing module, a data processing module, a monitoring module and an intelligent early warning module and is used for realizing high-precision and real-time monitoring of foundation pit displacement.
An image sensing module: the module utilizes an LED light source to illuminate the inner surface of a target foundation pit and is provided with an industrial camera to acquire images of corresponding checkerboard calibration plates in the foundation pit. The acquired picture information is transmitted to the data processing module in a wired or wireless mode, so that definition and accuracy of the image information are ensured.
A data processing module: the module adopts machine-vision-based algorithms covering image distortion calibration, picture pixel point extraction, pixel point region division, and time-varying average displacement calculation per pixel region. The application of these techniques enables the average displacement value of the pit area to be calculated from the captured pictures. In addition, the data processing module is lightweight and can upload monitoring data rapidly in real time.
A monitoring module: the module comprises a computer host terminal and a mobile equipment terminal, so operators can analyze and view visualized data results through the computer host terminal. Meanwhile, based on 5G wireless transmission technology, operators can also view real-time data through the mobile equipment terminal, improving the flexibility and convenience of the monitoring system.
An intelligent early warning module: based on the set pixel displacement threshold range, the intelligent early warning module monitors the average pixel displacement value obtained by the data processing module. Once the monitored data exceeds the threshold, the early warning module is triggered; if the monitored data is abnormal, the error reporting module is triggered. The threshold is determined by the specific requirements of the particular foundation pit. The intelligent early warning module provides an embedded visual window in the monitoring module, giving operators instant safety warnings.
Specifically, the method of the foundation pit displacement monitoring system based on image vision comprises the following steps. Fig. 7 is a schematic diagram of the working process of the foundation pit displacement monitoring system based on image vision processing.
1. The foundation pit displacement monitoring system based on image vision processing comprises an image distortion calibration module, an image pixel intelligent dividing and extracting module and an image pixel displacement calculating module.
2. The image distortion calibration module aims to correct distortion in the captured images of the pit surface. The module performs distortion calibration based on the LED illumination lamp placement combined with Zhang Zhengyou's checkerboard calibration method, and the industrial camera is calibrated based on its own intrinsic parameters.
3. To meet the special requirements of foundation pit scenes, a high-contrast black-and-white bi-color checkerboard calibration plate is adopted in accordance with the requirements of Zhang Zhengyou's checkerboard calibration method, ensuring the clarity and recognition rate of the calibration plate during industrial camera capture. A 5 x 5 checkerboard is chosen for this purpose, which aims to maximize the accuracy of image capture.
4. The LED illumination lamp placement scheme is designed around the checkerboard layout. A total of 9 LED lamps are provided to ensure that the calibration plate is fully and uniformly illuminated. This arrangement balances illumination uniformity and effectiveness, takes full account of the particularities of the foundation pit environment, guarantees image capture quality, and keeps the system economical and practical. The black-and-white bi-color checkerboard and the LED lamp placement are shown in the schematic diagram of the calibration plate in Fig. 2.
5. The relevant implementation process and calculation process for the image distortion calibration are as follows:
step one: an industrial camera is used for capturing a black-and-white bi-color checkerboard calibration plate image with an LED illuminating lamp from a fixed angle, and the key of the method is feature point identification and tracking in an image processing stage. In particular, the emphasis is placed on the corner points of the checkerboard with the LED lighting lamps. The LED illuminating lamp provides a clear visual marking environment, so that accurate identification of characteristic points is greatly promoted, and accuracy of identifying marking points is improved.
Step two: the characteristic point coordinates of the LED illumination lamps are captured and substituted into the calculation of the industrial camera's distortion parameters. These parameters are then used to calibrate the image and correct any distortion that may be present. To verify the effect of the calibration, other feature points are chosen for further validation based on the calibration results.
The image distortion correction step includes:
step 1: identifying the angular point coordinates of the checkerboard with the LED illuminating lamp: the LED illuminating lamp provides more obvious visual marks, which is helpful for more accurately identifying characteristic points;
step 2: calculating distortion parameters of the industrial camera: parameters of the industrial camera, including an internal reference matrix and distortion coefficients, are calculated using the identified LED illumination lamp feature point coordinates.
Step 3: and correcting distortion of the photographed checkerboard image pixels based on the calculated relevant parameters. And carrying out accurate distortion correction on the image pixel points by applying the internal reference matrix and the distortion parameters which are obtained through calculation.
For step 1, the coordinates of the LED illumination lamp mark points in the captured image must be determined from their known nominal positions. Let I(x, y) denote the brightness value of the image at point (x, y) and let the threshold be T. For each detected corner point (x_i, y_i) the following calculation is performed:

calculating the corner brightness: $L = \frac{1}{N} \sum_{(x,y) \in \mathcal{N}(x_i, y_i)} I(x, y)$, where N is the number of pixels in the neighborhood $\mathcal{N}(x_i, y_i)$ of the corner point;

if L > T, then (x_i, y_i) is taken as the coordinates of an LED illumination lamp;
note that: the threshold T needs to be adjusted according to the lighting conditions of the specific environment.
For step 2, once the coordinates of the LED illumination lamp mark points in the captured image are determined, the correspondence between the physical coordinate system and the image coordinate system is established using the checkerboard image captured by the industrial camera. First, the position of each point in physical space and its corresponding position in the image are determined by analyzing the checkerboard image. From this coordinate correspondence, the distortion coefficients of the industrial camera can be calculated. The distortion coefficients are a quantized representation of the inherent distortion of the camera lens, covering radial distortion, tangential distortion, and so on. Using the calculated coefficients, distortion correction is applied to the captured image.

The camera intrinsic matrix A is

$$A = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

where f_x, f_y are the focal lengths and (c_x, c_y) are the image center coordinates. The distortion coefficients comprise the radial distortion coefficients k_1, k_2, … and the tangential distortion coefficients p_1, p_2.
For the distortion coefficients, an optimization algorithm is generally used during camera calibration; with the least-squares method, the calculation is

$$\min_{A,\,k_1,k_2,p_1,p_2,\,\{R_i,t_i\}} \; \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| x_{ij} - \hat{x}\!\left(A, R_i, t_i, X_{ij}, k_1, k_2, p_1, p_2\right) \right\|^2$$

where x_ij is the image coordinate of the j-th corner in the i-th image; n represents the number of images and m the number of corner points; X_ij is the corresponding physical coordinate; A is the intrinsic matrix of the camera; k_1, k_2 are the radial distortion coefficients and p_1, p_2 the tangential distortion coefficients; R, t are the rotation and translation matrices of the camera; and x̂ denotes the projection of X_ij through the distorted camera model.
Checkerboard image correction based on the distortion coefficients, for each pixel point (x, y) in the image:

Step 1: radial distortion correction

x_RD = x(1 + k_1 r^2 + k_2 r^4 + …)

y_RD = y(1 + k_1 r^2 + k_2 r^4 + …)

where r^2 = x^2 + y^2; (x, y) are the pre-correction image pixel coordinates and (x_RD, y_RD) the radially corrected pixel coordinates, RD being an abbreviation of Radial Distortion.

Step 2: tangential distortion correction

x_TD = x_RD + [2 p_1 x_RD y_RD + p_2 (r^2 + 2 x_RD^2)]

y_TD = y_RD + [p_1 (r^2 + 2 y_RD^2) + 2 p_2 x_RD y_RD]

where (x_TD, y_TD) are the tangentially corrected image pixel coordinates, TD being an abbreviation of Tangential Distortion.
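For whole frames, the same correction is usually delegated to the calibration library once A and the coefficients are known; a one-call sketch using OpenCV (the values come from the calibration step above):

```python
import cv2
import numpy as np

def undistort_image(raw, A, dist):
    """Correct a full frame using the calibrated intrinsics and distortion coefficients.

    raw  -- distorted image from the industrial camera
    A    -- 3 x 3 intrinsic matrix from calibration
    dist -- (k1, k2, p1, p2[, k3]) distortion coefficients
    """
    return cv2.undistort(raw, A, np.asarray(dist))
```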
The specific flow is shown in the flow chart of the image distortion module in fig. 3.
6. The intelligent dividing and extracting module for the image pixel points can operate according to the following principle and calculation steps:
Intelligent partitioning and extraction of pixel points:
step one: and (5) dividing areas. The calibrated checkerboard image is still divided into 25 areas according to the previous black and white double colors, and each area represents one small checkerboard.
Step two: region size calculation. Assuming the pixel resolution of the image is W×H (width × height), the pixel size of each region is (W/5) × (H/5).
Step three: region numbering. The 25 regions are numbered from left to right, top to bottom. For the i-th region, all pixels are extracted whose coordinates satisfy x ∈ [((i−1) mod 5)·(W/5), ((i−1) mod 5 + 1)·(W/5)) and y ∈ [⌊(i−1)/5⌋·(H/5), (⌊(i−1)/5⌋ + 1)·(H/5)), where mod is the modulo operator and ⌊·⌋ denotes integer division.
Step four: image pixel point extraction. For each region, all pixels within the region are extracted using a deep-learning-based machine vision method.
For step four, the pixel points of each checkerboard region are extracted with a deep-learning-based machine vision algorithm. This process divides into several steps: feature learning, region identification and pixel point extraction. The related concepts and extraction formulas are as follows:
Feature learning: a CNN is used to learn features from the image, with the formula

$$F_{conv} = \mathrm{ReLU}(W * x + b)$$

where x is the input image, W is the weight of the convolution kernel, b is the bias term, * represents the convolution operation, and ReLU is the activation function;

Region identification: used for locating each region of the checkerboard, with the formula

$$F = \mathrm{RegionProposalNetwork}(F_{conv})$$

where F represents the proposed regions and F_conv is the output of the convolutional layer; RegionProposalNetwork represents the candidate region generation network;

Pixel point extraction: pixel-level processing is carried out on each region to extract the pixel points within it, using the segmentation network:

$$P = \mathrm{SegmentationNetwork}(F)$$

where P represents the extracted pixel points and F is the identified region.

Integrating the above step formulas:

$$P = \mathrm{SegmentationNetwork}(\mathrm{RegionProposalNetwork}(\mathrm{ReLU}(W * x + b)))$$
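The patent does not fix a network architecture for these three stages, so any concrete model is an assumption. A toy PyTorch sketch that mirrors the composition P = SegmentationNetwork(RegionProposalNetwork(ReLU(W*x + b))), with a single convolution as the feature learner and a 1 x 1 convolution standing in for the region proposal and segmentation stages:

```python
import torch
import torch.nn as nn

class PixelExtractor(nn.Module):
    """Toy stand-in for the feature / region / segmentation pipeline."""
    def __init__(self):
        super().__init__()
        # Feature learning: F_conv = ReLU(W * x + b)
        self.features = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # Stand-in head producing a per-pixel mask P; a real system would place
        # a region proposal network and a segmentation network here.
        self.segment = nn.Conv2d(16, 1, 1)

    def forward(self, x):                       # x: (B, 1, H, W) grayscale image
        f = self.features(x)
        return torch.sigmoid(self.segment(f))   # P in [0, 1] per pixel

mask = PixelExtractor()(torch.rand(1, 1, 320, 320))   # (1, 1, 320, 320) mask
```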
the specific flow is shown in the flow chart of the pixel extraction module in fig. 4.
7. In order to accurately calculate the foundation pit displacement value, the image pixel displacement calculation module provides a pixel displacement calculation method based on two images: the first image taken after distortion correction, and a second image taken after a period of time has elapsed. By analyzing the two images, the displacement of the foundation pit can be measured. The module operates according to the following principles and steps:
step one: and calculating the pixel displacement. For each pixel we calculate its displacement between the two images, denoted (Δx, Δy). Δx and Δy are the moving distances of the pixel point on the X-axis and the Y-axis, respectively.
Step two: and calculating the average displacement in the area. For each region in the checkerboard we calculate the average of the displacements of all pixels in that region. This average value will represent the overall pixel displacement for that region.
For the average displacement of the i-th region, the calculation formula is

$$\overline{\Delta x}_i = \frac{1}{n} \sum_{j=1}^{n} \Delta x_j, \qquad \overline{\Delta y}_i = \frac{1}{n} \sum_{j=1}^{n} \Delta y_j$$

where n is the total number of pixels in the i-th region, and Δx_j and Δy_j are the displacements of each pixel point in the i-th region.
Step three: based on the camera calibration and the internal reference matrix, a geometric relationship between the camera and the monitored target, including distance and angle, is determined to establish a relationship between the world coordinate system (actual physical space) and the image coordinate system (pixel space).
Let P_world = (X, Y, Z) be a point in the world coordinate system and P_image = (x, y) the corresponding pixel point in the image coordinate system; the conversion relationship may be expressed as

$$P_{image} = A \cdot [R \mid t] \cdot P_{world}$$

where A is the camera's intrinsic matrix and [R|t] is the rotation-translation matrix from the world coordinate system to the camera coordinate system, the product being taken in homogeneous coordinates.
Step four: judging the foundation pit displacement. The average displacement of each region represents the overall displacement of that region, and the overall displacement condition of the foundation pit is judged from the average displacements together with the relationship between the world and image coordinate systems.
The specific flow is shown in the pixel point displacement calculation flow chart of fig. 5.
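The patent leaves the per-pixel displacement estimator open; dense optical flow between the reference frame and a later frame is one plausible choice. A hedged sketch using OpenCV's Farneback flow, whose output feeds the per-region averaging described above:

```python
import cv2

def pixel_displacement(img_t0, img_t1):
    """Dense per-pixel displacement (dx, dy) between two undistorted grayscale frames.

    Farneback optical flow is an illustrative choice only; the patent does not
    prescribe a specific displacement estimator. Inputs are 8-bit grayscale.
    """
    flow = cv2.calcOpticalFlowFarneback(
        img_t0, img_t1, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow[..., 0], flow[..., 1]   # dx, dy arrays of shape H x W
```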
8. The monitoring module comprises a computer host terminal module and a mobile device terminal module. The computer host terminal module consists of wired transmission equipment, a computer host and a display sub-module: specific physical displacement data and analysis results are transmitted directly through the wired transmission equipment and stored on the computer host's disk, and the connected display presents the data and analysis results clearly and intuitively to the relevant personnel. The mobile device terminal module, based on 5G and other high-speed wireless transmission technologies, allows personnel from multiple parties to view the relevant data and analysis results on mobile devices anytime and anywhere; this wireless connection increases the flexibility of the monitoring system and improves the convenience of data access.
9. The intelligent early warning module comprises an image pixel displacement threshold setting module, an over-threshold early warning module and a monitoring data mutation error reporting module. The image pixel displacement threshold setting module compares and screens the specific physical displacement data against a preset threshold; setting a reasonable displacement threshold makes it possible to identify abnormal displacement conditions that could pose structural risks. The over-threshold early warning module triggers an early warning prompt when the monitored displacement exceeds the set threshold, based on the screening result of the threshold setting module. The monitoring data mutation error reporting module is responsible for error-reporting on abnormal over-threshold warning data, facilitating foundation pit monitoring and maintenance by staff from all parties. The workflow of the intelligent early warning module is shown in Fig. 6.
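The early-warning logic itself reduces to a comparison against the configured threshold plus a sanity check for sudden jumps. A minimal sketch (the threshold values and the jump criterion are illustrative assumptions; in the system they come from the pit-specific configuration):

```python
def check_regions(region_means, warn_mm=5.0, jump_mm=50.0):
    """Classify each region's mean displacement against the configured thresholds.

    region_means -- dict: region i -> (mean dx, mean dy), converted to millimetres
    warn_mm      -- displacement threshold that triggers an early warning
    jump_mm      -- implausibly large jump treated as a data error
    """
    events = {}
    for i, (dx, dy) in region_means.items():
        d = (dx ** 2 + dy ** 2) ** 0.5     # displacement magnitude
        if d >= jump_mm:
            events[i] = "error"            # abrupt mutation -> error reporting module
        elif d >= warn_mm:
            events[i] = "warning"          # over threshold -> early warning module
        else:
            events[i] = "ok"
    return events
```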
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A foundation pit displacement monitoring system based on image vision processing, characterized in that it comprises an image sensing module, a data processing module, a monitoring module and an intelligent early warning module;
in the image sensing module, LED illumination lamps are used to illuminate the inner surface of the target foundation pit and a black-and-white bi-color checkerboard calibration plate used for target identification, and an industrial camera is used to capture images of the checkerboard calibration plate placed in the pit and transmit them to the data processing module;
the data processing module comprises an image distortion calibration module, an image pixel point intelligent dividing and extracting module and an image pixel point displacement calculation module; and the image distortion calibration module performs distortion calibration on the checkerboard calibration plate image in the foundation pit, the image after the distortion calibration is processed through a machine vision algorithm based on deep learning, and the foundation pit displacement is obtained through pixel point extraction, region division and pixel point region average displacement calculation based on time variation.
2. The foundation pit displacement monitoring system of claim 1, wherein: the image distortion calibration module performs distortion calibration on the image based on the LED illuminating lamp point distribution position and by combining a Zhang Zhengyou checkerboard calibration method.
3. The foundation pit displacement monitoring system of claim 1 or 2, wherein: the distortion calibration process of the image is as follows:
step 1: identifying the angular point coordinates of the checkerboard with the LED illuminating lamp:
step 2: calculating distortion parameters of the industrial camera: calculating parameters of the industrial camera, including an internal reference matrix and distortion coefficients, by using the identified LED illuminating lamp characteristic point coordinates;
step 3: and carrying out distortion correction on the photographed checkerboard image pixel points based on the internal reference matrix and the distortion coefficient.
4. A pit displacement monitoring system according to claim 3, wherein:
the step 1 comprises the following steps:
assuming that I(x, y) represents the luminance value of the image at point (x, y), for each detected corner point (x_i, y_i) the following calculation is performed:

calculating the corner brightness: $L = \frac{1}{N} \sum_{(x,y) \in \mathcal{N}(x_i, y_i)} I(x, y)$, where N is the number of pixels in the neighborhood $\mathcal{N}(x_i, y_i)$ of the corner point;

if L > T, then (x_i, y_i) is the coordinate of an LED illumination lamp, T being a preset threshold.
5. The foundation pit displacement monitoring system of claim 4, wherein:
the steps 2 and 3 comprise:
after the coordinates of the mark points of the LED illuminating lamp in the shot image are determined, a corresponding relation between a physical coordinate system and an image coordinate system is established using the checkerboard image captured by the industrial camera; the position of each point in physical space and its corresponding position in the image are determined through analysis of the checkerboard image, and the distortion coefficients of the industrial camera are calculated based on the correspondence between the coordinate systems;

wherein the camera intrinsic matrix A is

$$A = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$$

where f_x, f_y are the focal lengths and (c_x, c_y) are the image center coordinates;

the distortion coefficients include the radial distortion coefficients k_1, k_2, … and the tangential distortion coefficients p_1, p_2;
The camera distortion coefficients are obtained by the least-squares method, with the calculation formula

$$\min_{A,\,k_1,k_2,p_1,p_2,\,\{R_i,t_i\}} \; \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| x_{ij} - \hat{x}\!\left(A, R_i, t_i, X_{ij}, k_1, k_2, p_1, p_2\right) \right\|^2$$

where x_ij is the image coordinate of the j-th corner in the i-th image; n represents the number of images and m the number of corner points; X_ij is the corresponding physical coordinate; A is the intrinsic matrix of the camera; k_1, k_2 are the radial distortion coefficients and p_1, p_2 the tangential distortion coefficients; R, t are the rotation and translation matrices of the camera; and x̂ denotes the projection of X_ij through the distorted camera model;
checkerboard image correction based on the distortion coefficients, for each pixel point (x, y) in the image:

step 1: radial distortion correction

x_RD = x(1 + k_1 r^2 + k_2 r^4 + …)

y_RD = y(1 + k_1 r^2 + k_2 r^4 + …)

wherein r^2 = x^2 + y^2; (x, y) are the pre-correction image pixel coordinates and (x_RD, y_RD) the radially corrected pixel coordinates, RD being an abbreviation of Radial Distortion;

step 2: tangential distortion correction

x_TD = x_RD + [2 p_1 x_RD y_RD + p_2 (r^2 + 2 x_RD^2)]

y_TD = y_RD + [p_1 (r^2 + 2 y_RD^2) + 2 p_2 x_RD y_RD]

wherein (x_TD, y_TD) are the tangentially corrected image pixel coordinates, TD being an abbreviation of Tangential Distortion;
in performing image distortion correction, the pixel coordinates are typically adjusted first according to the radial distortion coefficients and then further adjusted according to the tangential distortion coefficients, the tangential adjustment being applied on top of the radial correction.
6. The foundation pit displacement monitoring system of claim 1, wherein: the intelligent dividing and extracting module for the image pixel points operates according to the following steps:
step one: region division: dividing the calibrated checkerboard image into 25 regions following its original black-and-white pattern, each region representing one small checker square;
step two: region size calculation: assuming the pixel resolution of the image is W×H, i.e., width × height, the pixel size of each region is calculated as (W/5) × (H/5);
step three: region numbering: numbering the 25 regions in order from left to right and top to bottom; for the i-th region, extracting all pixel points whose coordinates satisfy x ∈ [((i−1) mod 5)·(W/5), ((i−1) mod 5 + 1)·(W/5)) and y ∈ [⌊(i−1)/5⌋·(H/5), (⌊(i−1)/5⌋ + 1)·(H/5)), wherein mod is the modulo operator and ⌊·⌋ denotes integer division;
step four: extracting image pixel points: for each region, all pixels within the region are extracted based on a deep learning machine vision method.
7. The foundation pit displacement monitoring system of claim 6, wherein:
the deep learning machine vision algorithm comprises the following steps: feature learning, region identification, and pixel point extraction, wherein:
feature learning: a CNN is used to learn features from the image:

$$F_{conv} = \mathrm{ReLU}(W * x + b)$$

where x is the input image, W is the weight of the convolution kernel, b is the bias term, * represents the convolution operation, and ReLU is the activation function;

region identification: for locating each region of the checkerboard:

$$F = \mathrm{RegionProposalNetwork}(F_{conv})$$

wherein F represents the proposed regions and F_conv is the output of the convolutional layer; RegionProposalNetwork represents the candidate region generation network;

pixel point extraction: carrying out pixel-level processing on each region to extract the pixel points within it, using the segmentation network:

$$P = \mathrm{SegmentationNetwork}(F)$$

wherein P represents the extracted pixel points and F is the identified region;

integrating the above formulas:

$$P = \mathrm{SegmentationNetwork}(\mathrm{RegionProposalNetwork}(\mathrm{ReLU}(W * x + b)))$$
8. The foundation pit displacement monitoring system of claim 1, wherein: the image pixel point displacement calculation module applies a two-image pixel displacement calculation method to the distortion-calibrated image pixel coordinates, one image being the first image taken after distortion correction and the other an image taken after a period of time has elapsed since the first, and operates according to the following steps:
step one: calculating pixel displacement: for each pixel, calculating its displacement between the two images, denoted (Δx, Δy), where Δx and Δy are the movement distances of the pixel on the X-axis and the Y-axis respectively;
step two: calculating average displacement in the region: for each region in the checkerboard, calculating the average value of all pixel point displacements in the region, wherein the average value represents the whole pixel point displacement of the region;
for the average displacement of the i-th region, the calculation formula is:

$$\overline{\Delta x}_i = \frac{1}{n} \sum_{j=1}^{n} \Delta x_j, \qquad \overline{\Delta y}_i = \frac{1}{n} \sum_{j=1}^{n} \Delta y_j$$

where n is the total number of pixels in the i-th region, and Δx_j and Δy_j are the displacements of each pixel point in the i-th region;
step three: based on camera calibration and an internal reference matrix, determining a geometric relationship between a camera and a monitored target, including a distance and an angle, to establish a relationship between a world coordinate system and an image coordinate system;
step four: and (3) foundation pit displacement judgment: and judging the overall displacement condition of the foundation pit based on the relation among the average displacement, the world coordinate system and the image coordinate system.
9. The foundation pit displacement monitoring system of claim 1, wherein: the monitoring module comprises a computer host terminal module and a mobile equipment terminal module; the computer host terminal module consists of wired transmission equipment, a computer host and a display submodule.
10. The foundation pit displacement monitoring system of claim 1, wherein: the intelligent early warning module comprises an image pixel displacement threshold setting module, an over-threshold early warning module and a monitoring data mutation error reporting module; the image pixel displacement threshold setting module compares and screens the physical displacement data against a preset threshold to identify abnormal displacement conditions posing structural risks; the over-threshold early warning module triggers an early warning prompt when the monitored displacement exceeds the set threshold, based on the screening result of the threshold setting module; and the monitoring data mutation error reporting module reports errors on abnormal over-threshold early warning data.
CN202410005871.6A 2024-01-03 2024-01-03 Foundation pit displacement monitoring system based on image vision processing Pending CN117490579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410005871.6A CN117490579A (en) 2024-01-03 2024-01-03 Foundation pit displacement monitoring system based on image vision processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410005871.6A CN117490579A (en) 2024-01-03 2024-01-03 Foundation pit displacement monitoring system based on image vision processing

Publications (1)

Publication Number Publication Date
CN117490579A true CN117490579A (en) 2024-02-02

Family

ID=89683447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410005871.6A Pending CN117490579A (en) 2024-01-03 2024-01-03 Foundation pit displacement monitoring system based on image vision processing

Country Status (1)

Country Link
CN (1) CN117490579A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016176800A (en) * 2015-03-19 2016-10-06 株式会社安藤・間 Displacement or strain calculating program, and displacement or strain measuring method
CN113240747A (en) * 2021-04-21 2021-08-10 浙江大学 Outdoor structure vibration displacement automatic monitoring method based on computer vision
CN115511878A (en) * 2022-11-04 2022-12-23 中南大学 Side slope earth surface displacement monitoring method, device, medium and equipment
KR20220170122A (en) * 2021-06-22 2022-12-29 인천대학교 산학협력단 System for monitoring of structural and method ithereof
US11619556B1 (en) * 2021-11-26 2023-04-04 Shenzhen University Construction monitoring method and system for v-shaped column in underground foundation pit, terminal and storage medium
CN116678337A (en) * 2023-06-08 2023-09-01 交通运输部公路科学研究所 Image recognition-based bridge girder erection machine girder front and rear pivot point position height difference and girder deformation monitoring and early warning system and method
CN117190875A (en) * 2023-09-08 2023-12-08 重庆交通大学 Bridge tower displacement measuring device and method based on computer intelligent vision


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination