CN113610782A - Building deformation monitoring method and equipment and storage medium - Google Patents

Building deformation monitoring method and equipment and storage medium

Info

Publication number
CN113610782A
Authority
CN
China
Prior art keywords
marker
image
sub
pixel edge
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110826876.1A
Other languages
Chinese (zh)
Other versions
CN113610782B (en)
Inventor
魏怡
王济民
孙傲
周宇
王斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202110826876.1A priority Critical patent/CN113610782B/en
Publication of CN113610782A publication Critical patent/CN113610782A/en
Application granted granted Critical
Publication of CN113610782B publication Critical patent/CN113610782B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30132 Masonry; Concrete

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a building deformation monitoring method, equipment and a storage medium, wherein the method comprises the following steps: acquiring a plurality of building images containing a marker shot by a camera at the current moment, and preprocessing the building images to extract marker images; performing maximum gradient projection calculation on the marker images to obtain maximum gradient projection images of the marker, and then obtaining whole-pixel edge images of the marker from the maximum gradient projection images; obtaining sub-pixel edge images of the marker from the whole-pixel edge images; locating the center of the marker from the sub-pixel edge images of the marker to obtain the center coordinates of the marker; and calculating the deformation of the building from the center coordinates of the marker at the current moment and the center coordinates of the marker at the previous moment. The invention solves the problems that existing means for monitoring building deformation are inefficient, their precision is hard to guarantee, and their timeliness is poor.

Description

Building deformation monitoring method and equipment and storage medium
Technical Field
The invention relates to the technical field of building safety monitoring, in particular to a building deformation monitoring method, building deformation monitoring equipment and a storage medium.
Background
Engineering buildings are prone to deformation due to their own weight, loads, fluctuations of the groundwater level, insufficient geological exploration, design errors, construction quality and the like. Therefore, regular deformation monitoring of buildings is an essential engineering task for ensuring their stability and safety.
At present, deformation monitoring in practical applications mostly relies on manual measurement with surveying instruments; this approach is inefficient, its precision is hard to guarantee, and its timeliness is poor. Although deformation monitoring methods based on image processing technology have appeared in the prior art, they are mostly limited to deformation calculation on computer-simulated images or indoor ultra-close-range targets, and the few practical image measurement techniques have short shooting distances and low precision.
Disclosure of Invention
The invention aims to overcome the technical defects, provides a building deformation monitoring method, equipment and a storage medium, and solves the technical problems of low efficiency of operation means, difficulty in ensuring precision and poor timeliness in the deformation monitoring of buildings in the prior art.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a building deformation monitoring method, including the steps of:
acquiring a plurality of building images which are shot by a camera at the current moment and contain a marker, and preprocessing the building images to extract the marker images, wherein the marker is fixedly arranged on the building;
carrying out maximum gradient projection calculation on the marker image, and obtaining a whole pixel edge image of the marker according to the maximum gradient projection image of the marker after obtaining the maximum gradient projection image of the marker;
obtaining a sub-pixel edge image of the marker according to the whole pixel edge image;
positioning the centers of the markers according to the sub-pixel edge images of the markers to obtain the center coordinates of the markers;
and calculating the deformation of the building according to the center coordinates of the markers at the current moment and the center coordinates of the markers at the previous moment.
Preferably, in the building deformation monitoring method, the preprocessing the building image to extract the marker image specifically includes:
carrying out binarization processing on the building image, and extracting all suspected mark areas, wherein the suspected mark areas are areas formed by combining black and white;
and according to the geometric features of the markers, selecting an interested region containing the markers from the suspected marker region, and obtaining a marker image according to the interested region containing the markers.
Preferably, in the building deformation monitoring method, the calculating a maximum gradient projection of the marker image, obtaining a full-pixel edge image of the marker according to the maximum gradient projection image of the marker after obtaining the maximum gradient projection image of the marker specifically includes:
acquiring the maximum gradient projection direction of each pixel point of the marker image, and respectively calculating the maximum gradient projection value of each pixel point of the marker image in the maximum gradient projection direction to obtain the maximum gradient projection image of the marker;
preprocessing the maximum gradient projection image of the marker to obtain an initial edge point of the marker;
and thinning the initial edge points and screening out smooth whole-pixel edge points to obtain a whole-pixel edge image of the marker.
Preferably, in the building deformation monitoring method, the formula for calculating the maximum gradient projection value of each pixel point of the marker image in the maximum gradient projection direction is specifically:
Gα(x, y) = Gx(x, y)·cos α + Gy(x, y)·sin α
wherein
Gx(x, y) = ∂f(x, y)/∂x, Gy(x, y) = ∂f(x, y)/∂y
α = arctan(Gy(x, y)/Gx(x, y))
wherein (x, y) are the coordinates of a pixel point in the marker image, f(x, y) is its gray value, Gx and Gy are the gradient projection values of the marker image along the horizontal and vertical directions respectively, and α is the maximum gradient projection direction.
Preferably, in the building deformation monitoring method, the obtaining of the sub-pixel edge image of the marker according to the whole-pixel edge image specifically includes:
taking each edge point of the whole pixel edge image as a central point, respectively calculating first-order differences in row, column and two diagonal directions in a preset neighborhood range of the central point, and calculating sub-pixel positions of the corresponding edge points in the corresponding directions according to the first-order differences in the four directions;
and obtaining sub-pixel edge points of the marker according to the calculated sub-pixel positions of the edge points and the positions of the edge points in the whole pixel edge image, and obtaining a sub-pixel edge image of the marker according to the sub-pixel edge points.
Preferably, in the building deformation monitoring method, the positioning the center of the marker according to the sub-pixel edge images of the plurality of markers to obtain the center coordinates of the marker specifically includes:
performing primary ellipse fitting on all sub-pixel edge points in the sub-pixel edge image by adopting a preset ellipse fitting model so as to solve the ellipse fitting model;
calculating the fitting residual error of all sub-pixel edge points in the sub-pixel edge image according to the solved ellipse fitting model;
calculating a residual standard deviation according to the fitting residual of each sub-pixel edge point, screening each sub-pixel edge point according to the residual standard deviation, and performing ellipse fitting on the reserved sub-pixel edge points for multiple times again to obtain the final reserved sub-pixel edge points;
taking the center of the finally reserved sub-pixel edge points for ellipse fitting as the center of the sub-pixel edge image;
and obtaining the central coordinates of the marker according to the central coordinates of the plurality of sub-pixel edge images.
Preferably, in the building deformation monitoring method, the obtaining of the center coordinates of the marker according to the center coordinates of the plurality of sub-pixel edge images specifically includes:
calculating the coordinate mean value and standard deviation of the center coordinates of the plurality of sub-pixel edge images;
calculating the difference between the center coordinate of each sub-pixel edge image and the coordinate mean, and screening the center coordinates of the sub-pixel edge images according to the difference;
and screening the reserved central coordinates for multiple times, and taking the coordinate mean value of a plurality of central coordinates obtained after screening for multiple times as the central coordinates of the marker.
Preferably, in the building deformation monitoring method, the formula for calculating the deformation of the building is specifically:
Δd = √(Δx² + Δy²)
Δx = x2 - x1
Δy = y2 - y1
wherein (x1, y1) are the center coordinates of the marker obtained at time t1, (x2, y2) are the center coordinates of the marker obtained at time t2, Δx is the horizontal deformation, Δy is the vertical deformation, and Δd is the displacement of the deformation point.
In a second aspect, the present invention also provides a building deformation monitoring apparatus comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the building deformation monitoring method as described above.
In a third aspect, the present invention also provides a computer readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the building deformation monitoring method as described above.
Compared with the prior art, the building deformation monitoring method, equipment and storage medium provided by the invention first obtain a plurality of building images containing the marker with a camera; after the marker images are extracted, maximum gradient projection calculation is performed on them to obtain the maximum gradient projection images of the marker; whole-pixel edge images of the marker are then extracted from the maximum gradient projection images, and sub-pixel edge images are extracted from the whole-pixel edge images; finally, after the center coordinates of the marker are obtained from the plurality of sub-pixel edge images, the deformation of the building is calculated from the center coordinates of the marker at the current moment and at the previous moment. Deformation monitoring of the building is thereby realized with high precision and strong timeliness.
Drawings
FIG. 1 is a flow chart of a method for monitoring deformation of a building according to a preferred embodiment of the present invention;
FIG. 2 is a schematic view of a preferred embodiment of a marker of the present invention;
FIG. 3 is a schematic view of a preferred embodiment of the scale of the present invention;
FIG. 4 is a schematic representation of a preferred embodiment of a maximum gradient projection image of a marker of the present invention;
FIG. 5 is a schematic diagram of a full-pixel edge image of a marker according to a preferred embodiment of the present invention;
FIG. 6 is a schematic view of a preferred embodiment of sub-pixel edges and full-pixel edges of a marker according to the present invention;
FIG. 7 is a schematic diagram of a preferred embodiment of fitted marker centers;
FIG. 8 is a schematic representation of the markers of experiment 1 of the present invention;
fig. 9 is a schematic operating environment diagram of a building deformation monitoring program according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a method for monitoring deformation of a building according to an embodiment of the present invention includes the following steps;
s100, obtaining a plurality of building images which are shot by a camera at the current moment and contain a marker, and preprocessing the building images to extract the marker images, wherein the marker is fixedly installed on a building;
s200, performing maximum gradient projection calculation on the marker image, and obtaining an integral pixel edge image of the marker according to the maximum gradient projection image of the marker after obtaining the maximum gradient projection image of the marker;
s300, obtaining a sub-pixel edge image of the marker according to the whole pixel edge image;
s400, positioning the centers of the markers according to the sub-pixel edge images of the markers to obtain the center coordinates of the markers;
and S500, calculating the deformation of the building according to the center coordinates of the marker at the current moment and the center coordinates of the marker at the previous moment.
In this embodiment, a camera is used to obtain a plurality of building images including a marker, then after the marker image is extracted, maximum gradient projection calculation is performed on the marker image to obtain a maximum gradient projection image of the marker, then an integer pixel edge image of the marker is extracted according to the maximum gradient projection image, then a sub-pixel edge image is extracted according to the integer pixel edge image, and finally after a central coordinate of the marker is obtained according to a plurality of sub-pixel edge images, a deformation amount of the building is calculated according to the central coordinate of the marker at the current time and the central coordinate of the marker at the previous time, so that deformation monitoring on the building is achieved, and the building monitoring device is high in precision and high in timeliness.
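As a minimal illustration of how steps S100-S500 chain together, the following Python sketch assumes a helper locate_marker_center() that performs S100-S400 on one batch of images, plus a known millimetre-per-pixel ratio (determined below); the helper name and signature are placeholders introduced here for illustration, not part of the patent.

```python
# Hypothetical pipeline sketch; locate_marker_center stands in for the
# image-processing chain S100-S400 described in this embodiment.
import math

def monitor_deformation(images_prev, images_curr, mm_per_pixel, locate_marker_center):
    """Return (dx, dy, dd) in millimetres between two photographing epochs."""
    x1, y1 = locate_marker_center(images_prev)   # marker centre at the previous moment
    x2, y2 = locate_marker_center(images_curr)   # marker centre at the current moment
    dx = (x2 - x1) * mm_per_pixel                # horizontal deformation
    dy = (y2 - y1) * mm_per_pixel                # vertical deformation
    dd = math.hypot(dx, dy)                      # deformation point displacement
    return dx, dy, dd
```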
The embodiment of the invention needs no complicated camera calibration step and can dispense entirely with expensive measuring instruments such as a total station; deformation monitoring with long-range imaging can be realized with only a single-lens reflex camera equipped with an ordinary-length lens and a simple artificial marker. A single-lens reflex camera with an optimal photographing distance of 100 meters can complete deformation monitoring at a photographing distance of 200 meters, so the implementation cost is low; the high cost of a longer lens is avoided, installation is convenient, and the extra monitoring error caused by camera shake during photographing is also avoided. The deformation is obtained based on image processing technology: after image acquisition is completed, the subsequent image measurement and deformation calculation can be performed fully automatically without manual intervention, which saves labor cost, guarantees measurement accuracy, meets the third-class accuracy requirements of the relevant specifications, and provides strong timeliness.
In a specific embodiment, before step S100, a marker is first manufactured and installed on the building. As shown in fig. 2, the marker is a 20 cm × 20 cm black-and-white square: the center is a white solid circle with a diameter of 10 cm, and the periphery is a black frame with a width of 5 cm. If the marker is used only for image measurement, no reflective properties are required; if the white circle also needs to be measured together with a total station, it is replaced by a reflective patch of the same size with a cross at its center. After the marker is manufactured, it can be installed on the surface of the building whose deformation is to be monitored.
After the marker is installed, the camera is installed and the scale ratio is determined. When installing the camera, in order to reduce errors caused by shaking due to the self-weight of the camera and lens during photographing, the camera is mounted on a poured cement pier or a purpose-made tripod fixing device, so that the stability of the camera during photographing is ensured as far as possible. When a tripod fixing device is used, the positions of three holes are marked on the fixing base plate according to the spacing of the three tripod legs when they are opened to the maximum; each hole has a diameter of 1 cm and a depth of 1 cm, so that the tripod can be stably fixed on the base plate.
After the camera fixing equipment is installed, the camera can be mounted. Specifically, the single-lens reflex camera is fixed on the tripod, and the camera is set to black-and-white, silent, fixed-focus (maximum focal length) and continuous shooting modes. A wireless shutter release is installed, and the camera is wirelessly controlled through it to shoot 20-25 images of the same marker. According to statistical theory and experimental data, the number of images taken at one time should not be too small, because the 20-25 images are screened for the marker centers in later processing; if too few images are taken and too little effective data remains after screening, the reliability of the marker center calculated from the effective data is insufficient. Conversely, the number of images taken at one time should not be too large, because too many continuous shots shake the camera, which is unfavorable for positioning accuracy. Therefore, in a preferred embodiment, the number of images taken by the camera at one time is 20. The images taken by the camera can be transmitted to a designated server through an SLR companion device and its supporting software (the SLR companion software can be installed on a (personal) computer or an intermediate transmission device, but the invention is not limited thereto), or the SD card of the camera can be removed manually and the images in the SD card copied to the computer.
In order to facilitate the calculation of the subsequent deformation, the actual distance represented by each pixel (in millimeters per pixel) within the shooting range at the installation position needs to be determined. This step is only needed for the first shot; since subsequent shots are taken from the same position, the ratio, once acquired, is a known constant. In a specific implementation, a scale is first made: as shown in fig. 3, the scale is a ruler with black patches fixed at both ends, and the length L of the middle section between the black patches is known. The purpose of the scale is to calculate the actual distance represented by each pixel in an image shot by the camera over a certain shooting range. To ensure the calculation accuracy of this parameter, the scale length L is preferably not less than 1 meter; to ensure the scale can be conveniently fixed to the building surface, the length L is preferably not greater than 2 meters. After the scale is manufactured, it is fixed on the building surface in the horizontal or vertical direction, close to the position where the artificial marker is fixed. Then the actual distance can be determined. Specifically, the step of determining the actual distance represented by each pixel of the image captured by the camera includes:
(1) reading and opening a first image, finding a scale and amplifying the image as much as possible;
(2) if the scale is fixed in the horizontal direction, measuring with the mouse the rightmost pixel coordinate x_left of the black patch at the left end of the scale and the leftmost pixel coordinate x_right of the black patch at the right end;
(3) if the scale is fixed in the vertical direction, measuring with the mouse the lowest pixel coordinate y_top of the black patch at the upper end of the scale and the uppermost pixel coordinate y_bottom of the black patch at the lower end;
(4) calculating the pixel length of the scale in the image, l_p = x_right - x_left + 1 or l_p = y_bottom - y_top + 1;
(5) calculating
ratio = L / l_p (with L expressed in millimeters),
which gives the actual distance represented by each pixel in the image within the shooting range.
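A minimal sketch of step (5), assuming the scale length L is expressed in millimetres and the endpoint pixel coordinates have been read manually as in steps (2)-(3); the example coordinates are hypothetical.

```python
def scale_ratio(L_mm, p_near, p_far):
    """Millimetres represented by one pixel: ratio = L / l_p, with l_p = p_far - p_near + 1."""
    l_p = p_far - p_near + 1
    return L_mm / l_p

# Example: a 1 m scale spanning about 100 pixels at a 200 m shooting range
print(scale_ratio(1000.0, 250, 349))   # -> 10.0 mm/pixel
```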
It should be noted that in steps (2) and (3) the pixel length of the scale must be measured manually on the image; the pixel lengths obtained by different operators differ by no more than 2 pixels, so the resulting ratio values differ slightly. This difference can be compensated by using a longer scale (≥ 1 m). Experimental data show that a 1-meter scale at the farthest shooting range of 200 meters spans about 100 pixels in the image; with a maximum difference of 2 pixels, the ratio values finally calculated by different operators are about
1000 mm / 100 pixels = 10 mm/pixel or 1000 mm / 102 pixels ≈ 9.8 mm/pixel.
By analogy, if a longer scale is used or the range is less than 200 meters, the difference becomes even smaller (< 0.2 mm/pixel). Taking a point coordinate positioning accuracy of 0.3 pixel as an example, the maximum difference introduced by manually calculating the ratio value is only about
0.3 pixel × 0.2 mm/pixel = 0.06 mm,
which is negligible relative to the accuracy requirements of third-class deformation monitoring.
After the camera is installed and the scale is determined, the camera may be used to capture a plurality of building images including the markers, and then the images are preprocessed to extract the marker images, specifically, in step S100, the preprocessing the building images to extract the marker images specifically includes:
carrying out binarization processing on the building image, and extracting all suspected mark areas, wherein the suspected mark areas are areas formed by combining black and white;
and according to the geometric features of the markers, selecting an interested region containing the markers from the suspected marker region, and obtaining a marker image according to the interested region containing the markers.
In this embodiment, firstly, the image is binarized according to the combination manner of the black and white circles of the markers, the suspected marker regions of all black and white combinations are extracted, then, according to the geometric features of the markers and the positions of the markers in the image, the regions meeting the conditions are selected from the suspected marker regions to be used as ROI (Region of interest) containing the markers, and the Region of interest containing the markers is used as the marker image.
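A sketch of this preprocessing, assuming OpenCV is available; the Otsu threshold and the geometric limits used to keep roughly square, sufficiently large regions are illustrative values, not figures taken from the patent.

```python
import cv2

def extract_marker_rois(gray):
    """Binarize the building image and keep roughly square black-and-white regions."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:                       # too small to be the 20 cm marker (illustrative limit)
            continue
        if not 0.8 < w / float(h) < 1.25:     # the marker is square, so expect ~1:1 aspect ratio
            continue
        rois.append(gray[y:y + h, x:x + w])   # region of interest containing a candidate marker
    return rois
```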
Further, in step S200, an image processing technique is used to complete a high-precision measurement task, and first integer pixel positioning is required, and maximum gradient projection must be calculated to realize integer pixel positioning, and then an integer pixel edge image of the marker is obtained by the maximum gradient projection image. Because the image acquired in the real scene is inevitably interfered by various uncertain factors, the traditional gradient calculation method cannot meet the requirements and needs to eliminate the interference of the uncertain factors. Specifically, the step S200 specifically includes:
acquiring the maximum gradient projection direction of each pixel point of the marker image, and respectively calculating the maximum gradient projection value of each pixel point of the marker image in the maximum gradient projection direction to obtain the maximum gradient projection image of the marker;
preprocessing the maximum gradient projection image of the marker to obtain an initial edge point of the marker;
and thinning the initial edge points and screening out smooth whole-pixel edge points to obtain a whole-pixel edge image of the marker.
In this embodiment, suppose that the gradient projections of a gray image f(x, y) along the horizontal and vertical directions are Gx and Gy. Based on the error ellipse theory, each pixel point of the marker image has a maximum gradient projection value along the direction α, and the formula for calculating the maximum gradient projection value of each pixel point of the marker image in the maximum gradient projection direction is specifically:
Gα(x, y) = Gx(x, y)·cos α + Gy(x, y)·sin α
wherein
Gx(x, y) = ∂f(x, y)/∂x, Gy(x, y) = ∂f(x, y)/∂y
α = arctan(Gy(x, y)/Gx(x, y))
wherein (x, y) are the coordinates of a pixel point in the marker image, Gx and Gy are the gradient projection values of the marker image along the horizontal and vertical directions respectively, and α is the maximum gradient projection direction.
According to the above formula, the maximum gradient projection image of the marker can be calculated, as shown in fig. 4. The maximum gradient projection image calculation method provided by the invention not only completely preserves the marker edge but also effectively suppresses other interference, and an accurate whole-pixel edge can be obtained from the maximum gradient projection image.
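A sketch of the maximum gradient projection image computed with Sobel gradients; the exact gradient operator in the patent is given only as a formula image, so using Sobel derivatives and the projection onto α = arctan(Gy/Gx) (which equals the gradient magnitude) is an assumption.

```python
import cv2
import numpy as np

def max_gradient_projection(gray):
    """Project the gradient (Gx, Gy) of a gray image onto its maximum projection direction alpha."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient Gx
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient Gy
    alpha = np.arctan2(gy, gx)                        # maximum gradient projection direction
    proj = gx * np.cos(alpha) + gy * np.sin(alpha)    # equals sqrt(Gx**2 + Gy**2)
    return proj, alpha
```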
Further, after the maximum gradient projection image is obtained, it is preprocessed: the maximum gradient projection image is binarized and then inverted to obtain the inner circle of the marker, and boundary tracking is then performed on the inner circle of the marker to obtain the initial edge points of the marker.
Further, once the initial edge is obtained, let tan α be the tangent of the maximum gradient projection direction of an initial edge point:
if tan α ∈ [-1/2, 1/2], the maximum value is searched among the maximum gradient projections of the edge point and its left and right 2 neighborhood pixels;
if tan α ∈ (-∞, -2) or tan α ∈ (2, +∞), the maximum value is searched among the maximum gradient projections of the edge point and its upper and lower 2 neighborhood pixels;
if tan α ∈ (1/2, 1], the maximum value is searched among the maximum gradient projections of the edge point and its left, right, lower-left and upper-right 4 neighborhood pixels;
if tan α ∈ (1, 2], the maximum value is searched among the maximum gradient projections of the edge point and its upper, lower, lower-left and upper-right 4 neighborhood pixels;
if tan α ∈ [-2, -1], the maximum value is searched among the maximum gradient projections of the edge point and its upper, lower, upper-left and lower-right 4 neighborhood pixels;
if tan α ∈ (-1, -1/2), the maximum value is searched among the maximum gradient projections of the edge point and its left, right, upper-left and lower-right 4 neighborhood pixels.
According to the edge thinning processing method, after the initial edge point is replaced by the maximum value point in the maximum gradient projection direction, a new edge point is generated, the integrity of the edge is high, and the subsequent sub-pixel positioning precision is guaranteed. After obtaining the new edge point, the curvature of the edge point is calculated, and the point with the overlarge curvature is deleted, so that the outer edge contour of the mark is ensured to be smooth, and further, the whole pixel edge image of the mark is obtained, as shown in fig. 5.
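A sketch of the neighbourhood-selection rule above; the interval boundaries follow the reconstruction given in the text (they are partly inferred from the original formula images), and handling of points at the image border is omitted for brevity.

```python
def neighbour_offsets(tan_a):
    """(row, col) offsets of the neighbours to compare, chosen by tan(alpha)."""
    if abs(tan_a) <= 0.5:
        return [(0, -1), (0, 1)]                        # left, right
    if abs(tan_a) > 2:
        return [(-1, 0), (1, 0)]                        # upper, lower
    if 0.5 < tan_a <= 1:
        return [(0, -1), (0, 1), (1, -1), (-1, 1)]      # left, right, lower-left, upper-right
    if 1 < tan_a <= 2:
        return [(-1, 0), (1, 0), (1, -1), (-1, 1)]      # upper, lower, lower-left, upper-right
    if -2 <= tan_a <= -1:
        return [(-1, 0), (1, 0), (-1, -1), (1, 1)]      # upper, lower, upper-left, lower-right
    return [(0, -1), (0, 1), (-1, -1), (1, 1)]          # left, right, upper-left, lower-right

def refine_edge_point(proj, tan_a, r, c):
    """Move the initial edge point to the maximum of the projection image in the selected neighbourhood."""
    candidates = [(r, c)] + [(r + dr, c + dc) for dr, dc in neighbour_offsets(tan_a)]
    return max(candidates, key=lambda rc: proj[rc[0], rc[1]])
```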
In a preferred embodiment, the step S300 specifically includes:
taking each edge point of the whole pixel edge image as a central point, respectively calculating first-order differences in row, column and two diagonal directions in a preset neighborhood range of the central point, and calculating sub-pixel positions of the corresponding edge points in the corresponding directions according to the first-order differences in the four directions;
and obtaining sub-pixel edge points of the marker according to the calculated sub-pixel positions of the edge points in the corresponding direction and the positions of the edge points in the whole pixel edge image, and obtaining a sub-pixel edge image of the marker according to the sub-pixel edge points.
In this embodiment, each edge point is taken as a center point, and the first-order differences in four directions (row, column, and the two diagonals) are calculated in its 3 × 3 neighborhood. The first-order difference in the row or column direction is the average of the forward and backward differences (left-right or up-down), and the first-order differences in the two diagonal directions are computed along the two diagonals. The sub-pixel position in each of the four directions is obtained by averaging the first-order differences in that direction. The sub-pixel position of the edge is then compared with the original whole-pixel position to obtain the position variation in the row, column and two diagonal directions, and the direction with the largest variation among the four gives the new sub-pixel position of the edge point. If that direction is a diagonal, the variation is projected onto the row and column directions respectively to locate the new sub-pixel position. Fig. 6 shows both the whole-pixel edges and the new sub-pixel edges.
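A minimal sketch of this sub-pixel step under the interpretation above: the shift along each of the four directions is estimated from the forward and backward first-order differences of the maximum-gradient-projection image P in the 3 × 3 neighbourhood, and the direction with the largest shift wins. The normalisation used here is an assumption, since the patent only paraphrases the averaging.

```python
import numpy as np

SQRT2 = 2 ** 0.5
DIRS = {"row": (0, 1), "col": (1, 0), "diag1": (1, 1), "diag2": (1, -1)}  # unit steps

def subpixel_edge_point(P, r, c):
    """Estimate the sub-pixel position of the whole-pixel edge point (r, c)."""
    best_dir, best_shift = "row", 0.0
    for name, (dr, dc) in DIRS.items():
        forward = P[r + dr, c + dc] - P[r, c]       # forward first-order difference
        backward = P[r, c] - P[r - dr, c - dc]      # backward first-order difference
        denom = abs(forward) + abs(backward)
        shift = 0.0 if denom == 0 else 0.5 * (forward + backward) / denom
        if abs(shift) > abs(best_shift):
            best_dir, best_shift = name, shift
    dr, dc = DIRS[best_dir]
    if best_dir in ("diag1", "diag2"):              # project a diagonal shift onto row/column
        return r + best_shift * dr / SQRT2, c + best_shift * dc / SQRT2
    return r + best_shift * dr, c + best_shift * dc
```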
In a preferred embodiment, the step S400 specifically includes:
performing primary ellipse fitting on all sub-pixel edge points in the sub-pixel edge image by adopting a preset ellipse fitting model so as to solve the ellipse fitting model;
calculating the fitting residual error of all sub-pixel edge points in the sub-pixel edge image according to the solved ellipse fitting model;
calculating a residual standard deviation according to the fitting residual of each sub-pixel edge point, screening each sub-pixel edge point according to the residual standard deviation, and performing ellipse fitting on the reserved sub-pixel edge points for multiple times again to obtain the final reserved sub-pixel edge points;
taking the center of the finally reserved sub-pixel edge points for ellipse fitting as the center of the sub-pixel edge image;
and obtaining the central coordinates of the marker according to the central coordinates of the plurality of sub-pixel edge images.
Specifically, Ax² + Bxy + Cy² + Dx + Ey + F = 0 is first used as the ellipse fitting model, and a first ellipse fit is performed on all edge points {(xi, yi), i = 1, 2, …, n} to obtain the six coefficients A, B, C, D, E, F. The fitting residual corresponding to any edge point (xi, yi) is Vi = Axi² + Bxiyi + Cyi² + Dxi + Eyi + F, so the fitting residuals of all edge points {Vi, i = 1, 2, …, n} can be calculated. The residual standard deviation σV is then calculated, and all edge points are screened with 2σV as the limit:
edge points with |Vi| ≤ 2σV are retained, and the others are discarded.
A second ellipse fit is performed on the retained edge points, the residual calculation and edge-point screening are performed again, and the process is repeated until all retained edge points satisfy the residual limit; the center of the fitted ellipse at that point is regarded as the marker center of the sub-pixel edge image. The finally located marker center of a sub-pixel edge image is shown in fig. 7.
In the embodiment, the concept of residual errors in mathematical statistics is used for screening the edge points after sub-pixel positioning, so that the edges entering fitting calculation are all effective points meeting the statistical limit value, and the accuracy and reliability of mark center positioning are further improved.
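A sketch of the iterative ellipse fit with 2σV residual screening; fixing F = -1 is one of several possible normalisations of the conic model, chosen here for simplicity and not stated in the patent.

```python
import numpy as np

def fit_ellipse_with_screening(points, max_iter=10):
    """Fit Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 and screen edge points by 2*sigma_V."""
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        x, y = pts[:, 0], pts[:, 1]
        M = np.column_stack([x * x, x * y, y * y, x, y])
        coeffs, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)   # normalise F = -1
        residuals = M @ coeffs - 1.0                                     # V_i for every point
        keep = np.abs(residuals) <= 2 * residuals.std()                  # 2*sigma_V limit
        if keep.all():
            break
        pts = pts[keep]
    A, B, C, D, E = coeffs
    den = 4 * A * C - B * B                      # centre of the fitted conic
    xc = (B * E - 2 * C * D) / den
    yc = (B * D - 2 * A * E) / den
    return xc, yc, pts
```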
Further, after the center of each sub-pixel edge image is obtained, the center coordinates of the marker may be obtained according to the center coordinates of the plurality of sub-pixel edge images, specifically, the obtaining of the center coordinates of the marker according to the center coordinates of the plurality of sub-pixel edge images specifically includes:
calculating the coordinate mean value and standard deviation of the center coordinates of the plurality of sub-pixel edge images;
calculating the difference between the center coordinate of each sub-pixel edge image and the coordinate mean, and screening the center coordinates of the sub-pixel edge images according to the difference;
and screening the reserved central coordinates for multiple times, and taking the coordinate mean value of a plurality of central coordinates obtained after screening for multiple times as the central coordinates of the marker.
Specifically, the camera takes 20 images of each marker, so 20 center coordinates {(xoi, yoi), i = 1, 2, …, 20} are obtained for each marker. The mean and standard deviation of the 20 center coordinates are then calculated, followed by the differences (Δxoi, Δyoi) between each center coordinate and the coordinate mean. With twice the standard deviation in each coordinate direction (2σx and 2σy) as the limits, the 20 marker centers are screened:
centers with |Δxoi| ≤ 2σx and |Δyoi| ≤ 2σy are retained, and the others are discarded.
The retained M (M ≤ 20) marker center coordinates are then screened repeatedly; each repetition is the same as the screening process above, i.e. the coordinate mean and standard deviation of the retained center coordinates are recalculated, the differences between the retained center coordinates and their coordinate mean are recalculated, and screening is performed according to these differences, until all marker center coordinates satisfy the limit requirement. The coordinate mean at that point is the sub-pixel center corresponding to the marker, and the standard deviation of the coordinates gives the positioning precision.
In this embodiment, factors such as illumination, weather, occlusion, camera shake during continuous photographing and a very small amount of marker misrecognition directly affect the positioning of the marker center. Even though the same batch of marker images is obtained by continuous photographing within a very short time, the marker center located in each image is therefore not exactly the same, and an abnormal center coordinate, or one differing considerably from the centers located in the other images, may well occur. The embodiment of the invention therefore screens the group of circle centers obtained from the same batch of marker images again according to statistical criteria, deleting abnormal circle-center coordinates or circle centers that differ considerably from the others, so that the marker center positioning meets the precision requirement. Since the limit on the coordinate difference is set by a statistical criterion, center positions meeting the precision requirement can be screened automatically regardless of the cause of a center-coordinate abnormality, so the method suits different outdoor environments, can cope with unexpected scenes, and still meets the measurement precision requirement.
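A sketch of the repeated screening of the per-image centre coordinates, using the 2σ limit as reconstructed above.

```python
import numpy as np

def screen_marker_center(centers, max_iter=10):
    """centers: array of shape (N, 2), one fitted (x, y) centre per photographed image."""
    pts = np.asarray(centers, dtype=float)
    for _ in range(max_iter):
        mean, std = pts.mean(axis=0), pts.std(axis=0)
        keep = (np.abs(pts - mean) <= 2 * std).all(axis=1)   # retain centres within 2 sigma
        if keep.all():
            break
        pts = pts[keep]
    return pts.mean(axis=0), pts.std(axis=0)   # sub-pixel centre and positioning precision
```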
In a preferred embodiment, in step S500, the formula for calculating the deformation amount of the building is specifically:
Δd = √(Δx² + Δy²)
Δx = x2 - x1
Δy = y2 - y1
wherein (x1, y1) are the center coordinates of the marker obtained at time t1, (x2, y2) are the center coordinates of the marker obtained at time t2, Δx is the horizontal deformation, Δy is the vertical deformation, and Δd is the displacement of the deformation point.
The present invention performs multiple experiments on various factors that may affect the image measurement accuracy, such as weather, illumination, camera settings, environment, height of a tripod, and shaking.
In experiment 1, 5 circles with the diameter of 10cm are pasted on a black bottom flat plate, the distance between the centers of the circles is known, a total station and a single-lens reflex camera matched with the total station perform circle center positioning and circle center distance measurement on 5 marks at 200 meters, 150 meters, 100 meters and 50 meters simultaneously, and the result shows that the positioning precision of the mark center is high, the measurement accuracy of the distance between the mark centers is equivalent to that of the total station, and the total station can be completely replaced for deformation monitoring.
In experiment 2, the actual distance represented by one pixel in the image was calculated using a magnetic level of known length, and the total station was no longer used. Under the same shooting ranges (50 meters, 100 meters, 150 meters and 200 meters), the camera was fixed on a tripod at its lowest height and a first batch of images was collected; the camera was then shaken manually, and a second batch of images was collected after it had stabilized. Two batches of photographs were thus taken at the same shooting range. The results show that the invention can accurately measure images shot at a 200-meter range, the measurement results are stable and reliable, and the measurement accuracy meets the third-class accuracy requirements of the waterway engineering survey specification and the building deformation measurement specification.
As shown in fig. 9, based on the building deformation monitoring method, the invention further provides a building deformation monitoring device, which may be a computing device such as a mobile terminal, a desktop computer, a notebook, a palm computer, and a server. The building deformation monitoring apparatus includes a processor 10, a memory 20, and a display 30. Fig. 9 shows only some of the components of the building deformation monitoring apparatus, but it will be understood that not all of the shown components are required and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the building deformation monitoring device, such as a hard disk or a memory of the building deformation monitoring device. The memory 20 may also be an external storage device of the building deformation monitoring device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the building deformation monitoring device. Further, the memory 20 may also include both an internal storage unit of the building deformation monitoring device and an external storage device. The memory 20 is used for storing application software installed in the building deformation monitoring device and various types of data, such as program codes of the installed building deformation monitoring device. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a building deformation monitoring program 40, and the building deformation monitoring program 40 can be executed by the processor 10, so as to implement the building deformation monitoring method according to the embodiments of the present application.
The processor 10 may be, in some embodiments, a Central Processing Unit (CPU), a microprocessor or other data Processing chip, and is configured to run program codes stored in the memory 20 or process data, such as executing the building deformation monitoring method.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the building deformation monitoring device and for displaying a visual user interface. The components 10-30 of the building deformation monitoring device communicate with each other via a system bus.
In an embodiment, the steps in the building deformation monitoring method as described above are implemented when the processor 10 executes the building deformation monitoring program 40 in the memory 20.
In summary, the building deformation monitoring method, device and storage medium provided by the invention first use a camera to obtain several building images containing the marker; after the marker images are extracted, maximum gradient projection calculation is performed on them to obtain the maximum gradient projection image of the marker; a whole-pixel edge image of the marker is then extracted from the maximum gradient projection image, and a sub-pixel edge image is extracted from the whole-pixel edge image; finally, after the center coordinates of the marker are obtained from the plurality of sub-pixel edge images, the deformation of the building is calculated from the center coordinates of the marker at the current moment and at the previous moment. Deformation monitoring of the building is thereby realized with high precision, strong timeliness, low hardware and labor cost, a long shooting range and strong environmental adaptability.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware (such as a processor, a controller, etc.), and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A building deformation monitoring method is characterized by comprising the following steps:
acquiring a plurality of building images which are shot by a camera at the current moment and contain a marker, and preprocessing the building images to extract the marker images, wherein the marker is fixedly arranged on the building;
carrying out maximum gradient projection calculation on the marker image, and obtaining a whole pixel edge image of the marker according to the maximum gradient projection image of the marker after obtaining the maximum gradient projection image of the marker;
obtaining a sub-pixel edge image of the marker according to the whole pixel edge image;
positioning the centers of the markers according to the sub-pixel edge images of the markers to obtain the center coordinates of the markers;
and calculating the deformation of the building according to the center coordinates of the markers at the current moment and the center coordinates of the markers at the previous moment.
2. The building deformation monitoring method according to claim 1, wherein the preprocessing the building image to extract a marker image specifically comprises:
carrying out binarization processing on the building image, and extracting all suspected mark areas, wherein the suspected mark areas are areas formed by combining black and white;
and according to the geometric features of the markers, selecting an interested region containing the markers from the suspected marker region, and obtaining a marker image according to the interested region containing the markers.
3. The building deformation monitoring method according to claim 1, wherein the performing the maximum gradient projection calculation on the marker image, and obtaining the whole-pixel edge image of the marker according to the maximum gradient projection image of the marker after obtaining the maximum gradient projection image of the marker specifically includes:
acquiring the maximum gradient projection direction of each pixel point of the marker image, and respectively calculating the maximum gradient projection value of each pixel point of the marker image in the maximum gradient projection direction to obtain the maximum gradient projection image of the marker;
preprocessing the maximum gradient projection image of the marker to obtain an initial edge point of the marker;
and thinning the initial edge points and screening out smooth whole-pixel edge points to obtain a whole-pixel edge image of the marker.
4. The building deformation monitoring method according to claim 3, wherein the formula for calculating the maximum gradient projection value of each pixel point of the marker image in the maximum gradient projection direction is specifically as follows:
Gα(x, y) = Gx(x, y)·cos α + Gy(x, y)·sin α
wherein
Gx(x, y) = ∂f(x, y)/∂x, Gy(x, y) = ∂f(x, y)/∂y
α = arctan(Gy(x, y)/Gx(x, y))
wherein (x, y) are the coordinates of a pixel point in the marker image, f(x, y) is its gray value, Gx and Gy are the gradient projection values of the marker image along the horizontal and vertical directions respectively, and α is the maximum gradient projection direction.
5. The building deformation monitoring method according to claim 4, wherein the obtaining of the sub-pixel edge image of the marker according to the full-pixel edge image specifically comprises:
taking each edge point of the whole pixel edge image as a central point, respectively calculating first-order differences in row, column and two diagonal directions in a preset neighborhood range of the central point, and calculating sub-pixel positions of the corresponding edge points in the corresponding directions according to the first-order differences in the four directions;
and obtaining sub-pixel edge points of the marker according to the calculated sub-pixel positions of the edge points and the positions of the edge points in the whole pixel edge image, and obtaining a sub-pixel edge image of the marker according to the sub-pixel edge points.
6. The building deformation monitoring method according to claim 5, wherein the positioning the center of the marker according to the sub-pixel edge images of the plurality of markers to obtain the center coordinates of the marker specifically comprises:
performing primary ellipse fitting on all sub-pixel edge points in the sub-pixel edge image by adopting a preset ellipse fitting model so as to solve the ellipse fitting model;
calculating the fitting residual error of all sub-pixel edge points in the sub-pixel edge image according to the solved ellipse fitting model;
calculating a residual standard deviation according to the fitting residual of each sub-pixel edge point, screening each sub-pixel edge point according to the residual standard deviation, and performing ellipse fitting on the reserved sub-pixel edge points for multiple times again to obtain the final reserved sub-pixel edge points;
taking the center of the finally reserved sub-pixel edge points for ellipse fitting as the center of the sub-pixel edge image;
and obtaining the central coordinates of the marker according to the central coordinates of the plurality of sub-pixel edge images.
7. The building deformation monitoring method according to claim 6, wherein the obtaining of the center coordinates of the markers according to the center coordinates of the plurality of sub-pixel edge images specifically comprises:
calculating the coordinate mean value and standard deviation of the center coordinates of the plurality of sub-pixel edge images;
calculating the difference between the center coordinate of each sub-pixel edge image and the coordinate mean, and screening the center coordinates of the sub-pixel edge images according to the difference;
and screening the reserved central coordinates for multiple times, and taking the coordinate mean value of a plurality of central coordinates obtained after screening for multiple times as the central coordinates of the marker.
8. The building deformation monitoring method according to claim 1, wherein the formula for calculating the deformation of the building is specifically:
Δd = √(Δx² + Δy²)
Δx = x2 - x1
Δy = y2 - y1
wherein (x1, y1) are the center coordinates of the marker obtained at time t1, (x2, y2) are the center coordinates of the marker obtained at time t2, Δx is the horizontal deformation, Δy is the vertical deformation, and Δd is the displacement of the deformation point.
9. A building deformation monitoring apparatus, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the building deformation monitoring method of any of claims 1-8.
10. A computer readable storage medium, having one or more programs stored thereon which are executable by one or more processors to perform the steps of the building deformation monitoring method according to any one of claims 1-8.
CN202110826876.1A 2021-07-21 2021-07-21 Building deformation monitoring method, equipment and storage medium Active CN113610782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110826876.1A CN113610782B (en) 2021-07-21 2021-07-21 Building deformation monitoring method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110826876.1A CN113610782B (en) 2021-07-21 2021-07-21 Building deformation monitoring method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113610782A true CN113610782A (en) 2021-11-05
CN113610782B CN113610782B (en) 2024-01-02

Family

ID=78305075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110826876.1A Active CN113610782B (en) 2021-07-21 2021-07-21 Building deformation monitoring method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113610782B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117606362A (en) * 2023-11-23 2024-02-27 湖南科天健光电技术有限公司 Detection method and detection system for slope displacement

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551046A (en) * 2015-12-17 2016-05-04 浙江宇视科技有限公司 Vehicle face location method and device
CN106441138A (en) * 2016-10-12 2017-02-22 中南大学 Deformation monitoring method based on vision measurement
US20170261319A1 (en) * 2015-09-29 2017-09-14 Baidu Online Network Technology (Beijing) Co., Ltd. Building height calculation method, device, and storage medium
CN110823116A (en) * 2019-10-25 2020-02-21 同济大学 Image-based building component deformation measurement method
CN112381847A (en) * 2020-10-27 2021-02-19 新拓三维技术(深圳)有限公司 Pipeline end head space pose measuring method and system
CN112419287A (en) * 2020-11-27 2021-02-26 杭州鲁尔物联科技有限公司 Building deflection determination method and device and electronic equipment
CN113077467A (en) * 2021-06-08 2021-07-06 深圳市华汉伟业科技有限公司 Edge defect detection method and device for target object and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170261319A1 (en) * 2015-09-29 2017-09-14 Baidu Online Network Technology (Beijing) Co., Ltd. Building height calculation method, device, and storage medium
CN105551046A (en) * 2015-12-17 2016-05-04 浙江宇视科技有限公司 Vehicle face location method and device
CN106441138A (en) * 2016-10-12 2017-02-22 中南大学 Deformation monitoring method based on vision measurement
CN110823116A (en) * 2019-10-25 2020-02-21 同济大学 Image-based building component deformation measurement method
CN112381847A (en) * 2020-10-27 2021-02-19 新拓三维技术(深圳)有限公司 Pipeline end head space pose measuring method and system
CN112419287A (en) * 2020-11-27 2021-02-26 杭州鲁尔物联科技有限公司 Building deflection determination method and device and electronic equipment
CN113077467A (en) * 2021-06-08 2021-07-06 深圳市华汉伟业科技有限公司 Edge defect detection method and device for target object and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI Yimin et al., "A New Map Projection Method for Reducing Length Projection Deformation", Journal of Tongji University (Natural Science), vol. 35, no. 3, pages 418-421 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117606362A (en) * 2023-11-23 2024-02-27 湖南科天健光电技术有限公司 Detection method and detection system for slope displacement

Also Published As

Publication number Publication date
CN113610782B (en) 2024-01-02

Similar Documents

Publication Publication Date Title
CN107917695B (en) House inclination monitoring method based on image recognition technology
CN106441138B (en) The deformation monitoring method of view-based access control model measurement
US10083522B2 (en) Image based measurement system
CN102768762B (en) Digital camera calibration method targeted to shield tunnel defect digital radiography detection and device thereof
CN104657711B (en) A kind of readings of pointer type meters automatic identifying method of robust
WO2018204552A1 (en) Gps offset calibration for uavs
JP2007219231A (en) Aerial image processing device and aerial image processing method
WO2021212477A1 (en) Point cloud data correction method, and related device
CN106600561B (en) Aerial image perspective distortion automatic correction method based on projection mapping
CN111442845A (en) Infrared temperature measurement method and device based on distance compensation and computer storage medium
KR101793264B1 (en) Analysis method for occurrence and growth progression of crack
CN114005108A (en) Pointer instrument degree identification method based on coordinate transformation
CN113610782B (en) Building deformation monitoring method, equipment and storage medium
WO2022126339A1 (en) Method for monitoring deformation of civil structure, and related device
WO2021170051A1 (en) Digital photogrammetry method, electronic device, and system
JP3597832B2 (en) Trajectory error measurement method and trajectory error measurement system used for the method
JP6803940B2 (en) Remote meter reading computer, its method and program
CN113240635B (en) Structural object detection image quality testing method with crack resolution as reference
CN109900358A (en) A kind of Sky Types identifying system and method based on image luminance information
JP6452638B2 (en) Ledger generation device and ledger generation program
Frangione et al. Multi-step approach for automated scaling of photogrammetric micro-measurements
CN109636840B (en) Method for detecting building shadow based on ghost image
CN110081828B (en) Machine vision shield tail gap detection image grid characteristic point reliability filtering method
CN113870365B (en) Camera calibration method, device, equipment and storage medium
CN113139454B (en) Road width extraction method and device based on single image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant