CN113850869B - Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis - Google Patents

Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis

Info

Publication number
CN113850869B
CN113850869B (application CN202111059498.5A)
Authority
CN
China
Prior art keywords
foundation pit
deep foundation
image
point cloud
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111059498.5A
Other languages
Chinese (zh)
Other versions
CN113850869A (en)
Inventor
周茂
杨魁
傅嘉骥
韩俊
易莹鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Chongqing Electric Power Co Construction Branch
Original Assignee
State Grid Chongqing Electric Power Co Construction Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Chongqing Electric Power Co Construction Branch filed Critical State Grid Chongqing Electric Power Co Construction Branch
Priority to CN202111059498.5A priority Critical patent/CN113850869B/en
Publication of CN113850869A publication Critical patent/CN113850869A/en
Application granted granted Critical
Publication of CN113850869B publication Critical patent/CN113850869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/08 Investigating permeability, pore-volume, or surface area of porous materials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Dispersion Chemistry (AREA)
  • Geometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Processing (AREA)

Abstract

A deep foundation pit collapse and water seepage detection method based on radar scanning and image analysis comprises the following steps: step 1, calibrating a three-dimensional model of the deep foundation pit for a laser scanning radar and a holographic camera; step 2, engineering-mode management of the original model data of the deep foundation pit; step 3, preprocessing the original point cloud of the deep foundation pit; step 4, registering the original point cloud data of the deep foundation pit with the panoramic image; step 5, three-dimensional modeling of the deep foundation pit based on the image point cloud; step 6, if the inner-section model of the deep foundation pit is compressed over a large range and crack growth exceeds a threshold value, determining that collapse is about to occur; and step 7, comparing the image data of the deep foundation pit, and determining that water seepage has occurred if the change in the radar scanning data exceeds a threshold value while the image data remains essentially unchanged. The invention builds a real-time three-dimensional model of the deep foundation pit from the three-dimensional coordinates and RGB values of the point cloud data, and fuses the results to form a three-dimensional structure diagram of the deep foundation pit with high precision, small data volume, high fineness and true texture (color).

Description

Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis
Technical Field
The invention belongs to the field of digital image processing and computer application, and particularly relates to a deep foundation pit collapse water seepage detection method based on radar scanning and image analysis.
Background
Collapse and water seepage detection of a deep foundation pit allows potential safety hazards at an engineering construction site to be found in time and construction accidents to be effectively prevented, and is therefore of very important practical significance. Using laser radar scanning point cloud data and ordinary two-dimensional image data, combined with conventional point cloud data processing methods, image processing algorithms and texture analysis techniques, a three-dimensional object model of the site construction environment can be built online in real time, the characteristics of site objects analyzed, and change trends of the site construction environment discovered in time. Once a dangerous situation is found, audible and visual alarms and various broadcast measures are adopted immediately, so that the parties concerned (the constructor, the supervisor, the manager and the engineering site) can take the necessary emergency measures in time, and injury caused by the dangerous situation is avoided or reduced.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a deep foundation pit collapse and water seepage detection method based on radar scanning and image analysis, which takes high-density true color laser scanning point cloud data as its data source, uses the three-dimensional coordinates and RGB values of the point cloud data to construct a real-time three-dimensional model of the deep foundation pit, and fuses the results to form a three-dimensional structure diagram of the deep foundation pit with high precision, small data volume, high fineness and true texture (color).
The technical scheme adopted for solving the technical problems is as follows:
a deep foundation pit collapse water seepage detection method based on radar scanning and image analysis comprises the following steps:
step 1, calibrating a deep foundation pit three-dimensional model of a laser scanning radar and a holographic camera: firstly, fixing the relative positions of a laser scanning radar and a holographic camera, and keeping the relative positions unchanged all the time; selecting a set number of characteristic points, establishing a one-to-one correspondence between characteristic point cloud data and image data, further establishing correspondence between all other point cloud data and image data, and establishing a mapping model between the point cloud data and the image data;
step 2, engineering mode management of deep foundation pit original model data: acquiring original point cloud and holographic image data through a three-dimensional laser scanner, and storing, managing and searching the output point cloud and holographic image data in an engineering mode;
Step 3, preprocessing original point clouds of the deep foundation pit: splicing, denoising, classifying and filtering the original point cloud, and outputting preprocessed point cloud data;
Step 4, registering original point cloud data of the deep foundation pit with the panoramic image: associating the three-dimensional point cloud data with the holographic image, automatically carrying out registration mapping according to a first relation, and outputting image point cloud data;
step 5, three-dimensional modeling of the deep foundation pit based on the image point cloud;
Step 6, deep foundation pit collapse detection based on radar scanning and image comparison: comparing the three-dimensional model of the deep foundation pit, and if the section model in the deep foundation pit is compressed in a large range and the crack is increased to exceed a threshold value, determining that collapse is about to occur;
step 7, deep foundation pit water seepage detection based on radar scanning and image comparison: and comparing the image data of the deep foundation pit, and determining that water seepage occurs if the radar scanning data change exceeds a threshold value under the condition that the image data is basically unchanged.
In the step 5, the process of three-dimensional modeling of the deep foundation pit based on the image point cloud is as follows:
5.1, rapidly drawing the contour line of the inner section of the deep foundation pit by utilizing a point cloud section on a three-dimensional point cloud top view, automatically calculating the inner section of the deep foundation pit by utilizing the point cloud and stretching the contour to construct a deep foundation pit inner section model; acquiring three-dimensional coordinates of the inner section of the high-density deep foundation pit, and constructing a model of the inner section of the deep foundation pit by utilizing the three-dimensional coordinates of the three-dimensional laser point cloud data;
5.2, panoramic texture mapping: for the constructed deep foundation pit inner section model, texture extraction is supported through fusion with a holographic image, mapping textures corresponding to the deep foundation pit inner section model are displayed in a three-dimensional model, image data of a scanning object are obtained by combining a camera arranged on a laser scanner, and an orthographic image of the deep foundation pit inner section is constructed by utilizing RGB values of three-dimensional laser point cloud data;
5.3, drawing the inner section of the deep foundation pit in detail: and fusing the deep foundation pit inner section model with the orthographic image to form a three-dimensional model of the deep foundation pit inner section, and further constructing a three-dimensional model of the fine stone, the crack and the pit hole.
In step 6, the crack extraction process is as follows: the crack pixels are first searched for in the depth image, those pixels are then back-projected into the original two-dimensional visible image and the corresponding texture information is looked up, and finally the crack information is obtained. The steps are as follows:
6.1) The depth map obtained by laser scanning is subjected to simple median filtering, erosion, shrinkage and dilation operations; since the depth map has no complex texture, the image is relatively smooth and its edges are clear, so the quality of the depth map is not greatly affected by these image processing operations;
6.2) The depth images before and after filtering are compared and the pixels whose values changed are marked, yielding the center-line pixels of candidate cracks; if these center-line pixels extend continuously or form a larger circular structure, namely the extension length or the circumference of the circle exceeds a certain value, they are provisionally identified as a crack structure;
6.3) The crack pixels marked in step 6.2) are back-projected to the corresponding positions in the original two-dimensional texture image and the texture structure at those positions is analyzed, i.e. median filtering, erosion, shrinkage and dilation operations are performed on the texture image there; if the resulting pixel information and shape are the same as those of the crack center-line image in step 6.2), it is determined that a crack exists at that position;
6.4) For the crack confirmed in step 6.3), the length, width and number of pixels of the crack are counted as a measure of its size; the development trend of the crack is further analyzed from this information, and if the crack grows beyond the threshold value it is judged that collapse is about to occur.
In step 7, water seepage identification starts from analysis of the depth image. First, the pixel values of two depth-image frames separated by a fixed interval are compared, the place with the largest depth change in the depth image is found, and the continuous closed region whose pixel values changed is extracted. The pixel values of the same region in the two-dimensional texture images of the same time frames are then compared; if the pixel-value change in that region of the texture images does not exceed the corresponding threshold, water seepage is likely to exist in the region (because of the transparency of water, the depth image changes when seepage occurs while the texture image remains essentially unchanged). Finally the region is compared against water-shape models; if it matches one of the model types, it is confirmed that water seepage exists in the region, and different early-warning and automatic disposal measures are further provided according to the area, volume, seepage change speed and other information of the seepage region.
The beneficial effects of the invention are mainly as follows: taking high-density true color laser scanning point cloud data as the data source, a real-time three-dimensional model of the deep foundation pit is constructed from the three-dimensional coordinates and RGB values of the point cloud data, and the results are fused to form a three-dimensional structure diagram of the deep foundation pit with high precision, small data volume, high fineness and true texture (color).
Drawings
Fig. 1 is a flow chart of deep foundation pit construction.
Fig. 2 is a laser radar scan depth map.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, a deep foundation pit collapse water seepage detection method based on radar scanning and image analysis comprises the following steps:
step 1, calibrating a deep foundation pit three-dimensional model of a laser scanning radar and a holographic camera: firstly, fixing the relative positions of a laser scanning radar and a holographic camera, and keeping the relative positions unchanged all the time; selecting a set number of characteristic points, establishing a one-to-one correspondence between characteristic point cloud data and image data, further establishing correspondence between all other point cloud data and image data, and establishing a mapping model between the point cloud data and the image data;
step 2, engineering mode management of deep foundation pit original model data: acquiring original point cloud and holographic image data through a three-dimensional laser scanner, and storing, managing and searching the output point cloud and holographic image data in an engineering mode;
Step 3, preprocessing original point clouds of the deep foundation pit: splicing, denoising, classifying and filtering the original point cloud, and outputting preprocessed point cloud data;
Step 4, registering original point cloud data of the deep foundation pit with the panoramic image: associating the three-dimensional point cloud data with the holographic image, automatically carrying out registration mapping according to a first relation, and outputting image point cloud data;
step 5, three-dimensional modeling of the deep foundation pit based on the image point cloud;
Step 6, deep foundation pit collapse detection based on radar scanning and image comparison: comparing the three-dimensional model of the deep foundation pit, and if the section model in the deep foundation pit is compressed in a large range and the crack is increased to exceed a threshold value, determining that collapse is about to occur;
step 7, deep foundation pit water seepage detection based on radar scanning and image comparison: and comparing the image data of the deep foundation pit, and determining that water seepage occurs if the radar scanning data change exceeds a threshold value under the condition that the image data is basically unchanged.
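The collapse and seepage criteria in steps 6 and 7 above reduce to simple threshold comparisons. The following minimal sketch illustrates the two decision rules; the function names and all threshold values are illustrative assumptions, not taken from the patent:

```python
def collapse_imminent(compressed_fraction, crack_growth,
                      area_threshold=0.5, crack_threshold=0.2):
    """Step 6: collapse is judged imminent when the inner-section model is
    compressed over a large range AND crack growth exceeds its threshold.
    Both threshold values here are hypothetical placeholders."""
    return compressed_fraction > area_threshold and crack_growth > crack_threshold


def seepage_detected(radar_change, image_change,
                     radar_threshold=0.3, image_threshold=0.05):
    """Step 7: seepage is judged present when the radar (depth) data change
    exceeds its threshold while the texture image stays essentially unchanged."""
    return radar_change > radar_threshold and image_change < image_threshold
```

Because water is transparent, seepage shifts the measured depth without altering the visible texture; the conjunction of a large radar change with a small image change is what distinguishes seepage from ordinary surface movement.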
In the step 5, the process of three-dimensional modeling of the deep foundation pit based on the image point cloud is as follows:
5.1, rapidly drawing the contour line of the inner section of the deep foundation pit by utilizing a point cloud section on a three-dimensional point cloud top view, automatically calculating the inner section of the deep foundation pit by utilizing the point cloud and stretching the contour to construct a deep foundation pit inner section model; acquiring three-dimensional coordinates of the inner section of the high-density deep foundation pit, and constructing a model of the inner section of the deep foundation pit by utilizing the three-dimensional coordinates of the three-dimensional laser point cloud data;
5.2, panoramic texture mapping: for the constructed deep foundation pit inner section model, texture extraction is supported through fusion with a holographic image, mapping textures corresponding to the deep foundation pit inner section model are displayed in a three-dimensional model, image data of a scanning object are obtained by combining a camera arranged on a laser scanner, and an orthographic image of the deep foundation pit inner section is constructed by utilizing RGB values of three-dimensional laser point cloud data;
5.3, drawing the inner section of the deep foundation pit in detail: and fusing the deep foundation pit inner section model with the orthographic image to form a three-dimensional model of the deep foundation pit inner section, and further constructing a three-dimensional model of the fine stone, the crack and the pit hole.
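Step 5.2 attaches RGB values from the registered holographic image to the point cloud. Under the assumption that each point has already been projected to integer pixel coordinates (the registration of step 4), a minimal sketch of the texture lookup is:

```python
import numpy as np

def colorize_points(points_uv, image):
    """Look up the RGB value of each projected point (step 5.2).
    points_uv: (N, 2) integer (u, v) pixel coordinates of the points;
    image:     (H, W, 3) registered holographic image.
    Returns an (N, 3) array of RGB values, one per point."""
    u = np.clip(points_uv[:, 0], 0, image.shape[1] - 1)  # column index
    v = np.clip(points_uv[:, 1], 0, image.shape[0] - 1)  # row index
    return image[v, u]
```

Each point then carries both its three-dimensional coordinates (used for the inner-section model) and an RGB value (used for the orthographic texture image), which is the basis of the fusion in step 5.3.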
In this embodiment, the principle of the fusion of point cloud data and camera data is described first. The relevant coordinate systems involved in the method are: the world coordinate system, the camera coordinate system, the image plane coordinate system, and the image storage coordinate system.
World coordinate system: the coordinate system in which the photographed object lies in three-dimensional space.
Camera coordinate system: the coordinate system with the optical center of the camera lens as its origin.
Image plane coordinate system: the coordinate system with the intersection of the image plane and the camera optical axis as its origin.
Image storage coordinate system: the coordinate system with the upper left corner of the image as its origin.
The world coordinate system is converted into the camera coordinate system: O_W is the origin of the world coordinate system, O_C is the origin of the camera coordinate system, and X_W, Y_W, Z_W and X_C, Y_C, Z_C are the X-axis, Y-axis and Z-axis of the world coordinate system and the camera coordinate system, respectively. In the two coordinate systems the X_W, Y_W directions coincide with the X_C, Y_C directions, while the Z_W and Z_C directions are opposite. The position coordinates of an object in the world coordinate system are absolute values, irrespective of the position of the camera. The world coordinates can be converted to camera coordinates by:

    [x_C, y_C, z_C]^T = R [x_W, y_W, z_W]^T + T    (1)

wherein R is an orthogonal rotation matrix of size 3×3 and T is a translation matrix of size 3×1.
The camera coordinate system is converted into the image plane coordinate system: in the camera pinhole model, any point P in space has coordinates P(x_C, y_C, z_C) in the camera coordinate system. The line O_C P connecting the point P and the camera optical center O_C intersects the image plane at the point p(x, y). The optical axis Z_C of the camera is perpendicular to the image plane, and their intersection O_1 is the origin of the image plane coordinate system. The focal length of the camera is the length of the segment between the optical center O_C and the origin O_1, of size f. From the relationship between similar triangles it is easy to obtain:

    x / f = x_C / z_C,    y / f = y_C / z_C    (2)

namely:

    x = f x_C / z_C,    y = f y_C / z_C    (3), (4)

Expressions (3)-(4) written in homogeneous coordinate form become:

    z_C [x, y, 1]^T = [f 0 0 0; 0 f 0 0; 0 0 1 0] [x_C, y_C, z_C, 1]^T    (5)

The conversion of the camera coordinate system into the image plane coordinate system can be accomplished using equations (3)-(5).
The image plane coordinate system is converted into the image storage coordinate system: O_1 is the origin of the image plane coordinate system, and O_0 is the origin of the image storage coordinate system. In the image storage coordinate system the coordinates of O_1 are (u_0, v_0). Assuming that the length of each pixel in the x-axis and y-axis directions is d_x and d_y respectively, any point (u, v) in the image storage coordinate system can be expressed by:

    u = x / d_x + u_0,    v = y / d_y + v_0    (6)

which in homogeneous coordinate form can be expressed as:

    [u, v, 1]^T = [1/d_x 0 u_0; 0 1/d_y v_0; 0 0 1] [x, y, 1]^T    (7)
Combining equations (1), (5) and (7), the coordinate relationship of the point P between the image storage coordinate system and the world coordinate system is obtained:

    z_C [u, v, 1]^T = Q K [x_W, y_W, z_W, 1]^T = P [x_W, y_W, z_W, 1]^T    (8)

In formula (8), Q = [a_x 0 u_0 0; 0 a_y v_0 0; 0 0 1 0] is a 3×4 intrinsic parameter matrix, where a_x and a_y are the ratios of the physical focal length f of the lens to the pixel lengths d_x and d_y in the x-axis and y-axis directions, that is:

    a_x = f / d_x,    a_y = f / d_y    (9)

K is a 4×4 extrinsic parameter matrix whose entries are determined by the rotation angle of the camera and by the position of the camera in the world coordinate system. P = QK is a 3×4 projection matrix, determined jointly by the intrinsic and extrinsic parameters.
Equations (3)-(8) show that, given the camera intrinsic and extrinsic parameters (which can be derived from camera calibration), for any point P(x_W, y_W, z_W) in the world coordinate system the projected point coordinates (u, v) in the image storage coordinate system can be found. In the opposite direction, however, if only the pixel coordinates (u, v) are known, the unique corresponding spatial point P(x_W, y_W, z_W) cannot be obtained; the unique corresponding point P can be determined if and only if the depth value z_C of the pixel is also known. Therefore the depth value of each pixel is generally calculated before the image is synthesized.
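The forward direction of this derivation, from world coordinates to storage (pixel) coordinates, chains equations (1) and (3)-(7) together. A minimal sketch, with all numeric parameter values in the example chosen purely for illustration:

```python
import numpy as np

def project(p_w, R, T, f, dx, dy, u0, v0):
    """World point -> storage (pixel) coordinates.
    p_w: world point (3,); R, T: extrinsic rotation/translation;
    f: focal length; dx, dy: pixel sizes; (u0, v0): principal point.
    Returns (u, v) and the depth z_C needed to invert the mapping."""
    p_c = R @ p_w + T            # eq. (1): world -> camera
    x = f * p_c[0] / p_c[2]      # eqs. (3)-(4): camera -> image plane
    y = f * p_c[1] / p_c[2]
    u = x / dx + u0              # eq. (6): image plane -> storage
    v = y / dy + v0
    return u, v, p_c[2]
```

With R the identity and T zero, a point on the optical axis projects to the principal point (u_0, v_0), as the equations require.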
When work is carried out manually in the deep foundation pit, cameras and laser radars are arranged above and below the pit on the soil-lifting frame, and during data analysis the three sets of image data must be synthesized into one complete three-dimensional image of the deep foundation pit. In the fusion of point cloud data and photographic data, the core step is this three-dimensional image transformation: each pixel in the image storage coordinate system is projected into the world coordinate system using its corresponding depth information, i.e. the 2-dimensional image is converted into 3-dimensional space, after which display, rotation and viewpoint switching can be realized very conveniently. On this basis, operations such as filtering, erosion, shrinkage and dilation of the image can be performed in world coordinates and the required features extracted.
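The 2-D to 3-D conversion just described (a pixel plus its depth back to a spatial point) is equation (8) run in reverse once z_C is known. The sketch below back-projects an entire depth map into camera-frame points; parameter names follow the derivation above, while the implementation itself is an illustrative assumption:

```python
import numpy as np

def depth_to_points(depth, f, dx, dy, u0, v0):
    """Back-project every pixel of a depth map into camera-frame 3-D points.
    depth: (H, W) array of z_C values per pixel.
    Returns an (H, W, 3) array of (x_C, y_C, z_C) points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]         # pixel grid (row v, column u)
    x = (u - u0) * dx * depth / f     # invert eq. (6), then eq. (3)
    y = (v - v0) * dy * depth / f     # invert eq. (6), then eq. (4)
    return np.stack([x, y, depth], axis=-1)
```

A transform of this form would let the later filtering and feature-extraction steps operate on true spatial coordinates rather than raw pixels.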
For crack extraction, the crack pixels are first searched for in the depth image, those pixels are then back-projected into the original two-dimensional visible image and the corresponding texture information is looked up, and finally the crack information is obtained. The steps are as follows:
6.1) A number of operations such as simple median filtering, erosion, shrinkage and dilation are performed on the depth map obtained by laser scanning. Since the depth map has no complex texture, the image is relatively smooth and its edges are clear, so the quality of the depth map is not greatly affected by these image processing operations;
6.2) The depth images before and after filtering are compared and the pixels whose values changed are marked, yielding the center-line pixels of candidate cracks; if these center-line pixels extend continuously or form a larger circular structure, namely the extension length or the circumference of the circle exceeds a certain value, they are provisionally identified as a crack structure;
6.3) The crack pixels marked in step 6.2) are back-projected to the corresponding positions in the original two-dimensional texture image and the texture structure at those positions is analyzed, i.e. median filtering, erosion, shrinkage and dilation operations are performed on the texture image there; if the resulting pixel information and shape are the same as those of the crack center-line image in step 6.2), it is determined that a crack exists at that position;
6.4) For the crack confirmed in step 6.3), the length, width and number of pixels of the crack are counted as a measure of its size; the development trend of the crack is further analyzed from this information, and if the crack grows beyond the threshold value it is determined that collapse is about to occur, a corresponding alarm signal is given out and corresponding precautionary measures are taken.
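Steps 6.1)-6.2) can be sketched with a plain 3×3 median filter: a thin deep structure disappears under the filter, so the pixels that change under filtering mark the candidate crack center line. The implementation below is a minimal illustration under that assumption, not the patented algorithm itself:

```python
import numpy as np

def median3(img):
    """3x3 median filter (step 6.1); border pixels are left unchanged."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

def crack_centerline(depth):
    """Step 6.2: pixels whose value changes under filtering are the
    candidate crack center line (the median removes thin deep lines)."""
    return median3(depth) != depth

def crack_size(mask):
    """Step 6.4: pixel count and bounding extent as crude crack measures."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0, 0
    extent = int(max(xs.max() - xs.min(), ys.max() - ys.min()) + 1)
    return int(mask.sum()), extent
```

On a flat depth map containing a one-pixel-wide deep line, exactly the line's interior pixels change under the filter, giving the center line whose growth is then tracked against the collapse threshold.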
In step 7, water seepage identification starts from analysis of the depth image. First, the pixel values of two depth-image frames separated by a fixed interval are compared, the place with the largest depth change is found, and the continuous closed region whose pixel values changed is extracted; the pixel values of the same region in the two two-dimensional texture images captured at the same times are then compared, and if the change in that region of the texture images is small, water seepage may exist in the region. Finally the region is compared against water-shape models; if it matches one of the model types, it is confirmed that water seepage exists in the region, and different early-warning and automatic disposal measures are further provided according to the area, volume, seepage change speed and other information of the seepage region.
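The depth-versus-texture comparison at the heart of step 7 can be sketched as a per-pixel mask: depth changed noticeably, texture did not. The thresholds and array shapes below are illustrative assumptions:

```python
import numpy as np

def seepage_mask(depth_prev, depth_next, tex_prev, tex_next,
                 depth_thresh=0.5, tex_thresh=10):
    """Mark pixels where the depth image changed between two frames while
    the grey-level texture image stayed essentially unchanged (water is
    transparent: it shifts the measured depth but not the appearance).
    depth_*: (H, W) float depth maps; tex_*: (H, W) uint8 texture images."""
    depth_changed = np.abs(depth_next - depth_prev) > depth_thresh
    tex_stable = np.abs(tex_next.astype(int) - tex_prev.astype(int)) < tex_thresh
    return depth_changed & tex_stable
```

In the patented method the resulting closed region would then be matched against the water-shape models before an alarm is raised and disposal measures are chosen.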
The deep foundation pit construction flow of the embodiment is as follows:
1) The deep foundation pit working machine is erected; it comprises a mechanical frame, an edge processing gateway, cameras, laser radars and the like, together with the actuators and automatic trigger devices related to engineering alarming and circuit-breaking, such as ventilation devices, automatic power-off devices and alarm uploading equipment;
2) Engineering report and equipment data uploading;
3) Starting engineering construction, starting working of a camera and a laser radar, and recording engineering construction process in real time;
4) When dangerous gas in the deep foundation pit is detected to exceed the standard, the blower is automatically started to exhaust and ventilate and alarm information is automatically uploaded, until the gas concentration in the deep foundation pit meets the requirement;
5) The radar and the camera detect cracks and monitor their development in real time; when a crack reaches the alarm value, the local audible and visual alarm is started and alarm information is uploaded; when the crack reaches the upper alarm limit, alarm information is uploaded, all power supplies except the escape passage are simultaneously cut off, and the escape process is started;
6) The radar and the camera detect water seepage and monitor its development in real time; when the seepage reaches the alarm value, the local audible and visual alarm is started and alarm information is uploaded; when the seepage reaches the upper alarm limit, all power supplies except the escape passage are cut off and the escape process is started;
7) This continues until the construction is completed, after which the equipment is dismantled.
The embodiments described in this specification are merely illustrative of the manner in which the inventive concept may be implemented. The scope of the present invention should not be construed as limited to the specific forms set forth in the embodiments; it also covers the equivalents thereof that would occur to one skilled in the art based on the inventive concept.

Claims (4)

1. The deep foundation pit collapse water seepage detection method based on radar scanning and image analysis is characterized by comprising the following steps of:
step 1, calibrating a deep foundation pit three-dimensional model of a laser scanning radar and a holographic camera: firstly, fixing the relative positions of a laser scanning radar and a holographic camera, and keeping the relative positions unchanged all the time; selecting a set number of characteristic points, establishing a one-to-one correspondence between characteristic point cloud data and image data, further establishing correspondence between all other point cloud data and image data, and establishing a mapping model between the point cloud data and the image data;
step 2, engineering mode management of deep foundation pit original model data: acquiring original point cloud and holographic image data through a three-dimensional laser scanner, and storing, managing and searching the output point cloud and holographic image data in an engineering mode;
Step 3, preprocessing original point clouds of the deep foundation pit: splicing, denoising, classifying and filtering the original point cloud, and outputting preprocessed point cloud data;
Step 4, registering original point cloud data of the deep foundation pit with the panoramic image: associating the three-dimensional point cloud data with the holographic image, automatically carrying out registration mapping according to a first relation, and outputting image point cloud data;
Step 5, three-dimensional modeling of the deep foundation pit based on the image point cloud;
Step 6, deep foundation pit collapse detection based on radar scanning and image comparison: comparing the three-dimensional models of the deep foundation pit; if the inner-section model of the deep foundation pit is compressed over a large area and crack growth exceeds a threshold value, determining that collapse is imminent;
Step 7, deep foundation pit water seepage detection based on radar scanning and image comparison: comparing the image data of the deep foundation pit; if the radar scanning data change exceeds a threshold value while the image data remains essentially unchanged, determining that water seepage has occurred.
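The point-cloud-to-image correspondence of steps 1 and 4 can be illustrated with a minimal sketch. This is not the patented calibration procedure itself; it assumes an ideal pinhole camera, and the intrinsics (`fx`, `fy`, `cx`, `cy`) and the rigid lidar-to-camera transform (`R`, `t`) are hypothetical values of the kind the feature-point calibration in step 1 would produce.

```python
def project_point(point, R, t, fx, fy, cx, cy):
    """Map one lidar point (x, y, z) to a pixel (u, v) in the holographic image."""
    # Rigid transform into the camera frame: p_cam = R @ p + t
    x = sum(R[0][i] * point[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * point[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * point[i] for i in range(3)) + t[2]
    # Ideal pinhole projection onto the image plane
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Identity rotation, zero translation: a point on the optical axis maps to
# the principal point (cx, cy).
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
print(project_point((0.0, 0.0, 5.0), R, t, fx=800, fy=800, cx=640, cy=360))  # → (640.0, 360.0)
```

Once this mapping is calibrated from the selected feature points, every remaining lidar point inherits an image pixel, which is the "first relation" used for the automatic registration mapping of step 4.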
2. The deep foundation pit collapse and water seepage detection method based on radar scanning and image analysis according to claim 1, wherein in step 5 the process of three-dimensional modeling of the deep foundation pit based on the image point cloud is as follows:
5.1, rapidly drawing the contour line of the inner section of the deep foundation pit on a three-dimensional point cloud top view using point cloud sections; automatically computing the inner section of the deep foundation pit from the point cloud and stretching the contour to construct the deep foundation pit inner-section model; acquiring high-density three-dimensional coordinates of the inner section, and constructing the inner-section model from the three-dimensional coordinates of the three-dimensional laser point cloud data;
5.2, panoramic texture mapping: for the constructed deep foundation pit inner-section model, supporting texture extraction through fusion with the holographic image and displaying the mapped textures corresponding to the model in the three-dimensional model; obtaining image data of the scanned object with a camera mounted on the laser scanner, and constructing an orthographic image of the inner section from the RGB values of the three-dimensional laser point cloud data;
5.3, drawing the inner section of the deep foundation pit in detail: fusing the deep foundation pit inner-section model with the orthographic image to form a three-dimensional model of the inner section, and further constructing three-dimensional models of fine stones, cracks and pits.
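The orthographic-image construction of step 5.2 can be sketched as follows. This is an illustrative simplification, not the claimed implementation: each point is assumed to carry `(x, y, z, r, g, b)`, points are binned onto a regular XY grid, and the RGB of the shallowest (highest-z) point per cell is kept; the grid resolution `cell` is a hypothetical parameter.

```python
def ortho_image(points, cell=0.5):
    """Bin colored points onto an XY grid; keep the color of the highest z per cell."""
    grid = {}  # (col, row) -> (z, (r, g, b))
    for x, y, z, r, g, b in points:
        key = (int(x // cell), int(y // cell))
        # A later point replaces the stored one only if it is higher (closer to the viewer)
        if key not in grid or z > grid[key][0]:
            grid[key] = (z, (r, g, b))
    return {k: rgb for k, (z, rgb) in grid.items()}

pts = [(0.1, 0.1, 1.0, 10, 10, 10),
       (0.2, 0.3, 2.0, 200, 200, 200),  # higher point in the same cell wins
       (1.2, 0.1, 0.5, 50, 60, 70)]
img = ortho_image(pts)
print(img[(0, 0)])  # → (200, 200, 200)
```

A production pipeline would rasterize the sparse grid into a dense image and interpolate empty cells, but the cell-wise RGB lookup above is the core of turning colored laser points into an orthographic texture.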
3. The deep foundation pit collapse and water seepage detection method based on radar scanning and image analysis according to claim 1 or 2, wherein in step 6 the crack extraction process is as follows: first searching for crack pixels in the depth image, then back-projecting the crack pixels into the original two-dimensional visible image and looking up the corresponding texture information, and finally obtaining the crack information; the steps are as follows:
6.1) performing median filtering, erosion and dilation on the depth map obtained by laser scanning;
6.2) comparing the depth images before and after filtering and marking the pixels whose values changed, thereby obtaining the crack centerline pixels; if the centerline pixels extend continuously or form a larger circular structure, i.e. the extension length exceeds a set value or the circle circumference exceeds a set value, they can be provisionally regarded as a crack structure;
6.3) back-projecting the crack pixels marked in step 6.2) onto the corresponding positions of the original two-dimensional texture image, and analyzing the texture structure at those positions, i.e. performing median filtering, erosion and dilation operations on the texture image there; if the pixel information and shape match the crack centerline image of step 6.2), determining that a crack exists at those positions;
6.4) counting the length, width and number of pixels of the crack identified in step 6.3) as a measure of the crack size, and analyzing the crack's development trend from this information; if crack growth exceeds a threshold value, determining that collapse is imminent, issuing a corresponding alarm signal and taking the corresponding precautionary measures.
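The filter-and-compare idea of steps 6.1) and 6.2) can be sketched in a few lines. This is a crude stand-in for the claimed pipeline, assuming a plain 3x3 median filter in pure Python (border pixels left unchanged) and no erosion/dilation stage: a narrow deep groove is removed by the median filter, so exactly its pixels differ between the original and filtered depth maps.

```python
def median3x3(img):
    """3x3 median filter on a 2D list of depth values; borders are copied through."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = sorted(img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = win[4]  # median of the 9-pixel window
    return out

def changed_pixels(a, b):
    """Pixels whose depth changed after filtering -> candidate crack-centerline pixels."""
    return [(r, c) for r in range(len(a)) for c in range(len(a[0])) if a[r][c] != b[r][c]]

# A flat 5 m wall with a one-pixel-wide deeper groove down column 2:
depth = [[5.0] * 5 for _ in range(5)]
for r in range(5):
    depth[r][2] = 5.4  # the simulated crack
filtered = median3x3(depth)
print(changed_pixels(depth, filtered))  # → [(1, 2), (2, 2), (3, 2)]
```

The continuity test of step 6.2) would then check whether these marked pixels form a sufficiently long connected run before treating them as a crack candidate.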
4. The deep foundation pit collapse and water seepage detection method based on radar scanning and image analysis according to claim 1 or 2, wherein in step 7 the water seepage identification and statistics start from depth image analysis: first, the pixel values of two depth image frames captured at an interval are compared, the locations with the largest depth change are found, and the continuous closed region of changed pixel values is extracted; then the pixel values of the same region in the two two-dimensional texture image frames captured at the same times are compared, and if their change does not exceed the corresponding threshold value, water seepage is likely present in the region, since the transparency of water leaves the texture image essentially unchanged; finally, the region is compared against the water-shape model of seepage, and if it matches, the presence of seepage in the region is confirmed; different early-warning measures and automatic treatment measures are then issued according to the area, volume and rate-of-change information of the seepage region.
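The screening rule at the heart of claim 4 (depth changed, texture essentially unchanged) can be sketched as below. This is an illustration only, not the claimed system: the depth and texture thresholds `d_thr` and `t_thr` are hypothetical values, texture is assumed to be grayscale, and region extraction and the water-shape model comparison are omitted.

```python
def seepage_candidates(depth0, depth1, tex0, tex1, d_thr=0.2, t_thr=10):
    """Pixels where depth changed beyond d_thr while texture stayed within t_thr."""
    cand = []
    for r in range(len(depth0)):
        for c in range(len(depth0[0])):
            d_change = abs(depth1[r][c] - depth0[r][c])
            t_change = abs(tex1[r][c] - tex0[r][c])
            # Transparent water shifts the laser depth reading but barely
            # alters the visible texture, so require both conditions.
            if d_change > d_thr and t_change <= t_thr:
                cand.append((r, c))
    return cand

depth0 = [[5.0, 5.0], [5.0, 5.0]]
depth1 = [[5.0, 5.5], [5.0, 5.5]]  # depth shifts in the right-hand column
tex0 = [[100, 100], [100, 100]]
tex1 = [[100, 103], [100, 180]]    # texture also changes markedly at (1, 1)
print(seepage_candidates(depth0, depth1, tex0, tex1))  # → [(0, 1)]
```

Pixel (1, 1) is rejected despite its depth change because its texture also changed, which under the claim's reasoning points to a physical deformation rather than transparent water.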
CN202111059498.5A 2021-09-10 2021-09-10 Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis Active CN113850869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111059498.5A CN113850869B (en) 2021-09-10 2021-09-10 Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis


Publications (2)

Publication Number Publication Date
CN113850869A CN113850869A (en) 2021-12-28
CN113850869B true CN113850869B (en) 2024-06-04

Family

ID=78973897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111059498.5A Active CN113850869B (en) 2021-09-10 2021-09-10 Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis

Country Status (1)

Country Link
CN (1) CN113850869B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998279A (en) * 2022-06-16 2022-09-02 华侨大学 Method for identifying and positioning pits and cracks on surface of stone slab
CN115294190B (en) * 2022-10-08 2022-12-30 南通致和祥智能装备有限公司 Method for measuring landslide volume for municipal engineering based on unmanned aerial vehicle
CN117541537B (en) * 2023-10-16 2024-05-24 江苏星湖科技有限公司 Space-time difference detection method and system based on all-scenic-spot cloud fusion technology

Citations (4)

Publication number Priority date Publication date Assignee Title
KR20150128300A (en) * 2014-05-09 2015-11-18 한국건설기술연구원 method of making three dimension model and defect analysis using camera and laser scanning
CN105203551A (en) * 2015-09-11 2015-12-30 尹栋 Car-mounted laser radar tunnel detection system, autonomous positioning method based on tunnel detection system and tunnel hazard detection method
CN110196039A (en) * 2019-05-27 2019-09-03 昌鑫生态科技(陕西)有限公司 Explore 3 dimension imaging technology in waste and old pit
CN111473739A (en) * 2020-04-24 2020-07-31 中铁隧道集团二处有限公司 Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area


Non-Patent Citations (1)

Title
Joint calibration method for an integrated 2D and 3D vision sensing system; Li Lin; Zhang Xu; Tu Dawei; Chinese Journal of Scientific Instrument; 2012-11-15 (Issue 11); full text *

Also Published As

Publication number Publication date
CN113850869A (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN113850869B (en) Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis
CN109215063B (en) Registration method of event trigger camera and three-dimensional laser radar
CN106651752B (en) Three-dimensional point cloud data registration method and splicing method
CN112793564B (en) Autonomous parking auxiliary system based on panoramic aerial view and deep learning
CN113345019B (en) Method, equipment and medium for measuring potential hazards of transmission line channel target
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN106530345B (en) A kind of building three-dimensional laser point cloud feature extracting method under same machine Image-aided
CN107869954B (en) Binocular vision volume weight measurement system and implementation method thereof
CN113096183B (en) Barrier detection and measurement method based on laser radar and monocular camera
CN110766899A (en) Method and system for enhancing electronic fence monitoring early warning in virtual environment
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
Zhou et al. Image-based onsite object recognition for automatic crane lifting tasks
CN113313116B (en) Underwater artificial target accurate detection and positioning method based on vision
CN114089330B (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
CN112802208B (en) Three-dimensional visualization method and device in terminal building
CN107590444A (en) Detection method, device and the storage medium of static-obstacle thing
WO2020199057A1 (en) Self-piloting simulation system, method and device, and storage medium
CN109360269B (en) Ground three-dimensional plane reconstruction method based on computer vision
Jiang et al. Full-field deformation measurement of structural nodes based on panoramic camera and deep learning-based tracking method
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium
CN115546216A (en) Tray detection method, device, equipment and storage medium
CN116051537A (en) Crop plant height measurement method based on monocular depth estimation
CN114155349A (en) Three-dimensional mapping method, three-dimensional mapping device and robot
CN111489384B (en) Method, device, system and medium for evaluating shielding based on mutual viewing angle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant