CN113538350A - Method for identifying depth of foundation pit based on multiple cameras - Google Patents

Method for identifying depth of foundation pit based on multiple cameras

Info

Publication number
CN113538350A
Authority
CN
China
Prior art keywords
cameras
pit
depth
side wall
machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110731925.3A
Other languages
Chinese (zh)
Other versions
CN113538350B (en)
Inventor
刘玉建
轩阳
张龙亮
黄萌萌
王以安
Other inventors have requested not to disclose their names
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Shenbao Investment Development Co ltd
Original Assignee
Hebei Shenbao Investment Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Shenbao Investment Development Co ltd filed Critical Hebei Shenbao Investment Development Co ltd
Priority to CN202110731925.3A priority Critical patent/CN113538350B/en
Publication of CN113538350A publication Critical patent/CN113538350A/en
Application granted granted Critical
Publication of CN113538350B publication Critical patent/CN113538350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for identifying the depth of a foundation pit based on multiple cameras, in the field of construction, comprising the following steps. S201: acquiring paired data from the two cameras, including camera position parameters and video, and calibrating and rectifying the two cameras. S202: preprocessing the acquired video data, including extracting key frames and preprocessing the frame images. S203: using the trained Faster RCNN deep semantic segmentation network to segment out the foundation pit region. S204: obtaining the upper and lower edges of the pit with an edge-detection image processing technique, and refining the positions to be measured. S205: estimating the depth of the positions to be measured using the binocular camera imaging principle, and estimating the pit depth. S206: combining the pit depth information from S205 with the measured position information from S204, and visually feeding back the measured positions and the estimated depth.

Description

Method for identifying depth of foundation pit based on multiple cameras
Technical Field
The invention relates to the field of construction engineering, and in particular to a method for identifying the depth of a foundation pit based on multiple cameras.
Background
At present, identification of foundation pit depth on construction sites is mainly done by visual inspection, aided by measuring instruments such as box rules and tape measures. The data are recorded after measurement, completing the identification of the pit depth. Because this identification process is manual, it increases project cost, and the accuracy and timeliness of the identification data are difficult to guarantee.
Disclosure of Invention
The invention aims to provide a method for identifying the depth of a foundation pit based on multiple cameras, which identifies the pit depth automatically and reduces the real-time measurements required of on-site construction personnel.
The purpose of the invention is realized as follows: a method for identifying the depth of a foundation pit based on multiple cameras comprises the following steps:
S201: acquiring paired data from the two cameras, including camera position parameters and video, and calibrating and rectifying the two cameras;
S202: preprocessing the acquired video data, including extracting key frames and preprocessing the frame images;
S203: using the trained Faster RCNN deep semantic segmentation network to segment out the foundation pit region;
S204: obtaining the upper and lower edges of the pit with an edge-detection image processing technique, and refining the positions to be measured;
S205: estimating the depth of the positions to be measured using the binocular camera imaging principle, and estimating the pit depth;
S206: combining the pit depth information from S205 with the measured position information from S204, and visually feeding back the measured positions and the estimated depth.
Preferably, the two cameras are of the same model and share identical parameter information.
Preferably, in the calibration and rectification of the two cameras, calibration measures the cameras' internal and external parameters, and rectification aligns the images acquired by the two cameras so that the two rectified images lie in the same plane and are parallel to each other, with a baseline distance b between the cameras.
Preferably, key frame extraction takes one frame of image from each of the two cameras every 10 minutes; frame image preprocessing includes picture resizing and normalization.
Preferably, the specific steps of segmenting out the pit region with the Faster RCNN segmentation network are:
a. training a deep-learning pit side-wall detector on an existing data set;
b. feeding the frame images from the binocular cameras into the trained side-wall detector to obtain the extent of the pit side wall;
c. taking the vertical span of the pit region, by weighted averaging or a similar scheme, as the pit height to be measured in the frame image.
Preferably, the specific steps of the edge-detection image processing technique are:
a. converting the pit side-wall region image obtained by the side-wall detector in step S203 into HSV color space, and extracting the approximate extent of the pit using the color attributes of the pit side wall;
b. computing a refined side-wall region by contour extraction, obtaining the upper and lower boundaries of the side wall, recording the weighted approximate distance between them, and further correcting the pit height section to be measured.
Preferably, the imaging principle of the binocular camera in step S205 is as follows:
Let P = (x, y, z) be a point in space whose projections onto the image planes of the two cameras are (x_r, y_r) and (x_l, y_l) respectively. From the principle of similar triangles:

$$x_l = f\,\frac{x}{z},\qquad x_r = f\,\frac{x-b}{z},\qquad y_l = y_r = f\,\frac{y}{z}$$

Solving gives:

$$z = \frac{b\,f}{d},\qquad x = \frac{b\,x_l}{d},\qquad y = \frac{b\,y_l}{d}$$

where:
b: the baseline distance between the two cameras;
f: the focal length of the two identical cameras;
d = x_l − x_r: the parallax between the two cameras;
X, Z: the X-axis and Z-axis directions in three-dimensional space;
x, y, z: the coordinates of point P in three-dimensional space.
Compared with the prior art, the invention has the following advantages: an artificial-intelligence algorithm running on the construction site's monitoring cameras identifies the foundation pit depth automatically, so the excavation depth can be monitored in real time, and an automatic early-warning threshold can be set from the design requirements of the drawings, guaranteeing excavation accuracy and preventing over-excavation of the pit; the site monitoring equipment works unattended, reducing manual measurement of the pit depth, lowering staffing, and saving cost; and since manual measurement of the pit depth is no longer needed, cross-operation between workers and excavation machinery is avoided, personal safety is ensured, and safety accidents are reduced.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an imaging schematic diagram of the binocular camera of the present invention.
Wherein: b is the baseline distance between the two cameras; f is the focal length of the two identical cameras; d = x_l − x_r is the parallax between the two cameras; X and Z are the X-axis and Z-axis directions in three-dimensional space; and x, y, z are the coordinates of point P in three-dimensional space.
Detailed Description
As shown in FIG. 1, a method for identifying the depth of a foundation pit based on multiple cameras includes the following steps:
S201: acquiring paired data from the two cameras, including the cameras' position parameters (the focal length f and the baseline distance b between the two cameras) and their videos, and calibrating and rectifying the two cameras so that their image planes coincide, which keeps the subsequent depth estimation accurate;
S202: preprocessing the acquired video data, including extracting key frames and preprocessing the frame images; the aim is to extract from the video data the frame images used for depth estimation;
S203: using the trained Faster RCNN deep semantic segmentation network to segment out the foundation pit region, preliminarily locating the pit depth region that needs to be measured;
S204: obtaining the upper and lower edges of the pit with an edge-detection image processing technique and refining the positions to be measured; this step uses traditional image processing to compensate for the shortcomings of the semantic segmentation network and further pins down the two points in the image plane between which the depth is measured;
S205: estimating the depth of the positions to be measured using the binocular camera imaging principle, and estimating the pit depth;
S206: combining the pit depth information from S205 with the measured position information from S204, and visually feeding back the measured positions and the estimated depth.
The two cameras are of the same model and share identical parameter information.
In the calibration and rectification of the two cameras, calibration measures the cameras' internal and external parameters, and rectification aligns the images acquired by the two cameras so that the two rectified images lie in the same plane and are parallel to each other, with a baseline distance b between the cameras.
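For illustration, this calibration-then-rectification flow can be sketched with OpenCV. This is a minimal sketch, not the patented implementation: the intrinsic matrix, distortion coefficients, relative pose, and image size below are assumed placeholders, and in practice they would come from measuring the cameras' internal and external parameters (e.g. with cv2.stereoCalibrate on checkerboard views).

```python
import cv2
import numpy as np

# Assumed placeholder parameters (not from the patent); real values come
# from the calibration step that measures intrinsics and extrinsics.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])       # shared intrinsics: same camera model
dist = np.zeros(5)                    # assume negligible lens distortion
R = np.eye(3)                         # rotation between the two cameras
T = np.array([[-0.2], [0.0], [0.0]])  # baseline b = 0.2 m along the X axis
size = (1280, 720)                    # image width, height in pixels

# Rectification: compute transforms that make the two image planes
# coplanar and row-aligned, so disparity is purely horizontal.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, dist, K, dist, size, R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(K, dist, R1, P1, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K, dist, R2, P2, size, cv2.CV_32FC1)

# rect_l = cv2.remap(frame_l, map_lx, map_ly, cv2.INTER_LINEAR)
# rect_r = cv2.remap(frame_r, map_rx, map_ry, cv2.INTER_LINEAR)
```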
Key frame extraction takes one frame of image from each of the two cameras every 10 minutes; frame image preprocessing includes picture resizing and normalization.
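A minimal sketch of this key-frame extraction and preprocessing, assuming OpenCV video input; the 1280x720 target size is an assumption, not a value from the patent.

```python
import cv2
import numpy as np

def extract_keyframes(video_path, interval_min=10):
    """Yield one frame every interval_min minutes of recorded video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0    # fall back if metadata is missing
    step = int(fps * 60 * interval_min)
    idx = 0
    while True:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the next key frame
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
        idx += step
    cap.release()

def preprocess(frame, size=(1280, 720)):
    """Picture size processing and normalization to [0, 1]."""
    frame = cv2.resize(frame, size)
    return frame.astype(np.float32) / 255.0
```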
The specific steps of segmenting out the pit region with the Faster RCNN segmentation network are as follows, with a sketch after the list:
a. training a deep-learning pit side-wall detector on an existing data set;
b. feeding the frame images from the binocular cameras into the trained side-wall detector to obtain the extent of the pit side wall;
c. taking the vertical span of the pit region, by weighted averaging or a similar scheme, as the pit height to be measured in the frame image.
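A hedged sketch of steps a and b using torchvision's Faster R-CNN. The patent's detector and training data are not public, so the COCO-pretrained weights and the single-box reading of the output below are stand-ins; a real system would fine-tune the model on labelled pit side-wall images.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Placeholder weights; step a would replace these by fine-tuning on an
# existing data set of labelled pit side walls.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def sidewall_range(frame):
    """frame: float tensor, CxHxW in [0, 1]. Returns (y_top, y_bottom) or None."""
    with torch.no_grad():
        pred = model([frame])[0]
    if len(pred["boxes"]) == 0:
        return None
    x1, y1, x2, y2 = pred["boxes"][0].tolist()   # highest-scoring detection
    # Vertical span of the detected side wall = pit height in the frame image.
    return y1, y2
```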
The specific steps of the edge-detection image processing technique are as follows, with a sketch after the list:
a. converting the pit side-wall region image obtained by the side-wall detector in step S203 into HSV color space, and extracting the approximate extent of the pit using the color attributes of the pit side wall;
b. computing a refined side-wall region by contour extraction, obtaining the upper and lower boundaries of the side wall, recording the weighted approximate distance between them, and further correcting the pit height section to be measured.
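One possible OpenCV realization of steps a and b. The HSV bounds below are assumptions for a bare-earth side wall and would need tuning per site; the largest-contour heuristic stands in for the patent's weighted boundary refinement.

```python
import cv2
import numpy as np

def refine_edges(bgr_roi, lo=(5, 40, 40), hi=(30, 255, 255)):
    """Return (upper_row, lower_row) of the side wall inside a detector ROI."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)       # step a: HSV space
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))  # wall colour attribute
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    wall = max(contours, key=cv2.contourArea)            # step b: dominant region
    ys = wall[:, 0, 1]                                   # row coordinates
    return int(ys.min()), int(ys.max())                  # upper and lower edges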
the imaging principle of the binocular camera in the step S205 is as follows:
let P be (x, y, z), and its projection point on the image plane of the dual-camera be (x)r,yr),(xl,yl) (ii) a From the principle of similar triangles we can derive:
Figure BDA0003139465510000061
obtaining by solution:
Figure BDA0003139465510000062
Figure BDA0003139465510000063
Figure BDA0003139465510000064
b, the reference distance between the two cameras;
f, focal lengths of two identical cameras;
d=xl-xrparallax between the two cameras;
x and Z are the X-axis direction and the Z-axis direction in the three-dimensional space;
and x, y, z are the coordinates of the P point in the three-dimensional space.
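The solved formulas translate directly into code. A minimal sketch; the default b, f, and pixel size are the values of the worked example below, and xl, yl, xr are pixel coordinates in the rectified left and right images.

```python
def triangulate(xl, yl, xr, b=0.2, f=0.008, pixel_size=8e-5):
    """Recover the 3D point (x, y, z) in metres from rectified pixel coords."""
    xl_m, yl_m, xr_m = xl * pixel_size, yl * pixel_size, xr * pixel_size
    d = xl_m - xr_m          # disparity d = x_l - x_r, in metres on the sensor
    z = b * f / d            # z = bf / d
    x = b * xl_m / d         # x = b * x_l / d
    y = b * yl_m / d         # y = b * y_l / d
    return x, y, z
```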
Implementation example: suppose that in the dual-camera system b = 0.2 m, f = 0.008 m, and the camera resolution is 300 dpi, so one pixel is about $8\times10^{-5}$ m long. The upper endpoint $P_1$ of the pit depth section has coordinates (100, 100) and (50, 100) in the two camera images, and the lower endpoint $P_2$ has coordinates (500, 400) and (495, 400). Converting pixels to metres and applying the formulas above:

$$d_1 = (100-50)\times 8\times10^{-5} = 4\times10^{-3}\ \text{m},\qquad z_1 = \frac{bf}{d_1} = 0.4\ \text{m},\quad x_1 = \frac{b\,x_{l1}}{d_1} = 0.4\ \text{m},\quad y_1 = \frac{b\,y_{l1}}{d_1} = 0.4\ \text{m}$$

$$d_2 = (500-495)\times 8\times10^{-5} = 4\times10^{-4}\ \text{m},\qquad z_2 = \frac{bf}{d_2} = 4\ \text{m},\quad x_2 = \frac{b\,x_{l2}}{d_2} = 20\ \text{m},\quad y_2 = \frac{b\,y_{l2}}{d_2} = 16\ \text{m}$$

The pit depth is then estimated from the separation of the two endpoints:

$$H = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2} \approx 25.3\ \text{m}$$
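A short script reproducing the arithmetic of this example; reading the final pit depth as the three-dimensional distance between P1 and P2 is an interpretation consistent with the formulas above.

```python
import math

b, f, px = 0.2, 0.008, 8e-5      # baseline, focal length, pixel size (metres)

def point3d(xl, yl, xr):
    d = (xl - xr) * px           # disparity in metres
    return (b * xl * px / d, b * yl * px / d, b * f / d)

p1 = point3d(100, 100, 50)       # upper endpoint -> (0.4, 0.4, 0.4)
p2 = point3d(500, 400, 495)      # lower endpoint -> (20.0, 16.0, 4.0)
print(f"estimated pit depth: {math.dist(p1, p2):.2f} m")   # ~25.31
```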
The present invention is not limited to the above embodiment. On the basis of the technical solutions disclosed herein, those skilled in the art may, without creative effort, substitute or modify some of the technical features according to the disclosed technical content, and all such substitutions and modifications fall within the protection scope of the present invention.

Claims (7)

1. A method for identifying the depth of a foundation pit based on multiple cameras is characterized by comprising the following steps:
S201: acquiring paired data from the two cameras, including camera position parameters and video, and calibrating and rectifying the two cameras;
S202: preprocessing the acquired video data, including extracting key frames and preprocessing the frame images;
S203: using the trained Faster RCNN deep semantic segmentation network to segment out the foundation pit region;
S204: obtaining the upper and lower edges of the pit with an edge-detection image processing technique, and refining the positions to be measured;
S205: estimating the depth of the positions to be measured using the binocular camera imaging principle, and estimating the pit depth;
S206: combining the pit depth information from S205 with the measured position information from S204, and visually feeding back the measured positions and the estimated depth.
2. The method for identifying the depth of the foundation pit based on multiple cameras according to claim 1, wherein the two cameras are of the same model and share identical parameter information.
3. The method for identifying the depth of the foundation pit based on multiple cameras according to claim 1, wherein, in the calibration and rectification of the two cameras, calibration measures the cameras' internal and external parameters, and rectification aligns the images acquired by the two cameras so that the two rectified images lie in the same plane and are parallel to each other, with a baseline distance b between the cameras.
4. The method for identifying the depth of the foundation pit based on multiple cameras according to claim 1, wherein extracting the key frames takes one frame of image from each of the two cameras every 10 minutes; the frame image preprocessing includes picture resizing and normalization.
5. The method for identifying the depth of the foundation pit based on multiple cameras according to claim 1, wherein the specific steps of segmenting out the pit region with the Faster RCNN segmentation network comprise:
a. training a deep-learning pit side-wall detector on an existing data set;
b. feeding the frame images from the binocular cameras into the trained side-wall detector to obtain the extent of the pit side wall;
c. taking the vertical span of the pit region, by weighted averaging or a similar scheme, as the pit height to be measured in the frame image.
6. The method for identifying the depth of the foundation pit based on multiple cameras according to claim 1, wherein the specific steps of the edge-detection image processing technique are:
a. converting the pit side-wall region image obtained by the side-wall detector in step S203 into HSV color space, and extracting the approximate extent of the pit using the color attributes of the pit side wall;
b. computing a refined side-wall region by contour extraction, obtaining the upper and lower boundaries of the side wall, recording the weighted approximate distance between them, and further correcting the pit height section to be measured.
7. The method for identifying the depth of the foundation pit based on multiple cameras according to claim 1, wherein the imaging principle of the binocular cameras in step S205 is as follows:
let P = (x, y, z) be a point in space whose projections onto the image planes of the two cameras are (x_r, y_r) and (x_l, y_l) respectively; from the principle of similar triangles:

$$x_l = f\,\frac{x}{z},\qquad x_r = f\,\frac{x-b}{z},\qquad y_l = y_r = f\,\frac{y}{z}$$

solving gives:

$$z = \frac{b\,f}{d},\qquad x = \frac{b\,x_l}{d},\qquad y = \frac{b\,y_l}{d}$$

wherein, b: the baseline distance between the two cameras;
f: the focal length of the two identical cameras;
d = x_l − x_r: the parallax between the two cameras;
X, Z: the X-axis and Z-axis directions in three-dimensional space;
x, y, z: the coordinates of point P in three-dimensional space.
CN202110731925.3A 2021-06-29 2021-06-29 Method for identifying depth of foundation pit based on multiple cameras Active CN113538350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110731925.3A CN113538350B (en) 2021-06-29 2021-06-29 Method for identifying depth of foundation pit based on multiple cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110731925.3A CN113538350B (en) 2021-06-29 2021-06-29 Method for identifying depth of foundation pit based on multiple cameras

Publications (2)

Publication Number Publication Date
CN113538350A true CN113538350A (en) 2021-10-22
CN113538350B CN113538350B (en) 2022-10-04

Family

ID=78097235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110731925.3A Active CN113538350B (en) 2021-06-29 2021-06-29 Method for identifying depth of foundation pit based on multiple cameras

Country Status (1)

Country Link
CN (1) CN113538350B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017045650A1 (en) * 2015-09-15 2017-03-23 努比亚技术有限公司 Picture processing method and terminal
CN107886477A (en) * 2017-09-20 2018-04-06 武汉环宇智行科技有限公司 Unmanned neutral body vision merges antidote with low line beam laser radar
CN108629812A (en) * 2018-04-11 2018-10-09 深圳市逗映科技有限公司 A kind of distance measuring method based on binocular camera
CN110231013A (en) * 2019-05-08 2019-09-13 哈尔滨理工大学 A kind of Chinese herbaceous peony pedestrian detection based on binocular vision and people's vehicle are apart from acquisition methods
CN111407245A (en) * 2020-03-19 2020-07-14 南京昊眼晶睛智能科技有限公司 Non-contact heart rate and body temperature measuring method based on camera
CN111709985A (en) * 2020-06-10 2020-09-25 大连海事大学 Underwater target ranging method based on binocular vision
CN111931550A (en) * 2020-05-22 2020-11-13 天津大学 Infant monitoring method based on intelligent distance perception technology
CN112001964A (en) * 2020-07-31 2020-11-27 西安理工大学 Flood evolution process inundation range measuring method based on deep learning
CN112016558A (en) * 2020-08-26 2020-12-01 大连信维科技有限公司 Medium visibility identification method based on image quality
CN112634341A (en) * 2020-12-24 2021-04-09 湖北工业大学 Method for constructing depth estimation model of multi-vision task cooperation
WO2021084530A1 (en) * 2019-10-27 2021-05-06 Ramot At Tel-Aviv University Ltd. Method and system for generating a depth map
CN112766274A (en) * 2021-02-01 2021-05-07 长沙市盛唐科技有限公司 Water gauge image water level automatic reading method and system based on Mask RCNN algorithm
CN112801074A (en) * 2021-04-15 2021-05-14 速度时空信息科技股份有限公司 Depth map estimation method based on traffic camera
CN112854175A (en) * 2021-03-04 2021-05-28 西南石油大学 Foundation pit deformation monitoring and early warning method based on machine vision


Also Published As

Publication number Publication date
CN113538350B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
CN112686938B (en) Power transmission line clear distance calculation and safety alarm method based on binocular image ranging
US11551341B2 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
CN111784778B (en) Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization
CN111006601A (en) Key technology of three-dimensional laser scanning in deformation monitoring
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN103454285A (en) Transmission chain quality detection system based on machine vision
CN102768022A (en) Tunnel surrounding rock deformation detection method adopting digital camera technique
CN113469178B (en) Power meter identification method based on deep learning
CN111412842A (en) Method, device and system for measuring cross-sectional dimension of wall surface
CN113688817A (en) Instrument identification method and system for automatic inspection
CN116152697A (en) Three-dimensional model measuring method and related device for concrete structure cracks
CN115993096A (en) High-rise building deformation measuring method
CN113850869A (en) Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis
CN116844147A (en) Pointer instrument identification and abnormal alarm method based on deep learning
CN113392846A (en) Water gauge water level monitoring method and system based on deep learning
CN116563262A (en) Building crack detection algorithm based on multiple modes
CN115330712A (en) Intelligent quality inspection method and system for prefabricated components of fabricated building based on virtual-real fusion
CN113749646A (en) Monocular vision-based human body height measuring method and device and electronic equipment
CN113538350B (en) Method for identifying depth of foundation pit based on multiple cameras
CN108180871A Method for quantitatively assessing the dusting roughness of composite insulator surfaces
CN110060339B (en) Three-dimensional modeling method based on cloud computing graphic image
CN108985307B (en) Water body extraction method and system based on remote sensing image
CN115131383A (en) Binocular vision-based transmission line external damage identification method
Qu et al. Computer vision-based 3D coordinate acquisition of surface feature points of building structures
CN115393387A (en) Building displacement monitoring method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant