CN113538350B - Method for identifying depth of foundation pit based on multiple cameras - Google Patents
- Publication number
- CN113538350B CN113538350B CN202110731925.3A CN202110731925A CN113538350B CN 113538350 B CN113538350 B CN 113538350B CN 202110731925 A CN202110731925 A CN 202110731925A CN 113538350 B CN113538350 B CN 113538350B
- Authority
- CN
- China
- Prior art keywords
- foundation pit
- cameras
- depth
- measured
- side wall
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- All classifications fall under G—PHYSICS / G06—COMPUTING; CALCULATING OR COUNTING / G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/0004—Industrial image inspection (under G06T7/0002—Inspection of images, e.g. flaw detection)
- G06T7/13—Edge detection (under G06T7/10—Segmentation; Edge detection)
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/85—Stereo camera calibration (under G06T7/80—Camera calibration)
- G06T2207/10004—Still image; Photographic image
- G06T2207/10016—Video; Image sequence
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30108—Industrial image inspection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a method in the building field for identifying the depth of a foundation pit based on multiple cameras, which comprises the following steps. S201: acquire paired dual-camera data, including the position parameters of the cameras, the videos of the cameras, and the calibration and correction of the two cameras. S202: preprocess the acquired video data, including extracting key frames and preprocessing the frame images. S203: segment the images with the trained deep-semantic Faster RCNN segmentation network to obtain the foundation pit portion. S204: acquire the upper and lower edges of the foundation pit with an edge-detection image-processing technique, and correct the position of the portion to be measured. S205: predict the depth of the portion to be measured using the binocular-camera imaging principle, and estimate the depth of the foundation pit. S206: integrate the foundation pit depth information and the measured position information from S204 and S205, and visually feed back the measured position and the estimated height.
Description
Technical Field
The invention relates to the field of construction engineering, and in particular to a method for identifying the depth of a foundation pit based on multiple cameras.
Background
At present, foundation pit depth identification on construction sites is mainly done by human visual inspection together with measuring instruments such as box rules and tape measures. After measurement, the data are recorded manually, and the depth of the foundation pit is thereby identified. Because this identification process is completed manually, it increases project cost, and the accuracy and timeliness of the identification data are difficult to guarantee.
Disclosure of Invention
The invention aims to provide a method for identifying the depth of a foundation pit based on multiple cameras, which can identify the foundation pit depth automatically and reduce the real-time measurement work of on-site construction personnel.
The purpose of the invention is realized as follows: a method for identifying the depth of a foundation pit based on multiple cameras, characterized by comprising the following steps:
s201: acquiring paired dual-camera data, including the position parameters of the cameras, the videos of the cameras, and the calibration and correction of the two cameras;
s202: preprocessing the acquired video data, including extracting key frames and preprocessing the frame images;
s203: segmenting the images by using the trained deep-semantic Faster RCNN segmentation network, and obtaining the foundation pit portion by segmentation;
the concrete steps of segmenting with the Faster RCNN segmentation network and obtaining the foundation pit portion comprise:
step a, training a foundation pit side wall detector based on deep learning on the basis of an existing data set;
b, placing the binocular camera frame image into a trained foundation pit side wall detector to obtain the range of the foundation pit side wall;
c, taking the weighted-average middle value of the foundation pit range as the foundation pit height to be measured in the frame image;
s204: acquiring the upper edge and the lower edge of the foundation pit by using an edge-detection image-processing technique, and correcting the position of the portion to be measured to obtain the two points to be measured in the image plane;
the concrete steps of the edge-detection image-processing technique are as follows:
a. converting the partial image of the foundation pit side-wall area obtained by the side-wall detector in step S203 into the HSV color space, and extracting the approximate range of the foundation pit by using the color attributes of the foundation pit side wall;
b. calculating a refined area of the foundation pit side wall by using a contour-extraction method, acquiring the upper and lower boundaries of the side wall, and further correcting the foundation pit height portion to be measured by distance weighting of the approximate upper and lower boundaries;
s205: predicting the depth of the portion to be measured by using the binocular-camera imaging principle, and estimating the depth of the foundation pit, wherein the calculation formula of the foundation pit depth is: H = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2), wherein (x1, y1, z1) are the coordinates of the upper endpoint in three-dimensional space, (x2, y2, z2) are the coordinates of the lower endpoint in three-dimensional space, and H represents the height of the foundation pit depth.
S206: and (5) integrating the camera foundation pit depth information and the measured position information in the S204 and the S205, and visually feeding back the measured position and the estimated height.
Preferably, the two cameras are of the same model and have identical parameter information.
Preferably, in the dual-camera calibration and correction, calibration measures the internal and external parameters of the cameras, and correction adjusts the positions of the images acquired by the two cameras so that the two corrected images lie in the same plane and are parallel to each other; the baseline distance between the cameras is b.
Preferably, the key-frame extraction takes one frame of image from each of the two cameras at an interval of 10 minutes; the frame-image preprocessing includes picture size processing and normalization.
Preferably, the imaging principle of the binocular camera in step S205 is as follows:
let P = (x, y, z) be a point in space, and let its projection points on the image planes of the two cameras be (x_r, y_r) and (x_l, y_l) respectively; from the principle of similar triangles we can derive:
x_l / f = x / z, x_r / f = (x - b) / z, y_l / f = y_r / f = y / z
obtaining by solution:
z = b * f / d, x = b * x_l / d, y = b * y_l / d
wherein, b: the baseline distance between the two cameras;
f: the focal length of the two identical cameras;
d = x_l - x_r: the disparity between the two cameras;
x, z: the X-axis and Z-axis directions in three-dimensional space;
x, y, z: the coordinates of the point P in three-dimensional space.
Compared with the prior art, the invention has the following advantages: an artificial-intelligence algorithm running on the construction-site monitoring cameras automatically identifies the depth of the foundation pit, so the excavation depth can be monitored in real time and an automatic early-warning value can be set according to the design requirements of the drawings, guaranteeing excavation accuracy and preventing over-excavation of the foundation pit; the monitoring equipment on the site works unattended, which reduces manual measurement of the foundation pit depth, reduces personnel investment, and saves cost; and because manual measurement of the foundation pit depth is no longer needed, cross operation between workers and excavation machinery is avoided, personal safety is ensured, and safety accidents are reduced.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an imaging schematic diagram of the binocular camera of the present invention.
Wherein, b: the baseline distance between the two cameras; f: the focal length of the two identical cameras; d = x_l - x_r: the disparity between the two cameras; x, z: the X-axis and Z-axis directions in three-dimensional space; x, y, z: the coordinates of the point P in three-dimensional space.
Detailed Description
As shown in fig. 1, a method for identifying the depth of a foundation pit based on multiple cameras includes the following steps:
s201: acquire paired dual-camera data, including the position parameters of the cameras (the focal length f of the cameras and the baseline distance b between the two cameras), the videos of the cameras, and the calibration and correction of the two cameras, which ensures that the image planes of the two cameras coincide so that the subsequent depth estimation is accurate;
s202: preprocess the acquired video data, including extracting key frames and preprocessing the frame images, the aim being to extract from the video data the frame images used for depth estimation;
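As an illustration of S202, the key-frame schedule and a simple normalization can be sketched as follows; the frame rate, recording length, and normalization constants are illustrative assumptions, not values fixed by the method:

```python
def keyframe_indices(fps: float, duration_s: float, interval_s: float = 600.0):
    """Indices of the frames sampled once per `interval_s` seconds (10 min)."""
    step = int(round(fps * interval_s))
    return list(range(0, int(fps * duration_s), step))

def normalise(pixels, mean=0.5, std=0.5):
    """Scale 8-bit pixel values to [0, 1], then standardise (assumed constants)."""
    return [((p / 255.0) - mean) / std for p in pixels]

# A 25 fps camera recording for one hour yields six key frames per camera.
print(keyframe_indices(fps=25, duration_s=3600))  # [0, 15000, 30000, 45000, 60000, 75000]
```

Each camera applies the same schedule, so the two key frames of a pair are taken at the same instant.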
s203: segment the images with the trained deep-semantic Faster RCNN segmentation network to obtain the foundation pit portion, preliminarily obtaining the foundation pit depth region to be measured;
the concrete steps of the Faster RCNN network segmentation and obtaining the foundation pit portion comprise:
a, training a deep-learning-based foundation pit side-wall detector on the basis of an existing data set;
b, putting the frame images of the binocular camera into the trained foundation pit side-wall detector to obtain the range of the foundation pit side wall;
c, taking the weighted-average middle value of the foundation pit range as the foundation pit height to be measured in the frame image;
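Step c above can be sketched as a confidence-weighted average of the mid-lines of the detected side-wall boxes; the box format `(y_top, y_bottom, score)` and the use of detector scores as weights are assumptions for illustration, not the patent's exact procedure:

```python
def weighted_midline(boxes):
    """Confidence-weighted mid row of detected side-wall boxes.

    boxes: list of (y_top, y_bottom, score) in image rows; the returned row
    is taken as the foundation pit height position to measure in the frame.
    """
    total = sum(score for _, _, score in boxes)
    return sum(((top + bottom) / 2.0) * score for top, bottom, score in boxes) / total

# Two overlapping detections of the same side wall agree on row 250.
print(weighted_midline([(100, 400, 0.9), (110, 390, 0.6)]))
```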
s204: acquire the upper and lower edges of the foundation pit with an edge-detection image-processing technique, and correct the position of the portion to be measured; this step uses traditional image processing to supplement the shortcomings of the deep semantic network and further determine the depth portion to be measured, obtaining the two points to be measured in the image plane;
the concrete steps of the edge-detection image-processing technique are as follows:
a. convert the partial image of the foundation pit side-wall area obtained by the side-wall detector in step S203 into the HSV color space, and extract the approximate range of the foundation pit using the color attributes of the foundation pit side wall;
b. calculate a refined area of the foundation pit side wall with a contour-extraction method, acquire the upper and lower boundaries of the side wall, and further correct the foundation pit height portion to be measured by distance weighting of the approximate upper and lower boundaries;
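A minimal sketch of step a, using only the standard library: each pixel is converted to HSV and kept if its hue falls in an earth-tone band. The band limits and the sample colours are assumptions; in practice the conversion would be done per-image rather than per-pixel, e.g. with an image-processing library:

```python
import colorsys

def sidewall_mask(pixels, hue_lo=0.02, hue_hi=0.15, sat_min=0.2):
    """Rough soil-colour mask: pixels are (r, g, b) tuples in 0-255."""
    mask = []
    for r, g, b in pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(hue_lo <= h <= hue_hi and s >= sat_min)
    return mask

# A brownish soil pixel passes the hue band; a blue sky pixel does not.
print(sidewall_mask([(140, 90, 40), (80, 120, 220)]))  # [True, False]
```

The contours of the resulting mask then give the upper and lower boundaries of the side wall for step b.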
s205: predict the depth of the portion to be measured using the binocular-camera imaging principle and estimate the depth of the foundation pit; the calculation formula of the foundation pit depth is: H = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2), wherein (x1, y1, z1) are the coordinates of the upper endpoint in three-dimensional space, (x2, y2, z2) are the coordinates of the lower endpoint in three-dimensional space, and H represents the height of the foundation pit depth.
S206: integrate the foundation pit depth information and the measured position information from S204 and S205, and visually feed back the measured position and the estimated height.
The two cameras are of the same model and have identical parameter information.
In the dual-camera calibration and correction, calibration measures the internal and external parameters of the cameras, and correction adjusts the positions of the images acquired by the two cameras so that the two corrected images lie in the same plane and are parallel to each other; the baseline distance between the cameras is b.
The key-frame extraction takes one frame of image from each of the two cameras at an interval of 10 minutes; the frame-image preprocessing includes picture size processing and normalization.
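The calibration and correction bookkeeping can be illustrated as follows: after stereo calibration, the baseline b is the length of the translation between the two camera centres, and after correction (rectification) a scene point should appear on the same image row in both views. The numbers below are illustrative:

```python
import math

def baseline(translation):
    """Baseline b (metres) from the stereo extrinsic translation vector."""
    return math.sqrt(sum(c * c for c in translation))

def is_rectified(y_left, y_right, tol_px=1.0):
    """After correction, corresponding points share the same image row."""
    return abs(y_left - y_right) <= tol_px

print(baseline((0.2, 0.0, 0.0)))   # a 0.2 m horizontal offset between cameras
print(is_rectified(100.0, 100.4))  # True: rows agree within tolerance
```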
The imaging principle of the binocular camera in the step S205 is as follows:
let P = (x, y, z) be a point in space, and let its projection points on the image planes of the two cameras be (x_r, y_r) and (x_l, y_l) respectively; from the principle of similar triangles:
x_l / f = x / z, x_r / f = (x - b) / z, y_l / f = y_r / f = y / z
obtaining by solution:
z = b * f / d, x = b * x_l / d, y = b * y_l / d
wherein, b: the baseline distance between the two cameras;
f: the focal length of the two identical cameras;
d = x_l - x_r: the disparity between the two cameras;
x, z: the X-axis and Z-axis directions in three-dimensional space;
x, y, z: the coordinates of the point P in three-dimensional space.
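The closed-form solution above can be written directly as code; coordinates are assumed already converted to metres on the sensor plane of a calibrated, rectified pair, measured from the principal point:

```python
def triangulate(x_l, y_l, x_r, b, f):
    """Recover P = (x, y, z) from a rectified stereo pair.

    x_l, y_l, x_r: projections in metres on the sensor; b: baseline (m);
    f: focal length (m). Solves x_l/f = x/z, x_r/f = (x - b)/z, y_l/f = y/z.
    """
    d = x_l - x_r            # disparity, metres
    z = b * f / d            # depth along the optical axis
    x = b * x_l / d
    y = b * y_l / d
    return x, y, z

# With b = 0.2 m, f = 0.008 m and a 4e-3 m disparity, the point lies 0.4 m deep.
print(triangulate(8e-3, 8e-3, 4e-3, b=0.2, f=0.008))
```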
The implementation method comprises the following steps: if, in the dual-camera system, b = 0.2 m, f = 0.008 m and the resolution of the camera is 300 dpi, then the side length of one pixel is about 8 x 10^-5 m. If the upper endpoint P1 of the foundation pit depth portion has coordinates (100, 100) and (50, 100) in the two camera images, and the lower endpoint P2 has coordinates (500, 400) and (495, 400), then the disparities are d1 = 50 pixels and d2 = 5 pixels, the triangulated depths are z1 = b * f / (50 x 8 x 10^-5) = 0.4 m and z2 = b * f / (5 x 8 x 10^-5) = 4 m, and substituting the triangulated endpoint coordinates into the formula of S205 gives the estimated foundation pit depth H.
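The arithmetic of this example can be checked with a short script. The pixel pitch of 8 x 10^-5 m, the camera parameters, and the endpoint coordinates are taken from the example; treating the printed image coordinates as offsets from the principal point is an assumption the text leaves implicit, and the rest follows mechanically from the S205 formulas:

```python
import math

PX = 8e-5  # metres per pixel (approx., from the 300 dpi camera)

def point3d(xl_px, yl_px, xr_px, b=0.2, f=0.008):
    """Triangulate a point from pixel coordinates in the rectified pair."""
    d_px = xl_px - xr_px         # disparity in pixels
    z = b * f / (d_px * PX)      # depth needs the disparity in metres
    x = b * xl_px / d_px         # pixel units cancel in the ratio
    y = b * yl_px / d_px
    return x, y, z

p1 = point3d(100, 100, 50)    # upper endpoint
p2 = point3d(500, 400, 495)   # lower endpoint
H = math.dist(p1, p2)         # S205 distance formula
print(round(H, 2))            # estimated pit depth in metres
```

Under these assumptions the script gives z1 = 0.4 m and z2 = 4 m, matching the text.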
The present invention is not limited to the above-mentioned embodiments, and based on the technical solutions disclosed in the present invention, those skilled in the art can make some substitutions and modifications to some technical features without creative efforts according to the disclosed technical contents, and these substitutions and modifications are all within the protection scope of the present invention.
Claims (5)
1. A method for identifying the depth of a foundation pit based on multiple cameras is characterized by comprising the following steps:
s201: acquiring paired dual-camera data, including the position parameters of the cameras, the videos of the cameras, and the calibration and correction of the two cameras;
s202: preprocessing the acquired video data, including extracting key frames and preprocessing the frame images;
s203: segmenting the images by using the trained deep-semantic Faster RCNN segmentation network, and obtaining the foundation pit portion by segmentation;
the concrete steps of segmenting with the Faster RCNN segmentation network and obtaining the foundation pit portion comprise:
step a, training a foundation pit side wall detector based on deep learning on the basis of an existing data set;
b, placing the binocular camera frame image into a trained foundation pit side wall detector to obtain the range of the foundation pit side wall;
c, taking the middle value of the foundation pit range as the height of the foundation pit in the frame image to be measured through weighted average;
s204: acquiring the upper edge and the lower edge of a foundation pit by using an edge detection algorithm image processing technology, and correcting the position of a part needing to be measured to obtain two points needing to be measured in an image plane;
the image processing technology of the edge detection algorithm comprises the following specific steps:
a. converting the partial image of the side wall area of the foundation pit obtained by the side wall detector in the step S203 into an HSV color space, and extracting the approximate range of the foundation pit by using the color attribute of the side wall of the foundation pit;
b. calculating to obtain a further area of the side wall of the foundation pit by using a contour extraction method, acquiring the upper and lower boundaries of the side wall, and further correcting the height part of the foundation pit to be measured by using the approximate distance weighting of the upper and lower boundaries;
s205: the depth of the portion to be measured is predicted by utilizing the binocular-camera imaging principle, and the depth of the foundation pit is estimated, wherein the calculation formula of the foundation pit depth is: H = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2), wherein (x1, y1, z1) are the coordinates of the upper endpoint in three-dimensional space, (x2, y2, z2) are the coordinates of the lower endpoint in three-dimensional space, and H represents the height of the foundation pit depth;
s206: and (5) integrating the camera foundation pit depth information and the measured position information in the S204 and the S205, and visually feeding back the measured position and the estimated height.
2. The method for identifying the depth of the foundation pit based on the multiple cameras according to claim 1, wherein the two cameras are of a uniform type and have the same parameter information.
3. The method for identifying the depth of the foundation pit based on the multiple cameras as claimed in claim 1, wherein the calibration in the calibration correction of the double cameras is to measure the internal and external parameters of the cameras, and the correction is to correct the positions of the images acquired by the double cameras, so that the two corrected images are in the same plane and parallel to each other, and the reference distance b between the cameras is obtained at the same time.
4. The method for identifying the depth of the foundation pit based on the multiple cameras as claimed in claim 1, wherein the extracting the key frames is to take one frame of image from each of the two cameras at an interval of 10 minutes; the frame image preprocessing includes picture size processing and normalization.
5. The method for identifying the depth of the foundation pit based on the multiple cameras as claimed in claim 1, wherein in the step S205, the imaging principle of the binocular cameras is as follows:
let P = (x, y, z) be a point in space, and let its projection points on the image planes of the two cameras be (x_r, y_r) and (x_l, y_l) respectively; from the principle of similar triangles we can derive:
x_l / f = x / z, x_r / f = (x - b) / z, y_l / f = y_r / f = y / z
obtaining by solution:
z = b * f / d, x = b * x_l / d, y = b * y_l / d
wherein, b: a reference distance between the two cameras;
f: focal lengths of two identical cameras;
d = x_l - x_r: the disparity between the two cameras;
x, y, z: the coordinates of the point P in three-dimensional space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110731925.3A CN113538350B (en) | 2021-06-29 | 2021-06-29 | Method for identifying depth of foundation pit based on multiple cameras |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113538350A CN113538350A (en) | 2021-10-22 |
CN113538350B true CN113538350B (en) | 2022-10-04 |
Family
ID=78097235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110731925.3A Active CN113538350B (en) | 2021-06-29 | 2021-06-29 | Method for identifying depth of foundation pit based on multiple cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113538350B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017045650A1 (en) * | 2015-09-15 | 2017-03-23 | Nubia Technology Co., Ltd. | Picture processing method and terminal
CN107886477A (en) * | 2017-09-20 | 2018-04-06 | 武汉环宇智行科技有限公司 | Unmanned neutral body vision merges antidote with low line beam laser radar |
CN108629812A (en) * | 2018-04-11 | 2018-10-09 | 深圳市逗映科技有限公司 | A kind of distance measuring method based on binocular camera |
CN110231013A (en) * | 2019-05-08 | 2019-09-13 | 哈尔滨理工大学 | A kind of Chinese herbaceous peony pedestrian detection based on binocular vision and people's vehicle are apart from acquisition methods |
CN111709985A (en) * | 2020-06-10 | 2020-09-25 | 大连海事大学 | Underwater target ranging method based on binocular vision |
CN111931550A (en) * | 2020-05-22 | 2020-11-13 | 天津大学 | Infant monitoring method based on intelligent distance perception technology |
CN112001964A (en) * | 2020-07-31 | 2020-11-27 | 西安理工大学 | Flood evolution process inundation range measuring method based on deep learning |
CN112016558A (en) * | 2020-08-26 | 2020-12-01 | 大连信维科技有限公司 | Medium visibility identification method based on image quality |
CN112634341A (en) * | 2020-12-24 | 2021-04-09 | 湖北工业大学 | Method for constructing depth estimation model of multi-vision task cooperation |
WO2021084530A1 (en) * | 2019-10-27 | 2021-05-06 | Ramot At Tel-Aviv University Ltd. | Method and system for generating a depth map |
CN112801074A (en) * | 2021-04-15 | 2021-05-14 | 速度时空信息科技股份有限公司 | Depth map estimation method based on traffic camera |
CN112854175A (en) * | 2021-03-04 | 2021-05-28 | 西南石油大学 | Foundation pit deformation monitoring and early warning method based on machine vision |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111407245B (en) * | 2020-03-19 | 2021-11-02 | 南京昊眼晶睛智能科技有限公司 | Non-contact heart rate and body temperature measuring method based on camera |
CN112766274B (en) * | 2021-02-01 | 2023-07-07 | 长沙市盛唐科技有限公司 | Water gauge image water level automatic reading method and system based on Mask RCNN algorithm |
- 2021-06-29: application CN202110731925.3A filed; granted as patent CN113538350B, status active.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112686938B (en) | Power transmission line clear distance calculation and safety alarm method based on binocular image ranging | |
US11551341B2 (en) | Method and device for automatically drawing structural cracks and precisely measuring widths thereof | |
CN111784778B (en) | Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization | |
CN114241298A (en) | Tower crane environment target detection method and system based on laser radar and image fusion | |
CN103454285A (en) | Transmission chain quality detection system based on machine vision | |
CN102768022A (en) | Tunnel surrounding rock deformation detection method adopting digital camera technique | |
CN113469178B (en) | Power meter identification method based on deep learning | |
CN111996883B (en) | Method for detecting width of road surface | |
CN113688817A (en) | Instrument identification method and system for automatic inspection | |
CN115993096A (en) | High-rise building deformation measuring method | |
CN116152697A (en) | Three-dimensional model measuring method and related device for concrete structure cracks | |
CN116844147A (en) | Pointer instrument identification and abnormal alarm method based on deep learning | |
CN113392846A (en) | Water gauge water level monitoring method and system based on deep learning | |
CN115639248A (en) | System and method for detecting quality of building outer wall | |
CN115330712A (en) | Intelligent quality inspection method and system for prefabricated components of fabricated building based on virtual-real fusion | |
CN116563262A (en) | Building crack detection algorithm based on multiple modes | |
CN113538350B (en) | Method for identifying depth of foundation pit based on multiple cameras | |
CN113749646A (en) | Monocular vision-based human body height measuring method and device and electronic equipment | |
CN110060339B (en) | Three-dimensional modeling method based on cloud computing graphic image | |
CN108180871A (en) | A kind of method of quantitative assessment surface of composite insulator dusting roughness | |
CN104573635A (en) | Miniature height recognition method based on three-dimensional reconstruction | |
CN115393387A (en) | Building displacement monitoring method and device | |
CN113592877B (en) | Method and device for identifying red line exceeding of pumped storage power station | |
CN112819817A (en) | River flow velocity estimation method based on graph calculation | |
CN112270357A (en) | VIO vision system and method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |