CN112819882A - Real-time earth volume calculation method based on binocular vision - Google Patents
- Publication number
- CN112819882A (application CN202110102869.7A)
- Authority
- CN
- China
- Prior art keywords
- bucket
- excavator
- coordinate system
- calculating
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention discloses a binocular vision-based real-time earth volume calculation method. The method obtains the rotation matrix R1 and translation matrix T1 of the camera coordinate system relative to the world coordinate system; identifies the area within the frame range of the excavator bucket from a picture of the bucket; establishes a bucket coordinate system for that area; calculates the rotation matrix R2 and translation matrix T2 of the bucket coordinate system relative to the world coordinate system; calculates the three-dimensional world coordinates of the pixel points in the bucket picture to obtain the world coordinates of each point on the earth surface inside the bucket, and from these the three-dimensional coordinates of each point in the bucket coordinate system; establishes a depth map; and obtains from the depth map the total volume of earth in the bucket for a single scoop, from which the real-time workload of the excavator is derived. The invention can measure the earth in the excavator bucket in real time and then calculate the excavator workload, reducing labor costs and facilitating the settlement of engineering expenses and the optimization of engineering schemes.
Description
Technical Field
The invention belongs to the technical field of excavators, and particularly relates to a real-time earth volume calculation method based on binocular vision.
Background
An excavator is a machine widely used in engineering. During engineering operation, the excavator workload must be measured and recorded, as it bears on the settlement of engineering costs and the optimization of engineering schemes. At present, measuring irregular earthwork is difficult: most earthwork measurement is done manually, and no vision-based intelligent method exists for measuring and calculating the excavator workload in real time.
Disclosure of Invention
To address these problems, the invention provides a binocular vision-based real-time earth volume calculation method that measures the earth in an excavator bucket in real time and then calculates the excavator workload, reducing labor costs and facilitating the settlement of engineering expenses and the optimization of engineering schemes.
In order to achieve the technical purpose and achieve the technical effects, the invention is realized by the following technical scheme:
a real-time earth volume calculation method based on binocular vision comprises the following steps:
acquiring a rotation matrix R1 and a translation matrix T1 of a camera coordinate system relative to a world coordinate system;
identifying an area within a frame range of the excavator bucket based on an excavator bucket picture shot by a binocular camera;
calculating three-dimensional coordinates of the edge of the excavator bucket under a world coordinate system according to the area within the frame range of the excavator bucket, performing straight line and plane fitting on points on each edge to obtain a rectangular plane, and establishing a bucket coordinate system;
calculating a rotation matrix R2 and a translation matrix T2 of a bucket coordinate system relative to a world coordinate system;
calculating the three-dimensional coordinates of the pixel points in the excavator bucket picture in the camera coordinate system; further calculating their three-dimensional coordinates in the world coordinate system by using a coordinate conversion formula, the rotation matrix R1 and the translation matrix T1, to obtain the world coordinates of each point on the earth surface in the bucket; and then calculating the three-dimensional coordinates of each point in the bucket coordinate system based on the rotation matrix R2 and the translation matrix T2;
establishing a depth map by using the three-dimensional coordinates of each point;
obtaining the total volume of earthwork in the bucket of the excavator each time based on the depth map;
and accumulating and calculating the volume of the earthwork excavated by the bucket of the excavator every time to obtain the real-time workload of the excavator.
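The chain of coordinate transforms in the steps above can be sketched with NumPy. This is a minimal sketch, assuming R1/T1 map camera-frame points into the world frame and R2/T2 map world points into the bucket frame; the matrix and point values below are illustrative only:

```python
import numpy as np

def camera_to_world(p_cam, R1, T1):
    """Map a 3-D point from the camera frame to the world frame."""
    return R1 @ p_cam + T1

def world_to_bucket(p_world, R2, T2):
    """Map a 3-D world point into the bucket coordinate system."""
    return R2 @ p_world + T2

# Illustrative extrinsics: a 90-degree yaw for R1, identity rotation for R2.
R1 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
T1 = np.array([0.5, 0.0, 1.2])
R2, T2 = np.eye(3), np.array([-0.5, 0.0, -1.2])

p_cam = np.array([1.0, 2.0, 3.0])
p_world = camera_to_world(p_cam, R1, T1)
p_bucket = world_to_bucket(p_world, R2, T2)
print(p_world, p_bucket)
```

Chaining the two calls takes a stereo-reconstructed point all the way into bucket coordinates, which is the form the depth map is built from.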
As a further improvement of the present invention, the method for identifying the area within the frame of the excavator bucket comprises:
inputting a picture of the excavator bucket shot by a binocular camera into a pre-trained YOLOv3 detection model;
the YOLOv3 detection model outputs the area within the frame range of the excavator bucket.
As a further improvement of the present invention, the YOLOv3 detection model is obtained by training through the following steps:
shooting a plurality of excavator bucket pictures and labeling the buckets in the pictures with labelme;
training a YOLOv3 network with the labeled pictures to obtain the YOLOv3 detection model.
As a further improvement of the present invention, the method for calculating the three-dimensional coordinates of the pixel points in the camera coordinate system comprises:
calculating the three-dimensional coordinates of the pixel points in the camera coordinate system based on the disparity map and the camera intrinsic parameters.
As a further improvement of the invention, the three-dimensional coordinates of each point in the bucket coordinate system are calculated as:

(x, y, z)ᵀ = R2 · (Xw, Yw, Zw)ᵀ + T2

where (Xw, Yw, Zw) are the world coordinates of each point on the earth surface within the bucket.
As a further improvement of the present invention, the method for calculating the total volume of earth in the excavator bucket for a single scoop comprises:
setting an m×m sliding window and letting it traverse the whole depth map, starting from the origin O of the bucket coordinate system;
as the window traverses the depth map, each position corresponds to one prism;
averaging the Z coordinates of the four window vertices in the bucket coordinate system to obtain the height zc of each prism;
approximating the volume of each prism by the formula Vi = Si · zc = m · m · zc;
summing the volumes of all prisms to obtain V1;
adding V1 to the inner volume V2 of the bucket to obtain the total volume Vi of earth in the bucket for a single scoop.
As a further improvement of the present invention, the Z coordinates are averaged as:

zc = (zc1 + zc2 + zc3 + zc4) / 4

where zci is the Z coordinate of the i-th vertex and zc is the averaged Z coordinate in the bucket coordinate system.
As a further improvement of the present invention, the real-time workload of the excavator is calculated as:

V_total = k · Vi

where V_total is the real-time workload of the excavator and k is an error correction coefficient.
As a further improvement of the present invention, the error correction coefficient k is obtained by performing error measurement a plurality of times.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a binocular vision-based real-time earth volume calculation method, which can measure earth in a bucket of an excavator in real time, calculate the workload of the excavator, reduce the labor cost and facilitate settlement of engineering expenses and optimization of engineering schemes.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the present disclosure taken in conjunction with the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a binocular vision-based real-time earth volume calculation method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a bucket coordinate system according to an embodiment of the present disclosure;
FIG. 3 is a depth map of a bucket according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the present invention provides a real-time earth volume calculation method based on binocular vision, comprising the following steps:
(1) acquiring a rotation matrix R1 and a translation matrix T1 of a camera coordinate system relative to a world coordinate system;
(2) identifying an area within a frame range of the excavator bucket based on an excavator bucket picture shot by a binocular camera;
(3) calculating three-dimensional coordinates of the edge of the excavator bucket under a world coordinate system according to the area within the frame range of the excavator bucket, performing straight line and plane fitting on points on each edge to obtain a rectangular plane, and establishing a bucket coordinate system;
(4) calculating a rotation matrix R2 and a translation matrix T2 of a bucket coordinate system relative to a world coordinate system;
(5) calculating the three-dimensional coordinates of the pixel points in the excavator bucket picture in the camera coordinate system; further calculating their three-dimensional coordinates in the world coordinate system by using a coordinate conversion formula, the rotation matrix R1 and the translation matrix T1, to obtain the world coordinates of each point on the earth surface in the bucket; and then calculating the three-dimensional coordinates of each point in the bucket coordinate system based on the rotation matrix R2 and the translation matrix T2;
(6) establishing a depth map by using the three-dimensional coordinates of each point;
(7) obtaining the total volume of earthwork in the bucket of the excavator each time based on the depth map;
(8) and accumulating and calculating the volume of the earthwork excavated by the bucket of the excavator every time to obtain the real-time workload of the excavator.
In an embodiment of the present invention, the method for identifying the area within the frame range of the excavator bucket comprises:
inputting a picture of the excavator bucket shot by a binocular camera into a pre-trained YOLOv3 detection model;
the YOLOv3 detection model outputs the area within the frame range of the excavator bucket.
The YOLOv3 detection model is obtained by training through the following steps:
shooting a plurality of excavator bucket pictures and labeling the buckets in the pictures with labelme;
training a YOLOv3 network with the labeled pictures to obtain the YOLOv3 detection model.
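The detection step can be illustrated by the post-processing applied to YOLOv3-style output rows. This is a minimal sketch, not the patent's implementation: it assumes a single-class model whose rows are (cx, cy, w, h, objectness, class score) in relative coordinates, and the threshold and synthetic detections are illustrative:

```python
import numpy as np

def best_bucket_box(detections, img_w, img_h, conf_thresh=0.5):
    """Pick the highest-confidence 'bucket' detection from YOLOv3-style
    output rows (cx, cy, w, h, objectness, class score), all in [0, 1]
    relative coordinates, and return it as a pixel-space (x, y, w, h) box.
    Assumes a single-class model where index 5 is the bucket score."""
    best, best_conf = None, conf_thresh
    for row in detections:
        conf = row[4] * row[5]          # objectness * class probability
        if conf > best_conf:
            cx, cy, w, h = row[:4]
            best = (round((cx - w / 2) * img_w), round((cy - h / 2) * img_h),
                    round(w * img_w), round(h * img_h))
            best_conf = conf
    return best

# Two synthetic detection rows; only the second passes the threshold.
dets = np.array([
    [0.50, 0.50, 0.20, 0.20, 0.60, 0.70],
    [0.40, 0.60, 0.30, 0.20, 0.90, 0.95],
])
print(best_bucket_box(dets, 640, 480))
```

The returned box is the "area within the frame range of the bucket" that the later edge-fitting and depth-map steps operate on.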
In a specific embodiment of the present invention, the three-dimensional coordinates of the pixel points in the camera coordinate system are calculated based on the disparity map and the camera intrinsic parameters, where the disparity map is obtained by stereo matching with the SGBM algorithm (prior art).
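The reprojection from disparity to camera-frame coordinates can be sketched as follows. The SGBM matching itself is omitted (OpenCV provides it as an existing algorithm); the focal lengths, principal point and baseline below are assumed, illustrative values:

```python
import numpy as np

def pixel_to_camera(u, v, disparity, fx, fy, cx, cy, baseline):
    """Reproject a pixel with known disparity into the camera frame:
    Z from the stereo relation Z = fx * B / d, then X and Y from the
    pinhole model."""
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# Assumed intrinsics and a 0.12 m baseline (illustrative values only).
fx = fy = 800.0
cx, cy = 320.0, 240.0
p = pixel_to_camera(400.0, 300.0, 16.0, fx, fy, cx, cy, 0.12)
print(p)  # camera-frame point in metres
```

Applying this to every pixel of the bucket region yields the camera-frame point cloud that R1/T1 then carry into world coordinates.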
In an embodiment of the present invention, the three-dimensional coordinates of each point in the bucket coordinate system are calculated as:

(x, y, z)ᵀ = R2 · (Xw, Yw, Zw)ᵀ + T2

where (Xw, Yw, Zw) are the world coordinates of each point on the earth surface within the bucket.
In an embodiment of the present invention, the method for calculating the total volume of earth in the excavator bucket for a single scoop comprises:
setting an m×m sliding window and letting it traverse the whole depth map, starting from the origin O of the bucket coordinate system;
as the window traverses the depth map, each position corresponds to one prism;
averaging the Z coordinates of the four window vertices in the bucket coordinate system to obtain the height zc of each prism;
approximating the volume of each prism by the formula Vi = Si · zc = m · m · zc;
summing the volumes of all prisms to obtain V1;
adding V1 to the inner volume V2 of the bucket to obtain the total volume Vi of earth in the bucket for a single scoop.
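The sliding-window integration above can be sketched as follows. This is a minimal sketch, assuming the depth map stores z heights (in metres) in the bucket coordinate system; the physical size of one depth-map cell (`cell_size`) is an added assumption here so the result carries volume units:

```python
import numpy as np

def bucket_volume(depth, m, cell_size, inner_volume):
    """Approximate the earth volume from a depth map of z heights in the
    bucket coordinate system. An m x m window slides over the map; the
    mean z of its four corner vertices is the prism height, the prism
    volumes are summed into V1, and the bucket's inner volume V2 is
    added to give the single-scoop total."""
    v1 = 0.0
    rows, cols = depth.shape
    for i in range(0, rows - m, m):
        for j in range(0, cols - m, m):
            zc = (depth[i, j] + depth[i, j + m] +
                  depth[i + m, j] + depth[i + m, j + m]) / 4.0
            v1 += (m * cell_size) ** 2 * zc   # Vi = Si * zc
    return v1 + inner_volume

# Flat 0.1 m heap on a 5x5 vertex grid, 0.05 m per cell, 0.2 m^3 bucket.
depth = np.full((5, 5), 0.1)
print(bucket_volume(depth, 2, 0.05, 0.2))
```

Smaller windows (and cells) give a finer prism decomposition at the cost of more iterations, the usual accuracy/speed trade-off of this approximation.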
The Z coordinates are averaged as:

zc = (zc1 + zc2 + zc3 + zc4) / 4

where zci is the Z coordinate of the i-th vertex and zc is the averaged Z coordinate in the bucket coordinate system.
In a specific embodiment of the present invention, the real-time workload of the excavator is calculated as:

V_total = k · Vi

where V_total is the real-time workload of the excavator and k is an error correction coefficient obtained by performing error measurement multiple times.
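The correction coefficient and workload accumulation can be sketched as follows. This is a minimal sketch under an added assumption: k is estimated as the mean ratio of reference (ground-truth) volumes to vision-measured volumes over repeated trials, and all numbers are illustrative:

```python
def estimate_k(measured, reference):
    """Estimate the error correction coefficient k as the mean ratio of
    reference volumes to vision-measured volumes (assumed procedure)."""
    ratios = [r / m for m, r in zip(measured, reference)]
    return sum(ratios) / len(ratios)

def workload(scoop_volumes, k):
    """Accumulate corrected per-scoop volumes: V_total = k * sum(Vi)."""
    return k * sum(scoop_volumes)

# Three calibration scoops against a known 0.55 m^3 reference volume.
k = estimate_k([0.50, 0.52, 0.48], [0.55, 0.55, 0.55])
total = workload([0.50, 0.52, 0.48], k)
print(round(k, 4), round(total, 4))
```

In operation, `workload` would be re-evaluated after every scoop to give the real-time running total.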
In an embodiment of the present invention, the process of calculating the earth volume in the excavator bucket is as follows:
(1) Acquiring a reference plane:
The origin of the world coordinate system is set at point A of the excavator bucket (see FIG. 2). Using the binocular camera, the three-dimensional coordinates (in the world coordinate system) of the bucket edges are calculated from the area within the bucket frame range (including the frame). Straight lines and a plane are fitted to the points on each edge to obtain a rectangular plane S_ABCD. A bucket coordinate system is then established with A as the origin, AB as the x axis, and AD as the y axis (see FIG. 2). Points are selected in the bucket coordinate system to obtain the rotation matrix R2 and translation matrix T2 between the bucket coordinate system and the world coordinate system.
(2) Obtaining the three-dimensional coordinates of the earthwork:
The world coordinates (Xw, Yw, Zw) of each point on the earth surface inside the bucket are obtained; then, using the rotation matrix R2 and translation matrix T2, the coordinate conversion (x, y, z)ᵀ = R2 · (Xw, Yw, Zw)ᵀ + T2 gives the three-dimensional coordinates (x, y, z) of each point in the bucket coordinate system.
(3) Calculating the volume of the earthwork:
A depth map (see FIG. 3) is established from the coordinates (x, y, z) obtained in step (2), and an m×m sliding window traverses the whole depth map from left to right, starting from the bucket coordinate origin O. As the window traverses the depth map, each position corresponds to a prism, as shown in FIG. 2. The Z coordinates (in the bucket coordinate system) of the four window vertices are averaged to obtain zc, the height of each prism; the volume of each prism is approximated as the area Si = m · m times the height zc. Summing all prism volumes gives V1 (the earth volume above the bucket plane covered by the depth map); adding V1 to the inner volume V2 of the bucket gives the total volume Vi of earth in the bucket for a single scoop.
The foregoing shows and describes the general principles, principal features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
Claims (9)
1. A real-time earth volume calculation method based on binocular vision is characterized by comprising the following steps:
acquiring a rotation matrix R1 and a translation matrix T1 of a camera coordinate system relative to a world coordinate system;
identifying an area within a frame range of the excavator bucket based on an excavator bucket picture shot by a binocular camera;
calculating three-dimensional coordinates of the edge of the excavator bucket under a world coordinate system according to the area within the frame range of the excavator bucket, performing straight line and plane fitting on points on each edge to obtain a rectangular plane, and establishing a bucket coordinate system;
calculating a rotation matrix R2 and a translation matrix T2 of a bucket coordinate system relative to a world coordinate system;
calculating the three-dimensional coordinates of the pixel points in the excavator bucket picture in the camera coordinate system; further calculating their three-dimensional coordinates in the world coordinate system by using a coordinate conversion formula, the rotation matrix R1 and the translation matrix T1, to obtain the world coordinates of each point on the earth surface in the bucket; and then calculating the three-dimensional coordinates of each point in the bucket coordinate system based on the rotation matrix R2 and the translation matrix T2;
establishing a depth map by using the three-dimensional coordinates of each point;
obtaining the total volume of earthwork in the bucket of the excavator each time based on the depth map;
and accumulating and calculating the volume of the earthwork excavated by the bucket of the excavator every time to obtain the real-time workload of the excavator.
2. The binocular vision based real-time earth volume calculation method of claim 1, wherein the identification method of the area within the frame range of the excavator bucket is as follows:
inputting a picture of the excavator bucket shot by a binocular camera into a pre-trained YOLOv3 detection model;
the YOLOv3 detection model outputs the area within the frame range of the excavator bucket.
3. The binocular vision-based real-time earth volume calculation method of claim 2, wherein the YOLOv3 detection model is obtained by training through the following steps:
shooting a plurality of excavator bucket pictures and labeling the buckets in the pictures with labelme;
training a YOLOv3 network with the labeled pictures to obtain the YOLOv3 detection model.
4. The binocular vision-based real-time earth volume calculation method of claim 1, wherein: the three-dimensional coordinates of the pixel points in the camera coordinate system are calculated based on the disparity map and the camera intrinsic parameters.
5. The binocular vision-based real-time earth volume calculation method of claim 1, wherein: the three-dimensional coordinates of each point in the bucket coordinate system are calculated as:

(x, y, z)ᵀ = R2 · (Xw, Yw, Zw)ᵀ + T2

where (Xw, Yw, Zw) are the world coordinates of each point on the earth surface within the bucket.
6. The binocular vision-based real-time earth volume calculation method of claim 1, wherein: the method for calculating the total volume of earth in the excavator bucket for a single scoop comprises:
setting an m×m sliding window and letting it traverse the whole depth map, starting from the origin O of the bucket coordinate system;
as the window traverses the depth map, each position corresponds to one prism;
averaging the Z coordinates of the four window vertices in the bucket coordinate system as the height of each prism;
approximating the volume of each prism by the formula Vi = Si · zc = m · m · zc;
summing the volumes of all prisms to obtain V1;
adding V1 to the inner volume V2 of the bucket to obtain the total volume Vi of earth in the bucket for a single scoop.
8. The binocular vision-based real-time earth volume calculation method of claim 6, wherein: the real-time workload of the excavator is calculated as:

V_total = k · Vi

where V_total is the real-time workload of the excavator and k is an error correction coefficient.
9. The binocular vision-based real-time earth volume calculation method of claim 8, wherein: the error correction coefficient k is obtained by performing error measurement a plurality of times.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110102869.7A CN112819882B (en) | 2021-01-26 | 2021-01-26 | Real-time earth volume calculation method based on binocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112819882A true CN112819882A (en) | 2021-05-18 |
CN112819882B CN112819882B (en) | 2021-11-19 |
Family
ID=75859233
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110102869.7A Active CN112819882B (en) | 2021-01-26 | 2021-01-26 | Real-time earth volume calculation method based on binocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112819882B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115307548A (en) * | 2022-10-12 | 2022-11-08 | 北京鸿游科技有限公司 | Dynamic monitoring device for excavating equipment and storage medium thereof |
CN115482269A (en) * | 2022-09-22 | 2022-12-16 | 佳都科技集团股份有限公司 | Method and device for calculating earth volume, terminal equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102494611A (en) * | 2011-12-05 | 2012-06-13 | 中国人民解放军国防科学技术大学 | method for rapidly measuring volume of object |
US20170236261A1 (en) * | 2016-02-11 | 2017-08-17 | Caterpillar Inc. | Wear measurement system using computer vision |
CN108007345A (en) * | 2017-12-01 | 2018-05-08 | 南京工业大学 | A kind of digger operating device measuring method based on monocular camera |
CN109903337A (en) * | 2019-02-28 | 2019-06-18 | 北京百度网讯科技有限公司 | Method and apparatus for determining the pose of the scraper bowl of excavator |
CN110887440A (en) * | 2019-12-03 | 2020-03-17 | 西安科技大学 | Real-time measuring method and device for volume of earth of excavator bucket based on structured light |
CN111368664A (en) * | 2020-02-25 | 2020-07-03 | 吉林大学 | Loader full-fill rate identification method based on machine vision and bucket position information fusion |
US10733752B2 (en) * | 2017-07-24 | 2020-08-04 | Deere & Company | Estimating a volume of contents in a container of a work vehicle |
CN111945799A (en) * | 2019-05-16 | 2020-11-17 | 斗山英维高株式会社 | Method for measuring amount of bucket soil in excavation work of excavator |
Non-Patent Citations (2)
Title |
---|
NEELY, HALY L. et al.: "Modeling Soil Crack Volume at the Pedon Scale using Available Soil Data", Soil Science Society of America Journal |
WANG XIAOMIN: "Research on the Application of 3D Laser Scanning Technology in Earthwork Calculation", China Master's Theses Full-text Database, Information Science and Technology |
Also Published As
Publication number | Publication date |
---|---|
CN112819882B (en) | 2021-11-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||