CN113446986A - Target depth measurement method based on observation height change - Google Patents


Info

Publication number
CN113446986A
Authority
CN
China
Prior art keywords
target
camera
height
image
optical axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110521629.0A
Other languages
Chinese (zh)
Other versions
CN113446986B (en)
Inventor
潘涵彧
毛家发
王雯卿
徐清宇
许金山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202110521629.0A priority Critical patent/CN113446986B/en
Publication of CN113446986A publication Critical patent/CN113446986A/en
Application granted granted Critical
Publication of CN113446986B publication Critical patent/CN113446986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00: Measuring distances in line of sight; Optical rangefinders
    • G01C3/02: Details

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A monocular target depth measurement method based on observation height change comprises the following steps: 1) in preparation for collecting data, several different camera heights are adopted to produce target learning data based on changes of the observation height hᵣ. 2) The target height h₀ and the distance m between the target and the camera are unknown and need to be obtained through learning, while the camera height hᵣ is known; an image of the target is taken, the camera height is then changed, and a second image of the target is taken. 3) A mathematical model is established according to the imaging principle of the camera and the basic principle of analog-to-digital conversion, a system of equations is set up, and the two images are solved jointly to obtain the height and the distance of the target. The method has a simple algorithm, is easy to operate, and can effectively reduce production cost; the monocular depth measurement system is simple in structure and suitable for use on mobile phones and network cameras.

Description

Target depth measurement method based on observation height change
Technical Field
The invention relates to target tracking, positioning and measuring in the field of monocular vision, in particular to a target depth measuring method based on observation height change, and belongs to the field of computer vision target depth acquisition.
Background
In recent years, mobile robots have been widely used in inspection and monitoring, visual navigation, automatic image interpretation, human-computer interaction, virtual reality, and other fields. Machine vision simulates the visual function of the human eye with a computer, extracting information from video sequences and identifying the form and motion of three-dimensional scenes and objects in the objective world.
With the increasing demand for mobile robots in industry, civilian life and other fields, machine vision has an important influence on robot performance, and machine vision technology is ever more widely applied owing to the flexibility and adaptability of its algorithms.
At present, methods by which machine vision acquires external target depth information fall roughly into two types: target depth acquisition based on computer vision, and positioning based on sensors. The computer-vision methods include binocular perception, monocular camera calibration, target depth estimation based on deep learning, and depth acquisition with a single camera plus a plane mirror.
The binocular depth perception method simulates human stereoscopic vision, obtaining target depth information from the disparity between the left and right views. However, the depth accuracy of binocular perception is affected by camera performance, illumination, and the length of the baseline (the distance between the two cameras). Because of its high complexity and relatively large processing load, the method is greatly limited in application and is unfavorable for real-time acquisition of target depth information.
Camera calibration methods may be classified into conventional calibration methods and self-calibration methods. Conventional camera calibration methods have high calibration accuracy but require specific calibration references. During calibration, owing to equipment limitations, the corresponding coordinates of the target in the world coordinate system and the image coordinate system cannot be recorded precisely, and the precision of the coordinate conversion fluctuates accordingly.
The deep-learning-based target depth estimation methods acquire depth information of a target through a neural network architecture and large-scale data learning. Although these methods can obtain an estimated value of the target depth, none can obtain an accurate value, and they are difficult to apply on a mobile robot.
Non-visual sensors sense the proximity of an external target to the robot using sound waves, infrared rays, pressure, electromagnetic induction and the like, and solve for the depth information of the target. However, the coordinates of the target in 3D space must be obtained jointly from the image and the sensor, which not only increases the data-processing load on the robot's DSP but also raises the problem of synchronizing vision with the sensor, so the method slows the robot's response.
Most existing technologies for target tracking, positioning and measurement either rely on two cameras or combine a single camera with a non-visual sensor; both approaches achieve accurate machine-vision positioning at the cost of processing speed, by increasing the data volume. If machine-vision positioning is to be realized without increasing the data-processing load, only a monocular distance measurement method can be adopted, which makes monocular distance measurement clearly the more challenging direction in practical research.
Disclosure of Invention
The invention provides a target depth measuring method based on observation height change and on monocular vision, and aims to solve the problem that most existing machine-vision technologies for target tracking, positioning and measurement rely either on two cameras or on a single camera combined with a non-visual sensor, both of which sacrifice processing speed by increasing the data volume.
The invention provides a monocular vision-based target depth measuring method, theoretically proves the solvability under observation height change according to the camera imaging principle and the basic principle of analog-to-digital conversion, and supplies a basic method for monocular vision theory. The relationship among the target distance, the target height, the camera focal length, the camera height and the image resolution is derived, and learning of monocular target information is realized; the theory proposed by the invention provides a theoretical basis for monocular-vision-based target tracking, target measurement and the like.
The technical scheme adopted by the invention for solving the technical problems is as follows:
The changed observation height is used to introduce known external parameters, and a mathematical model is established according to the imaging principle of the camera, from which the target depth is deduced.
The invention relates to a target depth measuring method based on observation height change, which comprises the following steps:
step 1, the definition of 'the target is positioned at the visual center' and 'the depression' is given.
Step 2, relation models of the target distance and the target height are established according to the position of the falling point of the camera's main optical axis, as follows:
1) the geometric imaging model when the target vertex falls on the camera's main optical axis.
As shown in FIG. 1, the relationship between the length d of the image CD, the target distance m, the target height h₀ and the camera height hᵣ is derived:

d = f·h₀·m / (m² + hᵣ(hᵣ − h₀)) ……… (1)
As can be seen from the camera imaging principle, since the vertex B of the target AB falls on the main optical axis, the image C of point B must be the center of the image plane π₁. The target depth distance m can be obtained:

m = [f·h₀ ± √(f²·h₀² − 4d²·hᵣ(hᵣ − h₀))] / (2d) ……… (2)
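To make these case-1 relations concrete, here is a minimal numeric sketch of formulas (1) and (2) under a pinhole model (the function names and sample values are illustrative assumptions, not from the patent; all lengths in consistent units such as millimetres):

```python
import math

def image_length_case1(f, h0, m, hr):
    """Forward model, formula (1): image length d when the target
    vertex lies on the camera's main optical axis."""
    return f * h0 * m / (m**2 + hr * (hr - h0))

def depth_case1(f, h0, d, hr):
    """Inverse model, formula (2): target depth m from the measured
    image length d; the larger root is the physically relevant one
    in this sample configuration."""
    disc = (f * h0)**2 - 4 * d**2 * hr * (hr - h0)
    if disc < 0:
        raise ValueError("measurements inconsistent with the model")
    return (f * h0 + math.sqrt(disc)) / (2 * d)

# round-trip check with synthetic values
f, h0, m_true, hr = 35.0, 1700.0, 2400.0, 2010.0
d = image_length_case1(f, h0, m_true, hr)
print(depth_case1(f, h0, d, hr))  # ~2400.0
```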
2) the geometric imaging model when the target vertex is higher than the camera's main optical axis.
As in FIG. 2, when the target vertex is higher than the camera's main optical axis, the vertex C of the target image CD in the image plane π₁ lies above the center point of the image plane. Let the main optical axis intersect the target AB at B′ and set |AB′| = X, so that |B′B| = h₀ − X. Writing d₁ and d₂ for the distances from the images of A and B to the image-plane center, the relation between the target height h₀ and the target distance m is:

d₁ = f·X·m / (m² + hᵣ(hᵣ − X)),  d₂ = f·(h₀ − X)·m / (m² + (hᵣ − X)(hᵣ − h₀)) ……… (3)
3) the geometric imaging model when the target vertex is lower than the camera's main optical axis.
As in FIG. 3, when the target vertex is lower than the camera's main optical axis, the vertex C of the target image CD in the image plane π₁ lies below the center point of the image plane, and the main optical axis intersects the extension line of the target AB at a point B′. Setting |AB′| = X gives |B′B| = X − h₀. Since the digital coordinates of C and D in the image plane π₁ are C(X_C, Y_C) and D(X_D, Y_D), and C′ is the center point of the image plane, the analog coordinates of C′ follow from the image resolution through the analog-to-digital (pixel-to-physical) conversion, and the distances d₁ and d₂ are obtained from the pixel coordinates in the same way.
The target depth distance is then governed by:

d₁ = f·X·m / (m² + hᵣ(hᵣ − X)),  d₂ = f·(X − h₀)·m / (m² + (hᵣ − X)(hᵣ − h₀)) ……… (4)
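As a concrete reading of the projection relations in (3) and (4), a small sketch under the same pinhole model (the function and its signature are illustrative; the offsets are signed, positive below the image center, so that d = d₁ − d₂ as used in formula (7) below):

```python
import math

def image_offset(f, hr, m, y_axis, y):
    """Signed image offset of a point at height y and horizontal
    distance m, for a camera at height hr whose main optical axis
    passes through height y_axis on the target line; equal in
    magnitude to the d1/d2 expressions of formulas (3) and (4)."""
    beta = math.atan2(hr - y_axis, m)   # depression angle of the axis
    alpha = math.atan2(hr - y, m)       # depression angle of the ray
    return f * math.tan(alpha - beta)

# d1 is the image of the foot A (y = 0); d2 the image of the vertex B
f, hr, m, X, h0 = 35.0, 2010.0, 2400.0, 1000.0, 1700.0
d1 = image_offset(f, hr, m, X, 0.0)
d2 = image_offset(f, hr, m, X, h0)  # negative here: vertex above the axis
```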
Step 3, solvability analysis under the observation height change, the process being as follows:
For the imaging geometric model with the target vertex higher than the camera's main optical axis, eliminating the variable X in formula (3) gives:

f(d₁ + d₂)m² − (f² − d₁d₂)·h₀·m + f(d₁ + d₂)·hᵣ(hᵣ − h₀) = 0 ……… (5)
For the imaging geometric model with the target vertex lower than the camera's main optical axis, eliminating the variable X in formula (4) gives:

f(d₁ − d₂)m² − (d₁d₂ + f²)·h₀·m + f(d₁ − d₂)·hᵣ(hᵣ − h₀) = 0 ……… (6)
Since d = d₁ − d₂ (with the sign convention for d₂ given below, this form covers both cases):

f·d·m² − (d₁d₂ + f²)·h₀·m + f·d·hᵣ(hᵣ − h₀) = 0 ……… (7)
for the purpose of distinction, the target image length, the distance from the lowest point of the target image to the center point of the image plane, and the distance from the highest point to the center point of the image plane in equation (7) are respectively expressed as: d', d1',d'2The joint formulas (3) and (4) (marked as Λ { (3), (4) }) are simplified as follows:
Figure BDA0003064238000000042
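Formula (8) reads as one curve h₀(m) per observation. A minimal sketch of (8-1), assuming the sign convention d = d₁ − d₂ and consistent units (the factory function is an illustrative construction, not from the patent):

```python
def h0_curve(f, d, d1, d2, hr):
    """Formula (8-1): candidate target height h0 as a function of the
    unknown depth m, for one observation at camera height hr."""
    def h0(m):
        return f * d * (m * m + hr * hr) / ((d1 * d2 + f * f) * m + f * d * hr)
    return h0
```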
if the soft robot tracks the target, then the external parameter h is given in the machine learning processrThe flexible robot can realize external parameter h by stretching and retracting the height of the flexible robotrFor example, the soft robot may stand on its tiptoe or extend its neck to observe the target, or may bend down or shorten its neck to observe the target. In these cases, the height h is observedrChanges will occur.
Suppose the target image obtained with the robot at height hᵣ is I, and the target image obtained after elongating or shortening by Δh is I′. The observation height hᵣ′ is then:

hᵣ′ = hᵣ + Δh ……… (9)
provision for
Figure BDA0003064238000000051
When the direction is positive, then Δ h takes a negative value when the robot is shortened and a positive value when the robot is extended. Likewise, when the main optical axis is higher than the target vertex, d2Negative, when the main optical axis is lower than the target top, d2Taking this as a positive example, formula (9) is substituted for formula (8), resulting in a joint solution system of equations with varying external parameters:
h₀ = f·d·(m² + hᵣ²) / [(d₁d₂ + f²)·m + f·d·hᵣ] ……… (10-1)
h₀ = f·d′·(m² + (hᵣ + Δh)²) / [(d₁′d₂′ + f²)·m + f·d′·(hᵣ + Δh)] ……… (10-2)

where d′, d₁′ and d₂′ in formula (10) denote respectively the length of the target image C′D′ in the image I′, the distance from C′ to the image center, and the distance from D′ to the image center, as in the red portion of FIG. 4.
The curve of expression (10-1) is denoted ξ(I), and the curve of expression (10-2) is denoted ξ(I′).
Taking the partial derivative with respect to h₀ on both sides of formula (8-1), regarding m as a function of h₀, gives:

Φ(m, hᵣ, h₀) = ∂m/∂h₀ = [(d₁d₂ + f²)·m + f·d·hᵣ] / [2f·d·m − (d₁d₂ + f²)·h₀] ……… (11)
Since d₁d₂ ≪ f², the above formula can be simplified to:

Φ(m, hᵣ, h₀) = (f·m + d·hᵣ) / (2d·m − f·h₀) ……… (12)
the molecular moiety dh of formula (12)r+ fm >0, determining phi (m, h)r,h0) The positive and negative of the function only have the denominator part thereof, and the curves xi (I) and xi (I') can be deduced to be
Figure BDA0003064238000000055
There is a unique intersection point within the range
Figure BDA0003064238000000056
Do not intersect within the range.
In conclusion, a simplified graph of the two curves is drawn; it can be seen from the graph that the two curves have only a unique intersection point in the range m > 0, that is, the system of equations (8) has a unique solution.
Step 4, the observation height of the camera is changed to obtain several groups of different photos; the equations relating the target depth and the observation height at the different observation heights are listed through the established mathematical model and then solved simultaneously to obtain the target depth and the target height.
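A sketch of this simultaneous solution as a one-dimensional root search on the difference of the two curves of formula (10), reusing h0_curve from the sketch above; the bracket endpoints are assumptions to be chosen per the uniqueness analysis of step 3:

```python
def solve_depth(curve1, curve2, m_lo, m_hi, tol=1e-9):
    """Bisect g(m) = curve1(m) - curve2(m) to locate the intersection
    of the two h0(m) curves; returns (target depth m, target height)."""
    g = lambda m: curve1(m) - curve2(m)
    if g(m_lo) * g(m_hi) > 0:
        raise ValueError("bracket does not enclose the intersection")
    while m_hi - m_lo > tol:
        mid = 0.5 * (m_lo + m_hi)
        if g(m_lo) * g(mid) <= 0:
            m_hi = mid
        else:
            m_lo = mid
    m = 0.5 * (m_lo + m_hi)
    return m, curve1(m)
```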
Through the operation of the above steps, the determination of the target depth can be completed, together with target calibration and tracking.
The invention has the advantages that: the target depth and height are solved by establishing a mathematical model according to the camera imaging principle and the basic principle of analog-to-digital conversion; the algorithm is simple, easy to operate, and can effectively reduce production cost. The monocular depth measurement system is simple in structure, can be widely used on mobile phones and network cameras, avoids the complex stereo-matching process of binocular distance measurement, reduces operational complexity, and can meet real-time requirements in industrial production.
Drawings
In order to measure the horizontal distance m between the target object AB and the camera, the coordinate system shown in FIG. 1 is established: the point O at the bottom of the camera is taken as the origin, the horizontal line along the machine-vision direction as the X axis, the straight line in the ground plane perpendicular to OX as the Y axis, and the camera position as the point F. Let the image plane be π₁ and the ground plane be π₂; after the target AB is imaged by the camera, its projection on the image plane π₁ is CD, with image length d, target distance m, target height h₀ and camera height hᵣ.
FIG. 1 is an imaging diagram with the target vertex falling exactly on the main optical axis; the highest point of the target falling on the camera's main optical axis means the target vertex is located at the visual center, and since the vertex B of the target AB falls on the main optical axis, the image C of point B must be the center of the image plane π₁.
FIG. 2 is a schematic diagram of imaging with the target vertex above the camera's main optical axis: in the image plane π₁ the vertex C of the target image CD is higher than the center point of the image plane; the main optical axis intersects the target AB at B′, and setting |AB′| = X gives |B′B| = h₀ − X.
FIG. 3 is a schematic diagram of imaging with the target vertex below the camera's main optical axis: in the image plane π₁ the vertex C of the target image CD is lower than the center point of the image plane; the main optical axis intersects the extension line of the target AB at the point B′, and setting |AB′| = X gives |B′B| = X − h₀. Since the digital coordinates of C and D in the image plane π₁ are C(X_C, Y_C) and D(X_D, Y_D), and C′ is the center point of the image plane, the analog coordinates of C′ follow from the image resolution through the analog-to-digital conversion.
FIG. 4 is a schematic diagram of the imaging geometry when the observation height changes: the target image obtained with the robot at height hᵣ is I, and the target image obtained after elongating or shortening by Δh is I′, shown as the red portion in the figure.
FIG. 5 is an approximate graph of the curves ξ(I) and ξ(I′) as the external parameter hᵣ changes, with the depth m on the abscissa and the target height h₀ on the ordinate; the intersection can be seen.
Fig. 6 is a flow chart of the method of the present invention.
Detailed Description
In order to better explain the technical scheme of the invention, the invention is further explained below through an embodiment with reference to the accompanying drawings.
A target depth measurement method based on observation height change comprises the following steps:
step one, preparing a steel pipe and a camera, preparing for collecting data, respectively adopting different camera heights as the height h based on observationrThe changed target learning data.
Step two, the target height is h₀, and the distance m between the target and the camera is unknown and needs to be obtained by learning, while the camera height hᵣ is known. An image of the target is taken; the camera height is then changed by Δh, a known quantity, and a second image of the target is taken.
Step three, a mathematical model is established according to the imaging models of FIGS. 1 to 3, a system of equations is set up, and the two images are solved jointly to obtain the height and the distance of the target; the solved target height and target distance are recorded as Λ{i, j}. Specific parameters: for image i, hᵣ = 2010 mm; for image j, hᵣ = 1110 mm; Δh = 910 mm. The measured target depth m was 2399.4 mm, with an error of 0.6 mm.
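As a self-consistency check of this embodiment-style procedure, the sketches above (image_offset, h0_curve, solve_depth) can be chained on synthetic values; the aim height X and focal length are assumptions that loosely echo the embodiment's camera heights, so the recovered depth should simply reproduce the forward model:

```python
f, h0_true, m_true, X = 35.0, 1700.0, 2400.0, 1000.0
hr_i, hr_j = 2010.0, 1110.0   # camera heights for images i and j

def observe(hr):
    # simulate one observation and return its h0(m) curve, formula (10)
    d1 = image_offset(f, hr, m_true, X, 0.0)
    d2 = image_offset(f, hr, m_true, X, h0_true)
    return h0_curve(f, d1 - d2, d1, d2, hr)

m, h0 = solve_depth(observe(hr_i), observe(hr_j), 1000.0, 10000.0)
print(round(m, 1), round(h0, 1))  # 2400.0 1700.0
```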
The above detailed description further details the objects, technical solutions and advantages of the present invention. It should be understood that the above is only an embodiment of the present invention and is not intended to limit the scope of the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of the present invention.

Claims (4)

1. A monocular target depth measurement method based on observation height change comprises the following steps:
step 1, giving definitions of 'a target is positioned at a visual center' and 'a depression angle';
step 2, establishing relation models of the target distance and the target height according to the position of the falling point of the camera's main optical axis, as follows:
1) the geometric imaging model when the target vertex is positioned on the camera's main optical axis;
deducing the relationship between the length d of the image CD, the target distance m, the target height h₀ and the camera height hᵣ:

d = f·h₀·m / (m² + hᵣ(hᵣ − h₀)) ……… (1)
as can be seen from the camera imaging principle, since the vertex B of the target AB falls on the main optical axis, the image C of point B must be the center of the image plane π₁; the target depth distance m can be obtained:

m = [f·h₀ ± √(f²·h₀² − 4d²·hᵣ(hᵣ − h₀))] / (2d) ……… (2)
the geometric imaging model when the target vertex is higher than the camera's main optical axis;
when the target vertex is higher than the camera's main optical axis, the vertex C of the target image CD in the image plane π₁ is higher than the center point of the image plane; the main optical axis intersects the target AB at B′, and setting |AB′| = X gives |B′B| = h₀ − X; with d₁ and d₂ the distances from the images of A and B to the image-plane center, the relation between the target height h₀ and the target distance m is:

d₁ = f·X·m / (m² + hᵣ(hᵣ − X)),  d₂ = f·(h₀ − X)·m / (m² + (hᵣ − X)(hᵣ − h₀)) ……… (3)
the geometric imaging model when the target vertex is lower than the camera's main optical axis;
when the target vertex is lower than the camera's main optical axis, the vertex C of the target image CD in the image plane π₁ is lower than the center point of the image plane, and the main optical axis intersects the extension line of the target AB at a point B′; setting |AB′| = X gives |B′B| = X − h₀; since the digital coordinates of C and D in the image plane π₁ are C(X_C, Y_C) and D(X_D, Y_D), and C′ is the center point of the image plane, the analog coordinates of C′ follow from the image resolution through the analog-to-digital conversion;
the target depth distance is then governed by:

d₁ = f·X·m / (m² + hᵣ(hᵣ − X)),  d₂ = f·(X − h₀)·m / (m² + (hᵣ − X)(hᵣ − h₀)) ……… (4)
step 3, solvability analysis under the observation height change, the process being as follows:
for the imaging geometric model with the target vertex higher than the camera's main optical axis, eliminating the variable X in formula (3) gives:

f(d₁ + d₂)m² − (f² − d₁d₂)·h₀·m + f(d₁ + d₂)·hᵣ(hᵣ − h₀) = 0 ……… (5)
for the imaging geometric model with the target vertex lower than the camera's main optical axis, eliminating the variable X in formula (4) gives:

f(d₁ − d₂)m² − (d₁d₂ + f²)·h₀·m + f(d₁ − d₂)·hᵣ(hᵣ − h₀) = 0 ……… (6)
since d = d₁ − d₂:

f·d·m² − (d₁d₂ + f²)·h₀·m + f·d·hᵣ(hᵣ − h₀) = 0 ……… (7)
for distinction, the target image length, the distance from the lowest point of the target image to the center point of the image plane, and the distance from the highest point to the center point, as they appear in equation (7) for the second observation, are denoted d′, d₁′ and d₂′; joining the relations (denoted Λ{(3), (4)}) and simplifying gives:

h₀ = f·d·(m² + hᵣ²) / [(d₁d₂ + f²)·m + f·d·hᵣ] ……… (8-1)
h₀ = f·d′·(m² + hᵣ′²) / [(d₁′d₂′ + f²)·m + f·d′·hᵣ′] ……… (8-2)

where hᵣ′ is the observation height of the second observation, defined in (9) below;
if the soft robot tracks the target, then the external parameter h is given in the machine learning processrThe software robot can realize external parameter h by stretching and contracting the height of the software robotrFor example, the soft robot can stand on tiptoe or extend neck to observe the target, and can also bend down or shorten neck to observe the target; in these cases, the height h is observedrWill change;
suppose the target image obtained with the robot at height hᵣ is I and the target image obtained after elongating or shortening by Δh is I′; the observation height hᵣ′ is then:

hᵣ′ = hᵣ + Δh ……… (9)
provision for
Figure RE-FDA0003174902690000023
When the direction is positive, when the robot is shortened, the delta h is negative, and when the robot is extended, the delta h is positive; similarly, when the main optical axis is above the targetAt the vertex, d2Negative, when the main optical axis is lower than the target top, d2Taking this as a positive example, formula (9) is substituted for formula (8), resulting in a joint solution system of equations with varying external parameters:
h₀ = f·d·(m² + hᵣ²) / [(d₁d₂ + f²)·m + f·d·hᵣ] ……… (10-1)
h₀ = f·d′·(m² + (hᵣ + Δh)²) / [(d₁′d₂′ + f²)·m + f·d′·(hᵣ + Δh)] ……… (10-2)

where d′, d₁′ and d₂′ in formula (10) denote respectively the length of the target image C′D′ in the image I′, the distance from C′ to the image center, and the distance from D′ to the image center;
the curve of expression (10-1) is denoted ξ(I), and the curve of expression (10-2) is denoted ξ(I′);
the relation h is obtained for both sides of the formula (10-1)0The partial derivatives of (c) are given by:
Figure RE-FDA0003174902690000031
since d₁d₂ ≪ f², the above formula can be simplified to:

Φ(m, hᵣ, h₀) = (f·m + d·hᵣ) / (2d·m − f·h₀) ……… (12)
the molecular moiety dh of formula (12)r+fm>0; determining phi (m, h)r,h0) The positive and negative of the function only have the denominator part thereof, and the curves xi (I) and xi (I') can be deduced to be
Figure RE-FDA0003174902690000033
There is a unique intersection point within the range
Figure RE-FDA0003174902690000034
Do not intersect within the range;
in conclusion, a simplified graph of the two curves is drawn, from which it can be seen that the two curves have only a unique intersection point in the range m > 0, that is, the system of equations (8) has a unique solution;
and step 4, changing the observation height of the camera to obtain several groups of different photos, listing the equations relating the target depth and the observation height at the different observation heights through the established mathematical model, and then solving simultaneously to obtain the target depth and the target height.
2. The monocular target depth measurement method based on observation height change according to claim 1, wherein the definition of "the target is positioned at the visual center" in step 1 is the basis of the mathematical model classification in step 2.
3. The monocular target depth measurement method based on observation height change according to claim 1, wherein in step 2, four relation models among the target distance, the target height, the camera height, the image resolution, the image target size, the camera parameters and the like are deduced according to the geometric model of camera imaging and the basic principle of converting analog signals into digital signals, only one camera being needed.
4. The monocular target depth measurement method based on observation height change according to claim 1, wherein in step 4, the equations of the different mathematical models at different observation heights are combined to obtain a unique solution.
CN202110521629.0A 2021-05-13 2021-05-13 Target depth measuring method based on observation height change Active CN113446986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110521629.0A CN113446986B (en) 2021-05-13 2021-05-13 Target depth measuring method based on observation height change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110521629.0A CN113446986B (en) 2021-05-13 2021-05-13 Target depth measuring method based on observation height change

Publications (2)

Publication Number Publication Date
CN113446986A true CN113446986A (en) 2021-09-28
CN113446986B CN113446986B (en) 2022-07-22

Family

ID=77809690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110521629.0A Active CN113446986B (en) 2021-05-13 2021-05-13 Target depth measuring method based on observation height change

Country Status (1)

Country Link
CN (1) CN113446986B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3038011A1 (en) * 2014-12-22 2016-06-29 Delphi Technologies, Inc. Method for determining the distance between an object and a motor vehicle by means of a monocular imaging device
CN106443650A (en) * 2016-09-12 2017-02-22 电子科技大学成都研究院 Monocular vision range finding method based on geometric relation
CN107084680A (en) * 2017-04-14 2017-08-22 浙江工业大学 Target depth measuring method based on machine monocular vision
WO2020234906A1 (en) * 2019-05-17 2020-11-26 Alma Mater Studiorum - Universita' Di Bologna Method for determining depth from images and relative system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAO JIAFA et al.: "Target distance measurement method using monocular vision", IET Journals *

Also Published As

Publication number Publication date
CN113446986B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN109615652B (en) Depth information acquisition method and device
CN109658457B (en) Method for calibrating arbitrary relative pose relationship between laser and camera
Orteu et al. Multiple-camera instrumentation of a single point incremental forming process pilot for shape and 3D displacement measurements: methodology and results
EP1886281B1 (en) Image processing method and image processing apparatus
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN111667536A (en) Parameter calibration method based on zoom camera depth estimation
CN108413917B (en) Non-contact three-dimensional measurement system, non-contact three-dimensional measurement method and measurement device
CN110334701B (en) Data acquisition method based on deep learning and multi-vision in digital twin environment
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN111427451A (en) Method for determining position of fixation point in three-dimensional scene by adopting scanner and eye tracker
CN113446957B (en) Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking
CN113393439A (en) Forging defect detection method based on deep learning
CN107917700A (en) The 3 d pose angle measuring method of target by a small margin based on deep learning
CN104732586B (en) A kind of dynamic body of 3 D human body and three-dimensional motion light stream fast reconstructing method
CN102881040A (en) Three-dimensional reconstruction method for mobile photographing of digital camera
CN112525106A (en) Three-phase machine cooperative laser-based 3D detection method and device
Grudziński et al. Stereovision tracking system for monitoring loader crane tip position
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN109636856A (en) Object 6 DOF degree posture information union measuring method based on HOG Fusion Features operator
CN111998834B (en) Crack monitoring method and system
KR100438212B1 (en) Extraction method for 3-dimensional spacial data with electron microscope and apparatus thereof
CN113446986B (en) Target depth measuring method based on observation height change

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant