CN105333818B - 3D space measuring method based on monocular camera - Google Patents


Info

Publication number
CN105333818B
Authority
CN
China
Prior art keywords
point
measured
Prior art date
Legal status
Active
Application number
CN201410339869.9A
Other languages
Chinese (zh)
Other versions
CN105333818A (en)
Inventor
吴朝晖 (Wu Zhaohui)
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201410339869.9A priority Critical patent/CN105333818B/en
Publication of CN105333818A publication Critical patent/CN105333818A/en
Application granted granted Critical
Publication of CN105333818B publication Critical patent/CN105333818B/en


Landscapes

  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention provides a 3D space measuring method based on a monocular camera, including: focusing respectively on a point to be measured on the ground and an auxiliary test point to obtain corresponding imaging parameters, where the auxiliary test point lies on the projection line of the camera optical axis on the ground, the object distances of the point to be measured and the auxiliary test point differ, and the camera installation parameters are unchanged between the two imagings; and performing a correlation calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point to obtain the relative coordinates of the point to be measured. The reference coordinate system of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line through the fulcrum as the Y axis, the projection of the camera optical axis along the ground as the X axis, and the direction perpendicular to the X and Y axes as the Z axis. The present invention calculates the spatial coordinates of the point to be measured using only the internal parameters of the monocular camera, without measuring the camera installation parameters or calibrating against a calibration object, which saves cost and simplifies operation.

Description

3D space measuring method based on monocular camera
Technical Field
The invention relates to the field of video monitoring, in particular to a 3D space measuring method based on a monocular camera.
Background
A monocular camera cannot form 3D vision from a single shot. When no reference object of known size is available for auxiliary measurement, and neither the erection height of the camera nor the included angle between the lens central axis and the ground is known, the relative coordinates between a photographed object and the camera installation point cannot be measured.
In the prior art, for single-shot measurement with a monocular camera, a reference object of known size is photographed first; from the pixels the reference object occupies in the image, its actual size, and the included angle between the camera and the ground, a proportional relation between object size and image pixels is calculated. In subsequent shots, with the camera installation parameters unchanged, the actual size of a photographed object is calculated from the number of pixels it occupies, and the relative coordinates between the object and the camera are calculated from the camera-to-ground angle and the height of the erection pole.
This process imposes several external conditions on single-shot monocular measurement: it requires camera installation data such as the pole height and the camera-to-ground angle; it requires a calibration object and manual auxiliary calibration to obtain the conversion parameters; and the camera installation parameters cannot be changed in subsequent use, otherwise the camera must be recalibrated, so adaptability is poor.
Alternatively, the prior art photographs the same object with a binocular camera, or with a monocular camera from two different positions and angles, and realizes 3D visual perception from the object's different positions in the images. However, this requires either two cameras or a single camera fitted with a moving guide and a driving device, which is costly.
Disclosure of Invention
In view of the above, the present invention provides a 3D space measuring method based on a monocular camera, including:
focusing respectively on a point to be measured on the ground and an auxiliary test point to obtain corresponding imaging parameters, wherein the auxiliary test point is located on the projection line of the camera optical axis on the ground, the object distances of the point to be measured and the auxiliary test point are different, and the camera installation parameters are unchanged between the two imagings;
and performing a correlation calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point to obtain the relative coordinates of the point to be measured, wherein the reference coordinate system of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line from the fulcrum to the ground as the Y axis, the projection of the camera optical axis along the ground as the X axis, and the direction perpendicular to the X axis and the Y axis as the Z axis.
The invention also provides a 3D space measuring device based on the monocular camera, which comprises:
an imaging parameter acquisition unit, configured to focus respectively on a point to be measured on the ground and an auxiliary test point to obtain corresponding imaging parameters, wherein the auxiliary test point is located on the projection line of the camera optical axis on the ground, the object distances of the point to be measured and the auxiliary test point are different, and the camera installation parameters are unchanged between the two imagings;
and a relative coordinate calculation unit, configured to perform a correlation calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point to obtain the relative coordinates of the point to be measured, wherein the reference coordinate system of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line from the fulcrum to the ground as the Y axis, the projection of the camera optical axis along the ground as the X axis, and the direction perpendicular to the X axis and the Y axis as the Z axis.
The invention uses the internal parameters of the monocular camera to calculate the spatial relative coordinates of the point to be measured, without measuring the camera installation parameters or calibrating a calibration object, which saves labor, material, and time costs and simplifies the operation process.
Drawings
Fig. 1 is a schematic diagram of a logic structure of monocular camera-based 3D spatial measurement and its underlying hardware environment in one embodiment of the present invention.
Fig. 2 is a flowchart of a monocular camera-based 3D spatial measurement method in one embodiment of the present invention.
Fig. 3 is a schematic view of a monocular camera mount.
FIG. 4 is a schematic illustration of optical imaging in one embodiment of the present invention.
FIG. 5 is a schematic diagram of the image height of an imaging point in the Y-axis direction of an image sensor according to an embodiment of the present invention.
Fig. 6 is a schematic view of the principle of lens imaging.
FIG. 7 is a schematic diagram of the image height of an imaging point in the Z-axis direction of an image sensor according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The present invention provides a monocular-camera-based 3D spatial measuring device, described below by taking a software implementation as an example, though other implementations such as hardware or logic devices are not excluded. As shown in fig. 1, the hardware environment in which the apparatus runs includes a CPU, a memory, a nonvolatile memory, and other hardware. The apparatus is a logical-level virtual device run by the CPU, and includes an imaging parameter acquisition unit and a relative coordinate calculation unit. Referring to fig. 2, the operation of the apparatus includes the following steps:
step 101, the imaging parameter acquisition unit focuses respectively on a point to be measured on the ground and an auxiliary test point to obtain corresponding imaging parameters, wherein the auxiliary test point is located on the projection line of the camera optical axis on the ground, the object distances of the point to be measured and the auxiliary test point are different, and the camera installation parameters are unchanged between the two imagings;
step 102, the relative coordinate calculation unit performs a correlation calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point to obtain the relative coordinates of the point to be measured, wherein the reference coordinate system of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line from the fulcrum to the ground as the Y axis, the projection of the camera optical axis along the ground as the X axis, and the direction perpendicular to the X axis and the Y axis as the Z axis.
The method images the point to be measured and the auxiliary test point without changing the installation parameters (position, height, pan-tilt angle, etc.) of the monocular camera, and performs a correlation calculation on the two imagings according to the imaging parameters to obtain the spatial coordinates of the point to be measured. The specific processing procedure is as follows.
As shown in fig. 3, the monocular camera is mounted on a vertical pole erected at point E. A reference coordinate system is established with point E as the origin, and the position coordinates of the point to be measured are calculated relative to this coordinate system. The X axis of the coordinate system is the projection of the camera optical axis direction on the ground, the Y axis is the pole direction, and the Z axis is perpendicular to the XY plane. In the figure, the intersection point A of the object AD with the ground is the point to be measured, and point B on the ground, which lies on the projection of the camera optical axis along the ground, is the auxiliary test point.
Fig. 4 shows a simplified structure of the camera. The lens optical center is the virtual optical center formed by the multiple lens elements of the camera lens; the distance (along the camera optical axis direction) between the lens optical center and the camera mounting fulcrum is r, which may change between the two imagings. The camera optical axis forms an included angle with the ground.
Without changing the camera installation parameters (height, optical axis angle and direction), the monocular camera focuses on point A and point B respectively to obtain the corresponding imaging parameters. The imaging point of point A on the image sensor is point a, and the imaging point of point B is point b; images and objects are projected onto planes perpendicular to the optical axis to facilitate calculation and derivation. P1 is the object plane of the first imaging, i.e. the object plane where point A lies; P2 is the object plane of the second imaging, i.e. the object plane where point B lies. The first imaging thus yields the image distance V1, the focal length F1, and the optical-center-to-mounting-fulcrum distance r1; the second imaging yields the image distance V2, the focal length F2, and the optical-center-to-mounting-fulcrum distance r2.
The image height is calculated from the position of the imaging point on the image sensor, as shown in fig. 5. The upper image point in the figure is point b, and the lower image point is point a. The physical size (image height) of an imaging point along the XY plane is calculated from the direct proportion between the physical size and the number of pixels along that direction. In the figure, S is the physical size of the effective pixel range of the image sensor along the XY plane; S1 is the vertical distance from point a to the central horizontal line of the image sensor, i.e. the imaging height of point A; S2 is the vertical distance from point b to the central horizontal line of the image sensor, i.e. the imaging height of point B.
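The pixel-to-physical-size proportion described above can be sketched as follows; this is an illustrative fragment, and the function name, arguments, and sensor figures are assumptions rather than values from the patent:

```python
def image_height(pixel_offset, total_pixels, sensor_size):
    """Convert a pixel offset from the sensor's central line into a
    physical image height, using the direct proportion between the
    physical size and the number of pixels along the same direction."""
    return pixel_offset / total_pixels * sensor_size

# Example (assumed figures): a sensor whose effective pixel range is
# 5.76 mm tall over 1080 rows; an image point 100 rows from the
# central horizontal line gives an image height of about 0.53 mm.
S1 = image_height(100, 1080, 0.00576)
```

The same proportion, with the sensor's Z-direction size Q, yields the Z-axis image height used later in the derivation.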
Through the above process, the image distance V1, focal length F1, image height S1, and optical-center-to-mounting-fulcrum distance r1 corresponding to point A are obtained, as are the image distance V2, focal length F2, image height S2, and optical-center-to-mounting-fulcrum distance r2 corresponding to point B. A correlation calculation is performed on the two imagings according to these imaging parameters to obtain the relative coordinates of the point A to be measured. The calculation process is described in detail below with reference to fig. 4.
According to the Gaussian imaging formula

1/F = 1/V + 1/U    (1)

the object distance U1 of point A and the object distance U2 of point B are respectively

U1 = V1F1/(V1 - F1)    (2)

U2 = V2F2/(V2 - F2)    (3)

The distance (along the optical axis direction) between the object planes of the two imagings is thus:

j + m = (U1 + r1) - (U2 + r2) = V1F1/(V1 - F1) - V2F2/(V2 - F2) + (r1 - r2)    (4)
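The Gaussian imaging formula determines each object distance from the image distance and focal length alone, and the object-plane gap follows directly. A minimal sketch (function names and example values are illustrative, not from the patent):

```python
def object_distance(V, F):
    """Solve the Gaussian imaging formula 1/F = 1/V + 1/U for the
    object distance U, given image distance V and focal length F
    (same length units, V > F for a real image)."""
    return V * F / (V - F)

def object_plane_gap(V1, F1, r1, V2, F2, r2):
    """Distance between the two object planes along the optical axis,
    both measured from the camera mounting fulcrum: the quantity
    j + m = (U1 + r1) - (U2 + r2) used in the derivation."""
    return (object_distance(V1, F1) + r1) - (object_distance(V2, F2) + r2)

# Example with assumed values: F = 50 mm, V1 = 55 mm -> U1 = 0.55 m;
# V2 = 60 mm -> U2 = 0.30 m; with equal r the plane gap is 0.25 m.
```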
according to the imaging principle of the lens shown in FIG. 6
The object heights of points A and B relative to the optical axis of the camera, i.e., the values of n and k in FIG. 4, are determined
From the geometrical relationships in FIG. 4, it can be derived
Substituting the formula (4), the formula (6) and the formula (7) into the formula (8) to obtain
Similarly, it can be derived from the geometric relationship
Substituting the formula (4), the formula (6) and the formula (7) into the formula (10) to obtain
Therefore, the temperature of the molten metal is controlled,
l is the X-axis coordinate of the point A to be measured and is marked as Ax
Let Az be the distance, on the object plane P1, from point A to the plane containing the optical axis and perpendicular to the ground, and let az be the distance from the image point a of point A to that same plane; then

Az/az = U1/V1    (13)

Similarly, the physical size of the imaging point a along the Z-axis direction (the image height az) is calculated from the direct proportion between the physical size and the number of pixels along that direction. In fig. 7, Q is the physical size of the effective pixel range of the image sensor along the Z-axis direction.

Substituting formula (2) into formula (13) yields

Az = F1·az/(V1 - F1)    (14)
Az is the Z-axis coordinate of the point A to be measured; the Y-axis coordinate Ay of point A is 0. The coordinates (Ax, Ay, Az) of point A are relative to the preset coordinate system at point E; if the physical coordinates of point E are known, the actual geographic position of point A can be calculated by adding the coordinates of point E to the calculated relative coordinates of point A.
Thus, through two or more imagings by the monocular camera and a correlation calculation on the imaging parameters, the position of the point to be measured can be determined using only the camera's internal data, such as the image distance, focal length, and image sensor size. The process requires neither measuring the camera installation parameters nor calibrating a calibration object, which saves labor, material, and time costs and simplifies the operation process.
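Taken as a whole, the derivation above reduces to a short computation from the two sets of imaging parameters. The following sketch implements it end to end; the function name, argument names, and example values are assumptions for illustration, not part of the patent:

```python
import math

def relative_coordinates(V1, F1, S1, r1, V2, F2, S2, r2, az):
    """Relative coordinates (Ax, Ay, Az) of the point to be measured,
    computed only from the imaging parameters of the two focusings:
    image distances V, focal lengths F, image heights S along the XY
    plane, optical-center-to-fulcrum distances r, and the image height
    az of the imaging point along the Z-axis direction."""
    U1 = V1 * F1 / (V1 - F1)      # object distance of point A (Gaussian formula)
    U2 = V2 * F2 / (V2 - F2)      # object distance of point B
    jm = (U1 + r1) - (U2 + r2)    # gap j + m between object planes along the axis
    n = F1 * S1 / (V1 - F1)       # object height of A perpendicular to the axis
    k = F2 * S2 / (V2 - F2)       # object height of B perpendicular to the axis
    ab = math.hypot(jm, n + k)    # ground distance between points A and B
    L1 = (U1 + r1) * jm / ab      # projection of the fulcrum-to-plane span on X
    L2 = n * (n + k) / ab         # projection of A's object height on X
    Ax = L1 + L2                  # X-axis coordinate
    Az = F1 * az / (V1 - F1)      # Z-axis coordinate
    return Ax, 0.0, Az            # Ay = 0: the point to be measured is on the ground

# Example (assumed values, all in meters).
Ax, Ay, Az = relative_coordinates(0.055, 0.05, 0.002, 0.1,
                                  0.06, 0.05, 0.004, 0.1, 0.001)
```

If the physical coordinates of mounting point E are known, adding them to these relative coordinates gives the geographic position of point A, as the description notes.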
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A 3D space measurement method based on a monocular camera, characterized by comprising the following steps:
focusing respectively on a point to be measured on the ground and an auxiliary test point to obtain corresponding imaging parameters, wherein the auxiliary test point is located on the projection line of the camera optical axis on the ground, the object distances of the point to be measured and the auxiliary test point are different, and the camera installation parameters are unchanged between the two imagings;
and performing a correlation calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point to obtain the relative coordinates of the point to be measured, wherein the reference coordinate system of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line from the fulcrum to the ground as the Y axis, the projection of the camera optical axis along the ground as the X axis, and the direction perpendicular to the X axis and the Y axis as the Z axis.
2. The method of claim 1, wherein:
the X-axis coordinate Ax in the relative coordinates of the point to be measured is:

Ax = [ (V1F1/(V1 - F1) + r1)(V1F1/(V1 - F1) - V2F2/(V2 - F2) + r1 - r2) + (F1S1/(V1 - F1))(F1S1/(V1 - F1) + F2S2/(V2 - F2)) ] / sqrt( (V1F1/(V1 - F1) - V2F2/(V2 - F2) + r1 - r2)^2 + (F1S1/(V1 - F1) + F2S2/(V2 - F2))^2 )

wherein:
V1 is the image distance of the point to be measured;
F1 is the focal length of the point to be measured;
S1 is the image height of the point to be measured along the XY plane;
r1 is the distance, along the camera optical axis direction, between the lens optical center of the point to be measured and the camera mounting fulcrum;
V2 is the image distance of the auxiliary test point;
F2 is the focal length of the auxiliary test point;
S2 is the image height of the auxiliary test point along the XY plane;
r2 is the distance, along the camera optical axis direction, between the lens optical center of the auxiliary test point and the camera mounting fulcrum.
3. The method of claim 1, wherein:
the Z-axis coordinate Az in the relative coordinates of the point to be measured is:

Az = F1·az/(V1 - F1)

wherein:
V1 is the image distance of the point to be measured;
F1 is the focal length of the point to be measured;
az is the image height of the point to be measured along the Z-axis direction.
4. The method of claim 2, wherein:
a is describedxThe specific calculation process is as follows:
the object distance U of the point to be measured1And the object distance U of the auxiliary test point2Respectively as follows:
<mrow> <msub> <mi>U</mi> <mn>1</mn> </msub> <mo>=</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> </mrow>
<mrow> <msub> <mi>U</mi> <mn>2</mn> </msub> <mo>=</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> </mrow>
thereby obtaining object planes P imaged twice1And P2The distance j + m between the two cameras along the optical axis direction of the camera is as follows:
<mrow> <mi>j</mi> <mo>+</mo> <mi>m</mi> <mo>=</mo> <mrow> <mo>(</mo> <msub> <mi>U</mi> <mn>1</mn> </msub> <mo>+</mo> <msub> <mi>r</mi> <mn>1</mn> </msub> <mo>)</mo> </mrow> <mo>-</mo> <mrow> <mo>(</mo> <msub> <mi>U</mi> <mn>2</mn> </msub> <mo>+</mo> <msub> <mi>r</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>+</mo> <mrow> <mo>(</mo> <msub> <mi>r</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>r</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </mrow>
respectively solving the object height n of the point to be tested, which is vertical to the optical axis of the camera, and the object height k of the auxiliary test point, which is vertical to the optical axis of the camera:
<mrow> <mi>n</mi> <mo>=</mo> <mfrac> <msub> <mi>U</mi> <mn>1</mn> </msub> <msub> <mi>V</mi> <mn>1</mn> </msub> </mfrac> <mo>*</mo> <msub> <mi>S</mi> <mn>1</mn> </msub> <mo>=</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <msub> <mi>S</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> </mrow>
<mrow> <mi>k</mi> <mo>=</mo> <mfrac> <msub> <mi>U</mi> <mn>2</mn> </msub> <msub> <mi>V</mi> <mn>2</mn> </msub> </mfrac> <mo>*</mo> <msub> <mi>S</mi> <mn>2</mn> </msub> <mo>=</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> <msub> <mi>S</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> </mrow>
respectively substituting the above parameters into a geometric formulaAnd can obtain the product
<mrow> <msub> <mi>L</mi> <mn>1</mn> </msub> <mo>=</mo> <mfrac> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>+</mo> <msub> <mi>r</mi> <mn>1</mn> </msub> <mo>)</mo> <mo>(</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>+</mo> <msub> <mi>r</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>r</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <msqrt> <mrow> <msup> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>+</mo> <msub> <mi>r</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>r</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <msub> <mi>S</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>+</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> <msub> <mi>S</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> 
</msqrt> </mfrac> </mrow>
<mrow> <msub> <mi>L</mi> <mn>2</mn> </msub> <mo>=</mo> <mfrac> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <msub> <mi>S</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>)</mo> <mo>(</mo> <mfrac> <mrow> <msub> <mi>S</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>+</mo> <mfrac> <mrow> <msub> <mi>S</mi> <mn>2</mn> </msub> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>)</mo> </mrow> <msqrt> <mrow> <msup> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>-</mo> <mfrac> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>+</mo> <msub> <mi>r</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>r</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <msub> <mi>S</mi> <mn>1</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>1</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>+</mo> <mfrac> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> <msub> <mi>S</mi> <mn>2</mn> </msub> </mrow> <mrow> <msub> <mi>V</mi> <mn>2</mn> </msub> <mo>-</mo> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </msqrt> </mfrac> </mrow>
Ax=L1+L2
5. A monocular camera-based 3D spatial measuring device, comprising:
an imaging parameter acquisition unit, configured to focus respectively on a point to be measured on the ground and an auxiliary test point to obtain corresponding imaging parameters, wherein the auxiliary test point is located on the projection line of the camera optical axis on the ground, the object distances of the point to be measured and the auxiliary test point are different, and the camera installation parameters are unchanged between the two imagings;
and a relative coordinate calculation unit, configured to perform a correlation calculation on the two imagings according to the imaging parameters of the point to be measured and the auxiliary test point to obtain the relative coordinates of the point to be measured, wherein the reference coordinate system of the relative coordinates takes the projection of the camera mounting fulcrum on the ground as the origin, the vertical line from the fulcrum to the ground as the Y axis, the projection of the camera optical axis along the ground as the X axis, and the direction perpendicular to the X axis and the Y axis as the Z axis.
6. The apparatus of claim 5, wherein:
the relative coordinate calculation unit calculates the X-axis coordinate A of the point to be measuredxComprises the following steps:
$$A_x = \frac{\left(\dfrac{V_1 F_1}{V_1 - F_1} + r_1\right)\left(\dfrac{V_1 F_1}{V_1 - F_1} - \dfrac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right) + \dfrac{F_1 S_1}{V_1 - F_1}\left(\dfrac{F_1 S_1}{V_1 - F_1} + \dfrac{F_2 S_2}{V_2 - F_2}\right)}{\sqrt{\left(\dfrac{V_1 F_1}{V_1 - F_1} - \dfrac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\dfrac{F_1 S_1}{V_1 - F_1} + \dfrac{F_2 S_2}{V_2 - F_2}\right)^2}}$$
wherein:

$V_1$ is the image distance when focusing on the point to be measured;

$F_1$ is the focal length when focusing on the point to be measured;

$S_1$ is the image height of the point to be measured in the XY plane;

$r_1$ is the distance, along the optical axis of the camera, between the optical center of the lens when focusing on the point to be measured and the installation fulcrum of the camera;

$V_2$ is the image distance when focusing on the auxiliary test point;

$F_2$ is the focal length when focusing on the auxiliary test point;

$S_2$ is the image height of the auxiliary test point in the XY plane;

$r_2$ is the distance, along the optical axis of the camera, between the optical center of the lens when focusing on the auxiliary test point and the installation fulcrum of the camera.
7. The apparatus of claim 5, wherein:
the relative coordinate calculation unit calculates the Z-axis coordinate $A_z$ of the point to be measured as follows:
$$A_z = \frac{F_1 a_z}{V_1 - F_1}$$
wherein:

$V_1$ is the image distance when focusing on the point to be measured;

$F_1$ is the focal length when focusing on the point to be measured;

$a_z$ is the image height of the point to be measured along the Z-axis direction.
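The claim-7 relation above can be checked with a short numerical sketch. The Python function below and its sample values ($F_1$ = 50 mm, $V_1$ = 55 mm, $a_z$ = 2 mm) are illustrative assumptions for demonstration only, not values taken from the patent:

```python
# Sketch of the claim-7 relation A_z = F1 * a_z / (V1 - F1),
# from the Gaussian thin-lens model. All values are hypothetical, in mm.

def z_coordinate(v1: float, f1: float, a_z: float) -> float:
    """Z-axis coordinate of the point to be measured."""
    if v1 <= f1:
        raise ValueError("image distance V1 must exceed focal length F1")
    return f1 * a_z / (v1 - f1)

# F1 = 50 mm, V1 = 55 mm gives object distance U1 = 550 mm and
# magnification V1/U1 = 0.1, so an image height of 2 mm maps to 20 mm.
print(z_coordinate(55.0, 50.0, 2.0))  # prints 20.0
```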
8. The apparatus of claim 6, wherein:
the specific process by which the relative coordinate calculation unit calculates $A_x$ comprises the following steps:
the object distance $U_1$ of the point to be measured and the object distance $U_2$ of the auxiliary test point are respectively:
$$U_1 = \frac{V_1 F_1}{V_1 - F_1}, \qquad U_2 = \frac{V_2 F_2}{V_2 - F_2}$$
thereby obtaining the distance $j + m$, along the optical axis of the camera, between the two object planes $P_1$ and $P_2$ of the two imaging operations:
$$j + m = (U_1 + r_1) - (U_2 + r_2) = \frac{V_1 F_1}{V_1 - F_1} - \frac{V_2 F_2}{V_2 - F_2} + (r_1 - r_2)$$
the object height $n$ of the point to be measured perpendicular to the optical axis of the camera and the object height $k$ of the auxiliary test point perpendicular to the optical axis of the camera are solved respectively:
$$n = \frac{U_1}{V_1} S_1 = \frac{F_1 S_1}{V_1 - F_1}, \qquad k = \frac{U_2}{V_2} S_2 = \frac{F_2 S_2}{V_2 - F_2}$$
substituting the above parameters into the geometric formulas respectively yields:
$$L_1 = \frac{\left(\dfrac{V_1 F_1}{V_1 - F_1} + r_1\right)\left(\dfrac{V_1 F_1}{V_1 - F_1} - \dfrac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)}{\sqrt{\left(\dfrac{V_1 F_1}{V_1 - F_1} - \dfrac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\dfrac{F_1 S_1}{V_1 - F_1} + \dfrac{F_2 S_2}{V_2 - F_2}\right)^2}}$$

$$L_2 = \frac{\dfrac{F_1 S_1}{V_1 - F_1}\left(\dfrac{F_1 S_1}{V_1 - F_1} + \dfrac{F_2 S_2}{V_2 - F_2}\right)}{\sqrt{\left(\dfrac{V_1 F_1}{V_1 - F_1} - \dfrac{V_2 F_2}{V_2 - F_2} + r_1 - r_2\right)^2 + \left(\dfrac{F_1 S_1}{V_1 - F_1} + \dfrac{F_2 S_2}{V_2 - F_2}\right)^2}}$$
$$A_x = L_1 + L_2$$
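The steps above can be collected into one routine. The sketch below follows the claim-8 derivation (object distances, object-plane separation $j + m$, object heights $n$ and $k$, then $L_1$ and $L_2$); all numeric inputs are hypothetical values in consistent units, chosen only so that the point to be measured lies farther from the camera than the auxiliary test point:

```python
import math

def x_coordinate(v1, f1, s1, r1, v2, f2, s2, r2):
    """X-axis coordinate A_x = L1 + L2 per the claim-8 derivation."""
    u1 = v1 * f1 / (v1 - f1)      # object distance of the point to be measured
    u2 = v2 * f2 / (v2 - f2)      # object distance of the auxiliary test point
    jm = (u1 + r1) - (u2 + r2)    # separation j + m of the two object planes
    n = f1 * s1 / (v1 - f1)       # object height of the point to be measured
    k = f2 * s2 / (v2 - f2)       # object height of the auxiliary test point
    d = math.hypot(jm, n + k)     # shared denominator sqrt((j+m)^2 + (n+k)^2)
    l1 = (u1 + r1) * jm / d
    l2 = n * (n + k) / d
    return l1 + l2

# Hypothetical inputs: V1 = 52, F1 = 50, S1 = 1, r1 = 10 (point to be measured);
# V2 = 55, F2 = 50, S2 = 2, r2 = 10 (auxiliary test point).
print(x_coordinate(52.0, 50.0, 1.0, 10.0, 55.0, 50.0, 2.0, 10.0))
```

Since $L_1 + L_2$ has the same numerator terms over the same denominator as the closed-form expression of claim 6, this stepwise form should agree with that expression for any valid inputs.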
CN201410339869.9A 2014-07-16 2014-07-16 3d space measuring method based on monocular-camera Active CN105333818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410339869.9A CN105333818B (en) 2014-07-16 2014-07-16 3d space measuring method based on monocular-camera


Publications (2)

Publication Number Publication Date
CN105333818A CN105333818A (en) 2016-02-17
CN105333818B true CN105333818B (en) 2018-03-23

Family

ID=55284479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410339869.9A Active CN105333818B (en) 2014-07-16 2014-07-16 3d space measuring method based on monocular-camera

Country Status (1)

Country Link
CN (1) CN105333818B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109931906B (en) * 2019-03-28 2021-02-23 华雁智科(杭州)信息技术有限公司 Camera ranging method and device and electronic equipment
CN110225400B (en) * 2019-07-08 2022-03-04 北京字节跳动网络技术有限公司 Motion capture method and device, mobile terminal and storage medium
CN113115017B (en) * 2021-03-05 2022-03-18 上海炬佑智能科技有限公司 3D imaging module parameter inspection method and 3D imaging device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183206A (en) * 2006-11-13 2008-05-21 华晶科技股份有限公司 Method for calculating distance and actuate size of shot object
CN101344376A (en) * 2008-08-28 2009-01-14 上海交通大学 Measuring method for spacing circle geometric parameter based on monocular vision technology
KR20110025724A (en) * 2009-09-05 2011-03-11 백상주 Method for measuring height of a subject using camera module
CN102168954B (en) * 2011-01-14 2012-11-21 浙江大学 Monocular-camera-based method for measuring depth, depth field and sizes of objects
CN103049918A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Method for accurately calculating size of actual target in video frequency monitoring
CN102661717A (en) * 2012-05-09 2012-09-12 河北省电力建设调整试验所 Monocular vision measuring method for iron tower
CN103206919A (en) * 2012-07-31 2013-07-17 广州三星通信技术研究有限公司 Device and method used for measuring object size in portable terminal
CN103033132B (en) * 2012-12-20 2016-05-18 中国科学院自动化研究所 Plane survey method and device based on monocular vision
CN103292695B (en) * 2013-05-10 2016-02-24 河北科技大学 A kind of single eye stereo vision measuring method
CN103471500B (en) * 2013-06-05 2016-09-21 江南大学 A kind of monocular camera machine vision midplane coordinate and the conversion method of 3 d space coordinate point



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant