CN111192235B - Image measurement method based on monocular vision model and perspective transformation - Google Patents


Info

Publication number
CN111192235B
Authority
CN
China
Prior art keywords
image
straight line
axis
coordinate system
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911236558.9A
Other languages
Chinese (zh)
Other versions
CN111192235A (en)
Inventor
佘锦华
刘振焘
杜晨
简旭
秦梦溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences
Priority to CN201911236558.9A
Publication of CN111192235A
Application granted
Publication of CN111192235B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/026: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00: Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12: Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an image measurement method based on a monocular vision model and perspective transformation. First, an arbitrary image is obtained. Then, perspective transformation is performed on the image, and the image is adjusted so that the aspect ratio of an object in the new transformed image is the same as the actual aspect ratio of that object. A monocular vision model is established. On the image, two auxiliary straight lines RO and ST are constructed, perspective transformation is carried out, a double parallel line projection model is established, and the conversion parameter f_x between the coordinate systems of the monocular vision model is obtained by solving; the distance between any object D in the image and the photographer X is then obtained. The invention has the following effects: the distance between the photographer and any target point in the photographed image is measured simply and clearly, the measurement accuracy is high, and the method is practical and applicable.

Description

Image measurement method based on monocular vision model and perspective transformation
Technical Field
The invention relates to the field of image processing, and in particular to an image measurement method based on a monocular vision model and perspective transformation.
Background
About 80% of the information a person acquires comes from vision, and how to extract target information is a key problem of vision research. As a non-contact measurement technique, a machine vision system makes no contact with either the observer or the observed object, which improves the reliability of the system and allows stable long-term operation; such systems are therefore widely applied, in particular binocular and multi-camera vision systems. Because of the specific structure of these systems, the system parameters can be calibrated and analysed with a high-precision calibration plate, and much useful information can be obtained. The main carriers of visual information are images and videos, but in general, photos or videos are shot with an ordinary mobile phone or camera rather than with a binocular or multi-camera vision system; there is no calibration process, and for some old photos even the camera parameters cannot be known. In the prior art, the complete camera parameters or a high-precision calibration plate are used for calibration, accurate internal and external camera parameters are thereby obtained, and the monocular vision model is computed from them. For example, the patent application with application number CN108317958A, entitled "Image measuring method and measuring instrument", solves the parameters with a calibration plate and then solves the measurement result. However, the solving process is complicated, and some old photos carry no information beyond what the human eye can observe, so that no calibration-based method can be used.
Disclosure of Invention
In order to solve these problems, the invention provides an image measurement method based on a monocular vision model and perspective transformation, wherein the parallel projection transformation is an improved model built on the monocular vision model and image perspective transformation. Starting from the conclusions obtained by computing the monocular vision model, suitable auxiliary lines are constructed and subjected to perspective transformation; a group of targets with length information in the vertical direction is selected for a simple calibration; the distance corresponding to any pixel of the plane is calculated; and the distance between the photographer and a target point on the image is then calculated from the planar projection. The image measurement method based on the monocular vision model and perspective transformation mainly comprises the following steps:
S1: firstly, an arbitrarily shot image is acquired; then, perspective transformation is performed on the image, and a certain object whose actual aspect ratio is known is adjusted in the transformed image, so that the aspect ratio of this object in the new image obtained after the perspective transformation and corresponding adjustment is the same as its actual aspect ratio; meanwhile, from any known actual distance in the new image, the actual distance D_1 corresponding to a unit pixel in the new image is obtained, so that the distance between any two points in the new image can be calculated;
s2: taking the optical center of the camera as the center, establishing a pixel coordinate system, an image coordinate system and a camera coordinate system, wherein the conversion relation among the 3 coordinate systems is the established monocular vision model;
S3: on the image, two auxiliary straight lines RO and ST are constructed, both parallel to the bottom edge of the image, wherein the auxiliary straight line RO passes through the center point O of the image and the distance between the auxiliary straight line ST and the auxiliary straight line RO is dis; perspective transformation is performed on the image with the added auxiliary lines, and a double parallel line projection model is constructed based on the two auxiliary straight lines RO and ST, yielding the pixel numbers of the straight lines RO and SP, respectively n_1 and n_2; P is the intersection point of the straight line from the photographer X to the center point O of the image with the straight line ST; combining the distance corresponding to a unit pixel obtained in step S1, the actual distance s_1 of the straight line RO and the actual distance s_2 of the straight line SP are obtained; since the difference between the distances from the photographer X to the two straight lines RO and SP equals the distance dis, the conversion parameter f_x between the coordinate systems of the monocular vision model is obtained by solving;
S4: according to the solved conversion parameter f_x, from the formula

z = (s · f_x) / n

the depth from the optical center to the straight line RO is calculated as

z_1 = (s_1 · f_x) / n_1

This depth from the optical center to the straight line RO is the distance XO from the photographer X to the straight line RO; wherein z is the depth from the optical center of the camera to the line connecting any two points, s is the actual distance between those two points, and n is the number of pixels on the line between them (f_x = f/d, where f is the focal length and d is the pixel size). Because the straight line XO is perpendicular to the straight line RO, the included angle θ between the straight line XO and the straight line OS is calculated by the Pythagorean theorem from the actual distance along the straight line RO and the actual distance XO, and the distance between any target point D in the image and the photographer X can then be obtained.
Further, the image contains known length information in both the transverse and longitudinal directions, but the other camera parameters are unknown.
Further, the new image is a top view of the scene of the image.
Further, a trial-and-error method is adopted to adjust a certain object in the image after perspective transformation.
Further, the pixel coordinate system takes the top-left vertex of the image as origin and the U axis and V axis as its transverse and longitudinal axes; coordinates are expressed in pixels as (u, v). The image coordinate system takes the center of the image as origin and the x axis and y axis as its transverse and longitudinal axes, parallel to the U and V axes respectively; coordinates are expressed in mm as (x, y). The camera coordinate system takes the optical center of the camera as origin and the X axis and Y axis as its transverse and longitudinal axes, parallel to the x and y axes of the image coordinate system respectively, with the optical axis of the camera as the Z axis; the image coordinate point p(x, y) corresponds to the point P(X, Y, Z) of the camera coordinate system;
The relationship between the pixel coordinate system and the image coordinate system is as shown in formula (1):

u = x / dx + u_0,  v = y / dy + v_0    (1)

wherein dx and dy denote the size of each pixel along the x axis and y axis respectively, and (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system.
Further, a straight line EQ is arbitrarily selected in the image; let the coordinates of the points E and Q in the pixel coordinate system be E(u_1, v_1) and Q(u_2, v_2), and their coordinates in the camera coordinate system be (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2). The number of pixels n between the two points E and Q, combined with the actual distance D_1 of a unit pixel, gives the actual length s of segment EQ. Let the depth from the camera optical center O' to the straight line EQ be z and the focal length of the camera be f; the length of EQ on the image plane is d·n, where d is the pixel size and n is the number of pixels on line EQ, so that

s / z = (d · n) / f

From similar triangles it follows that:

X / Z = x / f,  Y / Z = y / f

which gives:

z = (s · f) / (d · n) = (s · f_x) / n

wherein

f_x = f / d
the technical scheme provided by the invention has the beneficial effects that: under the condition that the parameters of the camera are unknown, the distance between a photographer and any target point in the photographed image is simply and clearly measured, the measurement accuracy is high, and the method has practicability and applicability.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of an image measurement method based on monocular vision model and perspective transformation in an embodiment of the present invention;
FIG. 2 is a schematic illustration of an image in an embodiment of the invention;
FIG. 3 is a schematic diagram of perspective transformation principles in an embodiment of the invention;
FIG. 4 is a schematic diagram of a four-point perspective transformation in an embodiment of the invention;
FIG. 5 is a graph of image contrast before and after perspective transformation in an embodiment of the invention;
FIG. 6 is a diagram of a monocular vision model of a camera in an embodiment of the present invention;
FIG. 7 is a schematic diagram of image coordinate system and pixel coordinate system conversion in an embodiment of the invention;
FIG. 8 is a schematic diagram of the calculation of the straight line distance from the optical center to the target in the embodiment of the invention;
FIG. 9 is a schematic diagram showing perspective transformation contrast of an image with the addition of auxiliary lines in an embodiment of the present invention;
FIG. 10 is a schematic diagram of a parallel line projection model in an embodiment of the invention.
Detailed Description
For a clearer understanding of technical features, objects and effects of the present invention, a detailed description of embodiments of the present invention will be made with reference to the accompanying drawings.
The embodiment of the invention provides an image measurement method based on a monocular vision model and perspective transformation.
Referring to fig. 1 to 10, fig. 1 is a flowchart of an image measurement method based on monocular vision model and perspective transformation in the embodiment of the present invention, fig. 2 is a schematic diagram of an image in the embodiment of the present invention, fig. 3 is a schematic diagram of perspective transformation principle in the embodiment of the present invention, fig. 4 is a schematic diagram of four-point perspective transformation in the embodiment of the present invention, fig. 5 is a schematic diagram of image contrast before and after perspective transformation in the embodiment of the present invention, fig. 6 is a schematic diagram of camera monocular vision model in the embodiment of the present invention, fig. 7 is a schematic diagram of image coordinate system and pixel coordinate system conversion in the embodiment of the present invention, fig. 8 is a schematic diagram of optical center to target straight line distance calculation in the embodiment of the present invention, fig. 9 is a schematic diagram of image perspective transformation contrast after adding an auxiliary line in the embodiment of the present invention, fig. 10 is a schematic diagram of parallel line projection model in the embodiment of the present invention, and the method specifically comprises the following steps:
S1: firstly, an arbitrarily shot image is acquired; then, perspective transformation is performed on the image, and an object in the transformed image is adjusted by trial and error, so that the aspect ratio of the object in the new image obtained after the perspective transformation and corresponding adjustment is the same as the actual aspect ratio of the object; meanwhile, from any known actual distance in the new image, the actual distance D_1 corresponding to a unit pixel in the new image is obtained, so that the distance between any two points in the new image can be calculated. The image contains length information in the transverse and longitudinal directions but was shot by a camera whose other parameters are unknown; the new image is a top view of the scene of the image;
S2: taking the optical center of the camera as the center, a pixel coordinate system, an image coordinate system and a camera coordinate system are established; the conversion relations among these three coordinate systems constitute the monocular vision model, and the conversion parameter f_x between the coordinate systems of the monocular vision model is solved using any fixed-length information in the new image;
S3: on the image, two auxiliary straight lines RO and ST are constructed, both parallel to the bottom edge of the image, wherein the auxiliary straight line RO passes through the center point O of the image and the distance between the auxiliary straight line ST and the auxiliary straight line RO is dis; perspective transformation is performed on the image with the added auxiliary lines, and a double parallel line projection model is constructed based on the two auxiliary straight lines RO and ST, yielding the pixel numbers of the straight lines RO and SP, respectively n_1 and n_2; P is the intersection point of the straight line from the photographer X to the center point O of the image with the straight line ST; combining the distance corresponding to a unit pixel obtained in step S1, the actual distance s_1 of the straight line RO and the actual distance s_2 of the straight line SP are obtained; since the difference between the distances from the photographer X to the two straight lines RO and SP equals the distance dis, the conversion parameter f_x between the coordinate systems of the monocular vision model is obtained by solving;
S4: according to the solved conversion parameter f_x, from the formula

z = (s · f_x) / n

the depth from the optical center to the straight line RO is calculated as

z_1 = (s_1 · f_x) / n_1

This depth from the optical center to the straight line RO is the distance XO from the photographer X to the straight line RO; wherein z is the depth from the optical center of the camera to the line connecting any two points, s is the actual distance between those two points, and n is the number of pixels on the line between them (f_x = f/d, where f is the focal length and d is the pixel size). Because the straight line XO is perpendicular to the straight line RO, the included angle θ between the straight line XO and the straight line OS is calculated by the Pythagorean theorem from the actual distance along the straight line RO and the actual distance XO, and the distance between a certain object D in the image and the photographer X can then be obtained.
The image is selected as shown in fig. 2, and the specific operation of the method is described by taking the example of solving the distance between the photographer X and the left side of the road.
Step one: performing perspective transformation on the image to obtain a top view of the ground, and performing corresponding calibration
The nature of perspective transformation is to project the image onto a new viewing plane while preserving its "linearity": a straight line in the original image remains straight after perspective transformation, as shown in fig. 3. Here it is mainly the ground that is transformed; the image lies in a two-dimensional plane, and the transformed image is shown in fig. 4. Through perspective transformation, the actual distance information of objects in the transverse and longitudinal directions lets the image correctly show the near-far hierarchical relationship between objects, restoring the original "near-large, far-small" image to an image whose aspect ratio matches reality; this is one of the purposes of the perspective transformation.
An object with determined length and width is calibrated in the perspective-transformed image, the distance corresponding to each pixel of the plane in the transverse and longitudinal directions is analysed and calculated, and then the distance between any two points on the plane can be calculated. The principle of perspective transformation is as follows: an original target point (u, v) is moved to the target point (x, y). The transformation formula of the perspective transformation is

[x', y', z'] = [u, v, 1] · A

wherein

A = | a_11 a_12 a_13 |
    | a_21 a_22 a_23 |
    | a_31 a_32 a_33 |

is the perspective matrix. A certain value in the perspective matrix is fixed, and the remaining values are adjusted by trial and error to obtain the whole perspective matrix; the criterion of the trial-and-error method is that the aspect ratio of a certain object in the transformed new image is the same as the actual aspect ratio of the object. Because the transformation passes through a three-dimensional space while the image lies in a two-dimensional plane, (x', y', z') is divided by z' to give the point on the image, i.e. x = x'/z' and y = y'/z'. Let a_33 = 1; expanding the formula gives:

x = (a_11·u + a_21·v + a_31) / (a_13·u + a_23·v + 1)
y = (a_12·u + a_22·v + a_32) / (a_13·u + a_23·v + 1)
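The perspective mapping described above can be sketched as a short function. This is a minimal illustration of the row-vector convention [x', y', z'] = [u, v, 1] · A followed by division by z'; the matrix values used are illustrative, not values from the patent.

```python
def perspective_map(u, v, A):
    """Map an original image point (u, v) to the transformed point (x, y).

    A is the 3x3 perspective matrix [[a11, a12, a13],
                                     [a21, a22, a23],
                                     [a31, a32, a33]],
    applied as [x', y', z'] = [u, v, 1] . A, then x = x'/z', y = y'/z'.
    """
    xp = A[0][0] * u + A[1][0] * v + A[2][0]
    yp = A[0][1] * u + A[1][1] * v + A[2][1]
    zp = A[0][2] * u + A[1][2] * v + A[2][2]  # a33 is typically fixed to 1
    return xp / zp, yp / zp

# The identity matrix leaves a point unchanged.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert perspective_map(10, 20, I) == (10.0, 20.0)
```

With a pure-translation matrix such as [[1,0,0],[0,1,0],[5,7,1]], the same function shifts points by (5, 7), which is a useful sanity check before fitting the remaining entries by trial and error.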
The specific perspective transformation is shown in fig. 5. First, two auxiliary lines HI and JK are constructed perpendicular to the double line that is visible in the image and appears at a distance in the middle of the road, as shown in fig. 5(a). The image is corrected by trial and error until the horizontal-to-vertical pixel ratio of a chosen reference object in the new image is consistent with the actual horizontal-to-vertical ratio of that object; for example, the length and width of an actual double yellow broken line in the road can be selected, since on an actual road the markings all follow corresponding standards. Using this road-marking information as the calibration reference, the corrected image shown in fig. 5(b) is finally obtained.
The corrected image can be used to calculate the distance between any two points or any two straight lines of the plane by using any known length information in the graph, in the graph (b) in fig. 5, the proportion information of the unit pixel and the actual size, namely the actual distance D corresponding to the unit pixel, is calculated according to the known distance information such as the fence distance 1 And taking the result as a reference to perform subsequent calculation.
Step two: establishing a monocular vision system model, and analyzing depths of optical centers to points and straight lines
To obtain the relevant distances from an ordinary image, the monocular vision system model shown in fig. 6 is first established: a real-world Cartesian three-dimensional coordinate system is built with the optical center O' as its center, and fixed-length information in the image is then selected to solve the monocular vision model parameters inversely and obtain the data in the model. Computing the three-dimensional coordinates of an object relative to the camera involves conversions among three coordinate systems: the pixel coordinate system, the image coordinate system (ICS) and the camera coordinate system (CCS). The pixel coordinate system takes the top-left vertex of the image as origin and the U and V axes as transverse and longitudinal axes; coordinates are expressed in pixels as (u, v). The image coordinate system takes the center of the image as origin and the x and y axes as transverse and longitudinal axes, parallel to the U and V axes respectively; coordinates are expressed in mm as (x, y). The camera coordinate system takes the optical center of the camera as origin and the X and Y axes as transverse and longitudinal axes, parallel to the x and y axes of the image coordinate system respectively, with the optical axis of the camera as the Z axis; the image coordinate point p(x, y) corresponds to the point P(X, Y, Z) of the camera coordinate system, i.e. a point (x, y) in the image coordinate system corresponds to a point (x, y, z) in actual space.
1) Calculating depth of actual point
From the monocular vision model shown in fig. 6, the relationship between the image coordinate system and the pixel coordinate system shown in fig. 7 is established. As can be seen from fig. 7, the relationship between the pixel coordinate system and the ICS is as follows, where dx and dy represent the size of each pixel along the x axis and y axis respectively:

u = x / dx + u_0,  v = y / dy + v_0    (1)

The point P(X, Y, Z) in the camera coordinate system (CCS) corresponds to the point p(x, y) in image coordinates, from which the following mathematical relationship can be derived (f is the focal length of the camera):

x = f · X / Z,  y = f · Y / Z
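The coordinate conversions of formula (1) and the pinhole projection can be illustrated with a few lines of code. The parameter values (pixel size, principal point, focal length) are illustrative assumptions, not calibrated values from the patent.

```python
dx, dy = 0.005, 0.005   # pixel size in mm (assumed)
u0, v0 = 320.0, 240.0   # principal point in pixels (assumed)
f = 4.0                 # focal length in mm (assumed)

def image_to_pixel(x, y):
    """Formula (1): u = x/dx + u0, v = y/dy + v0."""
    return x / dx + u0, y / dy + v0

def camera_to_image(X, Y, Z):
    """Pinhole projection: x = f*X/Z, y = f*Y/Z."""
    return f * X / Z, f * Y / Z

# A point 2 m in front of the camera projects onto the image plane,
# then lands at a pixel coordinate.
x, y = camera_to_image(0.5, 0.25, 2.0)
u, v = image_to_pixel(x, y)
assert (x, y) == (1.0, 0.5)
assert (u, v) == (520.0, 340.0)
```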
2) Calculating distance from optical center to image straight line
Solving the distance from the optical center to a straight line in the image amounts to solving the depth from the optical center to the target straight line. As shown in fig. 8, O' is the optical center of the camera, i.e. the optical center of the monocular vision system. A straight line is selected in the image, with end points E(u_1, v_1) and Q(u_2, v_2); the coordinates of the two points in reality are (x_1, y_1, z_1) and (x_2, y_2, z_2). When the optical center is far from the straight line, the depth (i.e. distance) from the photographer to any point on segment EQ is considered the same, i.e. the depth from O' to any point of segment EQ is considered unchanged.

In fig. 8, segment EQ has actual length s, the actual distance of a unit pixel is D_1, the depth from the camera optical center to the straight line is z, and the focal length is f. The length of EQ on the image plane is d·n, where d is the pixel size and n is the number of pixels on the line, so that:

s / z = (d · n) / f    (2)

From similar triangles it follows that:

X / Z = x / f,  Y / Z = y / f    (3)

which gives:

z = (s · f) / (d · n) = (s · f_x) / n    (4)

wherein f_x = f / d is a simplified quantity.
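A small numeric sketch of formula (4), z = s·f/(d·n) = s·f_x/n with f_x = f/d. All values below are illustrative assumptions, chosen only to show the arithmetic.

```python
f = 4.0       # focal length, mm (assumed)
d = 0.005     # pixel size, mm (assumed)
f_x = f / d   # simplified quantity, in pixels

s = 3.0       # actual length of segment EQ, m (assumed)
n = 400       # number of pixels spanned by EQ in the image (assumed)

# Formula (4): depth from the optical centre to the line EQ.
z = s * f_x / n
assert f_x == 800.0
assert z == 6.0
```

In practice f and d are unknown for an arbitrary photo, which is exactly why the next step solves for f_x directly from two parallel auxiliary lines.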
step three: constructing auxiliary lines and performing perspective transformation to obtain a parallel projection model, and calculating the distance required to be calculated
In step one, the corresponding perspective transformation was performed and the distance corresponding to a single transformed pixel was obtained. In step three, two auxiliary lines are constructed in the image and perspective transformation is then performed in order to analyse the related model. Taking the solving of the distance between the photographer X and the left side of the road as a guide, the steps are as follows:
1) In the image shown in fig. 9(a), two auxiliary straight lines parallel to the bottom edge of the image are constructed, one of which must pass through the exact center of the image, because the ray from the optical center through the image center is perpendicular to the ICS. The image obtained by perspective transformation of the straight lines RO and ST in fig. 9(a) is shown in fig. 9(b).
2) A double parallel line projection model is constructed. After perspective transformation, the auxiliary lines in the top view are shown schematically in fig. 10, and the pixel numbers n_1 and n_2 of the line segments RO and SP can be obtained directly from the image. Since RO is a straight line passing through the exact center of the image and parallel to the bottom edge, the straight line XO from the photographer to the center point O is perpendicular to it, and likewise SP is perpendicular to XO. Because the actual distance represented by a unit pixel in the new image is known, the actual distances of the line segments RO and SP can be calculated as s_1 and s_2.
3) The depths from the photographer X to the two straight lines RO and SP are obtained from formula (4) as

z_1 = (s_1 · f_x) / n_1  and  z_2 = (s_2 · f_x) / n_2

The actual distance dis between the lines RO and SP is known. Since the depth difference between the photographer X and the two lines is exactly the distance between the two lines RO and SP,

z_1 - z_2 = f_x · (s_1 / n_1 - s_2 / n_2) = dis

from which

f_x = dis / (s_1 / n_1 - s_2 / n_2)
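The double parallel line relation of step 3) can be checked numerically. This sketch assumes illustrative measurements (lengths, pixel counts and separation); it only verifies that the solved f_x reproduces the depth difference dis.

```python
# Measured quantities (all illustrative assumptions):
s1, n1 = 8.0, 400   # actual length (m) and pixel count of line RO
s2, n2 = 6.0, 400   # actual length (m) and pixel count of line SP
dis = 4.0           # known separation between RO and SP, m

# Depth difference equals dis: f_x * (s1/n1 - s2/n2) = dis.
f_x = dis / (s1 / n1 - s2 / n2)

z1 = s1 * f_x / n1  # depth from photographer X to line RO
z2 = s2 * f_x / n2  # depth from photographer X to line SP
assert abs((z1 - z2) - dis) < 1e-9
```

Note that the solved f_x bundles the unknown focal length and pixel size (f_x = f/d), which is why no explicit camera calibration is required.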
4) From the value of f_x obtained in step 3), the depth from the optical center to the straight line RO is

z_1 = (s_1 · f_x) / n_1

which is also the distance XO from the photographer X to the straight line RO. Finally, according to the geometric information in fig. 10, the included angle θ is obtained by the Pythagorean theorem, and the distance DX from the photographer X to the left road is calculated as: DX = CD + CX = CD + XO · sin(θ), where CD, the distance between the left road and the double line appearing in the middle of the road, is known from the road design information. Combining the calculated XO and θ, the distance DX from the photographer X to the left road is obtained.
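A hedged numeric sketch of step 4): with XO perpendicular to RO, the right triangle formed at X gives the angle θ via the Pythagorean theorem, and the closing formula DX = CD + XO·sin(θ) then yields the distance to the left road. The triangle layout follows fig. 10 only loosely, and all lengths are illustrative assumptions.

```python
import math

XO = 16.0   # distance from photographer X to line RO, m (from step 3)
OS = 12.0   # offset along RO forming the right triangle, m (assumed)
CD = 3.0    # distance between left road and the middle double line, m (assumed known)

XS = math.hypot(XO, OS)      # hypotenuse, by the Pythagorean theorem
theta = math.atan2(OS, XO)   # included angle at X between XO and XS

DX = CD + XO * math.sin(theta)   # patent's closing formula
assert abs(XS - 20.0) < 1e-9
```

Here sin(θ) = OS/XS = 0.6, so DX = 3.0 + 16.0 · 0.6 = 12.6 m under these assumed inputs.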
The image measurement method based on the monocular vision model and perspective transformation can calculate the distance between the photographer and any target point in the photographed image simply and rapidly, with high accuracy.
The beneficial effects of the invention are as follows: even when the camera parameters are unknown, the distance between the photographer and any target point in the photographed image is measured simply and directly, with high measurement accuracy, practicality and applicability.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (6)

1. An image measurement method based on monocular vision model and perspective transformation is characterized in that: the method comprises the following steps:
S1: firstly, an arbitrary photographed image is acquired; then, perspective transformation is performed on the image, and an object in the perspective-transformed image whose actual aspect ratio is known is adjusted, so that the aspect ratio of that object in the new image obtained after the perspective transformation and corresponding adjustment is the same as its actual aspect ratio; meanwhile, from any known actual distance in the new image, the actual distance D_1 corresponding to a unit pixel in the new image is obtained, so that the distance between any two points in the new image can be calculated;
S2: taking the optical center of the camera as the center, a pixel coordinate system, an image coordinate system and a camera coordinate system are established; the conversion relation among the three coordinate systems is the established monocular vision model;
S3: on the image, two auxiliary straight lines RO and ST are constructed, both parallel to the bottom edge of the image, wherein the auxiliary straight line RO passes through the center point O of the image and the distance between the auxiliary straight line ST and the auxiliary straight line RO is dis; perspective transformation is performed on the image with the auxiliary lines added, and a double parallel line projection model is constructed based on the two auxiliary lines RO and ST, yielding the numbers of pixels n_1 and n_2 of the auxiliary line RO and the line segment SP, respectively; P is the intersection point of the straight line from the photographer X to the image center point O with the straight line ST; the actual distance s_1 of the straight line RO and the actual distance s_2 of the line segment SP are obtained by combining the distance per unit pixel from step S1; from the condition that the depth difference between the photographer X and the two straight lines RO and SP equals the distance dis, the conversion parameter f_x between the coordinate systems of the monocular vision model is solved;
S4: according to the solved conversion parameter f_x, from the formula

$$z = \frac{s f}{d\,n} = \frac{s f_x}{n}$$

the depth from the optical center to the straight line RO is calculated as

$$z_1 = \frac{s_1 f}{d\,n_1} = \frac{s_1 f_x}{n_1};$$

the depth from the optical center to the straight line RO is the distance XO from the photographer X to the straight line RO; wherein z is the depth from the camera optical center to the line joining any two points, s is the actual distance between the two points, d is the pixel size, n is the number of pixels on the line between the two points, s_1 is the actual distance of the straight line RO, n_1 is the number of pixels of the auxiliary straight line RO, and f is the camera focal length; because the straight line XO is perpendicular to the straight line RO, the included angle θ between the straight line XO and the straight line OS is calculated by the Pythagorean theorem from the actual distance of the straight line RO and the actual distance XO, and the distance between any target point D in the image and the photographer X can then be obtained.
2. An image measurement method based on monocular vision model and perspective transformation as claimed in claim 1, characterized in that: the image contains known length information in both the transverse and longitudinal directions and is photographed by a camera whose other parameters are unknown.
3. An image measurement method based on monocular vision model and perspective transformation as claimed in claim 1, characterized in that: the new image is a top view of the scene of the image.
4. An image measurement method based on monocular vision model and perspective transformation as claimed in claim 1, characterized in that: in step S1, a trial and error method is used to adjust an object in the perspective-transformed image.
5. An image measurement method based on monocular vision model and perspective transformation as claimed in claim 1, characterized in that: in step S2, the pixel coordinate system takes the top-left vertex of the image as origin and the U-axis and V-axis as horizontal and vertical axes, with coordinates expressed as (u, v) in pixels; the image coordinate system takes the center of the image as origin and the x-axis and y-axis as horizontal and vertical axes, parallel to the U-axis and V-axis respectively, with coordinates expressed as (x, y) in mm; the camera coordinate system takes the optical center of the camera as origin and the X-axis and Y-axis as horizontal and vertical axes, parallel to the x-axis and y-axis of the image coordinate system respectively, with the optical axis of the camera as the Z-axis; the image coordinate point p(x, y) corresponds to the camera coordinate point P(X, Y, Z);
the relationship between the pixel coordinate system and the image coordinate system is given by formula (1):

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0,$$

wherein dx and dy denote the size of each pixel along the x-axis and y-axis respectively, and (u_0, v_0) are the coordinates in the pixel coordinate system of the origin of the image coordinate system.
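Formula (1) can be sketched as a pair of conversion helpers (Python; the parameter names follow the claim, the numeric values below are hypothetical):

```python
def image_to_pixel(x, y, dx, dy, u0, v0):
    """Formula (1): image-plane coordinates (x, y) in mm -> pixel coordinates (u, v)."""
    return x / dx + u0, y / dy + v0

def pixel_to_image(u, v, dx, dy, u0, v0):
    """Inverse of formula (1): pixel coordinates back to mm on the image plane."""
    return (u - u0) * dx, (v - v0) * dy

# Round trip with a hypothetical 0.25 mm pixel and a 640x480 image center.
u, v = image_to_pixel(1.0, -0.5, 0.25, 0.25, 320.0, 240.0)
print(u, v)                                           # 324.0 238.0
print(pixel_to_image(u, v, 0.25, 0.25, 320.0, 240.0))  # (1.0, -0.5)
```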
6. An image measurement method based on monocular vision model and perspective transformation as claimed in claim 1, characterized in that: in step S3, a straight line EQ is arbitrarily selected in the image; assume the coordinates of points E and Q in the image coordinate system are E(u_1, v_1) and Q(u_2, v_2), and the coordinates of the two points in the camera coordinate system are (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2); the number n of pixels between E and Q, combined with the actual distance D_1 of a unit pixel, gives the actual length s of segment EQ, where the depth from the camera optical center O' to the straight line EQ is z and the camera focal length is f:

$$s = D_1 \cdot n,$$

where d is the pixel size and n is the number of pixels on the line segment EQ; from the principle of similar triangles it can be seen that:

$$\frac{s}{z} = \frac{d\,n}{f},$$

from which:

$$z = \frac{s f}{d\,n} = \frac{s f_x}{n},$$

wherein

$$f_x = \frac{f}{d}.$$
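The depth relation derived in claim 6 can be checked numerically. The sketch below (Python, arbitrary example values chosen for exact arithmetic) verifies that z = s·f/(d·n) and z = s·f_x/n with f_x = f/d agree:

```python
def depth_from_similar_triangles(s, f, d, n):
    """z = s * f / (d * n), from the similar-triangle relation s / z = (d * n) / f."""
    return s * f / (d * n)

def depth_via_fx(s, fx, n):
    """Equivalent form z = s * f_x / n, with f_x = f / d."""
    return s * fx / n

# Arbitrary example values: s = 10, f = 16, d = 1/64, n = 512 (units illustrative only).
fx = 16.0 / 0.015625                                            # f_x = f / d = 1024.0
print(depth_from_similar_triangles(10.0, 16.0, 0.015625, 512))  # 20.0
print(depth_via_fx(10.0, fx, 512))                              # 20.0
```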
CN201911236558.9A 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation Active CN111192235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911236558.9A CN111192235B (en) 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911236558.9A CN111192235B (en) 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation

Publications (2)

Publication Number Publication Date
CN111192235A CN111192235A (en) 2020-05-22
CN111192235B true CN111192235B (en) 2023-05-26

Family

ID=70707534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911236558.9A Active CN111192235B (en) 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation

Country Status (1)

Country Link
CN (1) CN111192235B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111380503B (en) * 2020-05-29 2020-09-25 电子科技大学 Monocular camera ranging method adopting laser-assisted calibration
CN115031635A (en) * 2020-08-31 2022-09-09 深圳市慧鲤科技有限公司 Measuring method and device, electronic device and storage medium
CN112325780B (en) * 2020-10-29 2022-01-25 青岛聚好联科技有限公司 Distance measuring and calculating method and device based on community monitoring
CN112665577A (en) * 2020-12-29 2021-04-16 北京电子工程总体研究所 Monocular vision target positioning method and system based on inverse perspective transformation matrix
CN112734832B (en) * 2021-01-22 2024-05-31 逆可网络科技有限公司 Method for measuring real size of on-line object in real time
CN113124820B (en) * 2021-06-17 2021-09-10 中国空气动力研究与发展中心低速空气动力研究所 Monocular distance measurement method based on curved mirror
CN113538578B (en) * 2021-06-22 2023-07-25 恒睿(重庆)人工智能技术研究院有限公司 Target positioning method, device, computer equipment and storage medium
CN113361507B (en) * 2021-08-11 2021-11-09 金成技术有限公司 Visual measurement method for production information of structural member
CN114935316B (en) * 2022-05-20 2024-03-12 长春理工大学 Standard depth image generation method based on optical tracking and monocular vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035320A (en) * 2018-08-12 2018-12-18 浙江农林大学 Depth extraction method based on monocular vision
CN110174088A (en) * 2019-04-30 2019-08-27 上海海事大学 A kind of target ranging method based on monocular vision

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010157910A (en) * 2008-12-26 2010-07-15 Toyota Motor Corp Image display device
EP3825807A1 (en) * 2015-11-02 2021-05-26 Starship Technologies OÜ Method, device and assembly for map generation
CN109146980B (en) * 2018-08-12 2021-08-10 浙江农林大学 Monocular vision based optimized depth extraction and passive distance measurement method
CN109822754B (en) * 2019-02-25 2020-12-22 长安大学 Dump truck carriage size detection system and method for asphalt concrete mixing plant
CN110009682B (en) * 2019-03-29 2022-12-06 北京理工大学 Target identification and positioning method based on monocular vision
CN110057295B (en) * 2019-04-08 2020-12-25 河海大学 Monocular vision plane distance measuring method without image control
CN110136211A (en) * 2019-04-18 2019-08-16 中国地质大学(武汉) A kind of workpiece localization method and system based on active binocular vision technology

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035320A (en) * 2018-08-12 2018-12-18 浙江农林大学 Depth extraction method based on monocular vision
CN110174088A (en) * 2019-04-30 2019-08-27 上海海事大学 A kind of target ranging method based on monocular vision

Also Published As

Publication number Publication date
CN111192235A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111192235B (en) Image measurement method based on monocular vision model and perspective transformation
US10290119B2 (en) Multi view camera registration
EP1378790B1 (en) Method and device for correcting lens aberrations in a stereo camera system with zoom
US10373337B2 (en) Methods and computer program products for calibrating stereo imaging systems by using a planar mirror
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
CN102376089A (en) Target correction method and system
WO2011145285A1 (en) Image processing device, image processing method and program
KR102129206B1 (en) 3 Dimensional Coordinates Calculating Apparatus and 3 Dimensional Coordinates Calculating Method Using Photo Images
CN109727290A (en) Zoom camera dynamic calibrating method based on monocular vision triangle telemetry
CN107589069B (en) Non-contact type measuring method for object collision recovery coefficient
WO2020208686A1 (en) Camera calibration device, camera calibration method, and non-transitory computer-readable medium having program stored thereon
CN111709985A (en) Underwater target ranging method based on binocular vision
Liu et al. Epipolar rectification method for a stereovision system with telecentric cameras
CN112815843A (en) Online monitoring method for workpiece surface printing deviation in 3D printing process
Ding et al. A robust detection method of control points for calibration and measurement with defocused images
CN109974618A (en) The overall calibration method of multisensor vision measurement system
JP2005322128A (en) Calibration method for stereo three-dimensional measurement and three-dimensional position calculating method
CN109506629B (en) Method for calibrating rotation center of underwater nuclear fuel assembly detection device
Luhmann 3D imaging: how to achieve highest accuracy
CN111998834B (en) Crack monitoring method and system
JP3696336B2 (en) How to calibrate the camera
CN113822920A (en) Method for acquiring depth information by structured light camera, electronic equipment and storage medium
CN112712566A (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
Ricolfe-Viala et al. Optimal conditions for camera calibration using a planar template
KR101634283B1 (en) The apparatus and method of 3d modeling by 3d camera calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant