CN111192235A - Image measuring method based on monocular vision model and perspective transformation - Google Patents


Publication number
CN111192235A
CN111192235A (application CN201911236558.9A)
Authority
CN
China
Prior art keywords
image
straight line
axis
distance
coordinate system
Prior art date
Legal status
Granted
Application number
CN201911236558.9A
Other languages
Chinese (zh)
Other versions
CN111192235B (en)
Inventor
佘锦华
刘振焘
杜晨
简旭
秦梦溪
Current Assignee
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN201911236558.9A
Publication of CN111192235A
Application granted
Publication of CN111192235B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/026 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an image measuring method based on a monocular vision model and perspective transformation. First, an arbitrary image is obtained; then a perspective transformation is applied to the image and adjusted so that the aspect ratio of a certain object in the transformed new image equals the object's actual aspect ratio. A monocular vision model is established; on the image, two auxiliary straight lines RO and ST are constructed and perspective-transformed, a double parallel line projection model is established, and the conversion parameter fx between the coordinate systems of the monocular vision model is solved. The distance between any object D in the image and the photographer X is then obtained. The invention has the following effects: the distance from the photographer to a target point in the image is measured simply and clearly, with high measurement accuracy, practicality and applicability.

Description

Image measuring method based on monocular vision model and perspective transformation
Technical Field
The invention relates to the field of image processing, in particular to an image measuring method based on a monocular vision model and perspective transformation.
Background
About 80% of the information people acquire comes from vision, and how to extract target information is a key problem of vision research. As a non-contact measurement technique, a machine vision system makes no contact with the observed object, which improves reliability and allows long-term stable operation; machine vision systems, in particular binocular and multi-view vision systems, are therefore widely applied. Owing to the particularity of such systems, their parameters can be calibrated and analyzed with a high-precision calibration board, from which much useful information can be obtained. The main carriers of visual information are images and videos. In general, however, photos and videos are shot with an ordinary mobile phone or camera rather than with a binocular or multi-view vision system, and undergo no calibration process, so for some old photos not even the camera parameters are known. In the prior art, camera parameters are mostly calibrated completely, or with a high-precision calibration board, to obtain the precise internal and external parameters of the camera, on the basis of which the monocular vision model is computed. For example, the patent application with application number CN108317958A, entitled "An image measuring method and measuring instrument", solves for parameters with a calibration plate and thereby obtains the measurement result. However, that solving process is complicated, and for some old pictures no information exists beyond what the human eye can observe in the picture itself; in that case no calibration-based method can be used.
Disclosure of Invention
In order to solve the above problems, the invention provides an image measuring method based on a monocular vision model and perspective transformation. The parallel projection transformation is an improved model built on the monocular vision model and image perspective transformation. Based on the conclusions obtained from the monocular vision model, suitable auxiliary lines are constructed and perspective-transformed; a group of targets with known length information in the vertical direction is selected for simple calibration; the distance corresponding to any pixel of the plane is calculated; and the distance from the photographer to the target point on the image is calculated according to the plane projection. The image measuring method based on the monocular vision model and perspective transformation mainly comprises the following steps:
s1: firstly, an arbitrary shot image is acquired; then a perspective transformation is applied to the image, and a certain object whose actual aspect ratio is known is adjusted in the transformed image, so that the aspect ratio of that object in the new image obtained after the perspective transformation and corresponding adjustment equals the object's actual aspect ratio; meanwhile, from any known actual distance in the new image, the actual distance D corresponding to a unit pixel in the new image is obtained, so that the distance between any two points in the new image can be calculated;
s2: establishing a pixel coordinate system, an image coordinate system and a camera coordinate system by taking the optical center of a camera as a center, wherein the conversion relation among the 3 coordinate systems is the established monocular vision model;
s3: two auxiliary straight lines RO and ST are constructed on the image, both parallel to the bottom edge of the image; the auxiliary straight line RO passes through the central point O of the image, and the auxiliary straight line ST is at a distance dis from the auxiliary straight line RO; a perspective transformation is applied to the image with the auxiliary lines added, and a double parallel line projection model is constructed based on the two auxiliary straight lines RO and ST, obtaining the pixel counts n1 and n2 of the auxiliary straight line RO and of the straight line SP, respectively; P is the intersection of the straight line from the photographer X through the center point O of the image with the straight line ST, and the actual distance s1 of the straight line RO and the actual distance s2 of the straight line SP are obtained by combining the distance corresponding to a unit pixel obtained in step S1; the conversion parameter fx between the coordinate systems of the monocular vision model is solved from the fact that the difference between the distances from the photographer X to the two straight lines RO and SP equals the distance dis;
S4: according to the solved conversion parameter fx, from the formula

z = f·s/(d·n) = fx·s/n

the depth from the optical center to the straight line RO is calculated as

z = fx·s1/n1

The depth from the optical center to the straight line RO is the distance XO from the photographer X to the straight line RO; wherein z is the depth from the optical center of the camera to the line connecting any two points, s is the actual distance between the two points, d is the pixel size, and n is the number of pixels on the line. Because the straight line XO is perpendicular to the straight line RO, the included angle θ between the straight line XO and the straight line OS is calculated from the actual distances of RO and XO according to the Pythagorean theorem, and the distance between any target point D in the image and the photographer X can then be obtained.
Further, the image contains length information in the horizontal and vertical directions, but the other parameters of the camera that shot it are unknown.
Further, the new image is a top view of the scene of the image.
Further, a trial and error method is adopted to adjust a certain object in the image after perspective transformation.
Further, the pixel coordinate system takes the vertex at the upper left corner of the image as its origin and the U axis and V axis as horizontal and vertical axes, with coordinates expressed as (u, v) in units of pixels; the image coordinate system takes the center of the image as its origin and the x axis and y axis as horizontal and vertical axes, parallel to the U and V axes respectively, with coordinates expressed as (x, y) in mm; the camera coordinate system takes the optical center of the camera as its origin and the X axis and Y axis as horizontal and vertical axes, parallel to the x and y axes of the image coordinate system respectively, with the optical axis of the camera as the Z axis; the image coordinate point p(x, y) corresponds to the camera coordinate point P(X, Y, Z);
the relationship between the pixel coordinate system and the image coordinate system is shown in equation (1):
Figure BDA0002305041450000031
where dx and dy denote the size of each pixel in the x-axis and y-axis, respectively, (u)0,v0) Representing the corresponding coordinates of the origin of the image coordinate system in the pixel coordinate system.
Further, a straight line EQ is selected in the image, the coordinates of points E and Q in the pixel coordinate system being E(u1, v1) and Q(u2, v2), and the coordinates of the two points in the camera coordinate system being (X1, Y1, Z1) and (X2, Y2, Z2). The number of pixels n between the two points E and Q, combined with the actual length D of a unit pixel, gives the actual length s of EQ; the depth from the camera optical center O' to the straight line EQ is z and the focal length of the camera is f, where d is the pixel size and n is the number of pixels on the straight line EQ:

s = D·n

From similar triangles, it can be seen that:

s/z = (d·n)/f

The following can be obtained:

z = f·s/(d·n) = fx·s/n

wherein,

fx = f/d
the technical scheme provided by the invention has the beneficial effects that: under the condition that the camera parameters are unknown, the distance between the photographer and any target point in the shot image is simply and clearly measured, the measurement precision is high, and the method has practicability and applicability.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of an image measurement method based on a monocular vision model and perspective transformation in an embodiment of the present invention;
FIG. 2 is a schematic illustration of an image in an embodiment of the invention;
FIG. 3 is a schematic diagram of the principle of perspective transformation in an embodiment of the invention;
FIG. 4 is a schematic diagram of a four-point perspective transformation in an embodiment of the present invention;
FIG. 5 is an image contrast diagram before and after perspective transformation in an embodiment of the present invention;
FIG. 6 is a schematic view of a monocular vision model of a camera in an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating the transformation of the image coordinate system and the pixel coordinate system according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a linear distance between an optical center and a target according to an embodiment of the present invention;
FIG. 9 is a perspective transformation comparison diagram of an image after adding an auxiliary line in the embodiment of the present invention;
FIG. 10 is a schematic diagram of a parallel line projection model in an embodiment of the invention.
Detailed Description
For a more clear understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
The embodiment of the invention provides an image measuring method based on a monocular vision model and perspective transformation.
Referring to fig. 1 to 10, the method specifically includes the following steps:
s1: firstly, an arbitrary shot image is acquired; then a perspective transformation is applied to the image, and a certain object in the transformed image is adjusted by trial and error, so that the aspect ratio of that object in the new image obtained after the perspective transformation and corresponding adjustment equals the object's actual aspect ratio; meanwhile, from any known actual distance in the new image, the actual distance D corresponding to a unit pixel in the new image is obtained, so that the distance between any two points in the new image can be calculated; the image contains length information in the horizontal and vertical directions, but the other parameters of the camera that shot it are unknown; the new image is a top view of the scene of the image;
s2: a pixel coordinate system, an image coordinate system and a camera coordinate system are established with the optical center of the camera as the center; the conversion relation among the three coordinate systems is the established monocular vision model, and the conversion parameter fx between the coordinate systems of the monocular vision model is solved from any fixed length information in the new image;
s3: two auxiliary straight lines RO and ST are constructed on the image, both parallel to the bottom edge of the image; the auxiliary straight line RO passes through the central point O of the image, and the auxiliary straight line ST is at a distance dis from the auxiliary straight line RO; a perspective transformation is applied to the image with the auxiliary lines added, and a double parallel line projection model is constructed based on the two auxiliary straight lines RO and ST, obtaining the pixel counts n1 and n2 of the auxiliary straight line RO and of the straight line SP, respectively; P is the intersection of the straight line from the photographer X through the center point O of the image with the straight line ST, and the actual distance s1 of the straight line RO and the actual distance s2 of the straight line SP are obtained by combining the distance corresponding to a unit pixel obtained in step S1; the conversion parameter fx between the coordinate systems of the monocular vision model is solved from the fact that the difference between the distances from the photographer X to the two straight lines RO and SP equals the distance dis;
S4: according to the solved conversion parameter fx, from the formula

z = f·s/(d·n) = fx·s/n

the depth from the optical center to the straight line RO is calculated as

z = fx·s1/n1

The depth from the optical center to the straight line RO is the distance XO from the photographer X to the straight line RO; wherein z is the depth from the optical center of the camera to the line connecting any two points, s is the actual distance between the two points, d is the pixel size, and n is the number of pixels on the line. Because the straight line XO is perpendicular to the straight line RO, the included angle θ between the straight line XO and the straight line OS is calculated from the actual distances of RO and XO according to the Pythagorean theorem, and the distance between a certain object D in the image and the photographer X can then be obtained.
As shown in fig. 2, the image is selected, and the specific operation of the method is described by taking the example of solving the distance between the photographer X and the left side of the road.
Step one: subject the image to a perspective transformation to obtain a top view of the ground, and perform the corresponding calibration
The essence of the perspective transformation is to project the image onto a new viewing plane while preserving "linearity": a straight line in the original image remains straight after the transformation, as shown in fig. 3. In this example it is mainly the ground that is transformed; the image lies in a two-dimensional plane, and the transformed image is shown in fig. 4. Through the perspective transformation, the actual distance information of objects in the horizontal and vertical directions lets the picture accurately show the near-far hierarchical relationship between objects, and restores the original "near-large, far-small" picture to an image whose horizontal-to-vertical ratio matches reality, which is one of the purposes of the perspective transformation.
An object with determined length and width in the perspective-transformed image is calibrated, the distance corresponding to each pixel on the plane in the horizontal and vertical directions is analyzed and calculated, and the distance between any two points on the plane can then be computed. The principle of the perspective transformation is as follows: an original target point (u, v) is moved to a target point (x, y). The transformation formula of the perspective transformation is:

[x', y', z'] = [u, v, 1] · T

wherein

T = [[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]]

is the perspective matrix. A certain value in the perspective matrix is fixed, and the remaining values are adjusted by trial and error to obtain the whole matrix; the purpose of the trial-and-error step is that the aspect ratio of a certain object in the new image equals the object's actual aspect ratio. Because the transformation passes through three-dimensional space while the image lies in a two-dimensional plane, dividing by z' yields the point on the image:

x = x'/z',  y = y'/z'

Let a33 = 1; expanding the formula, one can get:

x = (a11·u + a21·v + a31) / (a13·u + a23·v + a33)
y = (a12·u + a22·v + a32) / (a13·u + a23·v + a33)
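The point-mapping step of the perspective transformation can be sketched in a few lines of code. This is a minimal illustration under the row-vector convention used above, not the patent's implementation; the matrix values are hypothetical, and the trial-and-error tuning of the matrix entries is not shown.

```python
# Apply a 3x3 perspective matrix T to a point (u, v), following the row-vector
# convention [x', y', z'] = [u, v, 1] . T, then divide by z' to return to the
# two-dimensional image plane. The matrices below are illustrative only.

def warp_point(T, u, v):
    xp = T[0][0] * u + T[1][0] * v + T[2][0]  # x' = a11*u + a21*v + a31
    yp = T[0][1] * u + T[1][1] * v + T[2][1]  # y' = a12*u + a22*v + a32
    zp = T[0][2] * u + T[1][2] * v + T[2][2]  # z' = a13*u + a23*v + a33
    return xp / zp, yp / zp

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # leaves every point fixed
SHIFT = [[1, 0, 0], [0, 1, 0], [10, 20, 1]]   # a31 = 10, a32 = 20, a33 = 1
```

With IDENTITY the point is returned unchanged; with SHIFT it is translated by (10, 20), consistent with the expanded fractions above.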
the specific perspective transformation is shown in fig. 5, first, based on making two auxiliary lines HI and JK perpendicular to the double line in the image which is wide in view and appears at the middle of the road, as shown in (a) of fig. 5. The image is corrected by using a trial and error method until the horizontal-vertical pixel ratio of a certain reference object in the obtained new image is consistent with the actual horizontal-vertical ratio of the reference object, for example, the length and width of the actual double-yellow dotted line in the road in the image can be selected, in the actual road, the road signs on the road have corresponding standards, and the information thereof is used as the reference for image calibration, so that the corrected image shown in the (b) diagram in fig. 5 is finally obtained.
From the corrected image, the distance between any two points or any two straight lines in the plane can be obtained using any known length information in the figure. In fig. 5(b), the ratio of a unit pixel to actual size, i.e. the actual distance D corresponding to a unit pixel, is calculated from known distance information such as the barrier spacing, and subsequent calculations take it as the reference.
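As a sketch of this calibration step (the numbers are assumptions, not the patent's measured data): if a known 2 m spacing spans 100 pixels in the corrected top view, the per-pixel distance D follows, and the distance between any two pixel points can then be measured.

```python
import math

# Per-pixel actual distance D from one known length in the corrected image,
# then the distance between any two pixel points. Illustrative values only.

def unit_pixel_distance(known_length_m, pixel_count):
    return known_length_m / pixel_count

def point_distance_m(p1, p2, D):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * D

D = unit_pixel_distance(2.0, 100)             # 0.02 m per pixel
span = point_distance_m((0, 0), (30, 40), D)  # a 50-pixel diagonal
```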
Step two: establish the monocular vision system model and analyze the depths from the optical center to the point and to the straight line
To obtain the relevant distances from an ordinary image, the monocular vision system model shown in fig. 6 is first established: a Cartesian three-dimensional coordinate system of the real world is built with the optical center O' as the center, and information of fixed length in the image is then selected to inversely solve the parameters of the monocular vision system model and obtain the data in the model. Three coordinate systems are involved in calculating the three-dimensional coordinates of an object relative to the camera: the pixel coordinate system, the image coordinate system (ICS) and the camera coordinate system (CCS). The pixel coordinate system takes the vertex at the upper left corner of the image as its origin and the U axis and V axis as horizontal and vertical axes, with coordinates (u, v) in units of pixels; the image coordinate system takes the center of the image as its origin and the x axis and y axis as horizontal and vertical axes, parallel to the U and V axes respectively, with coordinates (x, y) in mm; the camera coordinate system takes the optical center of the camera as its origin and the X axis and Y axis as horizontal and vertical axes, parallel to the x and y axes of the image coordinate system respectively, with the optical axis of the camera as the Z axis. The image coordinate point p(x, y) corresponds to the camera coordinate point P(X, Y, Z), and the point (x, y) in the image coordinate system corresponds to the point (x, y, z) in actual spatial coordinates.
1) Calculating the depth of the actual point
From the monocular vision model shown in fig. 6, the relationship between the image coordinate system and the pixel coordinate system shown in fig. 7 is established. As can be seen from fig. 7, in the ICS the relationship between the pixel coordinate system and the image coordinate system is as follows, where dx and dy represent the size of each pixel along the x-axis and y-axis, respectively, and f is the focal length of the camera:

u = x/dx + u0,  v = y/dy + v0
the point P (X, Y, Z) in the Camera Coordinate System (CCS) corresponds to the point P (X, Y) in the image coordinates, and from the similarity relationship, the following mathematical relationship can be obtained:
Figure BDA0002305041450000072
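A short sketch of this coordinate chain (all numeric values are illustrative assumptions): a pixel is first mapped to image coordinates by inverting equation (1), then lifted to camera coordinates with the similar-triangle relation, given a depth Z.

```python
# Pixel (u, v) -> image (x, y) by inverting u = x/dx + u0, v = y/dy + v0,
# then image -> camera via X = x*Z/f, Y = y*Z/f. Units mm; values illustrative.

def pixel_to_image(u, v, dx, dy, u0, v0):
    return (u - u0) * dx, (v - v0) * dy

def image_to_camera(x, y, f, Z):
    return x * Z / f, y * Z / f, Z

x, y = pixel_to_image(1000, 600, dx=0.002, dy=0.002, u0=960, v0=540)
X, Y, Z = image_to_camera(x, y, f=4.0, Z=8000.0)
```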
2) Calculating the distance from the optical center to a certain straight line of the image
Solving the distance from the optical center to a certain straight line in the image means solving the depth from the optical center to the target straight line, as shown in fig. 8, where O' is the optical center of the camera, i.e. the optical center of the monocular vision system. A straight line is selected in the image with endpoints E(u1, v1) and Q(u2, v2), whose camera coordinates are (x1, y1, z1) and (x2, y2, z2). When the optical center is far from the straight line, the depth (i.e. distance) from the photographer to every point on segment EQ is considered the same, i.e. the depth from O' to any point on segment EQ is considered constant.
In fig. 8, the length of segment EQ is s, the actual length of a unit pixel is D, the depth from the camera optical center to the straight line is z, and the focal length is f. With d the pixel size and n the number of pixels on the line, we can obtain:

s = D·n

From similar triangles, it can be seen that:

s/z = (d·n)/f

The following can be obtained:

z = f·s/(d·n) = fx·s/n    (4)

wherein fx is a simplified quantity:

fx = f/d
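Formula (4) reduces to one line of code. The sketch below uses assumed numbers to show how a segment of known actual length s spanning n pixels yields the depth z once fx is known.

```python
# Depth from the optical centre to a line segment by formula (4):
# z = f*s/(d*n) = fx*s/n, with fx = f/d. Inputs are illustrative assumptions.

def depth_from_segment(fx, s, n):
    return fx * s / n

z = depth_from_segment(fx=1500.0, s=3.5, n=420)  # length 3.5 over 420 pixels
```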
step three: constructing auxiliary lines and carrying out perspective transformation to obtain a parallel projection model, and calculating the distance required to be solved
In step one, the corresponding perspective transformation has already been carried out and the distance corresponding to a single pixel after transformation has been obtained. In step three, two auxiliary lines are constructed in the image and then perspective-transformed in order to analyze the relevant model. The following steps take solving the distance from the photographer X to the left side of the road as the guiding example:
1) In the image shown in fig. 9(a), two auxiliary straight lines parallel to the bottom edge of the image are constructed, one of which must pass through the exact center of the image, because the ray at that point comes from the optical center and is perpendicular to the ICS. The image obtained by perspective-transforming the straight lines RO and ST of fig. 9(a) is shown in fig. 9(b).
2) A double parallel line projection model is constructed. After the perspective transformation of the auxiliary lines, the plan-view schematic of the auxiliary lines is as shown in fig. 10, and the pixel counts n1 and n2 of segment RO and segment SP can be obtained directly from the image. Since RO is a straight line passing through the center of the image and parallel to the bottom edge, the straight line XO extending from the photographer to the center point O is perpendicular to it, and SP is likewise perpendicular to XO. Since the actual distance represented by a single pixel in the new image is known, the actual distances of segments RO and SP can be calculated as s1 and s2, respectively.
3) The depths from the photographer X to the two straight lines RO and SP can be obtained from formula (4) as

z1 = fx·s1/n1  and  z2 = fx·s2/n2

The actual distance dis between the straight lines RO and SP is known, and the difference in depth between the photographer X and the two straight lines is the distance between the two straight lines RO and SP, so

fx·s1/n1 - fx·s2/n2 = dis

from which it can be calculated that

fx = dis / (s1/n1 - s2/n2)
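The solve for fx is a one-line rearrangement of the depth-difference condition; a sketch with assumed measurements (dis, s1, n1, s2, n2 below are placeholders, not the patent's data):

```python
# Recover fx from the double parallel line model: fx*(s1/n1 - s2/n2) = dis,
# with no camera calibration required. All inputs are illustrative.

def solve_fx(dis, s1, n1, s2, n2):
    return dis / (s1 / n1 - s2 / n2)

fx = solve_fx(dis=5.0, s1=12.0, n1=600, s2=9.0, n2=600)
```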
4) From the value of fx obtained in step 3), the depth from the optical center to the straight line RO is determined as

z = fx·s1/n1

This depth is the distance XO from the photographer X to the straight line RO. According to the geometric information in fig. 10 and the Pythagorean theorem, the included angle θ is obtained, and the distance DX from the photographer X to the left side of the road follows as: DX = CD + CX = CD + OX·sin(θ); wherein CD, the distance from the double line to the left side of the road, is known from the road design information, i.e. the distance is fixed when the road is designed. Combining the calculated XO and θ then yields the distance DX from the photographer X to the left side of the road.
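Steps 3) and 4) can be sketched end to end. All numbers below are assumptions; the lateral offset OS and the exact construction of θ in fig. 10 are not fully specified in the text, so here θ is assumed to be the angle formed in the right triangle with legs XO and OS, and the final relation is assumed to be DX = CD + OX·sin(θ).

```python
import math

# End-to-end sketch: solve fx from the two auxiliary lines, get the depth XO
# to line RO, form theta in the right triangle with legs XO and OS (assumed
# construction), and add the known offset CD. All values are illustrative.

def distance_to_road_edge(dis, s1, n1, s2, n2, os_len, cd):
    fx = dis / (s1 / n1 - s2 / n2)    # conversion parameter (step 3)
    xo = fx * s1 / n1                 # depth from photographer X to line RO
    theta = math.atan2(os_len, xo)    # assumed angle between XO and hypotenuse XS
    return cd + xo * math.sin(theta)  # assumed relation DX = CD + OX*sin(theta)

dx_total = distance_to_road_edge(dis=5.0, s1=12.0, n1=600,
                                 s2=9.0, n2=600, os_len=15.0, cd=3.0)
```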
With the image measuring method based on the monocular vision model and perspective transformation, the distance between a photographer and any target point in a captured image can be obtained simply and quickly, with high precision.
The invention has the beneficial effects that: with the camera parameters unknown, the distance between the photographer and any target point in the captured image is measured simply and clearly, the measurement precision is high, and the method is practical and widely applicable.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. An image measuring method based on a monocular vision model and perspective transformation is characterized in that: the method comprises the following steps:
s1: firstly, acquiring any shot image; then, perspective transformation is carried out on the image, and a certain object in the image after perspective transformation is adjusted, wherein the actual aspect ratio of the certain object is known, so that the aspect ratio of the certain object in a new image obtained after perspective transformation and corresponding adjustment is the same as the actual aspect ratio of the object; meanwhile, according to any known actual distance in the new image, obtaining an actual distance D corresponding to a unit pixel in the new image so as to calculate the distance between any two points in the new image;
s2: establishing a pixel coordinate system, an image coordinate system and a camera coordinate system by taking the optical center of a camera as a center, wherein the conversion relation among the 3 coordinate systems is the established monocular vision model;
s3: in the figureLike above, two auxiliary straight lines RO and ST are constructed, both of which are parallel to the bottom side of the image, wherein the auxiliary straight line RO passes through the center point O of the image, and the auxiliary straight line ST is located at a distance dis from the auxiliary straight line RO; carrying out perspective transformation on the image added with the auxiliary line, constructing a double parallel line projection model based on two auxiliary straight lines RO and ST, and obtaining the auxiliary straight line RO and the straight line SP with the pixel number of n respectively1 and n2(ii) a P is the intersection point of the straight line from the photographer F to the center point O of the image and the straight line ST, and the actual distance S of the straight line RO is obtained by combining the distance corresponding to the unit pixel obtained in the step S11Actual distance s from straight line SP2(ii) a Solving to obtain a conversion parameter f of a point between coordinate systems of the monocular vision model according to the fact that the distance difference between the photographer X and the two straight lines RO and SP is equal to the distance disx
S4: according to the solved conversion parameter fx, calculating from the formula

z = f·s/(d·n) = fx·s/n

the depth from the optical center to the straight line RO as

z1 = fx·s1/n1

the depth from the optical center to the straight line RO being the distance XO from the photographer X to the straight line RO; wherein z is the depth from the optical center of the camera to the line connecting any two points, s is the actual distance between the two points, d is the pixel size, and n is the number of pixels on that line; since the straight line XO is perpendicular to the straight line RO, the included angle θ between the straight line XO and the straight line OS is calculated by the Pythagorean theorem from the actual length of the straight line RO and that of XO, from which the distance between any target point D in the image and the photographer X can be obtained.
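The final geometric step of claim 1 is a right-triangle computation. The sketch below is an assumed framing (fig. 10 is not reproduced here): it takes the target's lateral offset along RO as given, and both argument names and example values are hypothetical.

```python
import math

def target_distance(xo, lateral):
    """With XO perpendicular to RO, a target at lateral offset `lateral`
    (metres) along RO forms a right triangle with legs XO and `lateral`:
    the range to the target is the hypotenuse (Pythagorean theorem) and
    theta is the included angle at the photographer X."""
    theta = math.atan2(lateral, xo)
    rng = math.hypot(xo, lateral)  # sqrt(XO^2 + lateral^2)
    return rng, theta

# Hypothetical 3-4-5 example: depth 3 m, lateral offset 4 m -> range 5 m.
rng, theta = target_distance(3.0, 4.0)
```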
2. An image measurement method based on a monocular vision model and perspective transformation as set forth in claim 1, wherein: the image is shot by a camera for which length information in the horizontal and longitudinal directions is known, but the other camera parameters are unknown.
3. An image measurement method based on a monocular vision model and perspective transformation as set forth in claim 1, wherein: the new image is a top view of the scene of the image.
4. An image measurement method based on a monocular vision model and perspective transformation as set forth in claim 1, wherein: in step S1, an object in the image after perspective transformation is adjusted by trial and error.
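Where four point correspondences of the known-aspect-ratio object are available, the perspective transform of steps S1 and S3 can also be estimated directly rather than tuned by trial and error. The following is a minimal sketch (not the patent's own procedure) using the standard direct linear transform; all point values are hypothetical.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Direct linear transform: recover the 3x3 perspective matrix H
    (with h33 fixed to 1) from four src -> dst point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply the homography to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Map the imaged corners of an object to a rectangle with its true aspect
# ratio; the per-pixel distance D then follows from any known real length.
H = homography_from_4pts([(10, 5), (90, 8), (95, 60), (12, 70)],
                         [(0, 0), (80, 0), (80, 40), (0, 40)])
```

If OpenCV is available, `cv2.getPerspectiveTransform` and `cv2.warpPerspective` perform the same estimation and apply it to whole images.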
5. An image measurement method based on a monocular vision model and perspective transformation as set forth in claim 1, wherein: in step S2, the pixel coordinate system takes the vertex at the top left corner of the image as its origin, with the u axis and the v axis as horizontal and vertical axes; coordinates are expressed as (u, v), in pixels; the image coordinate system takes the center of the image as its origin, with the x axis and the y axis as horizontal and vertical axes, respectively parallel to the u and v axes; coordinates are expressed as (x, y), in mm; the camera coordinate system takes the optical center of the camera as its origin, with the X axis and the Y axis as horizontal and vertical axes, respectively parallel to the x axis and the y axis of the image coordinate system, and with the optical axis of the camera as the Z axis; the image coordinate point p(x, y) corresponds to the camera coordinate system point P(X, Y, Z);
the relationship between the pixel coordinate system and the image coordinate system is shown in equation (1):

u = x/dx + u0,  v = y/dy + v0        (1)

where dx and dy denote the size of each pixel along the x axis and the y axis, respectively, and (u0, v0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system.
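Equation (1) and its inverse translate directly into code. A minimal sketch; the pixel sizes and principal-point values used in the example are hypothetical.

```python
def image_to_pixel(x, y, dx, dy, u0, v0):
    """Equation (1): image-plane coordinates (x, y) in mm -> pixel
    coordinates (u, v), given pixel sizes dx, dy (mm/pixel) and the
    principal point (u0, v0) in pixels."""
    return x / dx + u0, y / dy + v0

def pixel_to_image(u, v, dx, dy, u0, v0):
    """Inverse of equation (1): pixel coordinates back to mm."""
    return (u - u0) * dx, (v - v0) * dy

# Hypothetical 0.5 mm pixels, principal point at (320, 240):
u, v = image_to_pixel(1.0, -2.0, dx=0.5, dy=0.5, u0=320, v0=240)
```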
6. An image measurement method based on a monocular vision model and perspective transformation as set forth in claim 1, wherein: in step S3, a straight line EQ is selected in the image; the coordinates of the points E and Q in the image coordinate system are E(u1, v1) and Q(u2, v2), and the coordinates of the two points in the camera coordinate system are (X1, Y1, Z1) and (X2, Y2, Z2); the number of pixels n between the two points E and Q, combined with the actual distance corresponding to a unit pixel obtained in step S1, gives the actual length s of the segment EQ; the depth from the optical center O' of the camera to the straight line EQ is z, the focal length of the camera is f, and the length of EQ on the imaging plane is D = d·n, where d is the pixel size and n is the number of pixels on the straight line EQ:

n = sqrt((u1 − u2)^2 + (v1 − v2)^2)

From the similar-triangle principle, it can be seen that:

f/z = D/s

The following can be obtained:

z = f·s/D = f·s/(d·n)

wherein

fx = f/d, so that z = fx·s/n
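The derivation of claim 6 reduces to z = fx·s/n. A minimal sketch follows; the focal length, pixel size, and segment values are hypothetical illustrations, not parameters from the patent.

```python
import math

def depth_to_segment(u1, v1, u2, v2, s, f, d):
    """Depth z from the optical centre to a segment EQ parallel to the
    image plane: the segment spans n pixels of size d, so its image
    length is D = d*n; similar triangles give f/z = D/s, hence
    z = f*s/(d*n) = fx*s/n with fx = f/d."""
    n = math.hypot(u2 - u1, v2 - v1)  # pixel distance between E and Q
    fx = f / d                        # conversion parameter of the model
    return fx * s / n

# Example: f = 4 mm, d = 0.002 mm (fx = 2000); a 1 m segment spanning
# 100 pixels lies at depth z = 2000 * 1.0 / 100 = 20 m.
z = depth_to_segment(0, 0, 100, 0, s=1.0, f=4.0, d=0.002)
```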
CN201911236558.9A 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation Active CN111192235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911236558.9A CN111192235B (en) 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation


Publications (2)

Publication Number Publication Date
CN111192235A true CN111192235A (en) 2020-05-22
CN111192235B CN111192235B (en) 2023-05-26

Family

ID=70707534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911236558.9A Active CN111192235B (en) 2019-12-05 2019-12-05 Image measurement method based on monocular vision model and perspective transformation

Country Status (1)

Country Link
CN (1) CN111192235B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111380503A (en) * 2020-05-29 2020-07-07 电子科技大学 Monocular camera ranging method adopting laser-assisted calibration
CN112197708A (en) * 2020-08-31 2021-01-08 深圳市慧鲤科技有限公司 Measuring method and device, electronic device and storage medium
CN112325780A (en) * 2020-10-29 2021-02-05 青岛聚好联科技有限公司 Distance measuring and calculating method and device based on community monitoring
CN112665577A (en) * 2020-12-29 2021-04-16 北京电子工程总体研究所 Monocular vision target positioning method and system based on inverse perspective transformation matrix
CN112734832A (en) * 2021-01-22 2021-04-30 逆可网络科技有限公司 Method for measuring real size of on-line object in real time
CN113124820A (en) * 2021-06-17 2021-07-16 中国空气动力研究与发展中心低速空气动力研究所 Monocular distance measurement method based on curved mirror
CN113361507A (en) * 2021-08-11 2021-09-07 金成技术有限公司 Visual measurement method for production information of structural member
CN113538578A (en) * 2021-06-22 2021-10-22 恒睿(重庆)人工智能技术研究院有限公司 Target positioning method and device, computer equipment and storage medium
CN114935316A (en) * 2022-05-20 2022-08-23 长春理工大学 Standard depth image generation method based on optical tracking and monocular vision
CN112734832B (en) * 2021-01-22 2024-05-31 逆可网络科技有限公司 Method for measuring real size of on-line object in real time

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010157910A (en) * 2008-12-26 2010-07-15 Toyota Motor Corp Image display device
WO2017076928A1 (en) * 2015-11-02 2017-05-11 Starship Technologies Oü Method, device and assembly for map generation
CN109035320A (en) * 2018-08-12 2018-12-18 浙江农林大学 Depth extraction method based on monocular vision
CN109146980A (en) * 2018-08-12 2019-01-04 浙江农林大学 The depth extraction and passive ranging method of optimization based on monocular vision
CN109822754A (en) * 2019-02-25 2019-05-31 长安大学 Body dump size detecting system and method for asphalt concrete mixer
CN110009682A (en) * 2019-03-29 2019-07-12 北京理工大学 A kind of object recognition and detection method based on monocular vision
CN110057295A (en) * 2019-04-08 2019-07-26 河海大学 It is a kind of to exempt from the monocular vision plan range measurement method as control
CN110136211A (en) * 2019-04-18 2019-08-16 中国地质大学(武汉) A kind of workpiece localization method and system based on active binocular vision technology
CN110174088A (en) * 2019-04-30 2019-08-27 上海海事大学 A kind of target ranging method based on monocular vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CAI MENG等: "Monocular pose measurement method based on circle and line features" *
M. N. A. WAHAB等: "Target Distance Estimation Using Monocular Vision System for Mobile Robot" *
贾星伟等: "单目摄像机线性模型可靠性分析与研究" *


Also Published As

Publication number Publication date
CN111192235B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN111192235A (en) Image measuring method based on monocular vision model and perspective transformation
CN106595528B (en) A kind of micro- binocular stereo vision measurement method of telecentricity based on digital speckle
CN110057295B (en) Monocular vision plane distance measuring method without image control
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN108510551B (en) Method and system for calibrating camera parameters under long-distance large-field-of-view condition
Zhang et al. A robust and rapid camera calibration method by one captured image
CN107025670A (en) A kind of telecentricity camera calibration method
CN109238235B (en) Method for realizing rigid body pose parameter continuity measurement by monocular sequence image
US20220092819A1 (en) Method and system for calibrating extrinsic parameters between depth camera and visible light camera
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
WO2007133620A2 (en) System and architecture for automatic image registration
CN109727290A (en) Zoom camera dynamic calibrating method based on monocular vision triangle telemetry
CN107084680A (en) A kind of target depth measuring method based on machine monocular vision
Rüther et al. A comparison of close-range photogrammetry to terrestrial laser scanning for heritage documentation
CN111105467B (en) Image calibration method and device and electronic equipment
CN115359127A (en) Polarization camera array calibration method suitable for multilayer medium environment
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
CN114926538A (en) External parameter calibration method and device for monocular laser speckle projection system
JPH11514434A (en) Method and apparatus for determining camera position and orientation using image data
CN114078163A (en) Precise calibration method for laser radar and visible light camera
Terpstra et al. Accuracies in Single Image Camera Matching Photogrammetry
CN111429571A (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
Cai et al. Near-infrared camera calibration for optical surgical navigation
CN111998834B (en) Crack monitoring method and system
CN115375773A (en) External parameter calibration method and related device for monocular laser speckle projection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant