CN108648237B - Space positioning method based on vision - Google Patents


Info

Publication number
CN108648237B
Authority
CN
China
Prior art keywords: coordinate system, square, camera, coordinates, point
Prior art date: 2018-03-16
Legal status: Active
Application number
CN201810220678.9A
Other languages
Chinese (zh)
Other versions
CN108648237A (en
Inventor
葛仕明
刘文瑜
赵胜伟
Current Assignee
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date: 2018-03-16
Filing date: 2018-03-16
Publication date: 2022-05-03
Application filed by Institute of Information Engineering of CAS
Priority to CN201810220678.9A
Publication of CN108648237A
Application granted
Publication of CN108648237B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration

Landscapes

  • Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Length Measuring Devices By Optical Means
  • Image Processing

Abstract

The invention provides a spatial positioning method based on square labels and vision, which comprises the following steps: obtaining the internal parameter matrices of two cameras through calibration; setting square labels with different ids at a certain fixed point and at each position to be located; shooting the square labels at the fixed point and at the positions to be located with the two cameras simultaneously, acquiring the coordinate information of each square label, and solving, from the coordinate information and the internal parameter matrices, the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from each camera coordinate system to the three-dimensional world coordinate system; and obtaining from these the three-dimensional world coordinates, with the fixed point as the origin, of the center point of each square label, completing the positioning. The method addresses the slow speed, low precision, complex operation and high equipment requirements of current spatial positioning methods.

Description

Space positioning method based on vision
Technical Field
The invention relates to the fields of digital image processing and computer vision, and in particular to a method and system for accurately locating the world coordinates of any point in space. It belongs to spatial positioning technology and is particularly suitable for robot positioning and industrial automated production.
Background
In recent years, artificial intelligence has developed rapidly and has even become an important factor in the competition among the world's science and technology powers. As an important branch of artificial intelligence, computer vision has received wide attention from researchers, and vision-based spatial positioning techniques are used in a large number of machine vision systems. Spatial positioning has broad applications, including robot positioning, industrial automated production, monitoring, engineering surveying and mapping, and military and aerospace fields. For example, when a robot grips an object, it must first locate the object accurately. When a multi-degree-of-freedom manipulator works, whether the positions of all its joint points can be accurately located is of great significance for safe production. In the monitoring field, spatial positioning technology can accurately detect whether an object has been displaced.
Many researchers have proposed spatial positioning methods. The robot vision research group of the Institute of Automation, Chinese Academy of Sciences developed the CVsuite software, which implements image feature point extraction and matching, camera calibration and three-dimensional display; it is convenient to use, but it runs slowly, which hurts real-time positioning, and its precision is not high enough. In recent years, the currently popular visual SLAM (simultaneous localization and mapping) technology has been successfully applied to many robots, but it is complex to operate and places high demands on the camera.
Disclosure of Invention
The invention mainly aims to provide a spatial positioning method based on square labels and vision, which addresses the slow speed, low precision, complex operation and high equipment requirements of existing spatial positioning methods.
In order to solve the above technical problems, the invention adopts the following technical scheme:
a vision-based spatial localization method comprises the following steps:
obtaining internal parameter matrixes of the two cameras through calibration;
setting square tags with different ids at a certain fixed point and a position to be positioned;
shooting the square labels at the fixed point and at the positions to be located with the two cameras simultaneously, acquiring the coordinate information of each square label, and solving the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from each camera coordinate system to the three-dimensional world coordinate system according to the coordinate information and the internal parameter matrices;
and according to the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from the camera coordinate system to the three-dimensional world coordinate system, obtaining the three-dimensional world coordinates of the center point of each square label with the fixed point as the origin to finish positioning.
Further, by calibrating, acquiring the intrinsic parameter matrix of the two cameras includes:
shooting and collecting a plurality of calibration pictures of a calibration plate with a checkerboard through a camera;
and extracting the inner corner information of the checkerboard in each calibration picture to obtain the 2D pixel coordinates of each inner corner and the 3D world coordinates with the first inner corner at the upper left as the origin, and calculating the internal parameter matrix K of the camera from these 2D/3D correspondences.
Further, an intrinsic parameter matrix of the camera is calculated according to the following formula:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where K denotes the internal reference matrix of the camera, fx and fy denote the scale factors along the x-axis and y-axis of the image coordinate system, s denotes the skew (distortion) coefficient between the two image axes, (u0, v0) are the principal point coordinates, R is a 3×3 rotation matrix, t is a 3×1 translation vector, and (Xw, Yw, Zw) and (u, v) are the 3D/2D coordinates of a point.
Further, the coordinate information of each square label comprises the id of each square label and the pixel coordinates of four vertexes of each square label.
Further, using two cameras to shoot the square tags of the fixed point and the position to be located at the same time, and acquiring the coordinate information of each square tag comprises:
shooting images containing the square labels at their different positions simultaneously with the two cameras, computing the gradient image of each image, extracting straight lines in the gradient image, and detecting square corner points to obtain a plurality of square areas and their key corner points;
matching and comparing the square area with square labels in a label library one by one to determine the position of each square label in an image, obtain the id of each square label, and obtain the pixel coordinates of four vertexes of each square label in a coordinate system of an image plane according to the angular point information so as to obtain the pixel coordinates P (u, v) of the central point P of each label;
taking the center point of the square label at the fixed point as the origin Ow(0, 0, 0) and taking (-n, -n, 0), (n, -n, 0), (n, n, 0) and (-n, n, 0) as the world coordinates of the four vertexes, so that the four vertex 3D coordinates of the label at the fixed point and the corresponding 2D pixel coordinates are available, where n is half the side length of the square label.
Further, the step of obtaining three-dimensional world coordinates of the two cameras with the fixed point as an origin and three rotation angles from the camera coordinate system to the three-dimensional world coordinate system according to the coordinate information and the internal parameter matrix comprises the following steps:
the four vertex 3D coordinates and the corresponding 2D pixel coordinates are substituted into the following pixel coordinate system and world coordinate system conversion equations,
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

solving the rotation matrix R and the translation vector T of the camera; the projection point ow of the world coordinate system origin Ow in the camera coordinate system has coordinates (T11, T12, T13);
based on the rotation matrix R, the three rotation angles θz, θy and θx that rotate the camera coordinate system until it is completely parallel to the world coordinate system are obtained in sequence from the following formulas:

$$\theta_z = \operatorname{atan2}(R_{21}, R_{11}), \qquad \theta_y = \operatorname{atan2}\left(-R_{31}, \sqrt{R_{32}^{2} + R_{33}^{2}}\right), \qquad \theta_x = \operatorname{atan2}(R_{32}, R_{33}).$$
Further, the method also comprises: based on the obtained ow(T11, T12, T13) and the three rotation angles θz, θy and θx, rotating the original camera coordinate system in sequence by θz about the z-axis, θy about the y-axis and θx about the x-axis to obtain a coordinate system parallel to the world coordinate system, and rotating ow by -θz about the z-axis, -θy about the y-axis and -θx about the x-axis to obtain ow2(x2, y2, z2), so that the coordinates of the camera Oc in the world coordinate system are (-x2, -y2, -z2).
Further, obtaining the three-dimensional world coordinates, with the fixed point as the origin, of the center point of each square label from the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from each camera coordinate system to the three-dimensional world coordinate system comprises:
the three-dimensional coordinates P of the projection points of the position P to be positioned corresponding to the two camera coordinate systems are respectively obtained through the following calculation formulac1(xpc1,ypc1,zpc1) And Pc2(xpc2,ypc2,zpc2),
xc=(u-u0)·F/fx
yc=(v-v0)·F/fy
Zc=F;
rotating the two camera coordinate systems three times each in sequence to obtain coordinate systems parallel to the world coordinate system, and rotating Pc1 and Pc2 three times each in the reverse direction to obtain the camera-relative three-dimensional coordinates Pc1R(xpw1, ypw1, zpw1) and Pc2R(xpw2, ypw2, zpw2);
determining, from the world coordinates Oc1(xow1, yow1, zow1) and Oc2(xow2, yow2, zow2) of the two cameras, the world coordinates of P1 and P2: P1(xpw1+xow1, ypw1+yow1, zpw1+zow1) and P2(xpw2+xow2, ypw2+yow2, zpw2+zow2), so as to obtain two straight lines Oc1P1 and Oc2P2;
substituting the coordinates of Oc1 and P1, and of Oc2 and P2, into the following two-point line formula,

$$\frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1},$$

to find the equations of the straight lines Oc1P1 and Oc2P2;
determining the position P to be located from the intersection point of the two straight lines Oc1P1 and Oc2P2.
Further, the method also comprises an error elimination step: when the two straight lines Oc1P1 and Oc2P2 have no intersection point, the midpoint of the closest points of the two straight lines is taken as the three-dimensional world coordinate of the position P to be located.
By adopting the above technical scheme, the invention can complete accurate spatial positioning with only two ordinary cameras and prepared square labels, and its requirements on the cameras are extremely low, so the whole scheme is cheap to implement and efficient. In this square-label-based spatial positioning method, the camera calibration process is simple, easy to carry out and highly accurate, the required equipment is simple, the algorithm complexity is low and the positioning precision is high, so the method can be widely applied in robot positioning systems, industrial automated real-time production, video monitoring and other fields.
Drawings
FIG. 1 is a schematic flow chart of a method for vision-based spatial localization according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating the calibration of two cameras according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of the method for determining three-dimensional world coordinates of center points of all tags according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Referring to fig. 1 and 3, in an embodiment, the method for visual-based spatial localization includes the following steps:
1) calibrating the two cameras to obtain the internal parameter matrix (and distortion coefficients) of each camera;
2) setting square tags with different ids at a certain fixed point and at each position to be located; most commonly, the square tags are attached by sticking them on.
The square tags are patterns similar to two-dimensional codes; each tag corresponds to a unique id number and pattern arrangement, and a program can recognize a tag and obtain its corresponding id.
3) Accurately recognizing the images shot by the two cameras to acquire the id and the four vertex pixel coordinates of each tag, and substituting this coordinate information and the internal parameter matrices obtained in step 1) into the relevant formulas to obtain the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from each camera coordinate system to the three-dimensional world coordinate system. The cameras can be placed at any position, as long as both cameras can shoot the tags at the fixed point and at the positions to be located.
4) Obtaining the three-dimensional world coordinates, with the fixed point as the origin, of the center point of each tag from the two images and the three rotation angles of the two cameras, completing the positioning.
Further, both the cameras are calibrated by the method in step 1), and referring to fig. 2, the method for calibrating the cameras in step 1) specifically includes the following steps:
1-1) preparing checkerboard calibration paper with uniformly distributed squares of fixed side length, and pasting it on a flat plate to make a calibration plate; about 15 calibration pictures of the plate are shot with the camera from different positions and at different angles and orientations;
1-2) extracting information of inner corner points on the checkerboard for each calibration picture to obtain 2D pixel coordinates of each inner corner point and 3D world coordinates with a first inner corner point at the upper left corner as an origin;
1-3) Based on the camera imaging principle, K in formula (1) denotes the internal reference matrix of the camera, where fx and fy denote the scale factors along the x-axis and y-axis respectively, s denotes the skew (distortion) coefficient between the two image axes, (u0, v0) are the principal point coordinates, R is a 3×3 rotation matrix, t is a 3×1 translation vector, and (Xw, Yw, Zw) and (u, v) are the 3D/2D coordinates of a point. Since the calibration plate lies in a plane, let Zw = 0, so that only the first two columns r1 and r2 of R remain; r1 and r2 are orthonormal. Substituting the 3D/2D coordinate pairs obtained in step 1-2) yields the internal reference matrix K of the camera.
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{1}$$
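As an illustrative aid only (not part of the claimed method), the following minimal Python sketch implements steps 1-1) to 1-3) using OpenCV's built-in Zhang-style calibration; the pattern size (9×6 inner corners), the 25 mm square pitch and the image path are assumptions made for the example.

```python
import glob

import cv2
import numpy as np

PATTERN = (9, 6)       # inner corners per row and column (assumed)
SQUARE_MM = 25.0       # side length of one checkerboard square (assumed)

# 3D world coordinates of the inner corners: origin at the first (top-left)
# inner corner, Zw = 0 because the calibration plate is planar.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):          # ~15 pictures, varied poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 internal reference matrix of formula (1);
# dist holds the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("K =", K, sep="\n")
```

Each camera is calibrated once with this routine; its K and dist are then reused for all subsequent frames.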
Further, the two cameras acquire three-dimensional world coordinates of the two cameras with the fixed point as an origin and three rotation angles from the camera coordinate system to the three-dimensional world coordinate system by adopting the method in the step 3), wherein the step 3) specifically comprises the following steps:
3-1) The two cameras simultaneously shoot images containing the square labels at their different positions. The gradient image of each image is computed, straight lines are extracted from the gradient image, and square corner points are detected, finally yielding several square areas and their key corner points. The square areas are matched one by one against the labels in the label library to determine the position of each label in the image and obtain its id; the pixel coordinates of the four vertexes of each square label in the image plane coordinate system are then obtained from the corner information, from which the pixel coordinates P(u, v) of the center point P of each label follow (see the detection sketch below);
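The patent describes its own gradient-and-corner tag detector and label library. As a stand-in sketch under that assumption, the fragment below uses OpenCV's ArUco module (4.7+ API) to obtain, for each visible tag, the id, the pixel coordinates of the four vertexes and the center pixel coordinates P(u, v); the dictionary choice is an assumption.

```python
import cv2

# Stand-in for the square-tag detector of step 3-1): ArUco markers are also
# square, library-matched tags with unique ids and four detected corners.
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_square_tags(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.aruco.ArucoDetector(DICTIONARY, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)

    tags = {}
    if ids is not None:
        for tag_id, quad in zip(ids.flatten(), corners):
            pts = quad.reshape(4, 2)    # pixel coordinates of the 4 vertexes
            tags[int(tag_id)] = {
                "corners": pts,
                "center": pts.mean(axis=0),  # pixel coordinates P(u, v)
            }
    return tags
```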
3-2) The side length of each square label is 2n millimeters. The center point of the square label at the fixed point is taken as the origin Ow(0, 0, 0), and the world coordinates of its four vertexes are, in order, (-n, -n, 0), (n, -n, 0), (n, n, 0) and (-n, n, 0), so that the four vertex 3D coordinates of the label at the fixed point and the corresponding 2D pixel coordinates are available.
3-3) The four groups of 3D/2D coordinates obtained in step 3-2) are substituted into the pixel coordinate system to world coordinate system conversion formula shown in equation (2) to obtain the rotation matrix R and the translation vector T of the camera; the projection point ow of the world coordinate system origin Ow in the camera coordinate system has coordinates (T11, T12, T13);
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{2}$$
where fx, fy, u0, v0 and s were obtained in step 1), R is a 3×3 orthogonal matrix, T is a 3×1 translation vector, and (Xw, Yw, Zw) and (u, v) are the 3D/2D coordinates of a point.
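A minimal sketch of step 3-3), assuming OpenCV: the four 3D/2D correspondences of the fixed-point label are passed to cv2.solvePnP, which solves formula (2) for R and T; the function and variable names are illustrative, not the patent's.

```python
import cv2
import numpy as np

def pose_from_fixed_tag(tag_corners_2d, n, K, dist):
    """R, T of the camera from the fixed-point label; n is the half side length."""
    object_pts = np.array([[-n, -n, 0], [n, -n, 0],
                           [n,  n, 0], [-n,  n, 0]], dtype=np.float64)
    image_pts = np.asarray(tag_corners_2d, dtype=np.float64).reshape(4, 1, 2)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix of formula (2)
    T = tvec.reshape(3)          # coordinates (T11, T12, T13) of o_w
    return R, T
```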
3-4) From the rotation matrix R obtained in step 3-3), the three rotation angles θz, θy and θx that rotate the camera coordinate system until it is completely parallel to the world coordinate system are computed in sequence by formula (3):

$$\theta_z = \operatorname{atan2}(R_{21}, R_{11}), \qquad \theta_y = \operatorname{atan2}\left(-R_{31}, \sqrt{R_{32}^{2} + R_{33}^{2}}\right), \qquad \theta_x = \operatorname{atan2}(R_{32}, R_{33}) \tag{3}$$
In formula (3), atan2(y, x) denotes the two-argument arctangent, which returns arctan(y/x) adjusted to the quadrant determined by the signs of x and y;
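Formula (3) is the standard z-y-x Euler-angle decomposition of a rotation matrix; a direct NumPy sketch (indices shifted to 0-based):

```python
import numpy as np

def rotation_to_euler_zyx(R):
    """Angles theta_z, theta_y, theta_x of formula (3); R is 3x3."""
    theta_z = np.arctan2(R[1, 0], R[0, 0])              # atan2(R21, R11)
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    theta_x = np.arctan2(R[2, 1], R[2, 2])              # atan2(R32, R33)
    return theta_z, theta_y, theta_x
```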
3-5) Based on ow(T11, T12, T13) obtained in step 3-3) and the three rotation angles of the camera obtained in step 3-4), the original camera coordinate system is rotated in sequence by θz about the z-axis, θy about the y-axis and θx about the x-axis to obtain a coordinate system parallel to the world coordinate system. To ensure that the vector from the camera origin Oc to ow still points to Ow, ow must be rotated by -θz about the z-axis, -θy about the y-axis and -θx about the x-axis, giving ow2(x2, y2, z2); thus the coordinates of the camera Oc in the world coordinate system are (-x2, -y2, -z2).
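A sketch of step 3-5), assuming NumPy. For R = Rz(θz)Ry(θy)Rx(θx), applying the three inverse rotations in sequence and negating reproduces the classical camera-center identity Oc = -RᵀT, which can serve as a cross-check:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def camera_world_position(T, theta_z, theta_y, theta_x):
    """Rotate o_w by -theta_z, -theta_y, -theta_x in sequence, then negate."""
    o_w2 = rot_x(-theta_x) @ rot_y(-theta_y) @ rot_z(-theta_z) @ np.asarray(T)
    return -o_w2   # world coordinates of the camera center O_c
```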
Further, positioning all the tag center points P through step 4), wherein step 4) specifically comprises the following steps:
4-1) The three-dimensional coordinates Pc1(xpc1, ypc1, zpc1) and Pc2(xpc2, ypc2, zpc2) of the projection points of the point P in the two camera coordinate systems are obtained from formula (4). As in step 3-5), the two camera coordinate systems are each rotated three times in sequence to obtain coordinate systems parallel to the world coordinate system, and Pc1 and Pc2 are each rotated three times in the reverse direction to obtain the camera-relative three-dimensional coordinates Pc1R(xpw1, ypw1, zpw1) and Pc2R(xpw2, ypw2, zpw2);
$$x_c = (u - u_0) \cdot F / f_x, \qquad y_c = (v - v_0) \cdot F / f_y, \qquad z_c = F \tag{4}$$

Here F is an arbitrary positive depth; any choice of F places the projection point on the back-projection ray through the pixel (u, v).
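A sketch of formula (4), assuming NumPy, with F kept as a free parameter:

```python
import numpy as np

def back_project(u, v, K, F=1000.0):
    """Point of the viewing ray of pixel (u, v) at depth F, in camera coords."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    return np.array([(u - u0) * F / fx,
                     (v - v0) * F / fy,
                     F])
```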
4-2) The world coordinates Oc1(xow1, yow1, zow1) and Oc2(xow2, yow2, zow2) of the two cameras were obtained in step 3-5), so the world coordinates of P1 and P2 can be determined: P1(xpw1+xow1, ypw1+yow1, zpw1+zow1) and P2(xpw2+xow2, ypw2+yow2, zpw2+zow2), giving two straight lines Oc1P1 and Oc2P2;
4-3) The coordinates of Oc1 and P1 are substituted into formula (5) to obtain the equation of the straight line Oc1P1, and the coordinates of Oc2 and P2 are substituted into formula (5) to obtain the equation of the straight line Oc2P2:

$$\frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1} \tag{5}$$
4-4) The intersection point of the two straight lines Oc1P1 and Oc2P2 is the point P to be solved. Owing to measurement errors, however, the two line equations obtained in step 4-3) may have no intersection point; in that case the midpoint of the closest points of the two straight lines Oc1P1 and Oc2P2 is taken as the three-dimensional world coordinate of point P.
After the equations of the two straight lines are obtained in step 4-3), the coordinates of the closest points on the two lines are found by solid geometry; the midpoint of these two closest points is the coordinate of point P, which completes the positioning (see the sketch below).
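A sketch of steps 4-3) and 4-4), assuming NumPy: instead of intersecting the two symmetric-form line equations of formula (5), it computes the closest points of the two lines directly and returns their midpoint, which coincides with the intersection whenever the lines actually meet.

```python
import numpy as np

def locate_point(o1, p1, o2, p2):
    """Midpoint of the closest points of lines O_c1->P_1 and O_c2->P_2."""
    o1, p1, o2, p2 = (np.asarray(v, float) for v in (o1, p1, o2, p2))
    d1, d2 = p1 - o1, p2 - o2                # direction vectors of the lines
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                    # ~0 when the lines are parallel
    if abs(denom) < 1e-12:
        raise ValueError("viewing rays are (nearly) parallel")
    s = (b * e - c * d) / denom              # parameter of closest point, line 1
    t = (a * e - b * d) / denom              # parameter of closest point, line 2
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```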
Taking an application scenario of positioning joint points of a six-degree-of-freedom robot arm as an example, a specific implementation scheme comprises the following steps:
1) firstly, calibrating the two selected cameras to obtain the internal parameter matrix and distortion coefficients of each camera;
2) attaching square labels with different ids to the plane of the platform where the mechanical arm is located and each mechanical arm joint point needing to be positioned;
3) placing the two cameras around the mechanical arm so that, as far as possible, the whole arm is within the field of view of both cameras; the two cameras then start shooting video simultaneously, and the mechanical arm can be set in motion during shooting.
4) Taking a certain frame from the two videos shot in step 3) as an example, the position of each mechanical arm joint at that moment is located. The specific steps are as follows:
4_1) accurately identifying images shot by two cameras to obtain the id of each label and the pixel coordinates of four vertexes, substituting the coordinate information and the internal parameter matrix obtained in the step 1) into a correlation formula to obtain the three-dimensional world coordinates of the two cameras with a fixed point as the origin and three rotation angles from a camera coordinate system to a three-dimensional world coordinate system.
4_2) obtaining three-dimensional world coordinates of the center point of each label by taking the fixed point as an origin according to the two images and the three rotation angles of the two cameras to complete positioning.
It should be noted that, according to the technical scheme provided by the present invention, the positioning can be realized only by two cameras shooting the same square label, wherein the "camera" may be a common digital camera such as a webcam, or other image acquisition devices capable of realizing the photographing function.
It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Claims (2)

1. A vision-based spatial localization method comprises the following steps:
obtaining internal parameter matrixes of the two cameras through calibration;
setting square tags with different ids at a certain fixed point and a position to be positioned;
shooting the square tags at the fixed point and at the positions to be located with the two cameras simultaneously, acquiring the coordinate information of each square tag, and solving the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from each camera coordinate system to the three-dimensional world coordinate system according to the coordinate information and the internal parameter matrices; the cameras may be at any position, provided that both cameras can shoot the square tags at the fixed point and at the positions to be located;
according to the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from the camera coordinate system to the three-dimensional world coordinate system, the three-dimensional world coordinates of the center point of each square label with the fixed point as the origin are obtained to complete positioning;
wherein, through demarcating, obtaining the internal parameter matrix of the two cameras comprises:
shooting and collecting a plurality of calibration pictures of a calibration plate with a checkerboard through a camera;
extracting the information of the inner corners on the checkerboard in each calibration picture to obtain the 2D pixel coordinates of each inner corner and the 3D world coordinates with the first inner corner at the upper left corner as the origin, and calculating the internal parameter matrix K of the camera according to the coordinates;
calculating an intrinsic parameter matrix of the camera according to the following formula:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where K denotes the internal reference matrix of the camera, fx and fy denote the scale factors along the x-axis and y-axis of the image coordinate system, s denotes the skew (distortion) coefficient between the two image axes, (u0, v0) are the principal point coordinates, R is a 3×3 rotation matrix, t is a 3×1 translation vector, and (Xw, Yw, Zw) and (u, v) are the 3D/2D coordinates of a point;
the coordinate information of each square label comprises the id of each square label and the pixel coordinates of four vertexes of each square label;
the two cameras are used for shooting the fixed point and the square label at the position to be positioned at the same time, and coordinate information of the square labels is obtained, and the method comprises the following steps:
shooting images containing the square labels at their different positions simultaneously with the two cameras, computing the gradient image of each image, extracting straight lines in the gradient image, and detecting square corner points to obtain a plurality of square areas and their key corner points;
matching and comparing the square area with square labels in a label library one by one to determine the position of each square label in an image, obtain the id of each square label, and obtain the pixel coordinates of four vertexes of each square label in a coordinate system of an image plane according to the angular point information so as to obtain the pixel coordinates P (u, v) of the central point P of each label;
taking the center point of the square label at the fixed point as the origin Ow(0, 0, 0) and taking (-n, -n, 0), (n, -n, 0), (n, n, 0) and (-n, n, 0) as the world coordinates of the four vertexes, so that the four vertex 3D coordinates of the label at the fixed point and the corresponding 2D pixel coordinates are available, where n is half the side length of the square label;
the method for solving the three-dimensional world coordinate of the two cameras with the fixed point as the origin and the three rotation angles from the camera coordinate system to the three-dimensional world coordinate system according to the coordinate information and the internal parameter matrix comprises the following steps:
the four vertex 3D coordinates and the corresponding 2D pixel coordinates are substituted into the following pixel coordinate system and world coordinate system conversion equations,
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

solving the rotation matrix R and the translation vector T of the camera; the projection point ow of the world coordinate system origin Ow in the camera coordinate system has coordinates (T11, T12, T13);
based on the rotation matrix R, the three rotation angles θz, θy and θx that rotate the camera coordinate system until it is completely parallel to the world coordinate system are obtained in sequence from the following formulas:

$$\theta_z = \operatorname{atan2}(R_{21}, R_{11}), \qquad \theta_y = \operatorname{atan2}\left(-R_{31}, \sqrt{R_{32}^{2} + R_{33}^{2}}\right), \qquad \theta_x = \operatorname{atan2}(R_{32}, R_{33});$$
based on the obtained ow(T11, T12, T13) and the three rotation angles θz, θy and θx, rotating the original camera coordinate system in sequence by θz about the z-axis, θy about the y-axis and θx about the x-axis to obtain a coordinate system parallel to the world coordinate system, and rotating ow by -θz about the z-axis, -θy about the y-axis and -θx about the x-axis to obtain ow2(x2, y2, z2), so that the coordinates of the camera Oc in the world coordinate system are (-x2, -y2, -z2);
wherein obtaining the three-dimensional world coordinates, with the fixed point as the origin, of the center point of each square label from the three-dimensional world coordinates of the two cameras with the fixed point as the origin and the three rotation angles from each camera coordinate system to the three-dimensional world coordinate system, and completing the positioning, comprises:
obtaining the three-dimensional coordinates Pc1(xpc1, ypc1, zpc1) and Pc2(xpc2, ypc2, zpc2) of the projection points of the position P to be located in the two camera coordinate systems from the following formulas,

$$x_c = (u - u_0) \cdot F / f_x, \qquad y_c = (v - v_0) \cdot F / f_y, \qquad z_c = F;$$
rotating the two camera coordinate systems three times each in sequence to obtain coordinate systems parallel to the world coordinate system, and rotating Pc1 and Pc2 three times each in the reverse direction to obtain the camera-relative three-dimensional coordinates Pc1R(xpw1, ypw1, zpw1) and Pc2R(xpw2, ypw2, zpw2);
determining, from the world coordinates Oc1(xow1, yow1, zow1) and Oc2(xow2, yow2, zow2) of the two cameras, the world coordinates of P1 and P2: P1(xpw1+xow1, ypw1+yow1, zpw1+zow1) and P2(xpw2+xow2, ypw2+yow2, zpw2+zow2), so as to obtain two straight lines Oc1P1 and Oc2P2;
substituting the coordinates of Oc1 and P1, and of Oc2 and P2, into the following two-point line formula,

$$\frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1} = \frac{z - z_1}{z_2 - z_1},$$

to find the equations of the straight lines Oc1P1 and Oc2P2;
determining the position P to be located from the intersection point of the two straight lines Oc1P1 and Oc2P2.
2. The vision-based spatial positioning method of claim 1, further comprising an error elimination step: when the two straight lines Oc1P1 and Oc2P2 have no intersection point, the midpoint of the closest points of the two straight lines is taken as the three-dimensional world coordinate of the position P to be located.
CN201810220678.9A 2018-03-16 2018-03-16 Space positioning method based on vision Active CN108648237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810220678.9A CN108648237B (en) 2018-03-16 2018-03-16 Space positioning method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810220678.9A CN108648237B (en) 2018-03-16 2018-03-16 Space positioning method based on vision

Publications (2)

Publication Number Publication Date
CN108648237A CN108648237A (en) 2018-10-12
CN108648237B (en) 2022-05-03

Family

ID=63744227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810220678.9A Active CN108648237B (en) 2018-03-16 2018-03-16 Space positioning method based on vision

Country Status (1)

Country Link
CN (1) CN108648237B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109001920B (en) * 2018-05-02 2021-07-09 上海视眸自动化科技有限公司 Automatic pin alignment method for glass plate and IC plate of small liquid crystal display screen
CN109775055B (en) * 2019-01-08 2021-06-04 河北科技大学 Vision-based method for detecting label missing on end face of bundled bar materials and measuring error
CN109993798B (en) * 2019-04-09 2021-05-28 上海肇观电子科技有限公司 Method and equipment for detecting motion trail by multiple cameras and storage medium
CN110926373A (en) * 2019-12-10 2020-03-27 中南大学 Structured light plane calibration method and system under railway foreign matter detection scene
CN111192321B (en) * 2019-12-31 2023-09-22 武汉市城建工程有限公司 Target three-dimensional positioning method and device
CN111360822B (en) * 2020-02-24 2022-10-28 天津职业技术师范大学(中国职业培训指导教师进修中心) Vision-based method for grabbing space cube by manipulator
CN112163537B (en) * 2020-09-30 2024-04-26 中国科学院深圳先进技术研究院 Pedestrian abnormal behavior detection method, system, terminal and storage medium
CN112230204A (en) * 2020-10-27 2021-01-15 深兰人工智能(深圳)有限公司 Combined calibration method and device for laser radar and camera
CN112419416B (en) * 2020-12-10 2022-10-14 华中科技大学 Method and system for estimating camera position based on small amount of control point information
CN112596460A (en) * 2020-12-16 2021-04-02 读书郎教育科技有限公司 Method for quickly positioning material for servo/step-by-step driving motion system
CN112802125A (en) * 2021-02-20 2021-05-14 上海电机学院 Multi-view space positioning method based on visual detection
CN113808195B (en) * 2021-08-26 2024-04-12 领翌技术(横琴)有限公司 Visual positioning method, device and storage medium
CN114926552B (en) * 2022-06-17 2023-06-27 中国人民解放军陆军炮兵防空兵学院 Method and system for calculating Gaussian coordinates of pixel points based on unmanned aerial vehicle image
CN115239801B (en) * 2022-09-23 2023-03-24 南京博视医疗科技有限公司 Object positioning method and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308012A (en) * 2008-05-29 2008-11-19 上海交通大学 Double monocular white light three-dimensional measuring systems calibration method
CN102567989A (en) * 2011-11-30 2012-07-11 重庆大学 Space positioning method based on binocular stereo vision
CN102842127A (en) * 2011-05-10 2012-12-26 哈曼贝克自动系统股份有限公司 Automatic calibration for extrinsic parameters of camera of surround view system camera
CN103115613A (en) * 2013-02-04 2013-05-22 安徽大学 Three-dimensional space positioning method
CN104361594A (en) * 2014-11-18 2015-02-18 国家电网公司 Camera cross positioning method
CN105955260A (en) * 2016-05-03 2016-09-21 大族激光科技产业集团股份有限公司 Mobile robot position perception method and device
CN105981074A (en) * 2014-11-04 2016-09-28 深圳市大疆创新科技有限公司 Camera calibration
CN106197422A (en) * 2016-06-27 2016-12-07 东南大学 A kind of unmanned plane based on two-dimensional tag location and method for tracking target
CN106600649A (en) * 2016-12-07 2017-04-26 西安蒜泥电子科技有限责任公司 Camera self-calibration method based on two-dimensional mark code
CN107367229A (en) * 2017-04-24 2017-11-21 天津大学 Free binocular stereo vision rotating shaft parameter calibration method
CN107478203A (en) * 2017-08-10 2017-12-15 王兴 A kind of 3D imaging devices and imaging method based on laser scanning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107003109B (en) * 2014-11-13 2019-11-05 奥林巴斯株式会社 Calibrating installation, calibration method, Optical devices, camera, projection arrangement, measuring system and measurement method

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308012A (en) * 2008-05-29 2008-11-19 上海交通大学 Double monocular white light three-dimensional measuring systems calibration method
CN102842127A (en) * 2011-05-10 2012-12-26 哈曼贝克自动系统股份有限公司 Automatic calibration for extrinsic parameters of camera of surround view system camera
CN102567989A (en) * 2011-11-30 2012-07-11 重庆大学 Space positioning method based on binocular stereo vision
CN103115613A (en) * 2013-02-04 2013-05-22 安徽大学 Three-dimensional space positioning method
CN105981074A (en) * 2014-11-04 2016-09-28 深圳市大疆创新科技有限公司 Camera calibration
CN104361594A (en) * 2014-11-18 2015-02-18 国家电网公司 Camera cross positioning method
CN105955260A (en) * 2016-05-03 2016-09-21 大族激光科技产业集团股份有限公司 Mobile robot position perception method and device
CN106197422A (en) * 2016-06-27 2016-12-07 东南大学 A kind of unmanned plane based on two-dimensional tag location and method for tracking target
CN106600649A (en) * 2016-12-07 2017-04-26 西安蒜泥电子科技有限责任公司 Camera self-calibration method based on two-dimensional mark code
CN107367229A (en) * 2017-04-24 2017-11-21 天津大学 Free binocular stereo vision rotating shaft parameter calibration method
CN107478203A (en) * 2017-08-10 2017-12-15 王兴 A kind of 3D imaging devices and imaging method based on laser scanning

Also Published As

Publication number Publication date
CN108648237A (en) 2018-10-12

Similar Documents

Publication Publication Date Title
CN108648237B (en) Space positioning method based on vision
Kumar et al. Simple calibration of non-overlapping cameras with a mirror
CN109658457B (en) Method for calibrating arbitrary relative pose relationship between laser and camera
Zhang et al. A robust and rapid camera calibration method by one captured image
CN111415391B (en) External azimuth parameter calibration method for multi-camera by adopting mutual shooting method
CN112541946A (en) Real-time pose detection method of mechanical arm based on perspective multi-point projection
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN109087339A (en) A kind of laser scanning point and Image registration method
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN114283203A (en) Calibration method and system of multi-camera system
CN113450416B (en) TCSC method applied to three-dimensional calibration of three-dimensional camera
Mei et al. Monocular vision for pose estimation in space based on cone projection
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
Yamauchi et al. Calibration of a structured light system by observing planar object from unknown viewpoints
CN115457142B (en) Calibration method and system of MR hybrid photographic camera
CN112734842B (en) Auxiliary positioning method and system for centering installation of large ship equipment
Ye et al. Research on flame location and distance measurement method based on binocular stereo vision
Li et al. Method for horizontal alignment deviation measurement using binocular camera without common target
Zhang et al. Camera calibration algorithm for long distance binocular measurement
CN117557659B (en) Opposite camera global calibration method and system based on one-dimensional target and turntable
CN115082570B (en) Calibration method for laser radar and panoramic camera
Jiang et al. Discussion on the Technical Issues Faced in the Calibration of Multi-camera Vision Detection Systems
CN114565714B (en) Monocular vision sensor hybrid high-precision three-dimensional structure recovery method
Ni et al. Multi-layer flat refractive underwater camera calibration for visual SLAM
Li et al. A Novel Stratified Self-calibration Method of Camera Based on Rotation Movement.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant