CN107977996B - Space target positioning method based on target calibration positioning model - Google Patents

Info

Publication number
CN107977996B
CN107977996B
Authority
CN
China
Prior art keywords
positioning
target
projection
space
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710983474.6A
Other languages
Chinese (zh)
Other versions
CN107977996A (en)
Inventor
钟桦
胡雪纯
黄学然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Electronic Science and Technology
Original Assignee
Xian University of Electronic Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Electronic Science and Technology filed Critical Xian University of Electronic Science and Technology
Priority to CN201710983474.6A priority Critical patent/CN107977996B/en
Publication of CN107977996A publication Critical patent/CN107977996A/en
Application granted granted Critical
Publication of CN107977996B publication Critical patent/CN107977996B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a space target positioning method based on a target calibration positioning model for a binocular system. The implementation process is as follows: first, a space coordinate system is set up and the positioning cameras are installed; next, the projection parameters of the positioning model are determined by calibrating four planar targets; then, the camera optical-center parameters of the positioning model are determined by calibrating two three-dimensional targets; finally, the space coordinates of the target object to be measured are determined from the collinearity relationships in the positioning model. The invention calibrates the parameters of the positioning model with targets and performs positioning with the collinearity relationships in the model, which reduces the complexity of the calibration process in the positioning system, mitigates the problem of low stereoscopic-vision positioning accuracy, and allows an indoor positioning system to be constructed simply and accurately.

Description

Space target positioning method based on target calibration positioning model
Technical Field
The invention belongs to the technical field of image processing, and further relates to a space target positioning method based on a target calibration positioning model in the field of stereoscopic vision. The invention calibrates the positioning model parameters with targets to realize spatial positioning of indoor targets, and can be widely applied to indoor target tracking and positioning.
Background
Positioning technology in stereoscopic vision aims to acquire three-dimensional information about a spatial object by analyzing multiple two-dimensional images; spatial positioning usually requires the positioning model parameters to be calibrated beforehand. Traditional calibration methods for stereoscopic-vision positioning apply to arbitrary camera models, but the calibration process is complex, the positioning accuracy is low, and the efficiency is poor.
Existing stereoscopic-vision-based spatial positioning techniques mainly comprise two methods: binocular positioning based on epipolar constraints and binocular vision positioning based on disparity. The epipolar-constraint method estimates the coordinates of the target object in the camera coordinate system by the triangulation principle, then obtains the transformation between the camera coordinate system and the space coordinate system by solving an ICP problem, and thereby obtains the coordinates of the target object in the space coordinate system. The disparity-based method calibrates the extrinsic parameters of the two cameras in the space coordinate system by solving a PnP problem, obtains the disparity by stereo matching, and estimates the coordinates of the target object in the space coordinate system with the triangulation principle. In practice, however, the intrinsic and extrinsic parameters of the positioning system must all be calibrated, which makes the calibration process cumbersome, and the accuracy is limited by the triangulation model.
the patent document "logo-based binocular vision indoor positioning method" (application number: 201610546034.X, publication number: 106228538A) filed by Harbin industry university discloses a logo-based binocular vision indoor positioning method for positioning with a camera as a target. The method comprises the steps of solving internal parameters of a left camera and a right camera and external parameters of a relative pose relationship of the two cameras by utilizing a Zhangyingyou chessboard calibration method; and positioning logo image features under a certain camera coordinate system, and solving an ICP problem according to matching information of the logo image features and Visual Map database image features to obtain a conversion relation between the camera coordinate system and a space coordinate system, so that coordinates of two cameras, namely the target, under the space coordinate system are obtained. Although, the method solves the problem of complicated calibration steps in the traditional vision system to a certain extent. However, the method still has the disadvantage that an additional database image needs to be established, and the positioning precision of the database image depends on the matching accuracy of logo image features. If the image point of the object to be measured is extracted to have a slight deviation, the error is amplified through the triangulation model, and the spatial coordinate of the object to be measured has a large deviation, which is very obvious especially when the method is applied to the indoor space positioning with a large scene size.
in the patent document "a binocular vision-based physical coordinate positioning method" (application No. 201510351400.1, publication No. 104933718a) applied by the automated research of Guangdong province, a method for calculating the physical coordinate positioning of an object to be measured based on a calibration method for binocular stereo vision positioning is disclosed. The method comprises the steps of firstly calibrating internal parameters of two cameras by using a calibration object, then solving a PNP problem by using a three-dimensional calibration object to establish a conversion model between a camera coordinate system and a space coordinate system, then carrying out three-dimensional matching on images shot by the cameras according to the conversion model, finally calculating the parallax of a target object to be measured on left and right images by using the images after the three-dimensional matching, and calculating the space coordinate of the target point by combining the baseline distance between the two cameras. Although the method improves the matching precision of the target object to be measured on the left and right images by utilizing stereo matching, thereby improving the positioning precision, the method still has the defects that the baseline distance between the cameras needs to be calibrated, and the positioning precision also depends on the length of the baseline and the measurement precision. Meanwhile, the method needs to perform three-dimensional calibration on the binocular positioning system, and the problems of complicated three-dimensional calibration operation and low precision are not improved.
Disclosure of the Invention
The invention aims to overcome the defects of the prior art by providing a space target positioning method based on a target calibration positioning model, realizing an indoor positioning technique with simple calibration operations and higher accuracy.
The basic idea of the invention is as follows. The projection parameters of the positioning model are computed from four planar targets and their corresponding ground projection points with each positioning camera as the projection center. Combining the projection parameters, the ground projection points of two stereo targets, with each positioning camera's optical center as the projection center, are computed; the camera optical-center parameters of the positioning model are then determined by computing, for each positioning camera, the intersection point of the two spatial lines formed by the ground-plane projection points of the two stereo targets and the corresponding stereo targets. Combining the projection parameters, the ground-plane projection point of the object to be measured, with each positioning camera's optical center as the projection center, is computed; combining the camera optical-center parameters, the equation coefficients of the spatial line formed by each camera optical center and the corresponding ground-plane projection point of the object are computed. Finally, the intersection point of the two spatial lines is computed from their equations, and its coordinates are taken as the spatial position coordinates of the object to be measured.
the method comprises the following specific steps:
(1) Setting a positioning camera:
fixing two positioning cameras to the indoor high positions which can be covered by the focal lengths respectively to enable the cameras to shoot from top to bottom, wherein the two positioning cameras have a common visual area;
(2) Setting a space coordinate system:
Setting the indoor height as a Z axis, and setting the indoor ground as a two-dimensional XY plane when the Z axis coordinate value is zero, and establishing a three-dimensional orthogonal coordinate system related to XYZ as a reference space coordinate system;
(3) Determining projection parameters of the positioning model through planar target calibration:
(3a) Placing four planar targets to be calibrated, which are not collinear and have different profile characteristics, on the indoor ground, so that the four planar targets appear simultaneously in the common visual area of the positioning cameras, and recording the corresponding position coordinates of each planar target in the space coordinate system;
(3b) Acquiring one image of the four planar targets appearing together in the common visual area with each positioning camera;
(3c) Selecting any one planar target as the target to be extracted, and extracting the centroid pixel of the target to be extracted from each acquired image with a contour matching method to obtain the centroid pixel of the corresponding planar target in the acquired image;
(3d) Judging whether all centroid pixels of the four planar targets in each acquired image have been obtained; if so, executing step (3e), otherwise executing step (3c);
(3e) Calculating a projection matrix of each positioning camera according to the space coordinates of the four plane targets and the coordinates of the centroid pixels of the four plane targets on the acquired image of each positioning camera, and taking the projection matrix of each positioning camera as a projection parameter of a positioning model;
(4) Determining the camera optical-center parameters of the positioning model through three-dimensional target calibration:
(4a) Placing two non-overlapping three-dimensional targets to be calibrated, with different profile characteristics, on the indoor ground, so that the two targets appear simultaneously in the common visual area of the positioning cameras, and recording the corresponding position coordinates of the two three-dimensional targets in the space coordinate system;
(4b) Acquiring one image of the two three-dimensional targets appearing together in the common visual area with each positioning camera;
(4c) Selecting any one three-dimensional target as the target to be extracted, and extracting the centroid pixel of the target to be extracted from each acquired image with the contour matching method to obtain the centroid pixel of the corresponding three-dimensional target in the acquired image;
(4d) Judging whether all centroid pixels of the two three-dimensional targets in each acquired image have been obtained; if so, executing step (4e), otherwise executing step (4c);
(4e) Multiplying the projection matrix of each positioning camera in the projection parameters by the centroid pixel coordinates of the two three-dimensional targets in the image acquired by the corresponding positioning camera to obtain the ground-plane projection points of the two three-dimensional targets with the positioning camera as the projection center;
(4f) Obtaining, with a symmetric method, the equations of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets;
(4g) Calculating, with an analytic geometry method, the intersection point of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets, and taking the intersection point as the optical center of that positioning camera;
(4h) Recording the optical center of each positioning camera as the camera optical-center parameters of the positioning model;
(5) Using the positioning model to determine the spatial coordinates of the target object to be positioned:
(5a) Acquiring an image of the target object to be positioned appearing in the common visual area with each positioning camera;
(5b) Taking the target object to be positioned as the target to be extracted, and extracting the centroid pixel of the target to be extracted from each acquired image with the contour matching method to obtain the centroid pixel of the target object to be positioned in the acquired image;
(5c) Multiplying the projection matrix of each positioning camera in the projection parameters by the centroid pixel coordinates of the target object to be positioned in the image acquired by the corresponding positioning camera to obtain the ground-plane projection point of the target object to be positioned with the positioning camera as the projection center;
(5d) Obtaining, with a two-point method, the equations of the two spatial lines formed by the optical centers of the two positioning cameras and the ground-plane projection points of the target object to be positioned with the corresponding positioning camera optical center as the projection center;
(5e) Calculating, with the analytic geometry method, the intersection point of the two spatial lines according to their equations, and taking the coordinates of the intersection point as the position of the target object to be positioned in the space coordinate system.
Compared with the prior art, the invention has the following advantages:
First, the invention introduces the projection relationship between the ground and the image plane and the relationship between the camera optical center and the targets, and exploits the collinearity between a target's centroid pixel in the acquired image and the target's ground projection point with the camera optical center as the projection center to calibrate the projection parameters and camera optical-center parameters of the positioning model. This overcomes the cumbersome calibration of the intrinsic and extrinsic parameters of a binocular positioning system in prior-art camera calibration techniques, reduces the complexity of the calibration process in the positioning system, and improves the efficiency of the positioning system.
Second, the invention estimates the spatial position of the object to be measured as the intersection point of the spatial lines formed, in the positioning model, by each positioning camera's optical center, the object to be measured, and the object's ground projection point. This overcomes the prior-art defects that the centroid pixel of the object to be measured is inaccurate when the positioning range is large and that the triangulation model amplifies the centroid-pixel error, lowering the positioning accuracy. Because the centroid pixel of the object in the image is mapped to the object's ground projection point under the prior constraint of the projection matrix, and the spatial position is computed from the collinearity of the positioning camera's optical center, the object to be measured, and the object's ground projection point, the invention achieves high-accuracy positioning.
Drawings
FIG. 1 is a flow chart of the present invention;
Fig. 2 is a simulation effect diagram of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to Fig. 1, the specific steps of the present invention are as follows.
Step 1: set up the positioning cameras.
Two positioning cameras are fixed at elevated indoor positions within the coverage of their focal lengths; one is denoted positioning camera 1 and the other positioning camera 2.
The attitudes of the two positioning cameras are adjusted so that they shoot from top to bottom and share a common visual area.
Step 2: set the space coordinate system.
The indoor ground plane is set as the two-dimensional XY plane with Z = 0, the indoor height direction is set as the Z axis, and a three-dimensional orthogonal coordinate system is established as the reference space coordinate system.
Step 3: determine the projection parameters of the positioning model through planar target calibration.
Four planar targets to be calibrated, which are not collinear and have different profiles, are placed on the indoor ground; each planar target corresponds to a template picture consisting mainly of the contour information of that target.
The positions of the four plane targets are adjusted to be simultaneously present in a common visual area of the positioning camera, and the corresponding position coordinates of each plane target on the space coordinate system are recorded.
Each positioning camera acquires an image once for four planar targets appearing together in the common view area.
Each planar target is taken in turn as the target to be extracted, and the centroid pixel of the target to be extracted is extracted from each acquired image with the contour matching method, yielding the centroid pixels of the four planar targets in the acquired images.
The contour matching method computes the centroid pixel coordinates of the target to be extracted by contour matching, yielding the centroid pixels of the four planar targets in the acquired images; its specific steps are as follows:
First step: binarize each acquired image.
Second step: extract a cluster of convex-hull contours from each binarized image with the findContours function of the open-source computer vision library OpenCV.
Third step: select any one planar target as the target to be extracted, and extract the contour features of the target to be extracted from its template picture with the Canny function of the OpenCV library.
Fourth step: among the cluster of convex-hull contours of each binarized image, screen out the convex-hull contour with the highest similarity to the contour features of the target to be extracted with the matchShapes function of the OpenCV library, and compute the first moment of that contour to obtain the centroid pixel of the target to be extracted in the acquired image.
Fifth step: judge whether the centroid pixels of all four planar targets in each acquired image have been obtained; if so, the centroid pixels of the four planar targets in the acquired images are obtained; otherwise, execute the third step.
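For concreteness, a minimal Python sketch of this contour matching procedure is given below, using OpenCV 4-style calls (findContours, Canny, matchShapes, moments); the function name, threshold value, and Canny parameters are illustrative assumptions rather than values fixed by the invention:

```python
import cv2

def centroid_by_contour_matching(image_gray, template_gray, thresh=127):
    """Contour-matching sketch: binarize the acquired image, extract
    convex-hull contours, pick the hull most similar to the template
    contour, and return its centroid pixel."""
    # First step: binarize the acquired image.
    _, binary = cv2.threshold(image_gray, thresh, 255, cv2.THRESH_BINARY)
    # Second step: extract a cluster of contours and take their convex hulls.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hulls = [cv2.convexHull(c) for c in contours]
    # Third step: contour features of the target come from its template picture.
    edges = cv2.Canny(template_gray, 50, 150)
    tpl_contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
    tpl = max(tpl_contours, key=cv2.contourArea)
    # Fourth step: screen out the hull with the highest similarity
    # (lowest matchShapes distance) to the template contour.
    best = min(hulls, key=lambda h: cv2.matchShapes(h, tpl,
                                                    cv2.CONTOURS_MATCH_I1, 0.0))
    # Centroid from the first spatial moments of the best-matching hull.
    m = cv2.moments(best)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```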
According to the coordinates of the centroid pixels of the four planar targets in the image acquired by each positioning camera and the space coordinates of the four planar targets, the projection matrix of each positioning camera is computed with the findHomography function of the OpenCV library, and the projection matrix of each positioning camera is taken as the projection parameters of the positioning model.
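As a sketch of this calibration step (the numeric pixel and ground coordinates below are placeholders, not data from the patent), the four centroid/ground correspondences are passed to OpenCV's findHomography; with exactly four non-collinear correspondences the homography is determined exactly:

```python
import numpy as np
import cv2

# Centroid pixels (u, v) of the four planar targets in one camera's image,
# and their recorded ground coordinates (X, Y) on the Z = 0 plane.
pixel_pts = np.array([[412.3, 310.8], [885.1, 295.4],
                      [430.7, 620.2], [902.6, 633.9]])
ground_pts = np.array([[0.0, 0.0], [2.0, 0.0],
                       [0.0, 2.0], [2.0, 2.0]])

# Projection parameter of the positioning model for this camera:
# the image-to-ground homography H.
H, _ = cv2.findHomography(pixel_pts, ground_pts)
```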
Step 4: determine the camera optical-center parameters of the positioning model through three-dimensional target calibration.
Two non-overlapping three-dimensional targets with different profile characteristics and heights are placed on the indoor ground and denoted stereo target 1 and stereo target 2; each three-dimensional target corresponds to a template picture consisting mainly of the contour information of that target.
The placement positions of the two three-dimensional targets are adjusted so that they appear together in the common visual area of the positioning cameras, and the corresponding position coordinates of the two three-dimensional targets in the space coordinate system are recorded.
Each positioning camera acquires one image of the two stereo targets appearing together in the common visual area.
Each three-dimensional target is taken in turn as the target to be extracted, and the centroid pixel of the target to be extracted is extracted from each acquired image with the contour matching method, yielding the centroid pixels of the two three-dimensional targets in the acquired images.
The contour matching method computes the centroid pixel coordinates of the target to be extracted by contour matching, yielding the centroid pixels of the two three-dimensional targets in the acquired images; its specific steps are as follows:
First step: binarize each acquired image.
Second step: extract a cluster of convex-hull contours from each binarized image with the findContours function of the open-source computer vision library OpenCV.
Third step: select any one three-dimensional target as the target to be extracted, and extract the contour features of the target to be extracted from its template picture with the Canny function of the OpenCV library.
Fourth step: among the cluster of convex-hull contours of each binarized image, screen out the convex-hull contour with the highest similarity to the contour features of the target to be extracted with the matchShapes function of the OpenCV library, and compute the first moment of that contour to obtain the centroid pixel of the target to be extracted in the acquired image.
Fifth step: judge whether the centroid pixels of the two three-dimensional targets in each acquired image have been obtained; if so, the centroid pixels of the two three-dimensional targets in the acquired images are obtained; otherwise, execute the third step.
The projection matrix of each positioning camera in the projection parameters is multiplied by the centroid pixel coordinates of the two three-dimensional targets in the corresponding acquired image, yielding the ground-plane projection points of the two three-dimensional targets with that positioning camera as the projection center:

$$P_i^{(k)} = H_k \, p_i^{(k)}, \qquad i = 1, 2, \quad k = 1, 2,$$

where $P_i^{(k)}$ denotes the coordinates of the ground-plane projection point of stereo target $i$ with positioning camera $k$ as the projection center, $H_k$ denotes the projection matrix of positioning camera $k$ in the projection parameters, and $p_i^{(k)}$ denotes the homogeneous centroid pixel coordinates of stereo target $i$ in the image acquired by positioning camera $k$.
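A minimal sketch of this projection step, assuming H is the image-to-ground homography estimated above and the centroid is given in pixels (the helper name is illustrative):

```python
import numpy as np

def ground_projection(H, centroid_px):
    """Map a centroid pixel (u, v) to the ground-plane projection point
    (X, Y) by multiplying the camera's projection matrix with the
    homogeneous pixel coordinates."""
    u, v = centroid_px
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # de-homogenize; the point lies on the Z = 0 plane
```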
With each positioning camera as the projection center, the equations of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets and the corresponding three-dimensional targets are obtained with the symmetric method.
The symmetric method proceeds as follows:
First step: the coordinates of the ground-plane projection point of each three-dimensional target with positioning camera 1 as the projection center (embedded in space as a point on the Z = 0 plane) are subtracted from the spatial coordinates of the corresponding three-dimensional target, yielding the direction vector of a spatial line:

$$\vec{d}_i^{(1)} = T_i - P_i^{(1)}, \qquad i = 1, 2,$$

where $\vec{d}_i^{(1)}$ denotes the direction vector of the spatial line formed by the ground-plane projection point of stereo target $i$ (with positioning camera 1 as the projection center) and stereo target $i$, and $T_1 = (L_{p1}, W_{p1}, H_{p1})$, $T_2 = (L_{p2}, W_{p2}, H_{p2})$ denote the spatial position coordinates of stereo targets 1 and 2.

Similarly, with positioning camera 2 as the projection center,

$$\vec{d}_i^{(2)} = T_i - P_i^{(2)}, \qquad i = 1, 2,$$

where $\vec{d}_i^{(2)}$ denotes the direction vector of the spatial line formed by the ground-plane projection point of stereo target $i$ (with positioning camera 2 as the projection center) and stereo target $i$.
Second step: construct the symmetric equation of the spatial line from the elements of its direction vector, require the coordinates of the corresponding three-dimensional target to satisfy the symmetric equation, and compute the coefficients of the symmetric equation to obtain the equation of the spatial line.
Third step: judge whether the line equations formed by the ground-plane projection points of all three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets have been obtained; if not, execute the first step; if so, the equations of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets and the corresponding three-dimensional targets, with each positioning camera as the projection center, are obtained.
The symmetric equation of the spatial line formed by the ground-plane projection point of stereo target $i$ (with positioning camera $k$ as the projection center) and stereo target $i$ is

$$\frac{x - L_{pi}}{d_{x,i}^{(k)}} = \frac{y - W_{pi}}{d_{y,i}^{(k)}} = \frac{z - H_{pi}}{d_{z,i}^{(k)}}, \qquad i = 1, 2, \quad k = 1, 2,$$

where $\left(d_{x,i}^{(k)}, d_{y,i}^{(k)}, d_{z,i}^{(k)}\right) = \vec{d}_i^{(k)}$.
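In code, the data carried by the symmetric equation is simply a point on the line and a direction vector; a sketch under the same assumptions (helper name illustrative):

```python
import numpy as np

def line_through(target_xyz, ground_xy):
    """Spatial line through a stereo target and its ground-plane projection
    point (embedded at Z = 0), returned as a (point, direction) pair."""
    point = np.asarray(target_xyz, dtype=float)
    direction = point - np.array([ground_xy[0], ground_xy[1], 0.0])
    return point, direction
```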
Step 5: compute, with the analytic geometry method, the intersection point of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets; take the intersection point as the optical center of that positioning camera, and take the optical center of each positioning camera as the camera optical-center parameters of the positioning model.
The analytic geometry method computes the intersection point of two spatial lines; its specific steps are as follows:
First step: whether the two spatial lines lie in the same plane is judged by testing whether their mixed product vanishes. Taking positioning camera 1 as the projection center as an example, for the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets and the corresponding three-dimensional targets,

$$m_1 = \left(\vec{d}_1^{(1)} \times \vec{d}_2^{(1)}\right) \cdot (T_2 - T_1),$$

where $m_1$ is the mixed product of the two lines' direction vectors and a vector connecting the two spatial lines. If $m_1 = 0$, the two lines are coplanar and the sixth step is executed to compute the intersection coordinates directly; otherwise the second step is executed to compute the intersection coordinates.

Similarly, for the two spatial lines with positioning camera 2 as the projection center,

$$m_2 = \left(\vec{d}_1^{(2)} \times \vec{d}_2^{(2)}\right) \cdot (T_2 - T_1),$$

and the same criterion applies: if $m_2 = 0$, the sixth step is executed; otherwise the second step is executed.
Second step: compute the direction vectors of the two spatial lines from their equations, and take the cross product of the two direction vectors to obtain the direction vector of the common perpendicular of the two spatial lines.
Third step: from the direction vector of the common perpendicular and the equations of the two spatial lines, compute the equation coefficients of the plane formed by the common perpendicular and each spatial line.
Fourth step: from the equations of the two planes formed by the common perpendicular and the two spatial lines, compute the equation coefficients of the intersection line of the two planes; this intersection line is the common perpendicular.
Fifth step: solve the common-perpendicular equation simultaneously with each of the two spatial line equations to obtain two intersection points, and take the midpoint of the two intersection points as the intersection point of the two spatial lines.
Sixth step: solve the equations of the two spatial lines simultaneously to obtain their intersection point.
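A compact sketch of this intersection computation, assuming each line is given as a (point, direction) pair as above. For brevity it uses the closed-form closest-points solution for two lines instead of explicitly constructing the two planes; for skew lines it returns the same midpoint of the common-perpendicular segment, and for coplanar intersecting lines it returns the intersection itself:

```python
import numpy as np

def line_intersection_midpoint(p1, d1, p2, d2, eps=1e-12):
    """Intersection of the spatial lines p1 + t*d1 and p2 + s*d2; for skew
    lines, the midpoint of the segment joining their closest points."""
    w = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = d1 @ w, d2 @ w
    denom = a * c - b * b           # vanishes only for parallel lines
    if abs(denom) < eps:
        raise ValueError("lines are parallel")
    t = (e * c - f * b) / denom     # closest-point parameter on line 1
    s = (e * b - f * a) / denom     # closest-point parameter on line 2
    q1, q2 = p1 + t * d1, p2 + s * d2
    return 0.5 * (q1 + q2)          # q1 == q2 when the lines are coplanar
```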
Step 6: determine the space coordinates of the object to be measured with the positioning model.
The object to be measured, which corresponds to a template picture of the object, is placed, and its position is adjusted so that it appears in the common visual area of the positioning cameras.
One image of the object to be measured appearing in the common visual area is acquired with each positioning camera.
The object to be measured is taken as the target to be extracted, and the centroid pixel of the target to be extracted is extracted from each acquired image with the contour matching method, yielding the centroid pixel of the object to be measured in each acquired image.
The contour matching method computes the centroid pixel coordinates of the target to be extracted by contour matching, yielding the centroid pixel of the object to be measured in the acquired image; its specific steps are as follows:
First step: binarize each acquired image.
Second step: extract a cluster of convex-hull contours from each binarized image with the findContours function of the open-source computer vision library OpenCV.
Third step: take the object to be measured as the target to be extracted, and extract the contour features of the target to be extracted from its template picture with the Canny function of the OpenCV library.
Fourth step: among the cluster of convex-hull contours of each binarized image, screen out the convex-hull contour with the highest similarity to the contour features of the target to be extracted with the matchShapes function of the OpenCV library, and compute the first moment of that contour to obtain the centroid pixel of the target to be extracted in the acquired image.
The projection matrix of each positioning camera in the projection parameters is multiplied by the centroid pixel coordinates of the object to be measured in the corresponding acquired image, yielding the ground-plane projection point of the object to be measured with that positioning camera as the projection center:

$$G^{(k)} = H_k \, q^{(k)}, \qquad k = 1, 2,$$

where $G^{(k)}$ denotes the coordinates of the ground-plane projection point of the object to be measured with positioning camera $k$ as the projection center, and $q^{(k)}$ denotes the homogeneous centroid pixel coordinates of the object to be measured in the image acquired by positioning camera $k$.
Combining the camera optical-center parameters, the coefficients of the equation of the spatial line formed by each positioning camera's optical center and the ground-plane projection point of the object to be measured (with that optical center as the projection center) are computed with the two-point method, yielding the equations of the two spatial lines.
The two-point method proceeds as follows:
First step: construct the equation of a spatial line.
Second step: substitute the coordinates of one positioning camera's optical center and the coordinates of the ground-plane projection point of the object to be measured (with that optical center as the projection center) into the equation of the spatial line, and solve for the coefficients of the equation.
Third step: judge whether the equations of the spatial lines formed by all positioning camera optical centers and the corresponding ground-plane projection points of the object to be measured have been obtained; if so, the two spatial line equations are obtained; otherwise, execute the first step.
Finally, the intersection point of the two spatial lines is computed from their equations with the analytic geometry method of step 5, and the coordinates of the intersection point are taken as the position of the object to be measured in the space coordinate system.
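Putting the pieces together, a sketch of this final positioning step, reusing the illustrative helpers ground_projection and line_intersection_midpoint from above; the optical-center coordinates are placeholders:

```python
import numpy as np

# Calibrated parameters of the positioning model (placeholder values):
# image-to-ground homographies H1, H2 and camera optical centers C1, C2.
C1 = np.array([0.8, 0.4, 2.6])
C2 = np.array([3.1, 0.5, 2.7])

def locate(H1, H2, centroid_px_1, centroid_px_2):
    """Spatial position of the object: intersection of the two lines joining
    each camera optical center with the object's ground-plane projection."""
    g1 = ground_projection(H1, centroid_px_1)   # (X, Y) on the Z = 0 plane
    g2 = ground_projection(H2, centroid_px_2)
    G1 = np.array([g1[0], g1[1], 0.0])
    G2 = np.array([g2[0], g2[1], 0.0])
    # Two-point method: each line passes through an optical center and the
    # corresponding ground-plane projection point.
    return line_intersection_midpoint(C1, G1 - C1, C2, G2 - C2)
```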
The simulation effect of the present invention is further described with reference to Fig. 2.
1. Simulation conditions:
The simulation experiments were carried out in a hardware environment with a 2.5 GHz CPU and 4 GB of memory, under a VS2013 software environment.
2. Simulation content and analysis of results:
The data used in the simulation experiments are a group of images of a pure red ball captured by the positioning cameras at different positions in the common visual area, and a group of images of a red grid captured by the positioning cameras in the common visual area. The radius of the red ball is 3.5 cm; the red grid consists of 104 cells of size 20 cm × 20 cm, with 100 effective corner points distributed in the common visual area; the resolution of the images captured by the positioning cameras is 1080 × 720.
The simulation experiments evaluate the practical effect of the method with the distribution error and the average-error histogram, where the distribution error is the relative error between the real and estimated coordinates, and the average error is defined as the mean square value of the relative errors between the real and estimated coordinates along the x-, y-, and z-axis directions.
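Read directly from these definitions, the two metrics can be sketched as follows; the exact aggregation used in the experiments is not spelled out in the text, so the coordinate-wise interpretation here is an assumption:

```python
import numpy as np

def distribution_error(true_xyz, est_xyz):
    """Per-axis error between the real and estimated coordinates
    (interpreted here as the coordinate-wise deviation)."""
    return np.asarray(est_xyz, dtype=float) - np.asarray(true_xyz, dtype=float)

def average_error(true_xyz, est_xyz):
    """Mean square value of the errors along the x-, y- and z-axis
    directions, per the definition above."""
    e = distribution_error(true_xyz, est_xyz)
    return float(np.mean(e ** 2))
```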
The red ball at different positions is positioned with the method of the invention and with the prior-art disparity-based positioning method. The results are shown in Table 1, with the best results in bold.
Table 1. Comparison of distribution errors between the conventional positioning method and the positioning of the invention
As can be seen from Table 1, when the red ball is positioned at different positions, the positioning errors of the invention are uniformly distributed within 2 cm, while the conventional positioning method shows larger errors in the x-axis direction; overall, the positioning errors of the invention are significantly smaller than those of the conventional method.
The 100 effective corner points of the red grid located in the common visual area are positioned with the method of the invention and with the conventional prior-art method. Because of the large number of data points, the results are summarized statistically as average-error histograms. Fig. 2(a) is the average-error histogram for positioning the 100 effective corner points in the common visual area with the conventional prior-art method; Fig. 2(b) is the average-error histogram for positioning the same corner points with the method of the invention. In both histograms the abscissa represents the average error and the ordinate the number of effective corner points.
Comparing the histograms of Fig. 2(a) and Fig. 2(b), when the 100 effective corner points in the common visual area are positioned, the average positioning errors of the invention are concentrated in the region of smaller average error, whereas those of the conventional positioning method are concentrated in the region of larger average error; the positioning accuracy of the invention for indoor large-range position estimation is therefore clearly improved over the conventional method.
The simulation results show that the space target positioning method based on the target calibration positioning model proposed by the invention computes the space coordinates of the object to be measured with better performance in both experimental operation and positioning accuracy, fully demonstrating the superiority of the method.

Claims (5)

1. A space target positioning method based on a target calibration positioning model, characterized by comprising the following steps:
(1) Setting a positioning camera:
Fixing two positioning cameras to the indoor high positions which can be covered by the focal lengths respectively to enable the cameras to shoot from top to bottom, wherein the two positioning cameras have a common visual area;
(2) setting a space coordinate system:
setting the indoor height as a Z axis, and setting the indoor ground as a two-dimensional XY plane when the Z axis coordinate value is zero, and establishing a three-dimensional orthogonal coordinate system related to XYZ as a reference space coordinate system;
(3) Determining projection parameters of the positioning model through planar target calibration:
(3a) Placing four planar targets to be calibrated, which are not collinear and have different profile characteristics, on the indoor ground, enabling the four planar targets to be simultaneously present in a common visual area of a positioning camera, and recording corresponding position coordinates of each planar target on a space coordinate system;
(3b) Acquiring one image of the four planar targets appearing together in the common visual area with each positioning camera;
(3c) Selecting any one planar target as the target to be extracted, and extracting the centroid pixel of the target to be extracted from each acquired image with a contour matching method to obtain the centroid pixel of the corresponding planar target in the acquired image;
(3d) Judging whether all centroid pixels of the four planar targets in each acquired image have been obtained; if so, executing step (3e), otherwise executing step (3c);
(3e) Calculating a projection matrix of each positioning camera according to the space coordinates of the four plane targets and the coordinates of the centroid pixels of the four plane targets on the acquired image of each positioning camera, and taking the projection matrix of each positioning camera as a projection parameter of a positioning model;
(4) Determining camera optical center parameters of the positioning model through three-dimensional target calibration:
(4a) Placing two non-overlapped three-dimensional targets to be calibrated with different profile characteristics on the indoor ground, enabling the two targets to be simultaneously present in a common visual area of a positioning camera, and recording corresponding position coordinates of the two three-dimensional targets on a space coordinate system;
(4b) Acquiring one image of the two three-dimensional targets appearing together in the common visual area with each positioning camera;
(4c) Selecting any one three-dimensional target as the target to be extracted, and extracting the centroid pixel of the target to be extracted from each acquired image with the contour matching method to obtain the centroid pixel of the corresponding three-dimensional target in the acquired image;
(4d) Judging whether all centroid pixels of the two three-dimensional targets in each acquired image have been obtained; if so, executing step (4e), otherwise executing step (4c);
(4e) Multiplying a projection matrix of each positioning camera in the projection parameters by the centroid pixel coordinates of the two three-dimensional targets in the acquired image of the corresponding positioning camera to obtain a ground plane projection point of the two three-dimensional targets taking the positioning camera as a projection center;
(4f) Obtaining an equation of two space straight lines formed by the ground plane projection points of the two three-dimensional targets and the corresponding three-dimensional targets by using each positioning camera as a projection center by using a symmetrical method;
(4g) Calculating, with an analytic geometry method, the intersection point of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets, and taking the intersection point as the optical center of that positioning camera;
(4h) Recording the optical center of each positioning camera as a camera optical center parameter of the positioning model;
(5) Using the positioning model to determine the spatial coordinates of the target object to be positioned:
(5a) Acquiring an image of a target object to be positioned appearing in the common view area by using each positioning camera;
(5b) Taking the target object to be positioned as the target to be extracted, and extracting the centroid pixel of the target to be extracted from each acquired image with a contour matching method to obtain the centroid pixel of the target object to be positioned in the acquired image;
(5c) Multiplying the projection matrix of each positioning camera in the projection parameters by the centroid pixel coordinates of the target object to be positioned in the image acquired by the corresponding positioning camera to obtain the ground-plane projection point of the target object to be positioned with the positioning camera as the projection center;
(5d) Obtaining, with a two-point method, the equations of the two spatial lines formed by the optical centers of the two positioning cameras and the ground-plane projection points of the target object to be positioned with the corresponding positioning camera optical center as the projection center;
(5e) Calculating, with an analytic geometry method, the intersection point of the two spatial lines according to the equations of the two spatial lines formed by the optical centers of the two positioning cameras and the ground-plane projection points of the target object to be positioned (with the corresponding optical center as the projection center), and taking the coordinates of the intersection point as the position of the target object to be positioned in the space coordinate system.
2. The space target positioning method based on a target calibration positioning model according to claim 1, wherein the contour matching method in steps (3c), (4c) and (5b) comprises the following steps:
First step: binarizing each acquired image;
Second step: extracting a cluster of convex-hull contours from each binarized image;
Third step: screening out, from the cluster of convex-hull contours of each binarized image, the convex-hull contour with the highest similarity to the contour features of the target to be extracted, and calculating the first moment of that convex-hull contour to obtain the centroid pixel of the target to be extracted in the acquired image.
3. The space target positioning method based on a target calibration positioning model according to claim 1, wherein the symmetric method in step (4f) obtains the equations of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets through the following steps:
First step: subtracting the coordinate values of the ground-plane projection point of either three-dimensional target (with each positioning camera as the projection center) from the coordinate values of the corresponding three-dimensional target to obtain the direction vector elements of a spatial line;
Second step: constructing the symmetric equation of the spatial line from the direction vector elements of the spatial line, requiring the coordinates of the corresponding three-dimensional target to satisfy the symmetric equation, and calculating the coefficients of the symmetric equation to obtain the equation of the spatial line;
Third step: judging whether the line equations formed by the ground-plane projection points of all three-dimensional targets (with each positioning camera as the projection center) and the corresponding three-dimensional targets have been obtained; if so, the equations of the two spatial lines formed by the ground-plane projection points of the two three-dimensional targets and the corresponding three-dimensional targets, with each positioning camera as the projection center, are obtained; otherwise, executing the first step.
4. The space target positioning method based on a target calibration positioning model according to claim 1, wherein the analytic geometry method in steps (4g) and (5e) calculates the intersection point through the following steps:
First step: judging, from the equations of the two spatial lines, whether the two spatial lines lie in the same plane; if so, executing the sixth step; otherwise, executing the second step;
Second step: calculating the direction vectors of the two spatial lines from their equations, and cross-multiplying the two direction vectors to obtain the direction vector elements of the common perpendicular of the two spatial lines;
Third step: calculating, from the direction vector of the common perpendicular and the equations of the two spatial lines, the equation coefficients of the plane formed by the common perpendicular and each spatial line;
Fourth step: calculating, from the equations of the two planes formed by the common perpendicular and the two spatial lines, the equation coefficients of the intersection line of the two planes, the intersection line being the common perpendicular;
Fifth step: solving the common-perpendicular equation simultaneously with the two spatial line equations to obtain two intersection points, and taking the midpoint of the two intersection points as the intersection point of the two spatial lines;
Sixth step: solving the equations of the two spatial lines simultaneously to obtain the intersection point of the two spatial lines.
5. The space target positioning method based on a target calibration positioning model according to claim 1, wherein the two-point method in step (5d) obtains the equations of the two spatial lines formed by the optical centers of the two positioning cameras and the ground-plane projection points of the target object to be positioned (with the corresponding optical center as the projection center) through the following steps:
First step: constructing the equation of a spatial line;
Second step: substituting the coordinates of one positioning camera's optical center and the coordinates of the ground-plane projection point of the target object to be positioned (with that optical center as the projection center) into the equation of the spatial line, and solving for the coefficients of the equation;
Third step: judging whether the equations of the spatial lines formed by both positioning camera optical centers and the corresponding ground-plane projection points of the target object to be positioned have been obtained; if so, the two spatial line equations are obtained; otherwise, executing the first step.
CN201710983474.6A 2017-10-20 2017-10-20 Space target positioning method based on target calibration positioning model Active CN107977996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710983474.6A CN107977996B (en) 2017-10-20 2017-10-20 Space target positioning method based on target calibration positioning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710983474.6A CN107977996B (en) 2017-10-20 2017-10-20 Space target positioning method based on target calibration positioning model

Publications (2)

Publication Number Publication Date
CN107977996A CN107977996A (en) 2018-05-01
CN107977996B (en) 2019-12-10

Family

ID=62012582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710983474.6A Active CN107977996B (en) 2017-10-20 2017-10-20 Space target positioning method based on target calibration positioning model

Country Status (1)

Country Link
CN (1) CN107977996B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109077822B (en) * 2018-06-22 2020-11-03 雅客智慧(北京)科技有限公司 Dental implant handpiece calibration system and method based on vision measurement
CN109272551B (en) * 2018-08-03 2022-04-01 北京航空航天大学 Visual positioning method based on circular mark point layout
CN109151458B (en) * 2018-08-31 2020-10-09 歌尔股份有限公司 Test model construction method, depth of field module optical center test method and equipment
CN109242917A (en) * 2018-10-24 2019-01-18 南昌航空大学 Checkerboard-based camera resolution calibration method
CN113327291B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of remote target object based on continuous shooting
CN111381215A (en) * 2020-03-25 2020-07-07 中国科学院地质与地球物理研究所 Phase correction method and meteor position acquisition method
CN112509059B (en) * 2020-12-01 2023-04-07 合肥中科君达视界技术股份有限公司 Large-view-field binocular stereo calibration and positioning method based on coplanar targets
CN113112545B (en) * 2021-04-15 2023-03-21 西安电子科技大学 Handheld mobile printing device positioning method based on computer vision

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7313265B2 (en) * 2003-03-13 2007-12-25 Kabushiki Kaisha Toshiba Stereo calibration apparatus and stereo image monitoring apparatus using the same
CN101876532A (en) * 2010-05-25 2010-11-03 大连理工大学 Camera on-field calibration method in measuring system
CN102810205A (en) * 2012-07-09 2012-12-05 深圳泰山在线科技有限公司 Method for calibrating camera shooting or photographing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LED-tracking and ID-estimation for indoor positioning using visible light communication; Yohei Nakazawa et al.; 2014 International Conference on Indoor Positioning and Indoor Navigation; 2014-10-30; pp. 87-90 *

Also Published As

Publication number Publication date
CN107977996A (en) 2018-05-01

Similar Documents

Publication Publication Date Title
CN107977996B (en) Space target positioning method based on target calibration positioning model
CN109035320B (en) Monocular vision-based depth extraction method
CN109146980B (en) Monocular vision based optimized depth extraction and passive distance measurement method
CN110689579B (en) Rapid monocular vision pose measurement method and measurement system based on cooperative target
CN109269430B (en) Multi-standing-tree breast height diameter passive measurement method based on deep extraction model
CN109801333B (en) Volume measurement method, device and system and computing equipment
CN107481284A (en) Method, apparatus, terminal and the system of target tracking path accuracy measurement
CN103971378A (en) Three-dimensional reconstruction method of panoramic image in mixed vision system
CN109211198B (en) Intelligent target detection and measurement system and method based on trinocular vision
CN109255818B (en) Novel target and extraction method of sub-pixel level angular points thereof
CN113592957A (en) Multi-laser radar and multi-camera combined calibration method and system
CN109448043A (en) Standing tree height extracting method under plane restriction
CN111814792B (en) Feature point extraction and matching method based on RGB-D image
CN112734844B (en) Monocular 6D pose estimation method based on octahedron
Zhou et al. A measurement system based on internal cooperation of cameras in binocular vision
CN110415304B (en) Vision calibration method and system
CN114998448B (en) Multi-constraint binocular fisheye camera calibration and space point positioning method
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
CN113589263A (en) Multi-homologous sensor combined calibration method and system
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN113340201B (en) Three-dimensional measurement method based on RGBD camera
CN117372498A (en) Multi-pose bolt size measurement method based on three-dimensional point cloud
CN116630423A (en) ORB (object oriented analysis) feature-based multi-target binocular positioning method and system for micro robot
CN112634377B (en) Camera calibration method, terminal and computer readable storage medium of sweeping robot
Wan et al. A performance comparison of feature detectors for planetary rover mapping and localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
OL01 Intention to license declared