WO2018233514A1 - Pose measurement method and device, and storage medium - Google Patents

Pose measurement method and device, and storage medium

Info

Publication number
WO2018233514A1
Authority
WIPO (PCT)
Prior art keywords
dimensional, real, virtual, rotation, matrix
Prior art date
Application number
PCT/CN2018/090821
Other languages
French (fr)
Chinese (zh)
Inventor
徐坤 (Xu Kun)
周轶 (Zhou Yi)
范国田 (Fan Guotian)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2018233514A1

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G01C11/06 - Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C11/08 - Interpretation of pictures by comparison of two or more pictures of the same area, the pictures not being supported in the same relative position as when they were taken

Definitions

  • the present invention relates to the field of measurement technology, and in particular, to a pose measurement method, device, and storage medium.
  • Object pose measurement has important application value in many fields such as modern communication, national defense and aerospace.
  • the measurement of the antenna pose is closely related to the coverage of the base station; for example, during the robot assembly process, the pose of the robot determines the accuracy of the assembly. How to achieve the pose measurement of objects has always been the focus of research.
  • existing pose measurement methods usually require trained measurement personnel who take readings on the object with dedicated measurement instruments.
  • this approach is feasible for objects that can be reached and operated at close range.
  • for objects such as antennas and aircraft that must be measured from a distance, however, such methods are very dangerous: the personal safety of the measurement personnel cannot be ensured, and excessive manpower is required. It is therefore very necessary to provide a simple and feasible pose measurement method.
  • the embodiments of the invention provide a pose measurement method, device, and storage medium, solving the problem that prior-art pose measurement is labor-intensive and cannot guarantee the personal safety of the measurement personnel.
  • a pose measurement method including:
  • a posture parameter of the object is calculated according to position information of the object in the real three-dimensional environment.
  • the posture parameter of the photographing device includes rotation angle information of the photographing device; and the matching the virtual three-dimensional scene with the real three-dimensional environment according to the posture parameter of the photographing device, including:
  • the posture parameter of the photographing device includes the positioning information of the photographing device; and the matching the virtual three-dimensional scene with the real three-dimensional environment according to the posture parameter of the photographing device, further includes:
  • a panning and scaling transformation matrix of the virtual three-dimensional scene to the real three-dimensional environment is determined according to the positioning information and the position of the photographing device in the virtual three-dimensional scene.
  • the calculating a rotation transformation matrix of the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix comprises:
  • the average of the rotation angles of the respective rotation conversion matrices on the three coordinate axes is calculated, and the final rotation transformation matrix is obtained based on the average value reconstruction.
  • the calculating the attitude parameter of the object according to the location information of the object in the real three-dimensional environment comprises:
  • two pairs of same-name points are determined from the same-name straight lines calibrated in the two target photos;
  • the rotation transformation matrix and the translation-and-scaling transformation matrix are used to transform the position of the midpoint between the two image points, and the positioning value and altitude value of the object in the real three-dimensional environment are obtained from the transformed position.
  • determining the two pairs of same-name points from the same-name straight lines calibrated in the two target photos includes: the intersection of the epipolar line of an endpoint with the same-name straight line in the other target photo is the same-name point of that endpoint.
  • determining the positions of the two image points in the virtual three-dimensional scene from the two pairs of same-name points includes: forming the linear equations of the same-name point pairs into a system of linear equations, solving the system by a matrix method, and determining the positions of the image points in the virtual three-dimensional scene from the solution.
  • forming the linear equations of the same-name point pairs into a system of linear equations, solving the system by a matrix method, and determining the positions of the image points in the virtual three-dimensional scene from the solution includes: obtaining a new weighting factor for each image point from the previous solution; dividing the linear equations of each image point by the corresponding weighting factor to obtain a new system and computing a new solution; repeating this step until the new solution equals the previous solution, then repeating it once more; the final computed solution is the position of the image point.
  • the calculating the attitude parameter of the object according to the location information of the object in the real three-dimensional environment comprises:
  • the attitude angle of the object is calculated based on the position information of the straight line in the real space.
  • a pose measuring apparatus includes a camera, a processor, and a memory, wherein a pose measurement program is stored in the memory; the processor is configured to execute the program stored in the memory to implement the steps of the pose measurement method described above.
  • a computer-readable storage medium on which a pose measurement program is stored; when executed by a processor, the pose measurement program implements the steps of the pose measurement method described above.
  • the object is photographed at different positions by the photographing device, and the photos of the object are then analyzed in combination with the attitude parameters of the photographing device, so that the actual pose parameters of the object are obtained. The embodiments of the invention thereby realize remote measurement of the pose of an object; the operation is simple and convenient, measurement personnel are not required to climb, and the risk of measurement is effectively eliminated.
  • FIG. 1 is a flowchart of a pose measurement method according to an embodiment of the present invention.
  • FIGS. 2a and 2b are schematic diagrams of calculating the same-name image points in an embodiment of the present invention.
  • FIG. 3 is a flow chart of measuring an antenna with the trifocal tensor according to an embodiment of the present invention.
  • FIG. 4 is a schematic block diagram of a pose measuring device according to an embodiment of the present invention.
  • the pose measurement method includes the following steps:
  • Step 101 Acquire a plurality of photos of the objects photographed at different positions and posture parameters of the photographing device when the photographs are taken.
  • the object here is not limited to the antenna, but also an object such as a robot, an aircraft, or the like that needs to monitor the attitude parameter.
  • for example, during robot assembly the attitude parameters of the robot arm need to be tested to ensure the accuracy of the assembly; or, in a wind-tunnel test of an aircraft, the flight attitude is detected.
  • the photographing device here includes a camera, a video camera, a mobile phone, a tablet computer, and the like; the type of photographing device is not specifically limited.
  • when the photographing device is a movable device, it can acquire the attitude parameter information by itself, and the user can freely choose shooting locations and obtain photos of the object taken at different locations.
  • the attitude parameters of the photographing apparatus include positioning information (for example, global positioning system GPS information) and information such as three rotation angles (rotation angles with respect to three coordinate axes).
  • when the photographing device is a movable device, these parameters can be obtained directly from the corresponding sensors; otherwise, the rotation angles and GPS information can be pre-configured and read directly when needed.
  • Step 102 Construct a virtual three-dimensional scene according to the object photo, and match the virtual three-dimensional scene with the real three-dimensional environment according to the posture parameter of the shooting device.
  • the matching between the virtual three-dimensional scene and the real three-dimensional environment is performed according to the posture parameter of the photographing device, mainly by calculating the rotation conversion matrix according to the rotation angle information of the photographing device, and calculating the translation and zoom matrix according to the positioning information of the photographing device.
  • the transformation of a virtual 3D scene into a real 3D scene can be achieved by rotating the transformation matrix, translation and scaling transformation matrices.
  • Step 103 Calculate the attitude parameter of the object according to the position information of the object in the real three-dimensional environment.
  • in step 102 the conversion information from the virtual three-dimensional scene to the real three-dimensional environment has been acquired; based on this conversion information, the position information of the antenna in real space can be obtained by the user calibrating the desired measurement position in the object photo.
  • the attitude parameter of the object can be directly calculated according to the position information of the object in the real space.
  • the pose measurement method captures an object at different positions with a photographing device, analyzes the photos of the object in combination with the attitude parameters of the photographing device, and finally obtains the actual pose parameters of the object in the photos through mathematical algorithms.
  • the embodiments of the invention realize remote measurement of the pose of an object without requiring measurement personnel to climb; the operation is therefore simple and convenient, the risk of measurement is effectively eliminated, and the personal safety of the measurement personnel is ensured.
  • the antenna is taken as an example below, and the technical solution is introduced through the pose measurement process of the antenna.
  • Step 101 Acquire a plurality of photos of the objects photographed at different positions and posture parameters of the photographing device when the photographs are taken.
  • in this example a mobile phone is used as the measuring device: photos of the antenna are taken with the phone's camera, and the GPS position information and attitude parameters are acquired with the phone's sensors (for example, a GPS sensor, a gyroscope, an accelerometer, and an electronic compass). Several photos of the antenna are taken at different locations (the number can be anywhere from 5 to 10).
  • each time a photo is taken, the attitude parameters of the phone are acquired and recorded; the attitude parameters here include the GPS position and the rotation angles about the three coordinate axes.
  • Step 102 Construct a virtual three-dimensional scene according to the object photo, and match the virtual three-dimensional scene with the real three-dimensional environment according to the posture parameter of the shooting device.
  • the rotation transformation matrix is calculated by the three rotation angle information acquired when the shooting device is photographed, and the following steps are included:
  • Step 21 Acquire a first rotation matrix R_SfM of the photographing device from the two-dimensional image plane to the virtual three-dimensional scene by using the SfM algorithm.
  • the rotation matrix in the extrinsic parameters of each photographing device (the rotation from the two-dimensional image plane to SfM space) is obtained by the SfM algorithm; this rotation matrix is the first rotation matrix, and through it the attitude of the photographing device in the virtual three-dimensional space can be known from the two-dimensional image plane.
  • Step 22 Acquire a second rotation matrix of the photographing device from the two-dimensional image plane to the real three-dimensional environment according to the rotation angle information of the photographing device.
  • the rotation matrix of the photographing device in the real three-dimensional environment can be obtained according to the three rotation angles of the photographing device.
  • the following formula is used to calculate the attitude of the phone's own coordinate system with respect to the real three-dimensional environment (the geodetic coordinate system):
  • R_camera = R_phone · R_x(180°) · R_z(−90°)   (1)
  • R_x(180°) represents that the coordinate system first rotates 180 degrees counterclockwise around the x-axis of the phone's own coordinate system
  • R_z(−90°) represents that the coordinate system then rotates 90 degrees clockwise around the z-axis
  • R_phone is the rotation matrix formed from the three rotation angles recorded by the photographing device when shooting.
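The construction of R_camera in formula (1) can be sketched as follows. This is a minimal illustration, not the patent's own code; the function names and the active, counterclockwise-positive rotation convention are assumptions.

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the x-axis (counterclockwise for positive angles)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(deg):
    """Rotation matrix about the z-axis (counterclockwise for positive angles)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def camera_rotation(R_phone):
    """Formula (1): R_camera = R_phone @ R_x(180) @ R_z(-90).
    R_z(-90) is a 90-degree clockwise rotation about the z-axis."""
    return R_phone @ rot_x(180) @ rot_z(-90)
```

The result is always a proper rotation matrix (orthogonal, determinant 1), since it is a product of rotations.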
  • Step 23 Calculate a rotation transformation matrix of the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix.
  • Steps 21 and 22 respectively yield the rotation matrix R_SfM of the photographing device in SfM space (the virtual three-dimensional scene) and the rotation matrix R_camera of the photographing device in the real three-dimensional environment; the rotation transformation matrix R_trans is then obtained according to formula (2):
  • R_trans = R_camera · (R_SfM)^(−1)   (2)
  • since the photographing device photographs the object at different positions (multiple positions), several rotation transformation matrices can be obtained from the devices at those positions. To ensure the accuracy of the rotation transformation matrix, the embodiment of the invention averages the rotation transformation matrices obtained at different positions to obtain a final rotation transformation matrix. Specifically, the averages of the rotation angles of the respective rotation transformation matrices about the three coordinate axes are computed, and the final rotation transformation matrix is reconstructed from these averages.
  • each rotation transformation matrix is decomposed into three rotation angles R_x, R_y, R_z; the angles are averaged per axis, and the final rotation transformation matrix is reconstructed from the three average angles.
  • the conversion process of the matrix and the rotation angle here is a technique well known to those skilled in the art and will not be described in detail herein.
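The decompose-average-reconstruct procedure above might be sketched like this. The Z·Y·X Euler-angle convention is an assumption (the patent does not state which convention it uses), and all names are illustrative.

```python
import numpy as np

def matrix_to_euler_xyz(R):
    """Decompose a rotation matrix into (x, y, z) rotation angles in radians,
    assuming the convention R = Rz(c) @ Ry(b) @ Rx(a)."""
    a = np.arctan2(R[2, 1], R[2, 2])
    b = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    c = np.arctan2(R[1, 0], R[0, 0])
    return np.array([a, b, c])

def euler_xyz_to_matrix(angles):
    """Reconstruct the rotation matrix from the three angles."""
    a, b, c = angles
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def average_rotation(R_list):
    """Average several rotation transformation matrices: decompose each into
    three rotation angles, average per axis, and reconstruct."""
    angles = np.mean([matrix_to_euler_xyz(R) for R in R_list], axis=0)
    return euler_xyz_to_matrix(angles)
```

Averaging Euler angles is only well behaved when the matrices are close to each other, which is the case here since they all estimate the same scene-to-world rotation.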
  • in the real-space coordinate system, the Y axis represents the true north direction, the X axis the true east direction, and the Z axis the vertical direction from the center of the earth toward the sky. The side of a building is nearly vertical (parallel to the Z axis), so for a more accurate conversion the positions of the point cloud need to be translated and scaled.
  • since the scaling parameter is the same ratio on every coordinate axis, the value in the Z-axis direction can be ignored; only the numerical calculations on the X and Y axes are explained.
  • let the converted position of the photographing device in meters be (X_i, Y_i) (that is, the position of the photographing device in the real three-dimensional environment, obtained from its GPS information); the corresponding position in the three-dimensional reconstruction space can be obtained in the same way.
  • with n denoting the number of photos of the object, the resulting relations form an overdetermined system of equations, which can be solved by the least-squares method, QR decomposition, singular value decomposition (SVD), or other common methods in matrix theory, yielding the target translation and scaling conversion parameters.
  • alternatively, an interface can be provided for the user to input the zoom parameter: the user supplies the average moving distance between each pair of adjacent antenna photos at shooting time, and this value divided by the average distance between neighboring photographing devices in the virtual three-dimensional scene gives the scaling parameter. Normally, the user's average moving distance is between 0.3 and 1 meter.
  • the translation parameter can still be solved with the overdetermined system above. The reason for offering this alternative is the low accuracy of GPS positioning: for example, if the user takes 5 photos with 0.5 meters between adjacent photos, the total movement range is only 2.5 meters, while GPS accuracy is often only about 3 meters, so it is very likely that the user moves but the GPS readings cluster too densely.
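The least-squares solve for the translation and scaling parameters might look like the sketch below. The exact system is not reproduced in this text, so the per-axis model s·x + t = X (one common scale s, translations tx and ty) is an assumption, and the function name is illustrative.

```python
import numpy as np

def fit_translation_scale(sfm_xy, real_xy):
    """Solve an overdetermined system for a common scale s and translation
    (tx, ty) that map SfM camera positions to real (metric) positions:
        s * x_i + tx = X_i,   s * y_i + ty = Y_i
    Solved in the least-squares sense with np.linalg.lstsq."""
    sfm_xy = np.asarray(sfm_xy, float)
    real_xy = np.asarray(real_xy, float)
    n = len(sfm_xy)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0] = sfm_xy[:, 0]; A[0::2, 1] = 1.0   # s*x + tx = X rows
    A[1::2, 0] = sfm_xy[:, 1]; A[1::2, 2] = 1.0   # s*y + ty = Y rows
    b[0::2] = real_xy[:, 0]
    b[1::2] = real_xy[:, 1]
    (s, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, np.array([tx, ty])
```

With 5 to 10 camera positions the system has 10 to 20 equations for 3 unknowns, which is why the text calls it overdetermined.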
  • Step 103 Calculate the attitude parameter of the object according to the position information of the object in the real three-dimensional environment.
  • the antenna attitude parameters include the attitude angle of the antenna, as well as GPS and altitude (height) information. We first introduce the acquisition of the GPS and altitude information.
  • the internal and external parameters of the shooting device can be obtained.
  • the same-name image points representing the antenna position are used to calculate the three-dimensional spatial position of the antenna target, which includes the following:
  • two pairs of same-name points are determined from the same-name straight lines calibrated in the two target photos;
  • the rotation transformation matrix and the translation-and-scaling transformation matrix are used to transform the position of the midpoint between the two image points, and the GPS and altitude values of the antenna in the real three-dimensional environment are obtained from the transformed position.
  • after taking pictures of the antenna, the user selects two of the captured photos as the target photos and calibrates in them the same-name straight lines indicating the position of the antenna.
  • the final GPS and altitude values are determined from the information of these same-name lines.
  • the same-name image points are obtained from the input same-name straight-line pair and the binocular epipolar constraint, as follows:
  • the epipolar lines of the two endpoints of the same-name line in one of the target photos are calculated according to the binocular epipolar constraint principle;
  • the intersection of each epipolar line with the same-name line in the other target photo is the same-name point of the corresponding endpoint.
  • the user selects two photos as the target photos, selects the antenna target in each target photo, and marks in each the same-name straight line representing the antenna.
  • according to the binocular epipolar constraint principle, the epipolar line l of endpoint a of the same-name line in FIG. 2a is calculated; in FIG. 2b this epipolar line intersects the same-name straight line l' at the point a', which is the same-name point of a. In this way a same-name point pair (and likewise the same-name straight-line pair) is obtained.
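The epipolar construction of a same-name point can be sketched as follows, assuming a known fundamental matrix F between the two target photos (obtainable from the SfM extrinsics); function names and the homogeneous-line representation are illustrative.

```python
import numpy as np

def epipolar_line(F, point):
    """Epipolar line l' = F @ x in the second photo for point x = (u, v) in
    the first, as homogeneous line coefficients (a, b, c): a*u + b*v + c = 0."""
    x = np.array([point[0], point[1], 1.0])
    return F @ x

def intersect_lines(l1, l2):
    """Intersection of two homogeneous 2D lines is their cross product.
    Assumes the lines are not parallel (third coordinate nonzero)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def same_name_point(F, endpoint, line_in_other):
    """Intersect the epipolar line of an endpoint with the calibrated
    same-name straight line in the other target photo."""
    return intersect_lines(epipolar_line(F, endpoint), line_in_other)
```

For each of the two endpoints of the calibrated line, this yields one same-name point pair, giving the two pairs used for triangulation.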
  • compared with fully automatic matching, this acquisition method slightly increases the difficulty of the user's operation; however, current line-matching algorithms cannot reliably match the corresponding straight lines, while the method of the embodiment of the present invention ensures the accuracy of the line matching and the precision of the same-name points.
  • the embodiment of the present invention is implemented as follows:
  • the linear equations of the same-name point pair are formed into a system of linear equations, and the system is solved by a matrix method;
  • the positions of the image points in the virtual three-dimensional scene are determined from the solution result.
  • each image point here corresponds to one shooting view, that is, one of the two target photos (taken at different shooting positions) mentioned above.
  • X is the position coordinate of the image point in three-dimensional space, the quantity to be calculated; the same-name image points and the projection matrices are needed to compute it.
  • writing the constraints as AX = 0, A is a 4 x 4 matrix. Normally, due to the presence of noise, this equation cannot hold exactly; instead, one finds the X with ||X|| = 1 that minimizes ||AX||.
  • the optimal solution of this equation is the eigenvector corresponding to the minimum eigenvalue of the matrix A^T A.
  • in practice, general methods such as SVD or QR decomposition can be used to solve for X.
  • the solved homogeneous vector X = (a, b, c, d) is normalized to X_1 = (a/d, b/d, c/d, 1), giving the final target coordinate T = (a/d, b/d, c/d).
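The linear solve described above corresponds to standard DLT triangulation; a minimal two-view sketch follows, assuming the projection matrices P1, P2 and image points x1, x2 are given (names are illustrative).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: stack two rows per view, u*p3 - p1 and
    v*p3 - p2, into a 4x4 matrix A; the homogeneous solution X is the
    eigenvector of A^T A with the smallest eigenvalue, i.e. the last
    right singular vector of A."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous solution (a, b, c, d)
    return X[:3] / X[3]        # inhomogeneous target coordinate T
```

The division by the last component reproduces the normalization from (a, b, c, d) to T = (a/d, b/d, c/d) described above.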
  • one problem with the linear method is that the minimized quantity ||AX|| is only an algebraic error: the spatial point X computed this way does not exactly satisfy the linear equations, and the residual has no direct geometric meaning. What one actually wants to minimize is the distance between the real image point x and the projection of X; this amounts to dividing each linear equation by a weighting factor, so that the final minimized error is meaningful in the photo.
  • the embodiment of the present invention therefore proceeds iteratively: a new weighting factor for each view is obtained from the previous solution result, the linear equations of each view are divided by the corresponding weighting factors to obtain a new system, and the new system is solved for a new solution. This step is repeated until the new solution equals the previous one; the system is then solved once more with the new weighting factors, and the final solution is computed.
  • dividing by the optimal weighting factors of the respective views, obtained by this iteration, yields the position of the final target space point; with this method the minimized equation conforms to the coordinate error in the photographic sense.
  • the spatial position calculated by this linear iterative method has high accuracy; convergence is generally achieved in a small number of iterations, and the implementation is simple.
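The iterative reweighting can be sketched as below for two views. The choice of weighting factor w = p3 · X (the point's depth in each view) is the standard one for this scheme and is an assumption here, since the patent does not spell it out; the point is assumed to lie in front of both cameras.

```python
import numpy as np

def triangulate_iterative(P1, P2, x1, x2, max_iter=10):
    """Iteratively reweighted linear triangulation (two views): after each
    linear solve, divide each view's two equations by w = p3 . X, the
    point's depth in that view, so the minimized algebraic error
    approximates the reprojection error in the photo."""
    w1 = w2 = 1.0
    X = None
    for _ in range(max_iter):
        A = np.vstack([
            (x1[0] * P1[2] - P1[0]) / w1,
            (x1[1] * P1[2] - P1[1]) / w1,
            (x2[0] * P2[2] - P2[0]) / w2,
            (x2[1] * P2[2] - P2[1]) / w2,
        ])
        _, _, Vt = np.linalg.svd(A)
        X_new = Vt[-1]
        if X_new[3] < 0:              # fix the sign ambiguity of the SVD
            X_new = -X_new
        if X is not None and np.allclose(X_new, X):
            break                     # new solution equals the previous one
        X = X_new
        w1, w2 = P1[2] @ X, P2[2] @ X # new weighting factors from this solution
    return X[:3] / X[3]
```

On noise-free input the weights stabilize after one pass, matching the text's observation that few iterations are needed.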
  • after the spatial positions of the two image points are obtained, the GPS position and altitude in the real three-dimensional scene are calculated from the position coordinates of the midpoint of the line connecting the two points, thereby representing the GPS position and altitude of the antenna.
  • according to the translation and scaling matrix obtained in step 102, the rotated coordinates are converted to the real-space (X, Y);
  • the GPS value is then obtained by applying the back-projection transformation to (X, Y).
  • the target's altitude is:
  • to calculate the attitude angle of the antenna, the spatial position coordinates of the two image points may be transformed using the rotation transformation matrix and the translation-and-scaling matrix;
  • this yields the coordinates of the two points directly in the real three-dimensional environment, from which the attitude angle of the antenna can be calculated directly.
  • alternatively, coordinate points representing the spatial straight line are re-acquired by the trifocal tensor method, and the attitude angle of the antenna is calculated from these coordinate points. Specifically: the positions of the same calibrated straight line in three target photos are determined; the position information of the straight line in the real three-dimensional environment is calculated by the trifocal tensor algorithm from these positions and the position information of the photographing devices; and the attitude angle of the antenna is calculated from the position information of the straight line in real space.
  • let the projection matrices of the three photographing devices centered at {C_0, C_1, C_2} be P_1, P_2, P_3
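As one hypothetical illustration of computing an attitude angle from a straight line's position in real space, the azimuth and downtilt can be derived from the line's direction vector in the stated coordinate system (Y = true north, X = true east, Z = up). The function name and the exact angle definitions are assumptions, not taken from the patent.

```python
import numpy as np

def attitude_angles(p_start, p_end):
    """Azimuth (degrees clockwise from true north) and downtilt (degrees
    below horizontal) of the line from p_start to p_end, given in the
    real coordinate system: X = east, Y = north, Z = up."""
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    azimuth = np.degrees(np.arctan2(d[0], d[1])) % 360.0    # east over north
    downtilt = np.degrees(np.arctan2(-d[2], np.hypot(d[0], d[1])))
    return azimuth, downtilt
```

For an antenna, the line endpoints would be the two reconstructed spatial points of the calibrated straight line after the rotation, translation, and scaling transformations.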
  • the embodiment of the present invention further provides a pose measuring device for implementing the above-described pose measurement method.
  • the device includes a processor 42 and a memory 41 storing instructions executable by the processor 42, wherein:
  • the processor 42 may be a general-purpose processor, such as a central processing unit (CPU), or may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the memory 41 is configured to store the program code and transmit the program code to the CPU.
  • the memory 41 may include volatile memory, such as random-access memory (RAM); the memory 41 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 41 may also include a combination of the above types of memory.
  • a pose measuring device includes a camera, a processor, and a memory, wherein a pose measurement program is stored in the memory; the processor is configured to execute the pose measurement program stored in the memory to implement the following:
  • the attitude parameter of the object is calculated according to the position information of the object in the real three-dimensional environment.
  • the posture parameter of the photographing device includes rotation angle information of the photographing device; and matching the virtual three-dimensional scene with the real three-dimensional environment according to the posture parameter of the photographing device, including:
  • a rotation transformation matrix of the virtual three-dimensional scene to the real three-dimensional environment is calculated according to the first rotation matrix and the second rotation matrix.
  • the posture parameter of the shooting device includes positioning information of the shooting device; matching the virtual three-dimensional scene with the real three-dimensional environment according to the posture parameter of the shooting device, further includes:
  • the translation and scaling transformation matrix of the virtual three-dimensional scene to the real three-dimensional environment is determined according to the positioning information and the position of the photographing device in the virtual three-dimensional scene.
  • calculating a rotation transformation matrix of the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix including:
  • the average of the rotation angles of the respective rotation conversion matrices in the three coordinate axes is calculated, and the final rotation transformation matrix is obtained based on the average value reconstruction.
  • calculating the attitude parameter of the object according to the position information of the object in the real three-dimensional environment including:
  • two pairs of same-name points are determined from the same-name straight lines calibrated in the two target photos;
  • the rotation transformation matrix and the translation-and-scaling transformation matrix are used to transform the position of the midpoint between the two image points, and the position value and altitude value of the object in the real three-dimensional environment are obtained from the transformed position.
  • determining the two pairs of same-name points from the same-name straight lines calibrated in the two target photos includes: the intersection of the epipolar line of an endpoint with the same-name line in the other target photo is the same-name point of that endpoint.
  • determining the positions of the two image points in the virtual three-dimensional scene from the two pairs of same-name points includes: forming the linear equations of the same-name point pairs into a system of linear equations, solving the system by a matrix method, and determining the positions of the image points in the virtual three-dimensional scene from the solution.
  • forming the linear equations of the same-name point pairs into a system of linear equations, solving the system by a matrix method, and determining the positions of the image points in the virtual three-dimensional scene from the solution includes: obtaining a new weighting factor for each image point from the previous solution; dividing the linear equations of each image point by the corresponding weighting factor to obtain a new system and computing a new solution; repeating this step until the new solution equals the previous solution, then repeating it once more; the final computed solution is the position of the image point.
  • calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment includes:
  • calculating the attitude angle of the object from the position information of the straight line in real space.
  • an embodiment of the invention further provides a computer readable storage medium.
  • the computer readable storage medium stores one or more programs.
  • the computer readable storage medium may include volatile memory, such as random access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid-state drive; it may also include a combination of the above categories of memory.
  • the one or more programs in the computer readable storage medium may be executed by one or more processors to implement the pose measurement methods provided in the method embodiments.
  • embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • the computer program instructions may also be stored in a computer readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising instruction means.
  • the instruction means implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing.
  • the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the actual pose parameter information of an object can be obtained by photographing the object from different positions with a photographing device and analyzing the photographs in combination with the attitude parameters of the photographing device.
  • remote measurement of the pose of the object is thus realized; the method is simple and convenient to operate and requires no climbing by measurement personnel, effectively eliminating the danger of the measurement.

Abstract

A pose measurement method and device, and a storage medium. The pose measurement method comprises: obtaining photos of an object photographed at different positions and the attitude parameters of the photographing device when the photos were taken (step 101); constructing a virtual three-dimensional scene from the photos of the object, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device (step 102); and calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment (step 103).

Description

Pose Measurement Method, Device and Storage Medium

Cross-Reference to Related Applications

This application is based on, and claims priority to, Chinese Patent Application No. 201710475557.4, filed on June 21, 2017, the contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the field of measurement technology, and in particular to a pose measurement method, device, and storage medium.

Background

Object pose measurement has important application value in many fields such as modern communications, national defense, and aerospace. In communication systems, for example, the measured pose of an antenna is closely related to the coverage of its base station; in robot assembly, the pose of the robot arm determines the accuracy of the assembly. How to measure the pose of an object has therefore long been a focus of research.

At present, pose measurement methods usually require measurement personnel to mount measuring instruments on the object being measured. This is feasible for objects that can be operated on at close range, but for objects that must be measured from a distance, such as antennas and aircraft, it is very dangerous: the personal safety of the measurement personnel cannot be guaranteed, and excessive manpower is consumed. It is therefore necessary to provide a simple and practical pose measurement method.
Summary

Embodiments of the present invention provide a pose measurement method, device, and storage medium, to solve the problems that pose measurement methods in the prior art are labor-intensive and cannot guarantee the personal safety of measurement personnel.

The embodiments of the present invention adopt the following technical solutions.

According to an aspect of the embodiments of the present invention, a pose measurement method is provided, including:

obtaining a plurality of photos of an object taken at different positions, and the attitude parameters of the photographing device when each photo was taken;

constructing a virtual three-dimensional scene from the photos of the object, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device; and

calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment.
In an optional solution, the attitude parameters of the photographing device include rotation angle information of the photographing device, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device includes:

obtaining a first rotation matrix from the two-dimensional image plane to the virtual three-dimensional scene;

obtaining a second rotation matrix from the two-dimensional image plane to the real three-dimensional environment according to the rotation angle information; and

calculating a rotation transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix.

In an optional solution, the attitude parameters of the photographing device include positioning information of the photographing device, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device further includes:

determining a translation and scaling transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the positioning information and the positions of the photographing device in the virtual three-dimensional scene.
In an optional solution, calculating the rotation transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix includes:

obtaining the rotation transformation matrices computed for the photographing device at the different positions; and

calculating the averages of the rotation angles of the individual rotation transformation matrices about the three coordinate axes, and reconstructing the final rotation transformation matrix from these averages.

In an optional solution, calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment includes:

determining two corresponding image point pairs according to corresponding straight lines marked in two target photos;

determining the positions of two image points in the virtual three-dimensional scene based on the two corresponding image point pairs; and

transforming the position of the midpoint between the two image points using the rotation transformation matrix and the translation and scaling transformation matrix, and obtaining the positioning value and altitude value of the object in the real three-dimensional environment from the transformed position.
In an optional solution, determining the two corresponding image point pairs according to the corresponding straight lines marked in the two target photos includes:

after the corresponding straight lines marked in the target photos are detected, calculating, according to the epipolar constraint of binocular vision, the epipolar lines at the two endpoints of the line in one of the target photos; and

taking the intersection of each epipolar line with the corresponding straight line in the other target photo as the corresponding image point of that endpoint.
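As an illustration, given the fundamental matrix F between the two photos (so that l' = F·x is the epipolar line in the second photo of a point x marked in the first), the corresponding image point is a two-line intersection in homogeneous coordinates. The following is a minimal NumPy sketch, not part of the claimed method; F and the marked line are assumed to be available from the reconstruction and from the user's marking:

```python
import numpy as np

def corresponding_point(F, endpoint, line2):
    # Epipolar line in the second photo of `endpoint` (homogeneous
    # pixel coordinates in the first photo): l' = F @ x.
    epiline = F @ np.asarray(endpoint, float)
    # The intersection of two homogeneous 2-D lines is their cross
    # product; normalize so the last coordinate is 1.
    p = np.cross(epiline, np.asarray(line2, float))
    return p / p[2]
```

Representing both the epipolar line and the marked line in homogeneous form reduces the intersection to a single cross product, with no special cases for vertical lines.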
In an optional solution, determining the positions of the two image points in the virtual three-dimensional scene based on the two corresponding image point pairs includes:

constructing, according to the projection equations from three-dimensional space to the two-dimensional image plane, a linear equation for each image point of a corresponding image point pair; and

forming the linear equations of a corresponding image point pair into a linear equation system, solving the system by a matrix method, and determining the position of the image point in the virtual three-dimensional scene from the solution.

In an optional solution, forming the linear equations of a corresponding image point pair into a linear equation system, solving the system by a matrix method, and determining the position of the image point in the virtual three-dimensional scene from the solution includes:

obtaining a new weighting factor for each image point from the previous solution, dividing the linear equations of each image point by the corresponding weighting factor to obtain a new linear equation system, and computing a new solution; this step is repeated until the new solution equals the previous solution, after which it is performed once more, and the final solution thus computed is the position of the image point.
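The iteration described above resembles iteratively reweighted linear triangulation. The following is an illustrative sketch under assumptions this passage does not state: the linear equations of an image point (u, v) are taken to be the standard DLT rows u·P₃ − P₁ and v·P₃ − P₂ of its 3×4 projection matrix P, and the weighting factor of an image point is taken to be its projective depth w = P₃·X under the previous solution:

```python
import numpy as np

def triangulate_irls(Ps, pts, n_iter=10, tol=1e-10):
    """Iteratively reweighted linear triangulation (illustrative).

    Ps:  list of 3x4 projection matrices, one per photo
    pts: list of (u, v) image points, one per photo
    """
    X = None
    w = np.ones(len(Ps))  # initial weighting factors
    for _ in range(n_iter):
        rows = []
        for P, (u, v), wi in zip(Ps, pts, w):
            # Linear equations of this image point, divided by its
            # current weighting factor.
            rows.append((u * P[2] - P[0]) / wi)
            rows.append((v * P[2] - P[1]) / wi)
        # Homogeneous least-squares solution: the right singular
        # vector of the stacked system with smallest singular value.
        _, _, Vt = np.linalg.svd(np.array(rows))
        X_new = Vt[-1] / Vt[-1][3]
        if X is not None and np.allclose(X_new, X, atol=tol):
            return X_new[:3]  # solution stopped changing
        X = X_new
        # New weighting factor of each image point: its depth under
        # the previous solution.
        w = np.array([P[2] @ X for P in Ps])
    return X[:3]
```

In the noiseless case the iteration converges within a few passes; performing the step once more after convergence, as the text specifies, leaves a converged solution unchanged, so the sketch simply returns it.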
In an optional solution, calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment includes:

determining the positions of the same straight line marked in three target photos;

calculating the position information of the straight line in the real three-dimensional environment by a trifocal tensor algorithm, according to those positions and the position information of the photographing device; and

calculating the attitude angle of the object from the position information of the straight line in real space.

According to an aspect of the embodiments of the present invention, a pose measurement device is provided, including a camera, a processor, and a memory, where the memory stores a pose measurement program, and the processor is configured to execute the program stored in the memory to implement the steps of the pose measurement method described above.

According to an aspect of the embodiments of the present invention, a computer readable storage medium is provided, on which a pose measurement program is stored; when executed by a processor, the pose measurement program implements the steps of the pose measurement method described above.

The beneficial effects of the embodiments of the present invention are as follows:

In the embodiments of the present invention, an object is photographed from different positions by a photographing device, and the photographs are then analyzed in combination with the attitude parameters of the photographing device to obtain the actual pose parameter information of the object. The embodiments of the present invention thus realize remote measurement of the pose of an object; the operation is simple and convenient, no climbing by measurement personnel is required, and the danger of the measurement is effectively eliminated.

The above description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and that the above and other objects, features, and advantages of the present invention may be more readily apparent, specific embodiments of the invention are set forth below.
Brief Description of the Drawings

In order to illustrate the embodiments of the present invention or the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart of a pose measurement method according to an embodiment of the present invention;

FIG. 2a and FIG. 2b are schematic diagrams of result images for calculating corresponding image points in a specific embodiment of the present invention;

FIG. 3 is a flowchart of measuring an antenna with the trifocal tensor in a specific embodiment of the present invention; and

FIG. 4 is a schematic block diagram of a pose measurement device according to an embodiment of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
Method Embodiment

As shown in FIG. 1, the pose measurement method provided by an embodiment of the present invention includes the following steps.

Step 101: obtain a plurality of photos of an object taken at different positions, and the attitude parameters of the photographing device when the photos were taken.

In this step, the object is not limited to an antenna; it may also be a robot, an aircraft, or any other object whose attitude parameters need to be monitored. For example, during assembly, the attitude parameters of a robot arm need to be checked to ensure the accuracy of the assembly; or, during a wind test of an aircraft, its flight attitude is monitored.

The photographing device here includes cameras, video cameras, mobile phones, tablet computers, and other devices with a photographing function; the type of photographing device is not specifically limited. When the photographing device is a mobile device, it can acquire the attitude parameter information by itself; when shooting, the user can freely choose the shooting positions and obtain photos of the object taken at different positions. Alternatively, fixed cameras can be set up at different positions, and the photos from the corresponding positions are collected when needed.

The attitude parameters of the photographing device include positioning information (for example, Global Positioning System (GPS) information) and three rotation angles (rotation angles about the three coordinate axes). When the photographing device is a mobile device, these parameters can be acquired directly from the corresponding sensors. For shooting with cameras at fixed positions, the rotation angles and GPS information can be configured in advance and read directly when needed.
Step 102: construct a virtual three-dimensional scene from the photos of the object, and match the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device.

In this step, constructing the virtual three-dimensional scene from the photos of the object requires feature point extraction, matching, and matrix transformation on the photos acquired at the different positions to build a sparse point cloud; a Structure from Motion (SfM) algorithm is then used to calculate the three-dimensional coordinates of the feature points and construct a simple, sparse virtual three-dimensional scene. The process of constructing a virtual three-dimensional scene with an SfM algorithm is well known to those skilled in the art and is not described further here.

Matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device mainly consists of calculating a rotation transformation matrix from the rotation angle information of the photographing device, and calculating a translation and scaling matrix from the positioning information of the photographing device. The rotation transformation matrix together with the translation and scaling transformation matrix converts the virtual three-dimensional scene into the real three-dimensional environment.

Step 103: calculate the attitude parameters of the object according to the position information of the object in the real three-dimensional environment.

In step 102, the transformation from the virtual three-dimensional scene to the real three-dimensional environment has been obtained. With this transformation, once the user marks the position to be measured in the photos of the object, the position information of the antenna in real space can be obtained, and the attitude parameters of the object can then be calculated directly from its position in real space.

From the above, the pose measurement method provided by the embodiments of the present invention photographs an object at different positions with a photographing device, analyzes the photographs in combination with the attitude parameters of the photographing device, and finally obtains the actual pose parameters of the photographed object by mathematical algorithms. The embodiments of the present invention realize remote measurement of the pose of an object without requiring measurement personnel to climb; the operation is simple and convenient, the danger of the measurement is effectively eliminated, and the personal safety of the measurement personnel is ensured.
The technical content of the present invention is further described in detail below with reference to a specific embodiment. In this embodiment, an antenna is taken as an example, and the technical solution is introduced through the pose measurement process of the antenna.

Step 101: obtain a plurality of photos of the object taken at different positions, and the attitude parameters of the photographing device when the photos were taken.

In this embodiment, a mobile phone is used as the measuring device. The phone's camera takes the photos of the antenna, while the phone's sensors (for example, a GPS sensor, gyroscope, accelerometer, and electronic compass) acquire the GPS position information and attitude parameters. Several photos of the antenna (any number from 5 to 10) are taken with the phone at different positions. When each photo is taken, the attitude parameters of the phone are acquired and recorded; here they include the GPS position and the rotation angles about the three coordinate axes.

Step 102: construct a virtual three-dimensional scene from the photos of the object, and match the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device.

As mentioned above, a three-dimensional reconstruction of the scene is obtained with the SfM algorithm. After the three-dimensional sparse point cloud is constructed, it must be rotated so that the attitude, orientation, and distribution of the point cloud are as consistent as possible with the real three-dimensional environment.
The rotation transformation matrix is calculated from the three rotation angles acquired by the photographing device at shooting time, in the following steps.

Step 21: obtain, with the SfM algorithm, the first rotation matrix R_SfM of the photographing device from the two-dimensional image plane to the virtual three-dimensional scene. During the reconstruction of the virtual three-dimensional scene, the SfM algorithm yields the rotation matrix in the extrinsic parameters of each photographing device (the rotation matrix from the two-dimensional image plane to the SfM space). This rotation matrix is the first rotation matrix, and from it the attitude of the photographing device in the virtual three-dimensional space can be known from the two-dimensional image plane.

Step 22: obtain the second rotation matrix of the photographing device from the two-dimensional image plane to the real three-dimensional environment according to the rotation angle information of the photographing device.

In this step, the rotation matrix of the photographing device in the real three-dimensional environment can be obtained from its three rotation angles by the intrinsic rotation composition theorem. Here, the attitude of the phone's own coordinate system with respect to the real three-dimensional environment (the geodetic coordinate system) is calculated with the following formula:

R_camera = R_phone · R_x(180) · R_z(-90)  (1)

where R_x(180) means the coordinate system is first rotated counterclockwise by 180 degrees about the x axis of the phone's own coordinate system, R_z(-90) means the coordinate system is then rotated clockwise by 90 degrees about the z axis, and R_phone is built from the three rotation angles recorded by the photographing device at shooting time.

Step 23: calculate the rotation transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix.

Steps 21 and 22 yield, respectively, the rotation matrix R_SfM of the photographing device in the SfM space (the virtual three-dimensional scene) and the rotation matrix R_camera of the photographing device to the real three-dimensional environment. The rotation transformation matrix R_trans is then obtained from formula (2):

R_trans = R_camera · (R_SfM)^(-1)  (2)
In an optional embodiment, since the photographing device photographs the object at a plurality of different positions, several rotation transformation matrices are obtained, one per position. To ensure the accuracy of the rotation transformation matrix, in the embodiment of the present invention the final rotation transformation matrix is obtained from the mean of the rotation transformation matrices obtained at the different positions. Specifically, the averages of the rotation angles of the individual rotation transformation matrices about the three coordinate axes are calculated, and the final rotation transformation matrix is reconstructed from these averages.

In this embodiment, each rotation transformation matrix is decomposed into three rotation angles R_x, R_y, R_z; these angles are averaged separately for the three axes, and the final rotation transformation matrix is then reconstructed from the three averaged angles. The conversion between a rotation matrix and rotation angles is well known to those skilled in the art and is not described in detail here. After the rotation transformation, the coordinate system of the real space is such that the Y axis points due north, the X axis points due east, and the Z axis points vertically from the center of the earth toward the sky. It can be seen that the sides of the building are nearly vertical, so a fairly accurate transformation has been achieved.

It can be seen that decomposing the rotation transformation matrices into rotation angles and then rebuilding the final rotation transformation matrix from the averaged angles effectively reduces the final calculation error and improves the accuracy of the subsequent attitude parameters.
In an optional embodiment, in order to make the size and position of the rotated sparse point cloud consistent with the real world and to preserve GPS accuracy, the position of the point cloud must also be translated and scaled. When the conversion parameters are calculated, because the scaling is the same along all coordinate axes, the Z-axis values can be ignored; only the calculation for the X-axis and Y-axis values is described here.

Let the position of the photographing device after conversion, in meters, be (X_i, Y_i) (that is, the position of the photographing device in the real three-dimensional environment, obtainable from its GPS information); the position of the photographing device in the three-dimensional reconstruction space, (x_i, y_i), is likewise available. With scaling factor S and translation vector (T_x, T_y), the following linear system can be established:

X_i = S·x_i + T_x,  Y_i = S·y_i + T_y,  i = 1, 2, …, n  (3)

where n is the number of photos of the object. The system above is overdetermined and can be solved by common matrix methods such as least squares, QR decomposition, or singular value decomposition (SVD), yielding the target translation and scaling conversion parameters. The solution of such equations is well known to those skilled in the art and is not described further here.
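Written out as code, the overdetermined system (3) can be stacked and solved by least squares as follows (a minimal NumPy sketch; the function and variable names are illustrative):

```python
import numpy as np

def solve_scale_translation(sfm_xy, world_xy):
    """Solve X_i = S*x_i + Tx, Y_i = S*y_i + Ty for (S, Tx, Ty).

    sfm_xy:   (n, 2) camera positions in the reconstruction space
    world_xy: (n, 2) camera positions in meters (from GPS)
    """
    sfm_xy = np.asarray(sfm_xy, float)
    world_xy = np.asarray(world_xy, float)
    n = len(sfm_xy)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0] = sfm_xy[:, 0]   # S coefficient in the X equations
    A[0::2, 1] = 1.0            # Tx
    A[1::2, 0] = sfm_xy[:, 1]   # S coefficient in the Y equations
    A[1::2, 2] = 1.0            # Ty
    b[0::2] = world_xy[:, 0]
    b[1::2] = world_xy[:, 1]
    (S, Tx, Ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return S, Tx, Ty
```

With n ≥ 2 camera positions the system already has more equations than the three unknowns, so the least-squares solution averages out part of the GPS noise.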
Since the best GPS accuracy available when photographing the antenna is only about 3 m, each individual GPS reading is quite imprecise. By combining all of the obtained GPS information to re-determine the GPS position of the calculation target as described above, a much more accurate GPS value can be obtained.

In addition to the above method of calculating the translation and scaling parameters, an interface may be provided for the user to input the scaling parameter: the user supplies the average moving distance between every two antenna photographs at shooting time, and this value is divided by the average distance between neighbouring photographing devices in the virtual three-dimensional scene to obtain the scaling parameter. Typically, the user's average moving distance is between 0.3 m and 1 m. Once the scaling parameter is obtained, the translation parameters can again be solved with the equations above. The reason for this method is the low positioning accuracy of GPS: for example, if the user takes 5 photographs with 0.5 m between adjacent photographs, the total movement stays within 2.5 m, yet GPS accuracy is often only about 3 m, so it can easily happen that the user moves while the GPS readings remain clustered too densely.
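The user-supplied-step variant can be sketched as follows; the helper name is hypothetical, and `user_avg_step_m` is the 0.3–1 m value supplied by the user through the interface.

```python
import numpy as np

def scale_from_step_length(cam_positions, user_avg_step_m):
    """Scale = real-world step length / average distance between
    consecutive camera centres in the reconstruction."""
    cam_positions = np.asarray(cam_positions, float)
    steps = np.linalg.norm(np.diff(cam_positions, axis=0), axis=1)
    return user_avg_step_m / steps.mean()
```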
Step 103: calculate the attitude parameters of the object according to the position information of the object in the real three-dimensional environment.

Here the antenna attitude parameters include the attitude angles of the antenna as well as its GPS and altitude (mounting height) information. The acquisition of the GPS and altitude (mounting height) information is described first.

After the SFM algorithm has been run, the internal and external parameters of the photographing devices are available. On this premise, the corresponding image points (the "points of the same name") representing the antenna position must be obtained in order to calculate the three-dimensional spatial position of the antenna target, specifically as follows:

determining two corresponding point pairs according to the corresponding straight lines calibrated in two target photographs;

determining the positions of two spatial points in the virtual three-dimensional scene based on the two corresponding point pairs;

transforming the position of the midpoint of the line connecting the two spatial points using the rotation transformation matrix and the translation-and-scaling transformation matrix, and obtaining the GPS positioning value and altitude of the antenna in the real three-dimensional environment from the transformed position.

Here, after the photographs of the antenna have been taken, the user selects two of the captured photographs as target photographs and calibrates in each the corresponding straight line that indicates the antenna position. The final GPS and altitude values are determined from the information of these corresponding lines.
The corresponding points are obtained from the user-input pair of corresponding straight lines using the epipolar constraint of binocular vision, as follows:

after the corresponding straight lines calibrated in the target photographs have been detected, calculating, according to the epipolar constraint of binocular vision, the epipolar lines at the two endpoints of the corresponding line in one of the target photographs;

the intersection of each epipolar line with the corresponding line in the other target photograph is then the corresponding point of that endpoint.
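In homogeneous coordinates this intersection is a one-line computation: the epipolar line of a point a in the second image is F·a, where F is the fundamental matrix between the two views (assumed derivable from the SFM camera parameters), and two homogeneous lines intersect at their cross product. A sketch with illustrative names:

```python
import numpy as np

def corresponding_point(F, a, line2):
    """a: homogeneous point (3,) in image 1; line2: homogeneous line (3,)
    marked by the user in image 2.  Returns the corresponding point in
    image 2 as (u, v, 1)."""
    l = F @ np.asarray(a, float)           # epipolar line of a in image 2
    p = np.cross(l, np.asarray(line2, float))  # intersection of two lines
    return p / p[2]                        # de-homogenise
```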
For example, as shown in Fig. 2a and Fig. 2b, the user selects two photographs as target photographs, frames the antenna target in each, and marks within each frame the corresponding straight lines (l, l′) representing the antenna. According to the epipolar constraint of binocular vision, the epipolar line l_a at the endpoint a of the line l in Fig. 2a is calculated; this epipolar line intersects the corresponding line l′ in Fig. 2b at the point a′, which yields a corresponding point pair (a, a′). A corresponding point pair for the other endpoint of the line can be obtained in the same way.

From the above it can be seen that, although this method of obtaining corresponding points increases the user's operational burden, current automatic line-matching algorithms cannot match corresponding lines with sufficient precision; the method of this embodiment of the present invention therefore guarantees the accuracy of the line matching and the precision of the obtained corresponding points.
After the two corresponding point pairs in the two selected target photographs have been obtained, the three-dimensional spatial position coordinates of the corresponding two spatial points must be determined from the positions of the point pairs in the photographs. Optionally, this embodiment of the present invention implements this as follows:

constructing, according to the projection equation from three-dimensional space to the two-dimensional image plane, the linear equations corresponding to each point of a corresponding point pair;

assembling the linear equations of the corresponding point pair into a system of linear equations, solving the system with matrix methods, and determining the positions of the spatial points in the virtual three-dimensional scene from the solution.
Each image point here corresponds to one shooting viewpoint, i.e. the viewpoints of the two target photographs (taken from different positions) mentioned above. Specifically, the projection equation from three-dimensional space to the two-dimensional image plane is x = PX, where x is a homogeneous coordinate with x = w(u, v, 1), (u, v) is the coordinate point on the photograph (two-dimensional image-plane coordinates), and w is the projection depth, i.e. the scaling factor. X is the position coordinate in three-dimensional space of the spatial point to be calculated; to allow computation with matrices it is usually expressed in homogeneous coordinates, i.e. as the four-dimensional vector X = (x, y, z, 1). P is the projection matrix (obtainable from the SFM algorithm); denoting its i-th row by p_i^T, the projection equation can be rewritten as:

w · (u, v, 1)^T = (p_1^T·X, p_2^T·X, p_3^T·X)^T

Eliminating the parameter w from this projection equation yields the following two linear equations:

u · (p_3^T·X) = p_1^T·X
v · (p_3^T·X) = p_2^T·X      (5)
To calculate the value of X, corresponding points and projection matrices from at least two viewpoints are required. Formula (5) above gives the two linear equations obtained from a single viewpoint; with two viewpoints, four linear equations are obtained, and after moving the terms on the right-hand side to the left, the four equations can be written in the form AX = 0.

Here A is a 4×4 matrix. In practice, because of noise, the equations cannot all be satisfied exactly; instead one seeks the X that minimises ||AX|| subject to the constraint ||X|| = 1. The optimal solution of this problem is the eigenvector corresponding to the smallest eigenvalue of the matrix A^T·A, and X can be computed with general-purpose methods such as SVD or QR decomposition.

The computed X is a homogeneous vector; dividing each element of the vector by its last element, the first three resulting elements give the final target position. That is, for the solution X = (a, b, c, d), the normalised homogeneous coordinate is X1 = (a/d, b/d, c/d, d/d), and the final target coordinate is T = (a/d, b/d, c/d).

It should be noted that the solution principle above is explained for two viewpoints (two target photographs). When more viewpoints are available, systems of 6, 8, … linear equations can be constructed; A is then an overdetermined matrix, which can likewise be solved in the least-squares sense, for example with singular value decomposition (SVD) or the QR method.
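The AX = 0 construction and its SVD solution can be sketched as follows (illustrative NumPy; each view contributes the rows u·p_3^T − p_1^T and v·p_3^T − p_2^T, and any number of views may be stacked):

```python
import numpy as np

def triangulate(points, projections):
    """points: list of (u, v) observations, one per view;
    projections: matching list of 3x4 projection matrices P.
    Returns the de-homogenised 3D point."""
    rows = []
    for (u, v), P in zip(points, projections):
        rows.append(u * P[2] - P[0])   # u * p3^T - p1^T
        rows.append(v * P[2] - P[1])   # v * p3^T - p2^T
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]            # eigenvector of A^T A with the smallest eigenvalue
    return X[:3] / X[3]   # divide by the last element
```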
In an optional embodiment, one problem with the linear method is that the minimised quantity ||AX|| has no geometric meaning and does not correspond to the objective function one actually wants to minimise; moreover, multiplying each row of the matrix A by a weighting factor changes the result. To address this, in an embodiment of the present invention the weighting factors of the linear equations are changed continually by a linear iterative method, so that the weighted equations are consistent with the error of the photograph coordinate points.
For example, the spatial point X computed at the start does not satisfy the linear equations exactly, leaving a residual ε = AX. What one actually wants to minimise is the distance between the real image point x and the projection x̂ of X onto the photograph, i.e. d(x, x̂). This means that if the linear equations of each view are divided by the weighting factor w = p_3^T·X, the projection depth of X in that view, then the final residual corresponds to the minimisation that is meaningful in terms of the photograph.
To minimise this error, an embodiment of the present invention proceeds iteratively: new weighting factors for each viewpoint are obtained from the previous solution; the linear equations of each viewpoint are divided by their corresponding weighting factors to obtain a new system of linear equations, and a new solution is computed. This step is repeated until the new solution equals the previous one, after which the solution step is carried out once more with the new weighting factors to obtain the final solution.
Specifically, the initial weighting factors are set to w_0 = w_0′ = 1, the system of linear equations above is solved, and the solution is taken as the initial solution X_0.

The equations of the first viewpoint are then divided by the weighting factor w_i = p_3^T·X_{i−1}, and likewise the equations of the second viewpoint are divided by the weighting factor w_i′ = p_3′^T·X_{i−1}, where X_{i−1} is the result of the previous computation. This yields a new system of linear equations, from which the solution X_i is computed. The step is repeated until convergence, i.e. until X_i = X_{i−1}, at which point w_i = p_3^T·X_i. The residual at this point is the minimised photograph error, i.e. the desired error.
The optimal weighting factors obtained by this iterative method are applied by dividing the original linear equations of each viewpoint by the optimal weighting factor of that viewpoint; the solution obtained is the position of the final target spatial point. This makes the minimised quantity correspond to the coordinate error in the photograph sense. The spatial points computed with this linear iterative method are highly accurate, convergence is generally reached in few iterations, and the method is simple to implement with concise code.
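The iteration can be sketched as follows. The choice of w = p_3^T·X as the weighting factor follows the standard iterative linear triangulation scheme, which this passage appears to describe; the names and NumPy usage are illustrative.

```python
import numpy as np

def triangulate_weighted(points, projections, n_iter=10):
    """Iteratively reweighted linear triangulation.
    points: list of (u, v) per view; projections: list of 3x4 P matrices."""
    P = [np.asarray(p, float) for p in projections]
    w = [1.0] * len(P)                    # initial weights w0 = 1
    X = None
    for _ in range(n_iter):
        rows = []
        for (u, v), Pi, wi in zip(points, P, w):
            rows.append((u * Pi[2] - Pi[0]) / wi)
            rows.append((v * Pi[2] - Pi[1]) / wi)
        _, _, vt = np.linalg.svd(np.asarray(rows))
        X = vt[-1]
        if X[3] < 0:                      # fix the arbitrary SVD sign
            X = -X
        new_w = [float(Pi[2] @ X) for Pi in P]   # depths p3^T X
        if np.allclose(new_w, w):         # converged: weights stopped changing
            break
        w = new_w
    return X[:3] / X[3]
```

With noise-free observations the weights stabilise after one update and the result matches the plain linear solution.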
After the positions of the two spatial points at the two ends of the antenna in the virtual three-dimensional scene have been obtained from the corresponding point pairs at the two ends of the antenna, the GPS position and altitude in the real three-dimensional scene are calculated from the position coordinates of the midpoint of the line connecting the two points, and taken to represent the GPS position and altitude of the antenna. The method is as follows.

For the three-dimensional point (x, y) at the midpoint of the connecting line, its real-space coordinates (X, Y) after the rotation transformation are obtained from the translation and scaling matrix of step 102 as:
X = x·S + t_x,  Y = y·S + t_y;
Applying the inverse map-projection transformation to (X, Y) gives the GPS value. Since all photographs are taken at the same altitude Z_0, and the X, Y and Z axes are scaled by the same factor, the altitude corresponding to the target is:
Z = Z_0 + (z − z_camera)·S
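A tiny numeric sketch of these two conversion formulas (names illustrative; the map-projection step back to latitude/longitude is omitted):

```python
def to_real_world(x, y, z, S, tx, ty, z_camera, Z0):
    """Planar position via the similarity transform; altitude obtained by
    scaling the height offset from the common camera altitude Z0."""
    X = x * S + tx
    Y = y * S + ty
    Z = Z0 + (z - z_camera) * S
    return X, Y, Z
```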
Next, the process of calculating the attitude angles of the antenna from the positions of the same straight line calibrated in different antenna photographs is described.

Optionally, after the two spatial points representing the spatial straight line have been obtained by the above method, their spatial position coordinates can be transformed with the rotation transformation matrix and the translation-and-scaling matrix mentioned above into coordinate points in the real three-dimensional environment, from which the attitude angles of the antenna can be calculated directly.

Optionally, the coordinate points representing the spatial straight line may instead be re-acquired with the trifocal tensor method and the attitude angles of the antenna calculated from those coordinate points. Specifically: the positions of the same straight line calibrated in three target photographs are determined; the position information of the straight line in the real three-dimensional environment is calculated with the trifocal tensor algorithm from those positions and the position information of the photographing devices; and the attitude angles of the antenna are calculated from the position information of the straight line in real space.
The trifocal tensor expresses the mutual correspondence between three views, a spatial geometric relationship that does not depend on the structure of the measured object itself. As shown in Fig. 3, the straight lines l_1, l_2, l_3 are the projections of the spatial straight line L onto the three photographs, and the projection matrices of the three photographing devices with centres {C_0, C_1, C_2} are P_1, P_2, P_3; each projection matrix can be computed from the internal and external parameters estimated by the SFM algorithm as P = K[R t]. The trifocal tensor matrix is constructed by stacking, for each view, the row l_i^T·P_i:

W = [ l_1^T·P_1 ;  l_2^T·P_2 ;  l_3^T·P_3 ]

so that W is a 3×4 matrix whose (approximate) null space contains exactly the points of the spatial straight line L.
Decomposing W with singular value decomposition gives [u, s, v] = SVD(W); the homogeneous coordinates of two spatial points correspond to the last two columns of v, i.e. the four-dimensional vectors X_a = v(:,3) and X_b = v(:,4). De-homogenising these two vectors yields two points of the spatial straight line, which also represent the position of the calibrated line in space; the attitude angles of the line, that is, the attitude angles of the antenna (azimuth and downtilt), can then be solved with simple mathematics. The process of solving attitude angles from spatial points is a technique well known to those skilled in the art and is not described further here.
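One simple way to obtain the two angles from the two de-homogenised points is sketched below. The axis conventions (north along +Y, height along +Z, azimuth clockwise from north, tilt measured from the vertical) are assumptions of this sketch, not fixed by the original.

```python
import numpy as np

def attitude_angles(p_top, p_bottom):
    """Azimuth and tilt-from-vertical (degrees) of the line through two
    3D points, under the axis conventions stated above."""
    d = np.asarray(p_top, float) - np.asarray(p_bottom, float)
    azimuth_deg = np.degrees(np.arctan2(d[0], d[1]))   # clockwise from +Y
    horiz = np.hypot(d[0], d[1])
    tilt_deg = np.degrees(np.arctan2(horiz, d[2]))     # 0 = vertical line
    return azimuth_deg, tilt_deg
```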
Device embodiment

An embodiment of the present invention further provides a pose measurement device for implementing the pose measurement method described above. As shown in Fig. 4, the device includes a processor 42 and a memory 41 storing instructions executable by the processor 42, wherein:

the processor 42 may be a general-purpose processor such as a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention;

the memory 41 is configured to store program code and transmit the program code to the CPU. The memory 41 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); and it may also include a combination of the above types of memory.

A pose measurement device provided by an embodiment of the present invention includes a camera, a processor and a memory, wherein a pose measurement program is stored in the memory and the processor is configured to execute the pose measurement program stored in the memory so as to perform the following steps:

obtaining a plurality of photographs of an object taken at different positions, together with the attitude parameters of the photographing device when the photographs were taken;

constructing a virtual three-dimensional scene from the object photographs, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device;

calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment.
Optionally, the attitude parameters of the photographing device include rotation angle information of the photographing device, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device includes:

obtaining, with the SFM algorithm, a first rotation matrix from the two-dimensional image plane to the virtual three-dimensional scene;

obtaining, from the rotation angle information, a second rotation matrix from the two-dimensional image plane to the real three-dimensional environment;

calculating a rotation transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix.

Optionally, the attitude parameters of the photographing device include positioning information of the photographing device, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device further includes:

determining a translation-and-scaling transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the positioning information and the position of the photographing device in the virtual three-dimensional scene.

Optionally, calculating the rotation transformation matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix includes:

obtaining the rotation transformation matrices computed for the photographing device at different positions;

calculating the averages of the rotation angles of the rotation transformation matrices about the three coordinate axes, and reconstructing the final rotation transformation matrix from these averages.
Optionally, calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment includes:

determining two corresponding point pairs according to the corresponding straight lines calibrated in two target photographs;

determining the positions of two spatial points in the virtual three-dimensional scene based on the two corresponding point pairs;

transforming the position of the midpoint of the line connecting the two spatial points using the rotation transformation matrix and the translation-and-scaling transformation matrix, and obtaining the positioning value and altitude of the object in the real three-dimensional environment from the transformed position.

Optionally, determining the two corresponding point pairs according to the corresponding straight lines calibrated in the two target photographs includes:

after the corresponding straight lines calibrated in the target photographs have been detected, calculating, according to the epipolar constraint of binocular vision, the epipolar lines at the two endpoints of the corresponding line in one of the target photographs;

the intersection of each epipolar line with the corresponding line in the other target photograph is the corresponding point of that endpoint.

Optionally, determining the positions of the two spatial points in the virtual three-dimensional scene based on the two corresponding point pairs includes:

constructing, according to the projection equation from three-dimensional space to the two-dimensional image plane, the linear equations corresponding to each point of a corresponding point pair;

assembling the linear equations of the corresponding point pair into a system of linear equations, solving the system with matrix methods, and determining the positions of the spatial points in the virtual three-dimensional scene from the solution.

Optionally, assembling the linear equations of the corresponding point pair into a system of linear equations, solving the system with matrix methods, and determining the positions of the spatial points in the virtual three-dimensional scene from the solution includes:

obtaining new weighting factors for each point from the previous solution; dividing the linear equations of each point by the corresponding weighting factors to obtain a new system of linear equations, and computing a new solution; repeating this step until the new solution equals the previous one; and then repeating the step once more, the final computed solution being the position of the spatial point.
Optionally, calculating the attitude parameters of the object according to the position information of the object in the real three-dimensional environment includes:

determining the positions of the same straight line calibrated in three target photographs;

calculating the position information of the straight line in the real three-dimensional environment with the trifocal tensor algorithm according to those positions and the position information of the photographing devices;

calculating the attitude angles of the object from the position information of the straight line in real space.
Storage medium embodiment

An embodiment of the present invention further provides a computer-readable storage medium storing one or more programs. The computer-readable storage medium may include volatile memory, such as random access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk or a solid-state drive; and it may also include a combination of the above types of memory. The one or more programs in the computer-readable storage medium can be executed by one or more processors to implement the pose measurement method provided in the method embodiments.

A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above.
Although the present application has been described through embodiments, those skilled in the art will appreciate that many modifications and variations are possible without departing from the spirit and scope of the present invention. Accordingly, provided such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to cover them as well.

Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

The above are merely preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention.
Industrial applicability

In the embodiments of the present invention, the actual pose parameter information of an object is obtained by photographing the object from different positions with a photographing device and analysing the photographs in combination with the attitude parameters of the photographing device. Remote measurement of the pose of an object is thus achieved; the operation is simple and convenient, no climbing by measurement personnel is required, and the danger of the measurement is effectively eliminated.

Claims (11)

  1. A pose measurement method, comprising:
    obtaining a plurality of photographs of an object taken at different positions, and the attitude parameters of the photographing device when the photographs were taken;
    constructing a virtual three-dimensional scene from the object photographs, and matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device; and
    calculating the pose parameters of the object according to the position information of the object in the real three-dimensional environment.
  2. The method according to claim 1, wherein the attitude parameters of the photographing device comprise rotation angle information of the photographing device; and
    the matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device comprises:
    obtaining a first rotation matrix from the two-dimensional image plane to the virtual three-dimensional scene;
    obtaining a second rotation matrix from the two-dimensional image plane to the real three-dimensional environment according to the rotation angle information; and
    calculating a rotation transfer matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix.
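The composition in claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name `rotation_transfer` and the argument names are my own. It assumes the two rotations are given as 3×3 numpy matrices, in which case the virtual-to-real rotation is the second rotation composed with the inverse (transpose) of the first.

```python
import numpy as np

def rotation_transfer(R_img_to_virtual, R_img_to_real):
    """Rotation transfer matrix from the virtual scene to the real environment.

    If R1 maps image-plane coordinates into the virtual three-dimensional
    scene and R2 maps them into the real three-dimensional environment,
    the transfer rotation is R2 * R1^-1, and R1^-1 == R1.T for a
    rotation matrix.
    """
    return R_img_to_real @ R_img_to_virtual.T
```

With an identity first rotation the transfer matrix reduces to the second rotation, and composing a rotation with itself yields the identity, which is an easy sanity check.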
  3. The method according to claim 2, wherein the attitude parameters of the photographing device comprise positioning information of the photographing device; and
    the matching the virtual three-dimensional scene with the real three-dimensional environment according to the attitude parameters of the photographing device further comprises:
    determining a translation-and-scaling transfer matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the positioning information and the position of the photographing device in the virtual three-dimensional scene.
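One plausible way to determine the translation and scale of claim 3 is a least-squares fit between the camera positions in the (rotation-aligned) virtual scene and their positions from the device's positioning information. The sketch below is an assumption about the fitting step, not the claimed method; `translation_and_scale` is a hypothetical helper.

```python
import numpy as np

def translation_and_scale(p_virtual, p_real):
    """Estimate scale s and translation t such that s * p_virtual + t ~ p_real.

    p_virtual: (N, 3) camera positions in the rotation-aligned virtual scene.
    p_real:    (N, 3) the same camera positions in the real frame.
    The scale is the one-dimensional least-squares solution over the
    centered point sets; the translation aligns the centroids.
    """
    cv, cr = p_virtual.mean(axis=0), p_real.mean(axis=0)
    dv, dr = p_virtual - cv, p_real - cr
    s = np.sum(dv * dr) / np.sum(dv * dv)  # argmin_s sum ||s*dv - dr||^2
    t = cr - s * cv
    return s, t
```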
  4. The method according to claim 2, wherein the calculating a rotation transfer matrix from the virtual three-dimensional scene to the real three-dimensional environment according to the first rotation matrix and the second rotation matrix comprises:
    obtaining the rotation transfer matrices computed with the photographing device at different positions; and
    calculating the averages of the rotation angles of the rotation transfer matrices about the three coordinate axes, and obtaining the rotation transfer matrix from the virtual three-dimensional scene to the real three-dimensional environment based on the averages.
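The per-axis averaging in claim 4 can be sketched by decomposing each transfer matrix into Euler angles, averaging, and recomposing. All function names here are mine; the Z-Y-X angle convention is an assumption (the claim does not fix one), and the naive mean assumes the angles stay away from the ±180° wrap-around.

```python
import numpy as np

def euler_zyx(R):
    """Extract (roll, pitch, yaw) from R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.array([roll, pitch, yaw])

def from_euler_zyx(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def average_rotation(mats):
    """Average the per-axis rotation angles of several transfer matrices."""
    angles = np.mean([euler_zyx(R) for R in mats], axis=0)
    return from_euler_zyx(*angles)
```

A more robust alternative for widely spread rotations would be quaternion averaging, but per-axis angle averaging is what the claim describes.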
  5. The method according to claim 3, wherein the calculating the pose parameters of the object according to the position information of the object in the real three-dimensional environment comprises:
    determining two same-name image point pairs from a straight line of the same name marked in two target photographs, the target photographs being any two of the plurality of object photographs;
    determining the positions of two image points in the virtual three-dimensional scene based on the two same-name image point pairs; and
    transforming the position of the midpoint of the line connecting the two image points using the rotation transfer matrix and the translation-and-scaling transfer matrix, and obtaining the positioning value and altitude value of the object in the real three-dimensional environment from the transformed position.
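The final transformation of claim 5 amounts to applying the similarity transform (rotation, scale, translation) to the midpoint. A minimal sketch, with my own function name `to_real`, assuming an ENU-style real frame where the first two components are the horizontal fix and the third is altitude:

```python
import numpy as np

def to_real(p1_virtual, p2_virtual, R, s, t):
    """Map the midpoint of two reconstructed virtual-scene points into the
    real frame via X_real = s * R @ X_virtual + t, then split the result
    into a horizontal positioning value and an altitude value."""
    mid = (np.asarray(p1_virtual, float) + np.asarray(p2_virtual, float)) / 2.0
    X = s * (R @ mid) + t
    return X[:2], X[2]  # (positioning value, altitude value)
```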
  6. The method according to claim 5, wherein the determining two same-name image point pairs from a straight line of the same name marked in two target photographs comprises:
    after the straight line of the same name marked in the target photographs is detected, calculating, according to the epipolar constraint principle of binocular vision, the epipolar lines at the two end points of the straight line of the same name in one of the target photographs; and
    taking the intersection of each epipolar line with the straight line of the same name in the other target photograph as the same-name image point of the corresponding end point.
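In homogeneous coordinates, the epipolar transfer of claim 6 is two lines of algebra: the epipolar line of a point is the fundamental matrix applied to it, and the intersection of two image lines is their cross product. The sketch below assumes a known fundamental matrix `F` between the two views; the function name is mine.

```python
import numpy as np

def transfer_point(F, x1, line2):
    """Same-name image point of x1 on the marked line in the second view.

    F:     3x3 fundamental matrix from view 1 to view 2.
    x1:    homogeneous image point (u, v, 1) in view 1.
    line2: homogeneous line coefficients (a, b, c) of the straight line
           of the same name in view 2.
    The epipolar line of x1 is l' = F @ x1; the sought point is the
    intersection l' x line2 (cross product of two homogeneous lines).
    """
    epiline = F @ x1
    x2 = np.cross(epiline, line2)
    return x2 / x2[2]  # normalize the homogeneous coordinate
```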
  7. The method according to claim 5 or 6, wherein the determining the positions of two image points in the virtual three-dimensional scene based on the two same-name image point pairs comprises:
    constructing, according to the projection equations from three-dimensional space to the two-dimensional image plane, the linear equations corresponding to each image point of a same-name image point pair; and
    assembling the linear equations of the same-name image point pair into a system of linear equations, solving the system by a matrix method, and determining the position of the image point in the virtual three-dimensional scene based on the solution.
  8. The method according to claim 7, wherein the assembling the linear equations of the same-name image point pair into a system of linear equations, solving the system by a matrix method, and determining the position of the image point in the virtual three-dimensional scene based on the solution comprises:
    obtaining a new weighting factor for each image point from the previous solution, dividing the linear equations of each image point by the corresponding weighting factor to obtain a new system of linear equations, and computing a new solution; repeating this step until the new solution equals the previous one, then performing the step once more, the final solution thus computed being the position of the image point.
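Claims 7 and 8 read like the classic iteratively reweighted DLT triangulation, where the weighting factor for each view is the projective depth of the previous solution. The sketch below is my reading of that scheme, not the patent's exact formulation; the function name and the choice of weight (third projection-matrix row dotted with the current solution) are assumptions.

```python
import numpy as np

def triangulate_reweighted(P1, P2, x1, x2, iters=10, tol=1e-9):
    """Triangulate a 3-D point from a same-name image point pair by DLT,
    dividing each view's equations by a weight taken from the previous
    solution (iterative DLT).

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
    """
    def rows(P, x, w):
        # Two linear equations per image point: u*(p3.X) - p1.X = 0, etc.
        return np.array([(x[0] * P[2] - P[0]) / w,
                         (x[1] * P[2] - P[1]) / w])

    w1 = w2 = 1.0
    X = None
    for _ in range(iters):
        A = np.vstack([rows(P1, x1, w1), rows(P2, x2, w2)])
        _, _, Vt = np.linalg.svd(A)           # solve A X = 0 by SVD
        X_new = Vt[-1] / Vt[-1][3]
        if X is not None and np.allclose(X_new, X, atol=tol):
            X = X_new
            break
        X = X_new
        w1, w2 = P1[2] @ X, P2[2] @ X         # new weighting factors
    return X[:3]
```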
  9. The method according to claim 1, wherein the calculating the pose parameters of the object according to the position information of the object in the real three-dimensional environment comprises:
    determining the positions of the same straight line marked in three target photographs, the three target photographs being any three of the plurality of object photographs;
    calculating the position information of the straight line in the real three-dimensional environment using a trifocal tensor algorithm according to those positions and the position information of the photographing device; and
    calculating the attitude angle of the object according to the position information of the straight line in real space.
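Once the straight line is placed in the real frame, the attitude angles of the last step of claim 9 follow from its direction vector. A minimal sketch under the assumption of an ENU-style frame (x East, y North, z Up); the function name and the azimuth/elevation convention are mine:

```python
import numpy as np

def line_attitude(p_start, p_end):
    """Azimuth (degrees clockwise from North) and elevation (degrees
    above the horizontal) of a straight line in the real frame, from
    two points on it."""
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    azimuth = np.degrees(np.arctan2(d[0], d[1]))
    elevation = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return azimuth, elevation
```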
  10. A pose measurement device, comprising a camera, a processor and a memory, wherein a pose measurement program is stored in the memory, and the processor is configured to execute the program stored in the memory to implement the steps of the pose measurement method according to any one of claims 1 to 9.
  11. A computer-readable storage medium storing a pose measurement program, wherein the pose measurement program, when executed by a processor, implements the steps of the pose measurement method according to any one of claims 1 to 9.
PCT/CN2018/090821 2017-06-21 2018-06-12 Pose measurement method and device, and storage medium WO2018233514A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710475557.4A CN109099888A (en) 2017-06-21 2017-06-21 A kind of pose measuring method, equipment and storage medium
CN201710475557.4 2017-06-21

Publications (1)

Publication Number Publication Date
WO2018233514A1 true WO2018233514A1 (en) 2018-12-27

Family

ID=64735483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090821 WO2018233514A1 (en) 2017-06-21 2018-06-12 Pose measurement method and device, and storage medium

Country Status (2)

Country Link
CN (1) CN109099888A (en)
WO (1) WO2018233514A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110296686B (en) * 2019-05-21 2021-11-09 北京百度网讯科技有限公司 Vision-based positioning method, device and equipment
CN112815923B (en) * 2019-11-15 2022-12-30 华为技术有限公司 Visual positioning method and device
CN113781548A (en) * 2020-06-10 2021-12-10 华为技术有限公司 Multi-device pose measurement method, electronic device and system
CN114274147B (en) * 2022-02-10 2023-09-22 北京航空航天大学杭州创新研究院 Target tracking control method and device, mechanical arm control equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102157011A (en) * 2010-12-10 2011-08-17 北京大学 Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
CN103217147A (en) * 2012-01-19 2013-07-24 株式会社东芝 Measurement device and measurement method
CN103245337A (en) * 2012-02-14 2013-08-14 联想(北京)有限公司 Method for acquiring position of mobile terminal, mobile terminal and position detection system
CN103759716A (en) * 2014-01-14 2014-04-30 清华大学 Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm
US20150029345A1 (en) * 2012-01-23 2015-01-29 Nec Corporation Camera calibration device, camera calibration method, and camera calibration program
CN106296801A (en) * 2015-06-12 2017-01-04 联想(北京)有限公司 A kind of method setting up object three-dimensional image model and electronic equipment

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP6369534B2 (en) * 2014-03-05 2018-08-08 コニカミノルタ株式会社 Image processing apparatus, image processing method, and image processing program
CN106569591A (en) * 2015-10-26 2017-04-19 苏州梦想人软件科技有限公司 Tracking method and system based on computer vision tracking and sensor tracking
CN105528082B (en) * 2016-01-08 2018-11-06 北京暴风魔镜科技有限公司 Three dimensions and gesture identification tracking exchange method, device and system
CN106651942B (en) * 2016-09-29 2019-09-17 苏州中科广视文化科技有限公司 Three-dimensional rotating detection and rotary shaft localization method based on characteristic point


Also Published As

Publication number Publication date
CN109099888A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
WO2021212844A1 (en) Point cloud stitching method and apparatus, and device and storage device
EP3309751B1 (en) Image processing device, method, and program
US9466143B1 (en) Geoaccurate three-dimensional reconstruction via image-based geometry
WO2018233514A1 (en) Pose measurement method and device, and storage medium
CN108592950B (en) Calibration method for relative installation angle of monocular camera and inertial measurement unit
EP3368859B1 (en) Method of solving initial azimuth for survey instruments, cameras, and other devices with position and tilt information
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN110969665B (en) External parameter calibration method, device, system and robot
CN112288853B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium
CN106530358A (en) Method for calibrating PTZ camera by using only two scene images
JP2013539147A5 (en)
IL214151A (en) Method and apparatus for three-dimensional image reconstruction
CN113048980B (en) Pose optimization method and device, electronic equipment and storage medium
CN113820735A (en) Method for determining position information, position measuring device, terminal, and storage medium
Duran et al. Accuracy comparison of interior orientation parameters from different photogrammetric software and direct linear transformation method
CN116295279A (en) Unmanned aerial vehicle remote sensing-based building mapping method and unmanned aerial vehicle
JP7114686B2 (en) Augmented reality device and positioning method
JP6928217B1 (en) Measurement processing equipment, methods and programs
Tjahjadi et al. Single image orientation of UAV's imagery using orthogonal projection model
CN115423863B (en) Camera pose estimation method and device and computer readable storage medium
El-Ashmawy A comparison study between collinearity condition, coplanarity condition, and direct linear transformation (DLT) method for camera exterior orientation parameters determination
Bakuła et al. Capabilities of a smartphone for georeferenced 3dmodel creation: An evaluation
D'Alfonso et al. On the use of IMUs in the PnP Problem
KR20210009019A (en) System for determining position and attitude of camera using the inner product of vectors and three-dimensional coordinate transformation
CN108764161B (en) Remote sensing image processing method and device for breaking pathological singularity caused by sparse array based on polar coordinate system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18821112

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18821112

Country of ref document: EP

Kind code of ref document: A1