CN106441242B - Interactive mapping method based on laser point cloud and panoramic image - Google Patents

Interactive mapping method based on laser point cloud and panoramic image

Info

Publication number
CN106441242B
CN106441242B CN201610740228.3A
Authority
CN
China
Prior art keywords
coordinate
point
space
panoramic
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610740228.3A
Other languages
Chinese (zh)
Other versions
CN106441242A (en)
Inventor
刘如飞
卢秀山
田茂义
曲杰卿
侯海龙
俞家勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
QINGDAO SUPERSURS MOBILE SURVEYING CO Ltd
Original Assignee
QINGDAO SUPERSURS MOBILE SURVEYING CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by QINGDAO SUPERSURS MOBILE SURVEYING CO Ltd filed Critical QINGDAO SUPERSURS MOBILE SURVEYING CO Ltd
Priority to CN201610740228.3A priority Critical patent/CN106441242B/en
Publication of CN106441242A publication Critical patent/CN106441242A/en
Application granted granted Critical
Publication of CN106441242B publication Critical patent/CN106441242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Instructional Devices (AREA)

Abstract

The invention discloses an interactive mapping method based on laser point cloud and panoramic image, comprising a forward interaction step or/and a reverse interaction step. The forward interaction step locates from panoramic image pixel coordinates to laser point coordinates; the reverse interaction step locates from laser point coordinates to panoramic image pixel coordinates. In forward interaction, the mapping object is confirmed from the panoramic image, which is intuitive and clear. In reverse interaction, a designated laser point is obtained through buffer-based interaction and converted to spatial rectangular coordinates using the coordinate conversion parameters, and the panoramic image pixel coordinates are calculated using the spatial calibration parameters, the panorama point GPS coordinates and the spatial attitude information, so that the direction corresponding to the laser point is displayed in the panorama.

Description

Interactive mapping method based on laser point cloud and panoramic image
Technical Field
The invention belongs to the field of surveying and mapping technology.
Background
Mapping with laser point clouds can greatly reduce field work, covers a wide range, and achieves high precision, so its application is increasingly widespread. However, laser point cloud mapping currently has some significant problems: intuitiveness is low, searching for and confirming a target takes time, and confirmation requires experience, especially where the point cloud is incomplete. Meanwhile, because targets appear at random, mapping elements are easily omitted during mapping. These shortcomings make it difficult to meet the requirements of mapping, surveying and construction, and restrict the application of laser point clouds in the mapping field.
A mobile measurement system can rapidly acquire laser point clouds and panoramic images, and panoramic images are intuitive and display a wide range. However, panoramic images acquired by current mobile measurement systems are mainly used for browsing; there is no precedent of combining them with laser point clouds for interactive mapping. Although some applications map from panoramic images alone, their accuracy is low because spatial calibration parameters and spatial attitude information (IMU) are not used comprehensively; they cannot meet large-scale mapping accuracy requirements and serve only for rough positioning. Other applications use panoramic images merely to confirm an acquisition point, which is operationally complex, does not exploit the high-precision mapping capability of the laser point cloud, and hurts mapping efficiency and precision.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides an interactive mapping method based on laser point cloud and panoramic image, built on the concept of mapping with the two combined, thereby achieving high-precision mapping.
In order to achieve the purpose, the invention adopts the following technical scheme:
An interactive mapping method based on laser point cloud and panoramic image, characterized by comprising a forward interaction or/and reverse interaction mapping step, wherein the forward interaction step locates from panoramic image pixel coordinates to laser point coordinates, and the reverse interaction step locates from laser point coordinates to panoramic image pixel coordinates; wherein:
the forward interaction steps are as follows:
Firstly, using the spatial attitude information and coordinate conversion parameters of the panoramic image, convert the image pixel coordinate into a three-dimensional local coordinate p1, and simultaneously calculate the three-dimensional local coordinate o1 of the panoramic shooting point; the method specifically comprises the following steps:
1.1) obtaining pixel coordinates of the panoramic image, and establishing a conversion relation between the pixel coordinates of the image and spatial rectangular coordinates:
1.1.1) Convert the panoramic pixel coordinate Pr(X, Y) to the image space coordinate Pr'(X, Y, 0), where the Pr(X, Y) coordinate system is a Cartesian rectangular coordinate system with the lower-left corner of the panoramic image as origin; 0 < X < w, where w is the number of horizontal pixels of the panoramic image; 0 < Y < h, where h is the number of vertical pixels of the panoramic image;
1.1.2) Convert the image space coordinate Pr'(X, Y, 0) into the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp)
Defining a panoramic ball space coordinate system, taking the center of a ball as an origin, setting the radius of the ball as R, and establishing a conversion relation between an image space coordinate system and the panoramic ball space coordinate system, wherein the conversion formula is as follows:
ψ=X/R;
θ=Y/R;
Xsp=R*sin(θ)*cos(ψ);
Ysp=R*sin(θ)*sin(ψ);
Zsp=R*cos(θ);
The above formulas convert the image space coordinate Pr'(X, Y, 0) into the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp);
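For illustration only, the pixel-to-sphere conversion of steps 1.1.1-1.1.2 can be sketched in Python as below; the function name is illustrative, and choosing R = w/(2π), so that X/R spans a full circle across the image width, is an assumption the patent does not state explicitly:

```python
import math

def pixel_to_sphere(X, Y, R):
    """Panoramic pixel coordinate Pr(X, Y) -> panorama sphere space
    coordinate Psp(Xsp, Ysp, Zsp), per the formulas of steps
    1.1.1-1.1.2; the image space coordinate is Pr'(X, Y, 0) and R is
    the sphere radius."""
    psi = X / R                                # psi = X / R
    theta = Y / R                              # theta = Y / R
    Xsp = R * math.sin(theta) * math.cos(psi)
    Ysp = R * math.sin(theta) * math.sin(psi)
    Zsp = R * math.cos(theta)
    return Xsp, Ysp, Zsp
```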
1.1.3) Convert the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp) to integrated navigation system coordinates
Acquire the spatial calibration parameters between the panoramic camera and the integrated navigation system; taking the counter-clockwise direction as positive, let the rotation matrices about the Z, X and Y axes be RZ, RX and RY; the 6 exterior orientation elements of the sphere in the object space coordinate system are φ, ω, κ, ΔX, ΔY, ΔZ, where ΔX, ΔY, ΔZ are the three offsets along the X, Y, Z axes and φ, ω, κ are the three rotation angles around the Z, X, Y axes respectively;
the conversion relationship between the panoramic ball space coordinate and the integrated navigation system coordinate is as follows:
[conversion matrix formula given as an image in the source]
Through this conversion relationship, the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp) is converted to the integrated navigation system coordinate Pimu(Ximu, Yimu, Zimu);
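Since the conversion matrix itself appears only as an image in the source, the following Python sketch reconstructs a plausible form from the stated definitions (rotations φ about Z, ω about X, κ about Y, counter-clockwise positive, and offsets ΔX, ΔY, ΔZ); the composition order of the three rotations is an assumption not confirmed by the text:

```python
import numpy as np

def Rz(a):
    """Counter-clockwise rotation about the Z axis."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def Rx(a):
    """Counter-clockwise rotation about the X axis."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

def Ry(a):
    """Counter-clockwise rotation about the Y axis."""
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def sphere_to_imu(Psp, phi, omega, kappa, dX, dY, dZ):
    """Panorama sphere space coordinate -> integrated navigation system
    coordinate: rotate by the exterior orientation angles, then shift
    by the offsets. The order Rz -> Rx -> Ry mirrors the order in which
    the angles are defined, but is an assumption."""
    R = Ry(kappa) @ Rx(omega) @ Rz(phi)
    return R @ np.asarray(Psp, dtype=float) + np.array([dX, dY, dZ])
```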
1.1.4) Convert the integrated navigation system coordinate Pimu(Ximu, Yimu, Zimu) to spatial rectangular coordinates
Acquire the inertial navigation information of the current panorama measured by the IMU, including the spatial attitude angles at the time of panorama shooting (roll angle R, pitch angle P, yaw angle H) and the longitude L and latitude B of the shooting point in the spatial rectangular coordinate system; R is the angle between the inertial navigation x axis and the horizontal direction, P is the angle between the inertial navigation y axis and the horizontal direction, and H is the angle between the inertial navigation heading and due north;
Denote R, P, H as r, p and y respectively; first rotate y around the z axis, then rotate p around the x axis, and finally rotate r around the y axis; then:
[rotation matrix formulas given as images in the source]
Let the coordinates of the point in the common earth surface coordinate system be as recorded in the inertial navigation information; first rotate around the x axis and then around the z axis (the rotation angles and the corresponding formulas are given as images in the source); then:
[conversion formulas given as images in the source]
where a is the major semi-axis of the WGS84 ellipsoid parameters, 6378137 meters, and b is the minor semi-axis, 6356752.314 meters;
the result of the conversion is the spatial rectangular coordinate;
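The formulas of step 1.1.4 also appear as images in the source. For orientation only, the standard WGS84 geodetic-to-ECEF relation, consistent with the semi-axes a and b stated above, is sketched below in Python; whether it matches the omitted formulas exactly, and the treatment of the ellipsoidal height H, are assumptions:

```python
import math

def geodetic_to_ecef(B, L, H=0.0):
    """Geodetic latitude B and longitude L (radians) and height H (m)
    -> spatial rectangular (ECEF) coordinates on the WGS84 ellipsoid."""
    a = 6378137.0      # WGS84 major semi-axis (m), as stated in the text
    b = 6356752.314    # WGS84 minor semi-axis (m), as stated in the text
    e2 = (a * a - b * b) / (a * a)           # first eccentricity squared
    N = a / math.sqrt(1.0 - e2 * math.sin(B) ** 2)  # prime vertical radius
    X = (N + H) * math.cos(B) * math.cos(L)
    Y = (N + H) * math.cos(B) * math.sin(L)
    Z = (N * (1.0 - e2) + H) * math.sin(B)
    return X, Y, Z
```

Plugging in the embodiment's B = 35.999260 and L = 120.102415 degrees (converted to radians) yields values near the embodiment's spatial rectangular coordinate (-2591047.759, 4469354.956, 3728163.821).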
1.1.5) Finally, convert the spatial rectangular coordinate into a three-dimensional local coordinate using the coordinate conversion parameters and existing techniques;
1.2) Obtain the width w and height h of the current panoramic image (both in pixels); the pixel coordinate of the panorama center point is then (w/2, h/2). Following the procedure of steps 1.1.1-1.1.4 for converting panoramic image pixel coordinates into spatial rectangular coordinates, first convert this pixel coordinate into a spatial rectangular coordinate, then convert it into the three-dimensional local coordinate o1 using the coordinate conversion parameters according to step 1.1.5;
In the second step, construct from o1 and p1 a spatial three-dimensional straight line segment l' and calculate a plane projection buffer area based on this segment; acquire the laser point set PC within the buffer range; the method specifically comprises the following steps:
2.1) Using the two three-dimensional spatial points o1(Xo1, Yo1, Zo1) and p1(Xp1, Yp1, Zp1), construct a spatial straight line l; processing in vector form with o1 as the initial point of the vector and p1 as the pointing point, its unit direction vector v is calculated as:
v = ((Xp1 - Xo1)/Lo1p1, (Yp1 - Yo1)/Lo1p1, (Zp1 - Zo1)/Lo1p1)
where Lo1p1 is the three-dimensional spatial distance between o1 and p1;
2.2) Based on the Z component of the direction vector of l, compute the designated line segment for the different cases; for a point P' on l:
if Zp1 ≥ Zo1, then:
P'=(Xo1+v[0]*Lv,Yo1+v[1]*Lv,Zo1+v[2]*Lv)
where Lv is the maximum search length;
if Zp1 < Zo1, then:
first calculate the ground elevation of the panoramic shooting point: Zg = Zo1 - Hc, where Hc is the camera height of the device; let plane Pg be the horizontal plane passing through the point (0, 0, Zg); P' is then the intersection point of l and Pg; further calculate the spatial distance LP'o1 between the current P' and o1; if LP'o1 > Lv, then
P'=(Xo1+v[0]*Lv,Yo1+v[1]*Lv,Zo1+v[2]*Lv)
take o1 and P' as the two vertices to form the line segment l';
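A minimal Python sketch of the case analysis in steps 2.1-2.2 follows; the variable names Lv (maximum search length) and Hc (camera height) come from the text, while the function name is illustrative:

```python
import math

def build_segment(o1, p1, Lv, Hc):
    """Construct the segment l' from the shooting point o1 toward the
    interaction point p1, choosing the far endpoint P' by the sign of
    the Z component as in step 2.2."""
    diff = [p1[i] - o1[i] for i in range(3)]
    L = math.sqrt(sum(c * c for c in diff))
    v = [c / L for c in diff]                  # unit direction vector
    if p1[2] >= o1[2]:                         # Zp1 >= Zo1: cap at Lv
        P = [o1[i] + v[i] * Lv for i in range(3)]
    else:                                      # Zp1 < Zo1: hit the ground
        Zg = o1[2] - Hc                        # ground elevation
        t = (Zg - o1[2]) / v[2]                # intersection of l and Pg
        P = [o1[i] + v[i] * t for i in range(3)]
        if math.dist(P, o1) > Lv:              # still cap at Lv
            P = [o1[i] + v[i] * Lv for i in range(3)]
    return o1, P
```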
2.3) Project the three-dimensional line segment l' to obtain a two-dimensional plane line segment l''; obtain the line segment lv1 that passes through one end point of l'', is perpendicular to l'', has length d*4, and is bisected by that end point, where d is the thinning interval of the laser point cloud data;
the perpendicular segment l passing through the other end of l' is obtained using the same methodv2
The rectangle formed by the four end points of lv1 and lv2 serves as the plane projection buffer area of the line segment;
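The plane projection buffer of step 2.3 can be sketched as below; it returns the four corners of the rectangle bounded by the perpendicular segments lv1 and lv2, each of length d*4 and bisected by an endpoint of the projected segment l''. The helper name is illustrative:

```python
import math

def projection_buffer(o1, P, d):
    """Project l' onto the XY plane (drop Z) and build the rectangular
    buffer around the resulting 2D segment l''."""
    ax, ay = o1[0], o1[1]                       # endpoint A of l''
    bx, by = P[0], P[1]                         # endpoint B of l''
    L = math.hypot(bx - ax, by - ay)
    nx, ny = -(by - ay) / L, (bx - ax) / L      # unit normal to l''
    half = d * 4 / 2.0                          # lv1, lv2 are bisected
    return [(ax + nx * half, ay + ny * half),   # lv1 endpoints...
            (ax - nx * half, ay - ny * half),
            (bx - nx * half, by - ny * half),   # ...and lv2 endpoints
            (bx + nx * half, by + ny * half)]
```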
2.4) based on the rectangular buffer area, acquiring a laser point set PC in the plane range by using a space query method;
Then traverse the points in the laser point set to obtain a candidate point set whose distance to l' is less than a threshold; further screen out the point in the set closest to the panoramic shooting point, and take its coordinate as the laser point coordinate corresponding to the panoramic image pixel coordinate; the method specifically comprises the following steps:
3.1) Traverse each point pcp(Xpcp, Ypcp, Zpcp) in the laser point set PC and calculate its spatial straight-line distance Lpcp to the straight line l; if Lpcp < Ln, put the cloud point into the candidate point set PC'; where Ln is the distance threshold from a cloud point to l (a multiple of d, the thinning interval of the point cloud data; the exact formula appears as an image in the source);
3.2) Traverse each point pcp' in the candidate point set PC' and calculate its spatial straight-line distance to o1; the point with the smallest distance to o1 is the final point, and its coordinate is the mapping coordinate;
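Steps 3.1-3.2 amount to a filter followed by a nearest-point selection, as in this sketch; dist_to_line is an assumed helper computing the spatial distance used in step 3.1, and Ln is the threshold defined there:

```python
import math

def pick_laser_point(PC, o1, dist_to_line, Ln):
    """Keep the cloud points closer than Ln to the constructed line
    (candidate set PC'), then return the candidate nearest to the
    panoramic shooting point o1; its coordinate is the mapping
    coordinate."""
    PC_prime = [p for p in PC if dist_to_line(p) < Ln]
    if not PC_prime:
        return None
    return min(PC_prime, key=lambda p: math.dist(p, o1))
```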
the reverse interaction steps are as follows:
Firstly, interactively acquire the three-dimensional local coordinate and the time attribute of a laser point, then determine the corresponding panoramic image and its spatial attitude information according to the time attribute value; the method specifically comprises the following steps:
1.1) Interactively acquire a local two-dimensional coordinate pl; create a square buffer area centered on this point with side length d*4, and perform a spatial query of the laser point cloud based on this buffer to obtain the laser point set C within its range, where d is the thinning interval of the laser point cloud data;
1.2) Traverse each cloud point Pi (1 < i < N) in the set, calculate its distance to pl, and obtain the laser point P with the smallest distance value; read the time information carried by point P, and acquire from the inertial navigation information file the panoramic image containing that time together with its information, including the GPS coordinate of the panoramic shooting point and the spatial attitude information;
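Reverse step 1 is, in essence, a square buffer query followed by a time lookup. A sketch under assumed record layouts (each cloud point as (x, y, z, t); nav_records mapping a timestamp to the panorama's GPS coordinate and attitude) follows:

```python
def find_panorama(pl, cloud, d, nav_records):
    """Square buffer of side d*4 centered on the picked 2D coordinate
    pl, nearest laser point inside it, then the panorama record whose
    timestamp matches that point."""
    half = d * 4 / 2.0
    C = [p for p in cloud
         if abs(p[0] - pl[0]) <= half and abs(p[1] - pl[1]) <= half]
    if not C:
        return None
    P = min(C, key=lambda p: (p[0] - pl[0]) ** 2 + (p[1] - pl[1]) ** 2)
    return P, nav_records[P[3]]
```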
secondly, converting the three-dimensional local coordinates of the laser points into space rectangular coordinates by using coordinate conversion parameters;
Thirdly, convert the spatial rectangular coordinate into a panoramic pixel coordinate using the panorama GPS coordinate and the spatial attitude information; this coordinate is the image pixel coordinate corresponding to the laser point coordinate; the method specifically comprises the following steps:
3.1) converting the space rectangular coordinate into the coordinate of the integrated navigation system
Performing inverse operation according to a calculation formula of 1.1.4 steps of a forward interaction step of the technical scheme based on matrix operation by using a roll angle R, a pitch angle P, a yaw angle H and longitude L and latitude B under a spatial rectangular coordinate of a shooting point in inertial navigation data corresponding to the current panorama and combining a common earth surface coordinate system, and converting the spatial rectangular coordinate into a combined navigation system coordinate;
3.2) converting the coordinates of the integrated navigation system into the space coordinates of the panoramic ball
Using the spatial calibration parameters ΔX, ΔY, ΔZ and φ, ω, κ of the integrated navigation system, and based on the inverse of the matrix operation in step 1.1.3, construct the conversion relation between the integrated navigation system and the panorama sphere space coordinate system, and convert the integrated navigation system coordinate into the panorama sphere space coordinate;
3.3) converting the space coordinate of the panoramic ball into the pixel coordinate of the panoramic image
Let the radius of the panorama sphere space coordinate system be R and the panorama sphere space coordinate be (Xsp, Ysp, Zsp); then:
if Xsp > 0, Lon = π + arccos(Ysp/R);
if Xsp ≤ 0, Lon = π - arccos(Ysp/R);
Lat = arcsin(Zsp/R);
Then (Lon, Lat) is converted into (X, Y):
X=Lon/(π*2)
Y=(Lat+π/2)/π
Then (X*w, Y*h) is the panoramic image pixel coordinate, where w is the width of the panoramic image, i.e. the number of horizontal pixels, and h is the height of the panoramic image, i.e. the number of vertical pixels.
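The closed-form reverse conversion of step 3.3 translates directly into code; a sketch (function name illustrative):

```python
import math

def sphere_to_pixel(Xsp, Ysp, Zsp, R, w, h):
    """Panorama sphere space coordinate -> panoramic image pixel
    coordinate, per the Lon/Lat formulas above."""
    if Xsp > 0:
        Lon = math.pi + math.acos(Ysp / R)
    else:                                  # Xsp <= 0
        Lon = math.pi - math.acos(Ysp / R)
    Lat = math.asin(Zsp / R)
    X = Lon / (2 * math.pi)
    Y = (Lat + math.pi / 2) / math.pi
    return X * w, Y * h
```

With the embodiment's (Lon, Lat) = (1.4476, -0.0225), w = 8000 and h = 4000, this reproduces the pixel coordinate (1843.2, 1971.2) computed in the embodiment below.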
The invention has the following advantages:
the invention introduces the technical ideas of pixel coordinate and space rectangular coordinate transformation and space line segment construction based on respective characteristics of laser point cloud and panoramic image. When the method is used for forward interaction, the mapping object is confirmed from the panoramic image, and the method has the characteristics of intuition, clearness and definition. Calculating space rectangular coordinates of interaction points and shooting points in the panoramic image and converting the space rectangular coordinates into three-dimensional local coordinates based on space calibration parameters, the GPS coordinates of the panoramic points and space attitude information; and constructing a real three-dimensional line segment for carrying out spatial relationship calculation with the point cloud to obtain an appointed laser point, thereby finally determining a three-dimensional local coordinate. And during reverse interaction, a designated laser point is obtained by utilizing a buffer method, a coordinate conversion parameter is converted into a spatial rectangular coordinate, and a panoramic image pixel coordinate is calculated by utilizing a spatial calibration parameter, a panoramic point GPS coordinate and spatial attitude information, so that the corresponding direction of the laser point is displayed in the panoramic image.
Drawings
FIG. 1 is a schematic flow chart of an interactive mapping method based on a laser point cloud and a panoramic image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of pixel coordinates of a panoramic image according to an embodiment of the present invention;
FIG. 2.1 is a schematic diagram of an interactively acquired panoramic image according to the embodiment of the invention, here a panoramic image of the courtyard of an office building;
FIG. 2.2 is a coordinate diagram of a mark point on the national-flag flagpole in the panoramic image;
FIG. 2.3 is a schematic diagram of a panoramic ball spatial coordinate system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of spatial three-dimensional segment construction and projection buffer range calculation according to an embodiment of the present invention;
FIG. 3.1 shows the mapping coordinates corresponding to the flagpole image pixel coordinates obtained in the embodiment;
fig. 4 shows the flagpole laser point interactively searched on a two-dimensional map.
Detailed Description
The invention is described in further detail below with reference to the figures and a specific embodiment: the embodiment calculates the coordinates of a national-flag flagpole in a panoramic image of an office building courtyard.
Fig. 1 shows a flow diagram of the method: the left side shows the flow of locating from panoramic image pixel coordinates to laser point coordinates in forward interaction, and the right side shows the flow of locating from laser point coordinates to panoramic image pixel coordinates in reverse interaction. As seen from the figure, the forward interaction steps are as follows:
first, a spatial coordinate system of the panoramic image shown in fig. 2 is established, and the panoramic pixel coordinate P is calculatedr(X, Y) conversion to image space coordinates Pr' (X, Y, 0); wherein P isrThe (X, Y) coordinate system is a Cartesian rectangular coordinate system taking the lower left corner of the panoramic image as an origin;
as can be seen from FIGS. 2.1 and 2.2, 0 is shown in the panoramic image of the courtyard of an office building<X<8000, 8000 is the horizontal pixel number of the panoramic image; 0<Y<4000, 4000 is the longitudinal pixel number of the panoramic image; the pixel coordinate of the mast panoramic image obtained by interaction is Pr(1843,1971);
Define the panorama sphere space coordinate system shown in fig. 2.3, with the sphere center as origin and the sphere radius as R; establish the conversion relation between the image space coordinate system and the panorama sphere space coordinate system with reference to the conversion formulas in step 1.1.2 of the forward interaction steps, and convert the image space coordinate Pr'(X, Y, 0) into the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp) accordingly;
Acquire the spatial calibration parameters between the panoramic camera and the integrated navigation system; taking the counter-clockwise direction as positive, let the rotation matrices about the Z, X and Y axes be RZ, RX and RY, and the 6 exterior orientation elements of the sphere in the object space coordinate system be φ, ω, κ, ΔX, ΔY, ΔZ, where ΔX, ΔY, ΔZ are the three offsets along the X, Y, Z axes and φ, ω, κ are the three rotation angles around the Z, X, Y axes respectively; the conversion relation between panorama sphere space coordinates and integrated navigation system coordinates is given in step 1.1.3 of the forward interaction steps, and according to it the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp) is converted to the integrated navigation system coordinate Pimu(Ximu, Yimu, Zimu);
Acquire the inertial navigation information of the current panorama measured by the IMU, including the spatial attitude angles at the time of panorama shooting (roll angle R, pitch angle P, yaw angle H) and the longitude L and latitude B of the shooting point in the spatial rectangular coordinate system; here R = 1.5546, P = -8.0363, H = 70.2002, L = 120.102415, B = 35.999260; following the calculation of step 1.1.4 of the forward interaction steps, convert the integrated navigation system coordinate into the spatial rectangular coordinate (-2591047.759, 4469354.956, 3728163.821);
Using the coordinate conversion parameters as in step 1.1.5 of the forward interaction steps, convert the spatial rectangular coordinate (-2591047.759, 4469354.956, 3728163.821) into the three-dimensional local coordinate p1(509235.608, 3985461.338, 71.230); the conversion parameters are: WGS84 coordinate system, 3-degree zone, central meridian L0 = 120;
Given that the current panoramic image has width 8000 and height 4000 (both in pixels), the pixel coordinate of the panorama center point is (w/2, h/2), that is (4000, 2000); following the procedure of steps 1.1.1-1.1.4 of the forward interaction steps for converting panoramic image pixel coordinates into spatial rectangular coordinates, the panorama center pixel coordinate (4000, 2000) is converted into the spatial rectangular coordinate (-2591045.072, 4469353.617, 3728167.027) and then, according to step 1.1.5 of the forward interaction steps, finally converted with the coordinate conversion parameters into the three-dimensional local coordinate o1(509233.95, 3985465.40, 71.092);
In the second step, as shown in fig. 3 and following steps 2.1, 2.2, 2.3 and 2.4 of the forward interaction steps, construct from o1 and p1 the spatial three-dimensional straight line segment l' and calculate the plane projection buffer area based on this segment; acquire the laser point set PC within the buffer range;
In the third step, following steps 3.1 and 3.2 of the forward interaction steps, traverse the points in the laser point set to obtain the candidate point set whose distance to l' is below the threshold; further screen out the point in the set closest to the panoramic shooting point and, as shown in fig. 3.1, take its coordinates (509239.109, 3985452.509, 71.463) as the laser point coordinate corresponding to the panoramic image pixel coordinate;
the reverse interaction steps are as follows:
Firstly, interactively acquire the two-dimensional local coordinate pl; as indicated by the arrow in fig. 4, the three-dimensional local coordinates of the flagpole laser point interactively searched on the two-dimensional map are (509239.109, 3985452.509, 71.463); create a square buffer area centered on this point with side length d*4, where d, the thinning interval of the laser point cloud data, is 8 cm; perform a spatial query of the laser point cloud based on this buffer to obtain the laser point set C within the range;
Traverse each cloud point Pi (1 < i < N) in the set, calculate its distance to pl, and obtain the laser point P with the smallest distance value; read the time information carried by point P, and acquire from the inertial navigation information file the panoramic image containing that time together with its information, including the GPS coordinate of the panoramic shooting point and the spatial attitude information.
Secondly, using the coordinate conversion parameters of step 1.1.5, convert the three-dimensional local coordinate p1(509239.109, 3985452.509, 71.463) into the spatial rectangular coordinate (-2591047.759, 4469354.956, 3728163.821);
thirdly, converting the spatial rectangular coordinate into a panoramic pixel coordinate by using the panoramic GPS coordinate and the spatial attitude information, wherein the coordinate is an image pixel coordinate corresponding to the laser point coordinate; the method specifically comprises the following steps:
Using the roll angle R, pitch angle P and yaw angle H from the inertial navigation data corresponding to the current panorama, together with the longitude L and latitude B of the shooting point in the spatial rectangular coordinate system, and combining the common earth surface coordinate system, perform the inverse of the calculation of step 1.1.4 of the forward interaction steps to convert the spatial rectangular coordinate (-2591047.759, 4469354.956, 3728163.821) into the integrated navigation system coordinate;
using the spatial calibration parameters DeltaX, DeltaY, DeltaZ of the integrated navigation system,Based on the inverse operation of the matrix in the step 1.1.3, the conversion relation between the integrated navigation system and the panoramic ball space coordinate system is constructed, and the integrated navigation system coordinate is converted into the panoramic ball space coordinate; then the space coordinate of the panoramic ball is converted into the pixel coordinate of the panoramic image as follows:
let the radius of the panorama sphere space coordinate be R and the panorama sphere space coordinate be (X)sp,Ysp,Zsp) Then, then
If X issp>0, with Lon ═ π + arccos (Y)sp/R)
If X isspLess than or equal to 0, there is Lon ═ pi-arccos (Y)sp/R)
Lat=arcsin(Zsp/R),
Calculating to obtain (Lon, Lat) as (1.4476, -0.0225)
Then (Lon, Lat) is converted into (X, Y):
X=Lon/(π*2)=0.2304
Y=(Lat+π/2)/π=0.4928
(X*w, Y*h) is the panoramic image pixel coordinate corresponding to the laser point coordinate, where w is the image width, 8000, and h is the image height, 4000; the calculation gives:
0.2304*8000=1843.2,0.4928*4000=1971.2
Therefore, the pixel coordinates of the flagpole panoramic image are (1843, 1971).

Claims (1)

1. An interactive mapping method based on laser point cloud and panoramic image, characterized by comprising a forward interaction or/and reverse interaction mapping step, wherein the forward interaction step locates from panoramic image pixel coordinates to laser point coordinates, and the reverse interaction step locates from laser point coordinates to panoramic image pixel coordinates; wherein:
the forward interaction steps are as follows:
firstly, using the spatial attitude information and coordinate conversion parameters of the panoramic image, convert the image pixel coordinate into a three-dimensional local coordinate p1, and simultaneously calculate the three-dimensional local coordinate o1 of the panoramic shooting point; the method specifically comprises the following steps:
step 1.1: acquiring pixel coordinates of the panoramic image, and establishing a conversion relation between the pixel coordinates of the image and spatial rectangular coordinates:
step 1.1.1: convert the panoramic pixel coordinate Pr(X, Y) to the image space coordinate Pr'(X, Y, 0), where the Pr(X, Y) coordinate system is a Cartesian rectangular coordinate system with the lower-left corner of the panoramic image as origin; 0 < X < w, where w is the number of horizontal pixels of the panoramic image; 0 < Y < h, where h is the number of vertical pixels of the panoramic image;
step 1.1.2: convert the image space coordinate Pr'(X, Y, 0) into the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp)
Defining a panoramic ball space coordinate system, taking the center of a ball as an origin, setting the radius of the ball as R, and establishing a conversion relation between an image space coordinate system and the panoramic ball space coordinate system, wherein the conversion formula is as follows:
ψ=X/R;
θ=Y/R;
Xsp=R*sin(θ)*cos(ψ);
Ysp=R*sin(θ)*sin(ψ);
Zsp=R*cos(θ);
the above formulas convert the image space coordinate Pr'(X, Y, 0) into the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp);
Step 1.1.3: the space coordinate P of the panoramic ballsp(Xsp,Ysp,Zsp) Conversion to combined navigation system coordinates
acquire the spatial calibration parameters between the panoramic camera and the integrated navigation system; taking the counter-clockwise direction as positive, let the rotation matrices about the Z, X and Y axes be RZ, RX and RY; the 6 exterior orientation elements of the sphere in the object space coordinate system are φ, ω, κ, ΔX, ΔY, ΔZ, where ΔX, ΔY, ΔZ are the three offsets along the X, Y, Z axes and φ, ω, κ are the three rotation angles around the Z, X, Y axes respectively;
the conversion relationship between the panoramic ball space coordinate and the integrated navigation system coordinate is as follows:
[conversion matrix formula given as an image in the source]
through this conversion relationship, the panorama sphere space coordinate Psp(Xsp, Ysp, Zsp) is converted to the integrated navigation system coordinate Pimu(Ximu, Yimu, Zimu);
Step 1.1.4: will combine navigation system coordinates Pimu(Ximu,Yimu,Zimu) Conversion to spatial rectangular coordinates
acquire the inertial navigation information of the current panorama measured by the IMU, including the spatial attitude angles at the time of panorama shooting (roll angle R, pitch angle P, yaw angle H) and the longitude L and latitude B of the shooting point in the spatial rectangular coordinate system; R is the angle between the inertial navigation x axis and the horizontal direction, P is the angle between the inertial navigation y axis and the horizontal direction, and H is the angle between the inertial navigation heading and due north;
then denote R, P, H as r, p and y respectively; first rotate y around the z axis, then rotate p around the x axis, and finally rotate r around the y axis; then:
[rotation matrix formulas given as images in the source]
let the coordinates of the point in the common earth surface coordinate system be as recorded in the inertial navigation information; first rotate around the x axis and then around the z axis (the rotation angles and the corresponding formulas are given as images in the source); then:
[conversion formulas given as images in the source]
where a is the major semi-axis of the WGS84 ellipsoid parameters, 6378137 meters, and b is the minor semi-axis, 6356752.314 meters;
the result of the conversion is the spatial rectangular coordinate;
step 1.1.5: finally, convert the spatial rectangular coordinate into a three-dimensional local coordinate using the coordinate conversion parameters;
step 1.2: obtain the width w and height h of the current panoramic image (both in pixels); the pixel coordinate of the panorama center point is then (w/2, h/2); following the procedure of steps 1.1.1-1.1.4 for converting panoramic image pixel coordinates into spatial rectangular coordinates, first convert this pixel coordinate into a spatial rectangular coordinate, then convert it into the three-dimensional local coordinate o1 using the coordinate conversion parameters according to step 1.1.5;
in the second step, construct from o1 and p1 a spatial three-dimensional straight line segment l' and calculate a plane projection buffer area based on this segment; acquire the laser point set PC within the buffer range; the method specifically comprises the following steps:
step 2.1: using the two three-dimensional spatial points o1(Xo1, Yo1, Zo1) and p1(Xp1, Yp1, Zp1), construct a spatial straight line l; processing in vector form with o1 as the initial point of the vector and p1 as the pointing point, its unit direction vector v is calculated as:
v = ((Xp1 - Xo1)/Lo1p1, (Yp1 - Yo1)/Lo1p1, (Zp1 - Zo1)/Lo1p1)
where Lo1p1 is the three-dimensional spatial distance between o1 and p1;
step 2.2: based on the Z component of the direction vector of l, compute the designated line segment for the different cases; for a point P' on l:
if Zp1 ≥ Zo1, then:
P′=(Xo1+v[0]*Lv,Yo1+v[1]*Lv,Zo1+v[2]*Lv)
where Lv is the maximum search length;
if Zp1 < Zo1, then:
first calculate the ground elevation of the panoramic shooting point: Zg = Zo1 - Hc, where Hc is the camera height of the device; let plane Pg be the horizontal plane passing through the point (0, 0, Zg); P' is then the intersection point of l and Pg; further calculate the spatial distance LP'o1 between the current P' and o1; if LP'o1 > Lv, then
P′=(Xo1+v[0]*Lv,Yo1+v[1]*Lv,Zo1+v[2]*Lv)
take o1 and P' as the two vertices to form the line segment l';
step 2.3: project the three-dimensional line segment l' to obtain a two-dimensional plane line segment l''; obtain the line segment lv1 that passes through one end point of l'', is perpendicular to l'', has length d*4, and is bisected by that end point, where d is the thinning interval of the laser point cloud data;
the perpendicular segment l passing through the other end of l' is obtained using the same methodv2
the rectangle formed by the four end points of lv1 and lv2 serves as the plane projection buffer area of the line segment;
step 2.4: based on the rectangular buffer area, acquiring a laser point set PC in the plane range by using a space query method;
then traverse the points in the laser point set to obtain a candidate point set whose distance to l' is less than a threshold; further screen out the point in the set closest to the panoramic shooting point, and take its coordinate as the laser point coordinate corresponding to the panoramic image pixel coordinate; the method specifically comprises the following steps:
step 3.1: traverse each point pcp(Xpcp, Ypcp, Zpcp) in the laser point set PC and calculate its spatial straight-line distance Lpcp to the straight line l; if Lpcp < Ln, put the cloud point into the candidate point set PC'; where Ln is the distance threshold from a cloud point to l (a multiple of d, the thinning interval of the point cloud data; the exact formula appears as an image in the source);
step 3.2: traverse each point pcp' in the candidate point set PC' and calculate its spatial straight-line distance to o1; the point with the smallest distance to o1 is the final point, and its coordinate is the mapping coordinate;
the reverse interaction steps are as follows:
firstly, interactively acquire the three-dimensional local coordinate and the time attribute of a laser point, then determine the corresponding panoramic image and its spatial attitude information according to the time attribute value; the method specifically comprises the following steps:
step 1.1: interactively acquire a local two-dimensional coordinate pl; create a square buffer area centered on this point with side length d*4, and perform a spatial query of the laser point cloud based on this buffer to obtain the laser point set C within its range, where d is the thinning interval of the laser point cloud data;
step 1.2: traverse each cloud point Pi (1 < i < N) in the set, calculate its distance to pl, and obtain the laser point P with the smallest distance value; read the time information carried by point P, and acquire from the inertial navigation information file the panoramic image containing that time together with its information, including the GPS coordinate of the panoramic shooting point and the spatial attitude information;
secondly, converting the three-dimensional local coordinates of the laser points into space rectangular coordinates by using coordinate conversion parameters;
thirdly, converting the spatial rectangular coordinate into a panoramic pixel coordinate by using the panorama GPS coordinate and the spatial attitude information, this coordinate being the image pixel coordinate corresponding to the laser point coordinate; the method specifically comprises the following steps:
step 3.1: converting spatial rectangular coordinates into integrated navigation system coordinates
Performing inverse operation according to a calculation formula of 1.1.4 steps of a forward interaction step of the technical scheme based on matrix operation by using a roll angle R, a pitch angle P, a yaw angle H and longitude L and latitude B under a spatial rectangular coordinate of a shooting point in inertial navigation data corresponding to the current panorama and combining a common earth surface coordinate system, and converting the spatial rectangular coordinate into a combined navigation system coordinate;
step 3.2: converting the integrated navigation system coordinates into panorama sphere space coordinates
using the six spatial calibration parameters ΔX, ΔY, ΔZ, φ, ω, κ of the integrated navigation system, and based on the inverse of the matrix operation in step 1.1.3, construct the conversion relation between the integrated navigation system and the panorama sphere space coordinate system, and convert the integrated navigation system coordinates into panorama sphere space coordinates;
step 3.3: converting the space coordinate of the panoramic ball into the pixel coordinate of the panoramic image
let the radius of the panorama sphere space coordinate system be R and the panorama sphere space coordinate be (Xsp, Ysp, Zsp); then:
if Xsp > 0, Lon = π + arccos(Ysp/R);
if Xsp ≤ 0, Lon = π - arccos(Ysp/R);
Lat = arcsin(Zsp/R)
Then (Lon, Lat) is converted into (X, Y):
X=Lon/(π*2)
Y=(Lat+π/2)/π
then (X*w, Y*h) is the panoramic image pixel coordinate, where w is the width of the panoramic image, i.e. the number of horizontal pixels, and h is the height of the panoramic image, i.e. the number of vertical pixels.
CN201610740228.3A 2016-08-27 2016-08-27 Interactive mapping method based on laser point cloud and panoramic image Active CN106441242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610740228.3A CN106441242B (en) 2016-08-27 2016-08-27 Interactive mapping method based on laser point cloud and panoramic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610740228.3A CN106441242B (en) 2016-08-27 2016-08-27 Interactive mapping method based on laser point cloud and panoramic image

Publications (2)

Publication Number Publication Date
CN106441242A CN106441242A (en) 2017-02-22
CN106441242B true CN106441242B (en) 2018-10-09

Family

ID=58182302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610740228.3A Active CN106441242B (en) 2016-08-27 2016-08-27 Interactive mapping method based on laser point cloud and panoramic image

Country Status (1)

Country Link
CN (1) CN106441242B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106895825B (en) * 2017-02-23 2019-06-11 周良辰 The three-dimensional amount of geographic position of target object based on streetscape map calculates method and apparatus
CN106908043B (en) * 2017-02-23 2019-06-21 周良辰 The three-dimensional amount measuring method of geographic position of target object and height based on Streetscape picture
CN107135376A (en) * 2017-05-26 2017-09-05 北京天拓灵域网络科技有限公司 The real-time splicing processing method of multichannel ultrahigh resolution panoramic video
CN110660133B (en) * 2018-06-29 2022-11-29 百度在线网络技术(北京)有限公司 Three-dimensional rarefying method and device for electronic map
CN109242966B (en) * 2018-08-07 2022-07-05 北京道亨软件股份有限公司 3D panoramic model modeling method based on laser point cloud data
CN110910802B (en) * 2019-11-25 2023-06-09 北京京东方光电科技有限公司 Holographic display method, device and readable storage medium
CN111220993B (en) * 2020-01-14 2020-07-28 长沙智能驾驶研究院有限公司 Target scene positioning method and device, computer equipment and storage medium
CN111784826A (en) * 2020-07-14 2020-10-16 深圳移动互联研究院有限公司 Method and system for generating three-dimensional structure schematic diagram based on panoramic image
CN112132954B (en) * 2020-08-31 2024-02-27 青岛秀山移动测量有限公司 Distributed management method and system for three-dimensional laser point cloud entity object
CN112465948B (en) * 2020-11-24 2023-04-18 山东科技大学 Vehicle-mounted laser pavement point cloud rarefying method capable of retaining spatial features
CN114895796B (en) * 2022-07-15 2022-11-11 杭州易绘科技有限公司 Space interaction method and device based on panoramic image and application
CN117437289B (en) * 2023-12-20 2024-04-02 绘见科技(深圳)有限公司 Space calculation method based on multi-source sensor and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871075A (en) * 2013-12-30 2014-06-18 华中科技大学 Large ellipse remote sensing satellite and earth background relative motion estimation method
CN104019829A (en) * 2014-06-09 2014-09-03 武汉克利福昇科技有限责任公司 Vehicle-mounted panorama camera based on POS (position and orientation system) and external parameter calibrating method of linear array laser scanner
CN104156969A (en) * 2014-08-21 2014-11-19 重庆数字城市科技有限公司 Plane exploration method based on panoramic image depth map
CN104657968A (en) * 2013-11-25 2015-05-27 武汉海达数云技术有限公司 Automatic vehicle-mounted three-dimensional laser point cloud facade classification and outline extraction method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657968A (en) * 2013-11-25 2015-05-27 武汉海达数云技术有限公司 Automatic vehicle-mounted three-dimensional laser point cloud facade classification and outline extraction method
CN103871075A (en) * 2013-12-30 2014-06-18 华中科技大学 Large ellipse remote sensing satellite and earth background relative motion estimation method
CN104019829A (en) * 2014-06-09 2014-09-03 武汉克利福昇科技有限责任公司 Vehicle-mounted panorama camera based on POS (position and orientation system) and external parameter calibrating method of linear array laser scanner
CN104156969A (en) * 2014-08-21 2014-11-19 重庆数字城市科技有限公司 Plane exploration method based on panoramic image depth map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ground filtering of vehicle-borne laser point clouds using an improved mathematical morphology method; Lu Xiushan, Liu Rufei, et al.; Geomatics and Information Science of Wuhan University; 2014-05-31; Vol. 39, No. 5; full text *

Also Published As

Publication number Publication date
CN106441242A (en) 2017-02-22

Similar Documents

Publication Publication Date Title
CN106441242B (en) Interactive mapping method based on laser point cloud and panoramic image
US8610708B2 (en) Method and apparatus for three-dimensional image reconstruction
AU2008322565B9 (en) Method and apparatus of taking aerial surveys
CN109115186B (en) 360-degree measurable panoramic image generation method for vehicle-mounted mobile measurement system
Nüchter et al. Heuristic-based laser scan matching for outdoor 6D SLAM
WO2020062434A1 (en) Static calibration method for external parameters of camera
CN103900539A (en) Target positioning method for panoramic overhead cube imaging
CN112146629A (en) Multi-angle close-up photography track and attitude planning method
CN112184786B (en) Target positioning method based on synthetic vision
CN106023207B (en) It is a kind of to be enjoyed a double blessing the Municipal Component acquisition method of scape based on traverse measurement system
CN104655106B (en) Autonomous positioning based on GPS RTK and full-view image orients plotting method
CN112614219B (en) Space coordinate conversion method based on identification points for map navigation positioning
Liu et al. A high-accuracy pose measurement system for robotic automated assembly in large-scale space
CN106338286A (en) Movable base measurement method
Gao et al. MGG: Monocular global geolocation for outdoor long-range targets
CN104063499A (en) Space vector POI extracting method based on vehicle-mounted space information collection
Ke et al. 3D scene localization and mapping based on omnidirectional SLAM
CN109241233A (en) A kind of coordinate matching method and device
CN105427371B (en) The method that the elemental areas such as Drawing Object are shown is kept in a kind of three-dimensional perspective projection scene
CN116124094A (en) Multi-target co-location method based on unmanned aerial vehicle reconnaissance image and combined navigation information
CN106296657A (en) A kind of method video camera being carried out Fast Calibration based on geometrical principle
Gao et al. Real‐time mosaic of multiple fisheye surveillance videos based on geo‐registration and rectification
CN116907511B (en) Method for converting pipeline coordinates into image coordinates
CN108873924A (en) Airborne video ground sweep area calculation method
CN114459461B (en) Navigation positioning method based on GIS and real-time photoelectric video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant