CN104748728A - Intelligent machine attitude matrix calculating method and method applied to photogrammetry - Google Patents


Info

Publication number
CN104748728A
CN104748728A
Authority
CN
China
Prior art keywords
intelligent machine
point
camera
matrix
coordinate
Prior art date
Legal status
Granted
Application number
CN201410157539.8A
Other languages
Chinese (zh)
Other versions
CN104748728B (en)
Inventor
刘进
陶重芝
Current Assignee
Wuhan Chuxiong Gaojing Technology Co., Ltd.
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201410157539.8A priority Critical patent/CN104748728B/en
Publication of CN104748728A publication Critical patent/CN104748728A/en
Application granted granted Critical
Publication of CN104748728B publication Critical patent/CN104748728B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Navigation (AREA)
  • Processing Or Creating Images (AREA)
  • Measuring Magnetic Variables (AREA)

Abstract

The invention discloses an intelligent machine attitude matrix calculating method and a method applying it to photogrammetry. The attitude matrix calculating method mainly comprises the following steps: the intelligent machine measures the components $a_x$, $a_y$ and $a_z$ of gravitational acceleration along its three axes x, y and z; the intelligent machine acquires the azimuth angle $\varphi$; and the attitude matrix $R_g$ is calculated from these quantities. According to the invention, the attitude of the camera on the intelligent machine can be obtained from the attitude matrix $R_g$, and the methods can be applied to photogrammetry, panorama measurement, computer vision, LBS services, virtual reality simulation, attitude measurement for wearable equipment, and other fields.

Description

Intelligent machine attitude matrix computing method and its application to photogrammetry
Technical field
The present invention relates to the field of information technology, and specifically to an intelligent machine attitude matrix computing method and to camera monocular and multi-view visual positioning methods.
Background art
Attitude measurement for intelligent machines and their cameras is of great significance: computer vision, photogrammetry and other engineering measurement fields are all in urgent need of cheap, lightweight equipment for measuring attitude.
Because the real scene seen from the same position differs as the intelligent machine attitude differs, the accuracy of the intelligent machine attitude determines the navigation accuracy, photogrammetric accuracy, three-dimensional simulation precision and augmented reality information-overlay precision of the intelligent machine; accurate measurement of the intelligent machine attitude is therefore significant.
Traditional intelligent machine attitude determination mainly uses the single-axis course-angle data of the attitude sensor, and the system places particular requirements on the intelligent machine attitude, for example that the machine be placed parallel to the ground, or perpendicular to it. But the attitude of a hand-held intelligent machine in augmented reality applications is ever-changing and is seldom perfectly parallel or perpendicular to the ground, which makes the navigation orientation inaccurate. Moreover, attitude determination error introduces information-overlay error and degrades the augmented reality user experience, so an attitude determination algorithm that accounts for rotation of the intelligent machine about all three X-Y-Z axes is urgently needed.
Traditional precise attitude detection technology is mainly applied to aircraft, automobiles and other large equipment; it pursues very high precision for military and industrial fields, and is not only very heavy but also very expensive. The present invention instead mainly considers realizing attitude determination for the general public with cheap intelligent machine terminals.
Summary of the invention
The technical problem to be solved by the present invention is that traditional precise attitude detection technology is not suitable for intelligent machine terminals, while existing attitude determination methods for intelligent machine terminals have large measurement error and poor user experience. The invention provides attitude matrix computing methods of high measurement accuracy suitable for common intelligent machine terminals, together with camera monocular and multi-view visual positioning methods.
The technical solution adopted for the present invention to solve the technical problems is:
An intelligent machine attitude matrix computing method is provided, comprising the following steps:

From the three values $\{v_{ax}', v_{ay}', v_{az}'\}$ detected by the gravity accelerometer on the intelligent machine, calculate $a_x$, $a_y$ and $a_z$:

$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R_a^T \begin{bmatrix} v_{ax}' \\ v_{ay}' \\ v_{az}' \end{bmatrix},$$

or filter $\{v_{ax}', v_{ay}', v_{az}'\}$ to obtain $\{v_{ax}, v_{ay}, v_{az}\}$ and calculate

$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R_a^T \begin{bmatrix} v_{ax} \\ v_{ay} \\ v_{az} \end{bmatrix},$$

where $R_a$ is the pose transformation matrix of the gravity accelerometer relative to the intelligent machine.

Obtain the azimuth $\varphi$ from the intelligent machine magnetometer by a trigonometric function, or obtain the azimuth from the direction sensor.

Calculate the attitude matrix $R_g$ from $\varphi$ and $R_\theta$ by formula (1), wherein

$$R_\theta = \begin{bmatrix} \dfrac{s\,a_z}{L\sqrt{1-(a_y/L)^2}} & \dfrac{-s\,a_x a_y}{L^2\sqrt{1-(a_y/L)^2}} & \dfrac{a_x}{L} \\ 0 & s\sqrt{1-(a_y/L)^2} & \dfrac{a_y}{L} \\ \dfrac{-s\,a_x}{L\sqrt{1-(a_y/L)^2}} & \dfrac{-s\,a_z a_y}{L^2\sqrt{1-(a_y/L)^2}} & \dfrac{a_z}{L} \end{bmatrix}, \qquad L = \sqrt{a_x^2+a_y^2+a_z^2},$$

and $s = -1$ when $a_z < 0$, $s = 1$ when $a_z \ge 0$.

The attitude matrix $R_g$ obtained by formula (1) fully describes the attitude of the intelligent machine: the 1st row of $R_g$ is the unit vector of the intelligent machine x-axis under the local coordinate system, the 2nd row is the unit vector of the intelligent machine y-axis under the local coordinate system, and the 3rd row is the unit vector of the intelligent machine z-axis under the local coordinate system. The local coordinate system has its z-axis pointing away from the earth along the reverse gravity direction, with either the x-axis pointing north and the y-axis pointing west, or the x-axis pointing east and the y-axis pointing north.
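For concreteness, a minimal numerical sketch of the $R_\theta$ step follows (Python with numpy). Formula (1) itself, which combines $R_\theta$ with the azimuth rotation, is not reproduced in this text, so only the tilt part is computed here:

```python
import numpy as np

def tilt_matrix(ax, ay, az):
    """R_theta from the gravity components (a_x, a_y, a_z), per the formula above."""
    L = np.sqrt(ax**2 + ay**2 + az**2)           # norm of the gravity vector
    s = -1.0 if az < 0 else 1.0                   # sign term s
    c = np.sqrt(1.0 - (ay / L)**2)                # sqrt(1 - (a_y/L)^2)
    return np.array([
        [ s * az / (L * c), -s * ax * ay / (L**2 * c), ax / L],
        [ 0.0,               s * c,                    ay / L],
        [-s * ax / (L * c), -s * az * ay / (L**2 * c), az / L],
    ])
```

The returned matrix is orthonormal, with its third column equal to the unit gravity vector, as the formula requires.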
The present invention also provides a camera multi-view visual positioning method, comprising the following steps:

Step 1:

Take the same object X from N geographic position points, where N ≥ 2, and measure the following quantities at the i-th observation point, where i ∈ {1, 2, ..., N}:

(1) the position $(u_i, v_i)$ at which the object appears in the video frame image;
(2) the geographic position $(\varphi_i, \theta_i, h_i)$ of the shooting point: respectively the longitude, latitude and height of the i-th shooting point;
(3) the attitude matrix $R_i$ of the i-th shooting point, recorded by the method described above;
(4) the inner parameters of the i-th shooting point: focal length $f_i$ and principal point coordinates $(c_{xi}, c_{yi})$;

Step 2:

Calculate the N attitude matrices relative to the earth's core, $R_{0i} = R_i R_{vi}$, $i = 1 \sim N$, where $R_{vi}$ is the rotation determined by the longitude and latitude of shooting point i that relates the local coordinate system at that point to the geocentric coordinate system;

Step 3:

Using the conversion formula, obtain the geocentric coordinates $T_{0i} = \{T_{xi}, T_{yi}, T_{zi}\}^T$, $i = 1 \sim N$, of the camera optical centres of the N shooting points, where $\{\Delta T_{xi}, \Delta T_{yi}, \Delta T_{zi}\}^T = R_{vi}^T \Delta T_i$; $\Delta T_i$ is the camera optical-centre offset vector calculated by the calibration method described below; $F_x(\varphi, \theta, h)$, $F_y(\varphi, \theta, h)$ and $F_z(\varphi, \theta, h)$ are the functions converting longitude, latitude and height into the X, Y and Z coordinates of the geocentric coordinate system;

Step 4

The attitude matrix $R_{0N}$ of shooting point N, its position $[T_{xN}, T_{yN}, T_{zN}]$, the position $(u_N, v_N)$ shown in the video frame image, the camera focal length $f_N$ in pixels and the principal point coordinates $(c_{xN}, c_{yN})$ all satisfy the following equations:

$$u_i = c_{xi} + f_i\,\frac{r_{0i}^{11}(X_x - T_{xi}) + r_{0i}^{12}(X_y - T_{yi}) + r_{0i}^{13}(X_z - T_{zi})}{r_{0i}^{31}(X_x - T_{xi}) + r_{0i}^{32}(X_y - T_{yi}) + r_{0i}^{33}(X_z - T_{zi})}$$

$$v_i = c_{yi} + f_i\,\frac{r_{0i}^{21}(X_x - T_{xi}) + r_{0i}^{22}(X_y - T_{yi}) + r_{0i}^{23}(X_z - T_{zi})}{r_{0i}^{31}(X_x - T_{xi}) + r_{0i}^{32}(X_y - T_{yi}) + r_{0i}^{33}(X_z - T_{zi})}$$

where $r_{0i}^{ab}$ denotes the element in row a, column b of the attitude matrix $R_{0i}$. With $i = 1 \sim N$, 2N equations are assembled in total and solved for the three unknowns $(X_x, X_y, X_z)$; or the photogrammetric forward intersection method is adopted to calculate object X, obtaining the geocentric coordinates $(X_x, X_y, X_z)$.
The present invention also provides a monocular single-point ground positioning method, comprising the steps:

Step 1

Calculate the attitude matrix of the camera according to the above method:

$$R = \Delta R \times R_g = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix};$$

Step 2

Choose a point on the ground plane in the camera video, record the position (u, v) at which it appears in the video frame image, obtain in advance the camera inner parameter focal length f, and record the principal point coordinates $(c_x, c_y)$; the coordinates (X, Y) of this point in the local coordinate system are then computed by formula, where h is the height of the camera above the ground;

Step 3

Convert the local coordinates (X, Y, -h) into longitude and latitude by geodetic survey methods:

$$T_{geo} = T_s + R_v^T([X, Y, -h]^T + \Delta T)$$

where $T_s$ is the intelligent machine geocentric coordinate, $\Delta T$ is the displacement vector of the intelligent machine camera optical centre relative to the GPS positioning point recorded by the above method, $\theta_s$ is the intelligent machine latitude and $\varphi_s$ the intelligent machine longitude; finally, from the geocentric coordinate $T_{geo}$, the longitude and latitude corresponding to $T_{geo}$ are obtained by the geodesy formulas.
The present invention also provides a method of monocular two-point measurement of the position and height of an object perpendicular to the ground, comprising the following steps:

Step 1

Take one image of the object perpendicular to the ground with the intelligent machine camera, and measure and calculate the following quantities:

(1) calculate the attitude matrix $R = \Delta R \times R_g$ of the camera according to the above method;
(2) measure the position $(u_\perp, v_\perp)$ at which the intersection of the object with the ground appears in the video frame image;
(3) measure the position $(u_h, v_h)$ at which the top of the object perpendicular to the ground appears in the video frame image;
(4) obtain the camera inner parameter focal length f and principal point coordinates $(c_x, c_y)$;
(5) obtain the camera height h above ground;

Step 2

Calculate the following quantities:

$$z_\perp = \frac{-h}{r_{13}\dfrac{u_\perp - c_x}{f} + r_{23}\dfrac{v_\perp - c_y}{f} + r_{33}}$$

$$p_\perp = r_{11}\frac{u_\perp - c_x}{f} + r_{21}\frac{v_\perp - c_y}{f} + r_{31}$$

$$q_\perp = r_{12}\frac{u_\perp - c_x}{f} + r_{22}\frac{v_\perp - c_y}{f} + r_{32}$$

$$p_h = r_{11}\frac{u_h - c_x}{f} + r_{21}\frac{v_h - c_y}{f} + r_{31}$$

$$q_h = r_{12}\frac{u_h - c_x}{f} + r_{22}\frac{v_h - c_y}{f} + r_{32}$$

$$r_h = r_{13}\frac{u_h - c_x}{f} + r_{23}\frac{v_h - c_y}{f} + r_{33}$$

Step 3

Solve the matrix equation below for [X Y H]:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & -p_h/r_h \\ 0 & 1 & -q_h/r_h \end{bmatrix}\begin{bmatrix} X \\ Y \\ H \end{bmatrix} = \begin{bmatrix} z_\perp p_\perp \\ z_\perp q_\perp \\ 0 \\ 0 \end{bmatrix}$$

where (X, Y) is the coordinate of the object under the local coordinate system and H is the height of the object perpendicular to the ground;

Step 4

Convert the local coordinates (X, Y, H) of the highest point of the object into longitude and latitude by geodetic survey methods: $T_{geo} = T_s + R_v^T([X, Y, H]^T + \Delta T)$, where $T_s$ is the intelligent machine geocentric coordinate, $\Delta T$ is the displacement vector of the intelligent machine camera optical centre relative to the GPS positioning point, calculated by the calibration method described below, $\theta_s$ is the intelligent machine latitude and $\varphi_s$ the intelligent machine longitude; finally, from the geocentric coordinate $T_{geo}$, the longitude and latitude corresponding to $T_{geo}$ are obtained by the geodesy formulas.
The present invention also provides an intelligent machine panoramic image generation method, comprising the following steps:

Step 1: the intelligent machine takes n images at a certain fixed point with different attitudes, and the method of claim 6 is adopted to obtain their attitude matrices, respectively $R_i$, where i is an integer between 0 and n-1 and n is an integer greater than or equal to 1;

Step 2: allocate a panoramic image of width w and height h;

Step 3: scan each pixel of the panoramic image, and for each pixel (x, y) of the panorama carry out the following processes (1) and (2):

(1) the horizontal coordinate x is converted to a longitude angle and the vertical coordinate y is converted to the angle θ, where w and h are the width and height of the panoramic image;

(2) according to the pinhole imaging formula with

$$K_i = \begin{bmatrix} f_i & 0 & c_{ix} \\ 0 & f_i & c_{iy} \\ 0 & 0 & 1 \end{bmatrix},$$

where $f_i$ is the focal length of camera i in pixels and $(c_{ix}, c_{iy})$ is the principal point of camera i, use $R_i$ to map the panorama direction to the image coordinates $(u_i, v_i)$ of each camera i; if $(u_i, v_i)$ lies within the image range, the color of image i at $(u_i, v_i)$ is taken out and rendered with weighting into the panorama at (x, y), the weight being related to the shortest distance from $(u_i, v_i)$ to the image border, with shorter distance giving smaller weight; if the $(u_i, v_i)$ of no camera lies within its image range, transparent color or background color is rendered at (x, y).

Once every pixel of the panoramic image has been scanned and filled with its corresponding color, the rendering of the panoramic image is complete.

If the $R_i$ matrices are inaccurate, they are taken as initial values and optimized with the LM (Levenberg-Marquardt) adjustment algorithm from computer vision, so as to minimize the gray-value differences of corresponding pixels in overlapping regions.

Alternatively, in step 2 all color values of the panoramic image are first cleared to 0, and in step 3 the corresponding regions of the panoramic image are scanned in n passes for the n images, the color values being accumulated into the panorama pixels pass by pass.
Step 4: three-dimensional navigation and retrieval/positioning with the panoramic image

(A) Intelligent machine attitude control for browsing the three-dimensional panoramic image

Taking the sphere centre as the observation point in the virtual scene, the inner wall of a sphere is rendered using the panorama as texture; the intelligent machine camera observation direction is set to $\{r_{31}, r_{32}, r_{33}\}$ and its own upward direction to $\{-r_{21}, -r_{22}, -r_{23}\}$, where $r_{ij}$ is the element in row i, column j of the attitude matrix R.

(B) Panorama database building, retrieval and positioning

To use storage space efficiently, besides the formula adopted in step 3 (1), the Mercator projection formula can also be used to convert the panoramic image, keeping the conversion used for database building consistent with that used for retrieval.
Database building

Extract the feature point set of each panoramic image and establish a feature point database associated with the panoramic images; each feature point in the database points to the panoramic image it belongs to.

Retrieval

Synthesize the c ≥ 1 images to be retrieved, taken at the same place, into a panoramic image and extract the feature points of its rendered region; for each feature point, query the database for the panoramic image pointed to by its most similar feature point, cast votes weighted by feature-point similarity for the panoramic images pointed to in the database, and sort by the vote sums.

Correlation matching is then performed between the rendered region of the panorama to be retrieved and the pixel set of the corresponding region of each panorama obtained by the voting; the larger the correlation coefficient, the more similar. Output the results that rank high in the voting and whose correlation coefficient is greater than a preset value. If c = 1, the database panorama can also be mapped into the single-shot image area according to the single-shot attitude R and correlation-matched against the single-shot image.

Positioning

When GPS is known, the sub feature database composed of the panoramic images of the region near the GPS position is retrieved;

the server database stores each panoramic image bound together with its geographic position coordinates; if the retrieval result is the image numbered i*, the geographic position corresponding to i* can be found in the database.
The beneficial effects produced by the present invention are: the invention provides an intelligent machine and camera attitude measurement method; with the intelligent machine placed at any attitude in three-dimensional space, the user can obtain its attitude matrix $R_g$, and further obtain the attitude of the camera on the intelligent machine; the methods can be applied to photogrammetry, computer vision, LBS services, virtual reality simulation, augmented reality, attitude sensing of real-scene games and wearable devices, and other fields.
Further, the present invention not only considers the azimuth obtained from values[0] of the direction sensor; when other sensors are present, such as the gravity accelerometer, magnetometer and gyroscope, the data captured by these attitude sensors are also taken into account, making the detected attitude more accurate, while a rough attitude can still be obtained without these sensors. The attitude of the intelligent machine plays an important role in panoramic information overlay. The present invention can be easily implemented on intelligent machine systems such as Android and Apple's iOS.
Brief description of the drawings
The invention will be further described below in conjunction with the drawings and embodiments; in the drawings:
Fig. 1 is a vector plot of the $R_g$ matrix in an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein serve only to explain the present invention and are not intended to limit it.
In the present invention, the meaning of "intelligent machine" includes cell phone systems with built-in cameras and video, camera systems with embedded network IP, iPhone, tablet computers, and the various systems implanted with both a video camera and network IP.
The imaging systems whose attitude is to be determined in the present invention include all devices, modules and equipment with imaging functions mounted on the intelligent machine, such as cameras, area-array or line-scan cameras, infrared cameras, camera-laser 3D detection devices, 3D cameras, laser detection equipment, Kinect devices, etc.
The attitude matrix of the intelligent machine in the local coordinate system is

$$R_g = \begin{bmatrix} r_{g11} & r_{g12} & r_{g13} \\ r_{g21} & r_{g22} & r_{g23} \\ r_{g31} & r_{g32} & r_{g33} \end{bmatrix} = \begin{bmatrix} r_{gx} \\ r_{gy} \\ r_{gz} \end{bmatrix};$$

if the intelligent machine is fixed on a certain device, the attitude matrix $R_g$ output by the intelligent machine directly reflects the attitude of that device.

The $R_g$ matrix is a 3x3 orthonormal matrix composed of three rows, each row a 3-vector; the three vectors are respectively $r_{gx}$, $r_{gy}$ and $r_{gz}$, and as shown in Figure 1 they are the unit vectors of the intelligent machine's own three axes under the local coordinate system: the first row $r_{gx}$ is the unit vector of the machine's own x-axis under the local coordinate system, the second row $r_{gy}$ that of its y-axis, and the third row $r_{gz}$ that of its z-axis.

The intelligent machine attitude matrix $R_g$ is relative to the local coordinate system, whose Z-axis is aligned with the reverse gravity direction. If the intelligent machine magnetometer's reference direction is north, then north is the local X-axis and west the local Y-axis; if the reference direction is east, then east is the local X-axis and north the local Y-axis; if south, then south is the local X-axis and east the local Y-axis; if west, then west is the local X-axis and south the local Y-axis.
The embodiments of the present invention provide two intelligent machine attitude determination methods: method (1) uses the accelerometer together with the magnetometer and gyroscope; method (2) uses the rotation vector sensor to realize intelligent machine attitude determination. This solves omnidirectional three-axis attitude determination, which has important applications in photogrammetry, GIS services, augmented reality, simulation, games and other fields.

The intelligent machine attitude matrix $R_g$ can be obtained by either of the following two methods:

Method (1): using the gravity accelerometer and magnetometer

If the intelligent machine has a gravity accelerometer, and has a magnetometer or direction sensor, $R_g$ is calculated with formula (1), wherein

$$R_\theta = \begin{bmatrix} \dfrac{s\,a_z}{L\sqrt{1-(a_y/L)^2}} & \dfrac{-s\,a_x a_y}{L^2\sqrt{1-(a_y/L)^2}} & \dfrac{a_x}{L} \\ 0 & s\sqrt{1-(a_y/L)^2} & \dfrac{a_y}{L} \\ \dfrac{-s\,a_x}{L\sqrt{1-(a_y/L)^2}} & \dfrac{-s\,a_z a_y}{L^2\sqrt{1-(a_y/L)^2}} & \dfrac{a_z}{L} \end{bmatrix}.$$
$a_x$, $a_y$ and $a_z$ are calculated as follows: from the three values $\{v_{ax}', v_{ay}', v_{az}'\}$ detected by the gravity accelerometer on the intelligent machine, calculate

$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R_a^T \begin{bmatrix} v_{ax}' \\ v_{ay}' \\ v_{az}' \end{bmatrix};$$

or filter $\{v_{ax}', v_{ay}', v_{az}'\}$ to obtain $\{v_{ax}, v_{ay}, v_{az}\}$ and calculate

$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R_a^T \begin{bmatrix} v_{ax} \\ v_{ay} \\ v_{az} \end{bmatrix},$$

where $R_a$ is the pose transformation matrix of the gravity accelerometer relative to the intelligent machine.
When a new value $\{v_{ax}', v_{ay}', v_{az}'\}$ is detected, the filtering may proceed as:

$\alpha v_{ax}' + (1-\alpha) v_{ax}$ replaces the original $v_{ax}$;
$\alpha v_{ay}' + (1-\alpha) v_{ay}$ replaces the original $v_{ay}$;
$\alpha v_{az}' + (1-\alpha) v_{az}$ replaces the original $v_{az}$;

where α is a value between [0, 1].
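A small sketch of this exponential smoothing step (the value α = 0.2 below is only an illustrative choice within the stated [0, 1] range):

```python
class LowPassFilter3:
    """Exponential smoothing of raw 3-axis sensor samples:
    v <- alpha * v_new + (1 - alpha) * v, with alpha in [0, 1]."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.v = None

    def update(self, v_new):
        if self.v is None:
            self.v = list(v_new)              # first sample initializes the state
        else:
            a = self.alpha
            self.v = [a * n + (1 - a) * o for n, o in zip(v_new, self.v)]
        return self.v
```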
$R_a$ is the pose transformation matrix of the gravity sensor relative to the intelligent machine; for most smartphones

$$R_a = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}.$$

$L = \sqrt{a_x^2 + a_y^2 + a_z^2}$; when $a_z < 0$, $s = -1$; when $a_z \ge 0$, $s = 1$.
Using the direction sensor: the azimuth output value of the direction sensor in the intelligent machine is substituted into formula (1) to calculate $R_g$; the 1st value output by the direction sensor is the azimuth, the 2nd is the pitch angle and the 3rd is the roll angle.
Using the magnetometer sensor: from the detected magnetometer vector $\{v_{mx}', v_{my}', v_{mz}'\}$, calculate

$$\begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix} = R_m^T \begin{bmatrix} v_{mx}' \\ v_{my}' \\ v_{mz}' \end{bmatrix};$$

or filter $\{v_{mx}', v_{my}', v_{mz}'\}$ to obtain $\{v_{mx}, v_{my}, v_{mz}\}$ and calculate

$$\begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix} = R_m^T \begin{bmatrix} v_{mx} \\ v_{my} \\ v_{mz} \end{bmatrix},$$

where $R_m$ is the pose transformation matrix of the magnetometer sensor relative to the intelligent machine.

When a new value $\{v_{mx}', v_{my}', v_{mz}'\}$ is detected, the magnetometer signal may be filtered as follows:

$\alpha_m v_{mx}' + (1-\alpha_m) v_{mx}$ replaces the original $v_{mx}$;
$\alpha_m v_{my}' + (1-\alpha_m) v_{my}$ replaces the original $v_{my}$;
$\alpha_m v_{mz}' + (1-\alpha_m) v_{mz}$ replaces the original $v_{mz}$;

where $\alpha_m$ is a value between [0, 1].

Then calculate

$$\begin{bmatrix} m_{0x} \\ m_{0y} \\ m_{0z} \end{bmatrix} = R_\theta^T \begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix}$$

and compute the azimuth $\varphi$ from the leveled components by a trigonometric function; substituting $\varphi$ into formula (1) then yields the attitude matrix $R_g$.
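A sketch of the azimuth computation; the patent's exact trigonometric expression appears only as an image in the original, so the usual atan2 form on the leveled components $m_{0x}$, $m_{0y}$ is assumed here:

```python
import numpy as np

def azimuth_from_magnetometer(R_theta, m):
    """Level the magnetic vector with R_theta and take the azimuth.
    The atan2 form and its sign convention are assumptions; the patent's own
    trigonometric formula is not reproduced in this text."""
    m0 = R_theta.T @ np.asarray(m, float)    # leveled magnetic vector (m0x, m0y, m0z)
    return np.arctan2(m0[1], m0[0])          # assumed convention
```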
The above intelligent machine attitude matrix $R_g$ can also be computed iteratively with a gyroscope. After $R_g$ has first been calculated by formula (1), if a gyroscope is present, the following iterative method can be adopted to obtain the $R_g$ matrix:

1st step: set the initial values of $q_0$, $q_1$, $q_2$, $q_3$

Use formula (1) to calculate an initial $R_g$, then convert the $R_g$ matrix into the four quaternion elements $q_0$, $q_1$, $q_2$, $q_3$;

take the quaternion $q_0$, $q_1$, $q_2$, $q_3$ as the initial value of the following iterative computation.
2nd step: set the original values of exInt, eyInt, ezInt to 0:

exInt = 0, eyInt = 0, ezInt = 0;

3rd step: from the received magnetometer vector $\{m_x, m_y, m_z\}$, obtain the corrected magnetic vector $\{w_x, w_y, w_z\}$. First replace the vector $\{m_x, m_y, m_z\}$ by its normalized version $\{m_x, m_y, m_z\}/\sqrt{m_x^2+m_y^2+m_z^2}$,

and obtain the vector $\{b_x, 0, b_z\}$ as follows:

$h_x = 2 m_x (0.5 - q_2 q_2 - q_3 q_3) + 2 m_y (q_1 q_2 - q_0 q_3) + 2 m_z (q_1 q_3 + q_0 q_2)$;
$h_y = 2 m_x (q_1 q_2 + q_0 q_3) + 2 m_y (0.5 - q_1 q_1 - q_3 q_3) + 2 m_z (q_2 q_3 - q_0 q_1)$;
$h_z = 2 m_x (q_1 q_3 - q_0 q_2) + 2 m_y (q_2 q_3 + q_0 q_1) + 2 m_z (0.5 - q_1 q_1 - q_2 q_2)$;
$b_x = \sqrt{h_x^2 + h_y^2}$;
$b_z = h_z$;

then transform into the corrected magnetic vector $\{w_x, w_y, w_z\}$:

$w_x = 2 b_x (0.5 - q_2 q_2 - q_3 q_3) + 2 b_z (q_1 q_3 - q_0 q_2)$;
$w_y = 2 b_x (q_1 q_2 - q_0 q_3) + 2 b_z (q_0 q_1 + q_2 q_3)$;
$w_z = 2 b_x (q_0 q_2 + q_1 q_3) + 2 b_z (0.5 - q_1 q_1 - q_2 q_2)$;
4th step: from the received gravity accelerometer data $a_x$, $a_y$, $a_z$ and $\{w_x, w_y, w_z\}$, obtain the error vector $\{e_x, e_y, e_z\}$ and calculate exInt, eyInt, ezInt.

First replace the vector $\{a_x, a_y, a_z\}$ by its normalized version $\{a_x, a_y, a_z\}/\sqrt{a_x^2+a_y^2+a_z^2}$;

$v_x = 2(q_1 q_3 - q_0 q_2)$;
$v_y = 2(q_0 q_1 + q_2 q_3)$;
$v_z = q_0 q_0 - q_1 q_1 - q_2 q_2 + q_3 q_3$;
$e_x = (a_y v_z - a_z v_y) + (m_y w_z - m_z w_y)$;
$e_y = (a_z v_x - a_x v_z) + (m_z w_x - m_x w_z)$;
$e_z = (a_x v_y - a_y v_x) + (m_x w_y - m_y w_x)$;
Calculate the error accumulation values:

exInt is replaced by exInt + $e_x$ × Ki;
eyInt is replaced by eyInt + $e_y$ × Ki;
ezInt is replaced by ezInt + $e_z$ × Ki;

where Ki is an adjustable positive coefficient, chosen arbitrarily between 0.00001 and 0.5.
5th step: correct the gyro data $\{g_{x0}, g_{y0}, g_{z0}\}$ with the error vector $\{e_x, e_y, e_z\}$ and the accumulation values. Supposing the intelligent machine reads a group of current gyro data $\{g_{x0}, g_{y0}, g_{z0}\}$:

$g_x = g_{x0}$ + Kp × $e_x$ + exInt;
$g_y = g_{y0}$ + Kp × $e_y$ + eyInt;
$g_z = g_{z0}$ + Kp × $e_z$ + ezInt;

where Kp is an adjustable positive coefficient, chosen arbitrarily between 0.000001 and 20.0.
6th step: revise the quaternion with the gyro data $\{g_x, g_y, g_z\}$

As gyro data $\{g_x, g_y, g_z\}$ are continually received, the four quaternion elements are revised as follows, where halfT is the correction cycle, halfT = 0.00001 ~ 10.0:

$q_0$ is replaced by $q_0 + (-q_1 g_x - q_2 g_y - q_3 g_z)$ × halfT;
$q_1$ is replaced by $q_1 + (q_0 g_x - q_3 g_y + q_2 g_z)$ × halfT;
$q_2$ is replaced by $q_2 + (q_3 g_x + q_0 g_y - q_1 g_z)$ × halfT;
$q_3$ is replaced by $q_3 + (-q_2 g_x + q_1 g_y + q_0 g_z)$ × halfT;
7th step: output the $R_g$ matrix and the quaternion

Normalize the quaternion $\{q_0, q_1, q_2, q_3\}$ to unit length and output it; the quaternion-to-$R_g$ conversion formula is

$$R_g = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\ 2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3+q_0q_1) \\ 2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$

8th step: return to the 3rd step and continue to receive gyro data, updating the attitude quaternion $q_0 \sim q_3$; in each pass of the loop, the current $R_g$ matrix can be output at the 7th step.
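The eight steps above amount to a complementary filter; a compact sketch follows, with Kp, Ki and halfT chosen as illustrative values inside the stated ranges:

```python
import numpy as np

class GyroAttitudeFilter:
    """Sketch of the iterative steps 1-8 above (a Mahony-style filter)."""
    def __init__(self, q_init, Kp=2.0, Ki=0.005, halfT=0.005):
        self.q = np.asarray(q_init, float)   # (q0,q1,q2,q3) from formula (1)
        self.eInt = np.zeros(3)              # (exInt, eyInt, ezInt)
        self.Kp, self.Ki, self.halfT = Kp, Ki, halfT

    def update(self, a, m, g0):
        q0, q1, q2, q3 = self.q
        a = np.asarray(a, float); a /= np.linalg.norm(a)   # step 4 normalization
        m = np.asarray(m, float); m /= np.linalg.norm(m)   # step 3 normalization
        # step 3: corrected magnetic vector {wx, wy, wz}
        hx = 2*m[0]*(0.5 - q2*q2 - q3*q3) + 2*m[1]*(q1*q2 - q0*q3) + 2*m[2]*(q1*q3 + q0*q2)
        hy = 2*m[0]*(q1*q2 + q0*q3) + 2*m[1]*(0.5 - q1*q1 - q3*q3) + 2*m[2]*(q2*q3 - q0*q1)
        hz = 2*m[0]*(q1*q3 - q0*q2) + 2*m[1]*(q2*q3 + q0*q1) + 2*m[2]*(0.5 - q1*q1 - q2*q2)
        bx, bz = np.hypot(hx, hy), hz
        w = np.array([2*bx*(0.5 - q2*q2 - q3*q3) + 2*bz*(q1*q3 - q0*q2),
                      2*bx*(q1*q2 - q0*q3) + 2*bz*(q0*q1 + q2*q3),
                      2*bx*(q0*q2 + q1*q3) + 2*bz*(0.5 - q1*q1 - q2*q2)])
        # step 4: estimated gravity direction v, then error e = a x v + m x w
        v = np.array([2*(q1*q3 - q0*q2), 2*(q0*q1 + q2*q3),
                      q0*q0 - q1*q1 - q2*q2 + q3*q3])
        e = np.cross(a, v) + np.cross(m, w)
        self.eInt += e * self.Ki
        # step 5: corrected gyro data
        g = np.asarray(g0, float) + self.Kp * e + self.eInt
        # step 6: quaternion update over one correction cycle
        dq = np.array([-q1*g[0] - q2*g[1] - q3*g[2],
                        q0*g[0] - q3*g[1] + q2*g[2],
                        q3*g[0] + q0*g[1] - q1*g[2],
                       -q2*g[0] + q1*g[1] + q0*g[2]]) * self.halfT
        self.q = self.q + dq
        self.q /= np.linalg.norm(self.q)     # step 7: normalize before output
        return self.q
```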
Method (2): using the rotation vector sensor

When the intelligent machine has a rotation vector sensor:

1. First adopt any one of (a), (b), (c) to obtain the $R_{g0}$ matrix.

(a) If the rotation vector sensor detected on the intelligent machine has only 3 data values[0], values[1], values[2], the following assignments are adopted:

$q_1$ = values[0], $q_2$ = values[1], $q_3$ = values[2], $q_0 = \sqrt{1 - q_1^2 - q_2^2 - q_3^2}$,

and $q_0$, $q_1$, $q_2$, $q_3$ are converted into the $R_{g0}$ matrix as follows:

$$R_{g0} = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\ 2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3+q_0q_1) \\ 2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$
(b) If the rotation vector sensor detected on the intelligent machine has 4 data values[0], values[1], values[2], values[3], set $q_0$ = values[3], $q_1$ = values[0], $q_2$ = values[1], $q_3$ = values[2]; or set $p_0$ = values[3], $p_1$ = values[0], $p_2$ = values[1], $p_3$ = values[2] and calculate

$q_0 = p_0/L_p$, $q_1 = p_1/L_p$, $q_2 = p_2/L_p$, $q_3 = p_3/L_p$, where $L_p = \sqrt{p_0^2 + p_1^2 + p_2^2 + p_3^2}$;

then convert $q_0$, $q_1$, $q_2$, $q_3$ into the $R_{g0}$ matrix as follows:

$$R_{g0} = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\ 2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3+q_0q_1) \\ 2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$
(c) On the intelligent machine, specify the sensor type as the rotation vector type TYPE_ROTATION_VECTOR, obtain the observed value $R_{vec}$ of the rotation vector sensor in real time, and use the rotation-vector-to-matrix function of the intelligent machine system to convert $R_{vec}$ into the drawing transition matrix $R_{44}$; the upper-left 3x3 block of $R_{44}$ is exactly $R_{g0}$.
2. Then, according to the definition of the rotation vector sensor coordinate system, obtain $R_g$ in the following cases:

If the identity attitude of the rotation vector sensor is defined with the X-axis pointing east and the Y-axis pointing north, then

$$R_g = R_{g0}\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}^T;$$

if it is defined with the X-axis pointing north and the Y-axis pointing west, then $R_g = R_{g0}$;

if it is defined with the X-axis pointing south and the Y-axis pointing east, then

$$R_g = R_{g0}\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}^T.$$
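A sketch of cases (a) and (b) plus the frame adjustment; the east/north sensor-frame convention of the first case above is assumed:

```python
import numpy as np

def quat_to_Rg0(q0, q1, q2, q3):
    """Quaternion (scalar part q0) to the R_g0 matrix, per the conversion above."""
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])

def Rg_from_rotation_vector(values):
    """Rotation vector sensor output to R_g (cases (a)/(b) of method (2))."""
    if len(values) >= 4:                              # case (b): 4 data, normalize
        p = np.array([values[3], values[0], values[1], values[2]], float)
        q = p / np.linalg.norm(p)
    else:                                             # case (a): recover scalar part
        q123 = np.array(values[:3], float)
        q0 = np.sqrt(max(0.0, 1.0 - q123 @ q123))
        q = np.concatenate([[q0], q123])
    P = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]], float)  # east/north -> north/west
    return quat_to_Rg0(*q) @ P.T
```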
In order to further measure the attitude matrix R of the camera mounted on the intelligent machine, the following steps are taken.

The attitude matrix R of the intelligent machine camera is defined as

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix},$$

where $R = \Delta R \times R_g$; $\Delta R$ is a fixed rotation between the camera attitude matrix R of the intelligent machine and the intelligent machine attitude matrix $R_g$. When the camera direction is exactly opposite to the intelligent machine screen direction, $\Delta R = \mathrm{diag}(1, -1, -1)$; when the camera direction is completely consistent with the screen direction, $\Delta R = \mathrm{diag}(1, 1, 1)$; otherwise $\Delta R$ is accurately calibrated with the mature photogrammetric resection method. The method is as follows:

1. Establish a control field coordinate system, i.e. a local coordinate system with origin at the GPS or BeiDou positioning point of the intelligent machine;
2. Photograph the point set of the control field coordinate system with the intelligent machine camera, and measure $R_g$ at the time of shooting by the method above;
3. From the correspondence between the 3-D control-field coordinates of each known point in the control field point set and the image coordinates of the photographed point set, adopt the photogrammetric resection method to measure the attitude matrix $R_A$ of the intelligent machine camera relative to the control field coordinate system and the coordinate $\Delta T$ of the camera optical centre relative to the control field coordinate system;
4. Obtain $\Delta R = R_A R_g^{-1}$.

$\Delta T$ is at the same time the offset of the camera optical centre relative to the local coordinate system of the intelligent machine GPS or BeiDou positioning point.
The above attitude matrix R can be used for camera multi-view visual positioning. Suppose the geocentric coordinate system is the right-handed system with the earth's centre as origin, the axis from the centre to the north pole as z-axis, and the axis from the centre to the intersection of the equator with the 0-degree meridian as x-axis. Let $F_x(\varphi, \theta, h)$, $F_y(\varphi, \theta, h)$ and $F_z(\varphi, \theta, h)$ be the functions that convert longitude, latitude and height into the X, Y and Z coordinates of the geocentric coordinate system; the classical geodesy conversion formula is

$$T_{0i} = \begin{bmatrix} F_x(h_i, \theta_i, \varphi_i) \\ F_y(h_i, \theta_i, \varphi_i) \\ F_z(h_i, \theta_i, \varphi_i) \end{bmatrix} = \begin{bmatrix} (N_i + h_i)\cos\theta_i\cos\varphi_i \\ (N_i + h_i)\cos\theta_i\sin\varphi_i \\ (N_i(1 - e^2) + h_i)\sin\theta_i \end{bmatrix}$$

where $h_i$ is the height of the point above ground; $N_i = a/\sqrt{1 - e^2\sin^2\theta_i}$ is the radius of curvature in the prime vertical, which changes with latitude; a and b are the semi-major and semi-minor axes of the earth ellipsoid, known constants in geodesy; and $e = \sqrt{a^2 - b^2}/a$ is the eccentricity, a fixed value. The same formulas also give the geocentric coordinates $(X_x, X_y, X_z)$ of an object X from its longitude, latitude and height.
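A direct transcription of this conversion; WGS-84 values are assumed below for the ellipsoid constants a and b:

```python
import numpy as np

A_AXIS = 6378137.0          # semi-major axis a (WGS-84, assumed)
B_AXIS = 6356752.314245     # semi-minor axis b (WGS-84, assumed)

def geodetic_to_ecef(lon, lat, h):
    """Longitude/latitude (radians) and height (m) to geocentric XYZ,
    per the classical geodesy formula quoted above."""
    e2 = (A_AXIS**2 - B_AXIS**2) / A_AXIS**2          # eccentricity squared
    N = A_AXIS / np.sqrt(1.0 - e2 * np.sin(lat)**2)   # prime-vertical radius
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1.0 - e2) + h) * np.sin(lat)])
```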
The method of carrying out camera multi-view visual positioning with the attitude matrix R comprises the following steps:

Step 1

Take the same object X from N (N ≥ 2) geographic position points, and measure the following quantities at the i-th (i = 1 ~ N) observation point:

(1) the position coordinates $(u_i, v_i)$ at which the object appears in the video frame image at the i-th observation point;
(2) the geographic position $(\varphi_i, \theta_i, h_i)$ of the shooting point: the longitude, latitude and height of the i-th shooting point;
(3) the attitude matrix $R_i$ of the i-th shooting point, recorded by the method above;
(4) the inner parameters of the i-th shooting point: focal length $f_i$ and principal point coordinates $(c_{xi}, c_{yi})$; the principal point is defined in both computer vision and photogrammetry and is not repeated here.
Step 2

Further obtain the N attitude matrices relative to the earth's core, $i = 1 \sim N$:

$$R_{0i} = R_i R_{vi}$$

where $R_{vi}$ is the rotation determined by the longitude and latitude of shooting point i that relates the local coordinate system at that point to the geocentric coordinate system.

Step 3

Using the conversion formula above, obtain the geocentric coordinates $T_{0i} = \{T_{xi}, T_{yi}, T_{zi}\}^T$, $i = 1 \sim N$, of the camera optical centres of the N shooting points, where $\{\Delta T_{xi}, \Delta T_{yi}, \Delta T_{zi}\}^T = R_{vi}^T \Delta T_i$ and $\Delta T_i$ is the camera optical-centre offset vector recorded by the method above.
Step 4

The attitude matrix $R_{01}$ of shooting point 1, its position $[T_{x1}, T_{y1}, T_{z1}]$, the object image coordinates $(u_1, v_1)$, the camera focal length $f_1$ in pixels and the principal point $(c_{x1}, c_{y1})$, ..., through the attitude matrix $R_{0N}$ of shooting point N, its position $[T_{xN}, T_{yN}, T_{zN}]$, the object image coordinates $(u_N, v_N)$, the camera focal length $f_N$ in pixels and the principal point $(c_{xN}, c_{yN})$, all satisfy the following equations:

$$u_i = c_{xi} + f_i\,\frac{r_{0i}^{11}(X_x - T_{xi}) + r_{0i}^{12}(X_y - T_{yi}) + r_{0i}^{13}(X_z - T_{zi})}{r_{0i}^{31}(X_x - T_{xi}) + r_{0i}^{32}(X_y - T_{yi}) + r_{0i}^{33}(X_z - T_{zi})}$$

$$v_i = c_{yi} + f_i\,\frac{r_{0i}^{21}(X_x - T_{xi}) + r_{0i}^{22}(X_y - T_{yi}) + r_{0i}^{23}(X_z - T_{zi})}{r_{0i}^{31}(X_x - T_{xi}) + r_{0i}^{32}(X_y - T_{yi}) + r_{0i}^{33}(X_z - T_{zi})}$$

where $r_{0i}^{ab}$ denotes the element in row a, column b of the attitude matrix $R_{0i}$.

With $i = 1 \sim N$, 2N equations are assembled in total and solved for the three unknowns $(X_x, X_y, X_z)$; or the photogrammetric forward intersection method is adopted to calculate $X = (X_x, X_y, X_z)$, obtaining the geocentric coordinates $(X_x, X_y, X_z)$. Finally the longitude and latitude corresponding to the geocentric coordinates $(X_x, X_y, X_z)$ are obtained by the geodesy formulas.
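A sketch of the 2N-equation solve: each projection equation is rearranged into a linear constraint on $(X_x, X_y, X_z)$ and the stack is solved by least squares:

```python
import numpy as np

def triangulate(obs):
    """obs: list of (u, v, f, cx, cy, R0, T) per shooting point, with R0 the
    3x3 attitude matrix relative to the earth's core and T the optical centre
    in geocentric coordinates. Returns (Xx, Xy, Xz)."""
    rows, rhs = [], []
    for u, v, f, cx, cy, R0, T in obs:
        T = np.asarray(T, float)
        # (u-cx)*(r3.(X-T)) = f*(r1.(X-T))  and  (v-cy)*(r3.(X-T)) = f*(r2.(X-T))
        for k, px in ((0, u - cx), (1, v - cy)):
            coeff = px * R0[2] - f * R0[k]
            rows.append(coeff)
            rhs.append(coeff @ T)
    X, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return X
```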
If the earth is approximated as a sphere, the inverse conversion can be computed as:

longitude = atan2($X_y$, $X_x$);
latitude = asin$\left(X_z/\sqrt{X_x^2 + X_y^2 + X_z^2}\right)$.
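A sketch of this spherical inverse conversion:

```python
import numpy as np

def ecef_to_lonlat_spherical(X):
    """Geocentric XYZ to longitude/latitude under the spherical-earth
    approximation quoted above."""
    x, y, z = X
    lon = np.arctan2(y, x)
    lat = np.arcsin(z / np.linalg.norm(X))
    return lon, lat
```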
If the earth is approximated as an ellipsoid, the ellipsoid formulas of geodesy are used for the calculation. The above attitude matrix R can also be used for camera monocular single-point or monocular two-point visual positioning. The local coordinate system is defined as the coordinate system with origin at the intelligent machine optical centre, Z-axis pointing away from the earth along the gravity direction (or perpendicular to the ellipsoid ground), X-axis pointing north and Y-axis pointing west.
Monocular single-point ground positioning mainly comprises the following steps:

Step 1

Calculate the attitude matrix of the camera according to the method above:

$$R = \Delta R \times R_g = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}.$$

Step 2

Choose a point on the ground plane in the camera video and record the position coordinates (u, v) at which it appears in the video frame image; record the camera inner parameter focal length f and the principal point coordinates $(c_x, c_y)$. The principal point, defined in both computer vision and photogrammetry, is the image point where the camera's central line of sight intersects the image plane; equivalent descriptions are (1) the intersection of the principal line of sight with the image, (2) either of the two nodal points on the lens axis, related such that a ray directed at one of them from any object point emerges from the lens in parallel and passes through the other, and (3) the intersection of the lens axis of an aerial camera with the film plane, converted to image coordinates. In the embodiment of the present invention, the principal point is the image coordinate at which the camera's central optical axis intersects the image plane. The coordinates (X, Y) of this ground point in the local coordinate system are then computed by formula, where h is the height of the camera above the ground.
Step 3

Convert the local coordinates (X, Y, -h) into longitude and latitude by the mature geodetic survey method:

$$T_{geo} = T_s + R_v^T([X, Y, -h]^T + \Delta T)$$

where $T_s$ is the intelligent machine geocentric coordinate, $\Delta T$ is the displacement vector of the intelligent machine camera optical centre relative to the GPS positioning point, recorded by the method above, $\theta_s$ is the intelligent machine latitude and $\varphi_s$ the intelligent machine longitude.

Finally, from the geocentric coordinate $T_{geo}$ = [A, B, C], the longitude and latitude corresponding to $T_{geo}$ are obtained by the geodesy formulas. Approximating the earth as a sphere, the concrete implementation is:

longitude = atan2(B, A);
latitude = asin(C / (earth radius - h));
height = -h.
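Since the (X, Y) formula of step 2 appears only as an image in the original, the sketch below reconstructs it from the ray quantities $z_\perp$, $p_\perp$, $q_\perp$ defined in the two-point method that follows; this reconstruction is an assumption:

```python
import numpy as np

def ground_point_local(R, u, v, f, cx, cy, h):
    """Intersect the viewing ray of pixel (u, v) with the ground plane z = -h.
    Assumed reconstruction: X = z*p, Y = z*q with z = -h / (third ray component)."""
    d = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    w = R.T @ d                          # ray direction in the local frame
    z = -h / w[2]                        # scale so the ray reaches z = -h
    return z * w[0], z * w[1]            # (X, Y) on the ground plane
```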
The method of monocular two-point measurement of the position and height of an object perpendicular to the ground comprises the following steps:

Step 1

Take one image of the object perpendicular to the ground with the intelligent machine camera, and record the following quantities:

(1) the attitude matrix $R = \Delta R \times R_g$ of the camera, recorded by the method above;
(2) the position coordinates $(u_\perp, v_\perp)$ at which the intersection of the object with the ground appears in the captured video frame image;
(3) the position coordinates $(u_h, v_h)$ at which the top of the object perpendicular to the ground appears in the captured video frame image;
(4) the camera inner parameter focal length f in pixels and the principal point coordinates $(c_x, c_y)$;
(5) the camera height h above ground.
Step 2

Calculate the following quantities:

$$z_\perp = \frac{-h}{r_{13}\dfrac{u_\perp - c_x}{f} + r_{23}\dfrac{v_\perp - c_y}{f} + r_{33}}$$

$$p_\perp = r_{11}\frac{u_\perp - c_x}{f} + r_{21}\frac{v_\perp - c_y}{f} + r_{31}$$

$$q_\perp = r_{12}\frac{u_\perp - c_x}{f} + r_{22}\frac{v_\perp - c_y}{f} + r_{32}$$

$$p_h = r_{11}\frac{u_h - c_x}{f} + r_{21}\frac{v_h - c_y}{f} + r_{31}$$

$$q_h = r_{12}\frac{u_h - c_x}{f} + r_{22}\frac{v_h - c_y}{f} + r_{32}$$

$$r_h = r_{13}\frac{u_h - c_x}{f} + r_{23}\frac{v_h - c_y}{f} + r_{33}$$

Step 3

Solve the matrix equation below for [X Y H]:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & -p_h/r_h \\ 0 & 1 & -q_h/r_h \end{bmatrix}\begin{bmatrix} X \\ Y \\ H \end{bmatrix} = \begin{bmatrix} z_\perp p_\perp \\ z_\perp q_\perp \\ 0 \\ 0 \end{bmatrix}$$

where (X, Y) is the coordinate of the object under the local coordinate system and H is the height of the object perpendicular to the ground.
Step 4

Convert the local coordinates (X, Y, H) of the highest point of the object into longitude and latitude by the mature geodetic survey method:

$$T_{geo} = T_s + R_v^T([X, Y, H]^T + \Delta T)$$

where $T_s$ is the intelligent machine geocentric coordinate and $\Delta T$ is the displacement vector of the camera optical centre relative to the intelligent machine GPS or BeiDou positioning point; approximately $\Delta T = [0, 0, 0]^T$ may be taken. $\theta_s$ is the intelligent machine latitude and $\varphi_s$ the intelligent machine longitude.

Finally, from the geocentric coordinate $T_{geo}$ = [A, B, C], the longitude and latitude corresponding to $T_{geo}$ are obtained by the geodesy formulas. Approximating the earth as a sphere, the concrete implementation is:

longitude = atan2(B, A);
latitude = asin(C / (earth radius + H));
height = H.
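A sketch of steps 2 and 3, using the 4x3 system as reconstructed above and solving it by least squares:

```python
import numpy as np

def vertical_object_local(R, uv_ground, uv_top, f, cx, cy, h):
    """Build z/p/q/r from the two measured pixels and solve for (X, Y, H)."""
    def ray(u, v):
        # components (p, q, r) of the pixel's viewing ray in the local frame
        return R.T @ np.array([(u - cx) / f, (v - cy) / f, 1.0])
    p_, q_, r_ = ray(*uv_ground)
    z_ = -h / r_                          # z_perp: scale to the ground plane
    ph, qh, rh = ray(*uv_top)
    A = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [1, 0, -ph / rh],
                  [0, 1, -qh / rh]], float)
    b = np.array([z_ * p_, z_ * q_, 0.0, 0.0])
    (X, Y, H), *_ = np.linalg.lstsq(A, b, rcond=None)
    return X, Y, H
```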
The intelligent machine panoramic image generation method of the embodiment of the present invention comprises the following steps:

Step 1. The intelligent machine takes n images at a certain fixed point with different attitudes; the method of claim 6 is adopted to obtain the attitude matrices, respectively $R_i$, where i is an integer between 0 and n-1 and n is an integer greater than 1.

Step 2. Allocate a panoramic image of width w and height h.

Step 3. Scan each pixel of the panoramic image, and for each pixel (x, y) of the panorama carry out the following processes (1) and (2):

(1) the horizontal coordinate x is converted to a longitude angle and the vertical coordinate y is converted to the angle θ, where w and h are the width and height of the panoramic image;
(2) according to the pinhole imaging formula, with

$$K_i = \begin{bmatrix} f_i & 0 & c_{ix} \\ 0 & f_i & c_{iy} \\ 0 & 0 & 1 \end{bmatrix},$$

where $f_i$ is the focal length of camera i in pixels and $(c_{ix}, c_{iy})$ is the principal point coordinate of camera i, use $R_i$ to map the panorama direction of pixel (x, y) to the image coordinates of each camera i, i = 0 ~ n-1, obtaining $(u_0, v_0), \ldots, (u_{n-1}, v_{n-1})$.

From these n coordinates mapped onto the n source images one can obtain: the colors $C_0 \sim C_{n-1}$ taken from the n images, and the distances $e_0 \sim e_{n-1}$ from each mapped image coordinate to its photo frame edge. The weight of each image at this pixel is further computed as $w_i = e_i^2 / e_s^2$, where $e_s^2 = \sum_{i=0}^{n-1} e_i^2$, so that shorter distance to the border gives smaller weight; a mapped coordinate $(u_i, v_i)$ outside its image range contributes nothing. When $e_s > 0$, the color value rendered at pixel (x, y) of the panoramic image is the weighted sum of the contributing colors; otherwise a transparent color or background color is rendered at (x, y).

Once every pixel of the panoramic image has been scanned and filled with its corresponding color, the rendering of the panoramic image is complete.
If the $R_i$ matrices are inaccurate after step 1, they (or the quaternions converted from them) are taken as initial values and optimized with the LM (Levenberg-Marquardt) least-squares adjustment, so as to minimize the gray-value differences of corresponding pixels in the overlapping regions.
Alternatively, in step 2 all color values of the panoramic image are first cleared to 0, and in step 3 the corresponding regions of the panoramic image are scanned in n passes for the n images, the color values being accumulated into the panorama pixels pass by pass; in this way only one image is loaded at a time and excessive memory consumption is avoided.
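A sketch of the whole of step 3; the equirectangular mapping from (x, y) to angles is an assumption, since the patent's conversion formula is given only as an image:

```python
import numpy as np

def render_panorama(w, h, images, Ks, Rs):
    """Inverse-map each panorama pixel into every source image and blend with
    border-distance weights w_i = e_i^2 / sum(e_i^2), as described above."""
    pano = np.zeros((h, w, 3))
    for y in range(h):
        theta = np.pi * y / h                      # polar angle (assumed mapping)
        for x in range(w):
            lon = 2.0 * np.pi * x / w              # longitude (assumed mapping)
            d = np.array([np.cos(lon) * np.sin(theta),
                          np.sin(lon) * np.sin(theta),
                          np.cos(theta)])          # direction in the local frame
            colors, weights = [], []
            for img, K, R in zip(images, Ks, Rs):
                p = K @ (R @ d)                    # pinhole projection
                if p[2] <= 0:
                    continue                       # direction behind camera i
                u, v = p[0] / p[2], p[1] / p[2]
                H, W = img.shape[:2]
                if 0 <= u < W and 0 <= v < H:
                    e = min(u, W - 1 - u, v, H - 1 - v)   # distance to frame edge
                    colors.append(img[int(v), int(u)])
                    weights.append(e * e)          # weight proportional to e^2
            if weights:
                pano[y, x] = sum(wt * c for wt, c in zip(weights, colors)) / sum(weights)
    return pano
```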
Step 4: three-dimensional navigation and retrieval/positioning with the panoramic image

(A) Intelligent machine attitude control for browsing the three-dimensional panoramic image

Roaming is centered on the observer in the virtual scene: a hollow sphere is drawn and its inner wall is rendered with the aforementioned panorama as texture. The method is as follows: every pixel coordinate (x, y) of the panorama corresponds to a direction and therefore to a 3-D coordinate on the sphere of radius r; the panorama is accordingly divided into a grid of many sub-blocks, the panorama coordinates of the four vertices of each grid cell correspond to a tile bounded by four 3-D coordinates on the sphere, and by using the graphics rendering functions of the intelligent machine to render all the texture tiles of the grid, the rendering of the whole sphere is achieved.

The attitude with which the roaming observer browses the sphere is adjusted using the intelligent machine attitude matrix $R_g$ measured above, or using the attitude matrix R of the camera on the intelligent machine measured by the method above. The alpha channel of the parts of the panorama that could not be photographed may be set to 0.
To guarantee the effect of observing the panorama texture on the sphere's inner wall from inside on the intelligent machine display screen, the following quantities are set:

(1) the observation position and sphere centre coordinates $(X_d, Y_d, Z_d)$, which may simply be (0, 0, 0);
(2) the observation direction $\{r_{31}, r_{32}, r_{33}\}$, or the user gaze-point coordinates $(X_d + r_{31}, Y_d + r_{32}, Z_d + r_{33})$;
(3) the intelligent machine's own upward direction vector $\{-r_{21}, -r_{22}, -r_{23}\}$;

where $r_{ij}$ in (2) and (3) is the element in row i, column j of the intelligent machine camera attitude matrix R, i = 2 or 3, j = 1, 2 or 3.
(B) Panorama database building, retrieval and positioning

To use storage space efficiently, besides the formula adopted in step 3 (1), the Mercator projection formula can also be used to convert the panoramic image, keeping the conversion used for database building consistent with that used for retrieval.

The advantage of obtaining panoramic images through the intelligent machine attitude is that a unified image standard based on the local coordinate system is defined. Once two panoramic images are scaled and matched to the same size, scale is unified, and rotation about the X-Y-Z axes, camera focal length and other inner-parameter differences have all been normalized. The problems of inconsistent scale, rotation and inner parameters that seriously affect image matching and retrieval are thereby solved; the matching process mainly has to consider factors such as illumination and intensity distribution, so speed and accuracy are greatly improved.
Database building:

Extract the feature point set of the panoramic image database; divide the feature points into grids by their position on the panoramic image, each grid establishing a sub-bank; the descriptors of the feature points are stored in the corresponding sub-bank in the form of a hash table, a kd-tree binary tree or a balanced binary tree, each node of the tree holding the descriptor of one feature point and pointing to the index number of its corresponding panorama; a feature point database is thus established. The feature points may adopt ORB, Harris, Surf, or simplified versions of SIFT that ignore rotation and scale.

Retrieval:

For c ≥ 1 images to be retrieved, first synthesize the c images into 1 panoramic image and extract the N feature points of the rendered region of the panoramic image. For each feature point $p_i$, i = 1..N, carry out the binary tree search only in the several feature sub-banks adjacent to its position on the panorama, find the image number $k_i$ corresponding to the feature point whose descriptor is closest to the descriptor $d_i$ of $p_i$ in the adjacent sub-banks, and record the distance $e_i$ between $d_i$ and the retrieved feature descriptor; the weight $W_{k_i}$ of image $k_i$ is then increased by $1/(e_i + \varepsilon)$, where ε is a very small positive number.

The image numbers $k_1, \ldots, k_N$ obtained by retrieving the N features will probably contain repetitions; suppose m panoramic images remain after merging, with respective weights $W_1 \sim W_m$. From these, compute the similarities $M_1 \sim M_m$ of the candidate images and select the maximum $M_{j^*}$; if $M_{j^*}$ is large enough, the image numbered j* is the preliminary retrieval result.

The rendered region of the panorama to be retrieved is correlation-matched against the pixel set, or the image-block pixel sums, of the preliminarily retrieved panorama from the database; the larger the correlation coefficient, the more similar the images, and when the correlation coefficient is > 0.9 they can essentially be identified as the same panoramic image.
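A sketch of the weighted-voting part of the retrieval; search_nearest is a hypothetical helper standing in for the adjacent-sub-bank binary-tree search:

```python
from collections import defaultdict

def vote_images(query_descriptors, search_nearest, eps=1e-6):
    """Accumulate votes W_k += 1/(e + eps) over database panoramas.
    search_nearest(descriptor) is assumed to return (image_id, distance)
    for the most similar feature in the neighboring sub-banks."""
    votes = defaultdict(float)
    for d in query_descriptors:
        image_id, e = search_nearest(d)
        votes[image_id] += 1.0 / (e + eps)   # closer features vote more strongly
    # rank candidate panoramas by accumulated vote sum
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
```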
Positioning:

Because a panoramic image completely reflects the surrounding environment of a region, matching against panoramic images can realize positioning within that region. When GPS is known, the sub feature database composed of the panoramic images of the region near the GPS position is retrieved. The server database stores each panoramic image bound together with its geographic position coordinates; if the retrieval result is the image numbered i*, the geographic position corresponding to i* can be found in the database, i.e. the positioning result is the position corresponding to the image numbered i*.

It should be understood that those of ordinary skill in the art may make improvements or transformations in light of the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. An intelligent machine attitude matrix computing method, characterized in that it comprises the following steps:

from the three values $\{v_{ax}', v_{ay}', v_{az}'\}$ detected by the gravity accelerometer on the intelligent machine, calculating $a_x$, $a_y$ and $a_z$:

$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R_a^T \begin{bmatrix} v_{ax}' \\ v_{ay}' \\ v_{az}' \end{bmatrix},$$

or filtering $\{v_{ax}', v_{ay}', v_{az}'\}$ to obtain $\{v_{ax}, v_{ay}, v_{az}\}$ and calculating

$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R_a^T \begin{bmatrix} v_{ax} \\ v_{ay} \\ v_{az} \end{bmatrix},$$

where $R_a$ is the pose transformation matrix of the gravity accelerometer relative to the intelligent machine;

obtaining the azimuth $\varphi$ from the intelligent machine magnetometer by a trigonometric function, or obtaining the azimuth from the direction sensor;

calculating the attitude matrix $R_g$ from $\varphi$ and $R_\theta$ by formula (1), wherein

$$R_\theta = \begin{bmatrix} \dfrac{s\,a_z}{L\sqrt{1-(a_y/L)^2}} & \dfrac{-s\,a_x a_y}{L^2\sqrt{1-(a_y/L)^2}} & \dfrac{a_x}{L} \\ 0 & s\sqrt{1-(a_y/L)^2} & \dfrac{a_y}{L} \\ \dfrac{-s\,a_x}{L\sqrt{1-(a_y/L)^2}} & \dfrac{-s\,a_z a_y}{L^2\sqrt{1-(a_y/L)^2}} & \dfrac{a_z}{L} \end{bmatrix}, \qquad L = \sqrt{a_x^2+a_y^2+a_z^2},$$

and $s = -1$ when $a_z < 0$, $s = 1$ when $a_z \ge 0$;

the attitude matrix $R_g$ obtained by formula (1) describes the attitude of the intelligent machine: the 1st row of $R_g$ is the unit vector of the intelligent machine x-axis under the local coordinate system, the 2nd row is the unit vector of the intelligent machine y-axis under the local coordinate system, and the 3rd row is the unit vector of the intelligent machine z-axis under the local coordinate system; the local coordinate system has its z-axis pointing away from the earth along the reverse gravity direction, with either the x-axis pointing north and the y-axis pointing west, or the x-axis pointing east and the y-axis pointing north.
2. The method according to claim 1, characterized in that, if a direction sensor is provided in the intelligent machine, the azimuth $\varphi$ is obtained by this intelligent machine's direction sensor and substituted into formula (1) to calculate the attitude matrix $R_g$.
3. The method according to claim 1, characterized in that, if a magnetometer sensor is provided in the intelligent machine, then from the three values $\{v_{mx}', v_{my}', v_{mz}'\}$ detected by this magnetometer sensor, calculating

$$\begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix} = R_m^T \begin{bmatrix} v_{mx}' \\ v_{my}' \\ v_{mz}' \end{bmatrix};$$

or filtering $\{v_{mx}', v_{my}', v_{mz}'\}$ to obtain $\{v_{mx}, v_{my}, v_{mz}\}$ and calculating

$$\begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix} = R_m^T \begin{bmatrix} v_{mx} \\ v_{my} \\ v_{mz} \end{bmatrix},$$

where $R_m$ is the pose transformation matrix of the magnetometer sensor relative to the intelligent machine; then calculating

$$\begin{bmatrix} m_{0x} \\ m_{0y} \\ m_{0z} \end{bmatrix} = R_\theta^T \begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix},$$

computing the azimuth $\varphi$ from the leveled components by a trigonometric function, and substituting it into formula (1) to calculate $R_g$.
4. The method according to claim 1, characterized in that, if a rotation vector sensor is provided in the intelligent machine, then:

(1) the $R_{g0}$ matrix is obtained by any one of the following (a), (b), (c):

(a) if the rotation vector sensor on the intelligent machine has 3 data values[0], values[1], values[2], then setting $q_1$ = values[0], $q_2$ = values[1], $q_3$ = values[2], $q_0 = \sqrt{1 - q_1^2 - q_2^2 - q_3^2}$, and converting $q_0$, $q_1$, $q_2$, $q_3$ into the $R_{g0}$ matrix as follows:

$$R_{g0} = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\ 2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3+q_0q_1) \\ 2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix};$$

(b) if the rotation vector sensor on the intelligent machine has 4 data values[0], values[1], values[2], values[3], setting $q_0$ = values[3], $q_1$ = values[0], $q_2$ = values[1], $q_3$ = values[2], and converting $q_0$, $q_1$, $q_2$, $q_3$ into the $R_{g0}$ matrix as follows:

$$R_{g0} = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2) \\ 2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3+q_0q_1) \\ 2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix};$$

(c) if the sensor type specified on the intelligent machine is the rotation vector type TYPE_ROTATION_VECTOR, then obtaining the observed value $R_{vec}$ of the rotation vector sensor in real time, and using the rotation-vector-to-matrix function of the intelligent machine system to convert $R_{vec}$ into the drawing transition matrix $R_{44}$; the upper-left 3x3 block of $R_{44}$ is exactly $R_{g0}$;

(2) according to the definition of the rotation vector sensor coordinate system, $R_g$ is obtained in the following cases:

if the identity attitude of the rotation vector sensor is defined with the X-axis pointing east and the Y-axis pointing north, then

$$R_g = R_{g0}\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}^T;$$

if it is defined with the X-axis pointing north and the Y-axis pointing west, then $R_g = R_{g0}$;

if it is defined with the X-axis pointing south and the Y-axis pointing east, then

$$R_g = R_{g0}\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}^T.$$
5. The method according to claim 1, characterized in that, if the intelligent machine is provided with a gyroscope, the attitude matrix $R_g$ is computed iteratively using the gyroscope;

after $R_g$ has first been calculated, the following iterative method is adopted to obtain the $R_g$ matrix:

Step 1: set the initial values of $q_0$, $q_1$, $q_2$, $q_3$:

use formula (1) to calculate an initial $R_g$, then convert the $R_g$ matrix into the four quaternion elements $q_0$, $q_1$, $q_2$, $q_3$;

take the quaternion $q_0$, $q_1$, $q_2$, $q_3$ as the initial value of the following iterative computation;

Step 2: set the original values of exInt, eyInt, ezInt to 0:

exInt = 0, eyInt = 0, ezInt = 0;

Step 3: from the received magnetometer vector $\{m_x, m_y, m_z\}$, obtain the corrected magnetic vector $\{w_x, w_y, w_z\}$:

first replace the vector $\{m_x, m_y, m_z\}$ by its normalized version $\{m_x, m_y, m_z\}/\sqrt{m_x^2+m_y^2+m_z^2}$,

and obtain the vector $\{b_x, 0, b_z\}$ as follows:
$h_x = 2 m_x (0.5 - q_2 q_2 - q_3 q_3) + 2 m_y (q_1 q_2 - q_0 q_3) + 2 m_z (q_1 q_3 + q_0 q_2)$;
$h_y = 2 m_x (q_1 q_2 + q_0 q_3) + 2 m_y (0.5 - q_1 q_1 - q_3 q_3) + 2 m_z (q_2 q_3 - q_0 q_1)$;
$h_z = 2 m_x (q_1 q_3 - q_0 q_2) + 2 m_y (q_2 q_3 + q_0 q_1) + 2 m_z (0.5 - q_1 q_1 - q_2 q_2)$;
$b_x = \sqrt{h_x^2 + h_y^2}$;
$b_z = h_z$;

then transform into the corrected magnetic vector $\{w_x, w_y, w_z\}$:

$w_x = 2 b_x (0.5 - q_2 q_2 - q_3 q_3) + 2 b_z (q_1 q_3 - q_0 q_2)$;
$w_y = 2 b_x (q_1 q_2 - q_0 q_3) + 2 b_z (q_0 q_1 + q_2 q_3)$;
$w_z = 2 b_x (q_0 q_2 + q_1 q_3) + 2 b_z (0.5 - q_1 q_1 - q_2 q_2)$;
Step 4: from the received gravity accelerometer data $a_x$, $a_y$, $a_z$ and $\{w_x, w_y, w_z\}$, obtain the error vector $\{e_x, e_y, e_z\}$ and calculate exInt, eyInt, ezInt:

first replace the vector $\{a_x, a_y, a_z\}$ by its normalized version $\{a_x, a_y, a_z\}/\sqrt{a_x^2+a_y^2+a_z^2}$;

$v_x = 2(q_1 q_3 - q_0 q_2)$;
$v_y = 2(q_0 q_1 + q_2 q_3)$;
$v_z = q_0 q_0 - q_1 q_1 - q_2 q_2 + q_3 q_3$;
$e_x = (a_y v_z - a_z v_y) + (m_y w_z - m_z w_y)$;
$e_y = (a_z v_x - a_x v_z) + (m_z w_x - m_x w_z)$;
$e_z = (a_x v_y - a_y v_x) + (m_x w_y - m_y w_x)$;
calculate the error accumulation values:

exInt is replaced by exInt + $e_x$ × Ki;
eyInt is replaced by eyInt + $e_y$ × Ki;
ezInt is replaced by ezInt + $e_z$ × Ki;

where Ki is an adjustable positive coefficient, chosen arbitrarily between 0.00001 and 0.5;
Step 5: correct the gyro data $\{g_{x0}, g_{y0}, g_{z0}\}$ with the error vector $\{e_x, e_y, e_z\}$ and the accumulation values; supposing the intelligent machine reads a group of current gyro data $\{g_{x0}, g_{y0}, g_{z0}\}$:

$g_x = g_{x0}$ + Kp × $e_x$ + exInt;
$g_y = g_{y0}$ + Kp × $e_y$ + eyInt;
$g_z = g_{z0}$ + Kp × $e_z$ + ezInt;

where Kp is an adjustable positive coefficient, chosen arbitrarily between 0.000001 and 20.0;
Step 6: revise the quaternion with the gyro data $\{g_x, g_y, g_z\}$:

as gyro data $\{g_x, g_y, g_z\}$ are continually received, the four quaternion elements are revised as follows, where halfT is the correction cycle, halfT = 0.00001 ~ 10.0:

$q_0$ is replaced by $q_0 + (-q_1 g_x - q_2 g_y - q_3 g_z)$ × halfT;
$q_1$ is replaced by $q_1 + (q_0 g_x - q_3 g_y + q_2 g_z)$ × halfT;
$q_2$ is replaced by $q_2 + (q_3 g_x + q_0 g_y - q_1 g_z)$ × halfT;
$q_3$ is replaced by $q_3 + (-q_2 g_x + q_1 g_y + q_0 g_z)$ × halfT;
Step 7: output the R_g matrix and the quaternion
Normalize the quaternion {q_0, q_1, q_2, q_3}, output it, and convert it into the R_g matrix by the formula

$$R_g = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2+q_0 q_3) & 2(q_1 q_3-q_0 q_2) \\ 2(q_1 q_2-q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3+q_0 q_1) \\ 2(q_1 q_3+q_0 q_2) & 2(q_2 q_3-q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$

Step 8: return to step 3 and continue receiving gyroscope data to update the attitude quaternion q_0 ~ q_3; during each pass of the loop, the current R_g matrix can be output at step 7.
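Steps 2–8 describe a complementary-filter loop of the Mahony AHRS type. Below is a minimal NumPy sketch of one pass of steps 3–7, written directly from the claimed equations; the function signature and the default Kp, Ki, halfT values (picked inside the claimed ranges) are illustrative assumptions, not part of the claim.

    import numpy as np

    def update_attitude(q, gyro, acc, mag, state, Kp=2.0, Ki=0.005, halfT=0.005):
        # One pass of steps 3-7. q = [q0, q1, q2, q3];
        # state = [exInt, eyInt, ezInt], initialized to np.zeros(3) per step 2.
        q0, q1, q2, q3 = q
        ax, ay, az = acc / np.linalg.norm(acc)   # step 4: normalize accelerometer
        mx, my, mz = mag / np.linalg.norm(mag)   # step 3: normalize magnetometer

        # Step 3: reference field {bx, 0, bz} and corrected magnetic vector w.
        hx = 2*mx*(0.5 - q2*q2 - q3*q3) + 2*my*(q1*q2 - q0*q3) + 2*mz*(q1*q3 + q0*q2)
        hy = 2*mx*(q1*q2 + q0*q3) + 2*my*(0.5 - q1*q1 - q3*q3) + 2*mz*(q2*q3 - q0*q1)
        hz = 2*mx*(q1*q3 - q0*q2) + 2*my*(q2*q3 + q0*q1) + 2*mz*(0.5 - q1*q1 - q2*q2)
        bx, bz = np.hypot(hx, hy), hz
        wx = 2*bx*(0.5 - q2*q2 - q3*q3) + 2*bz*(q1*q3 - q0*q2)
        wy = 2*bx*(q1*q2 - q0*q3) + 2*bz*(q0*q1 + q2*q3)
        wz = 2*bx*(q0*q2 + q1*q3) + 2*bz*(0.5 - q1*q1 - q2*q2)

        # Step 4: estimated gravity direction v, error vector e, accumulation.
        vx = 2*(q1*q3 - q0*q2)
        vy = 2*(q0*q1 + q2*q3)
        vz = q0*q0 - q1*q1 - q2*q2 + q3*q3
        e = np.array([(ay*vz - az*vy) + (my*wz - mz*wy),
                      (az*vx - ax*vz) + (mz*wx - mx*wz),
                      (ax*vy - ay*vx) + (mx*wy - my*wx)])
        state = state + e * Ki

        # Step 5: correct the gyro reading with proportional and integral terms.
        gx, gy, gz = gyro + Kp * e + state

        # Step 6: quaternion update over the half-period halfT.
        q = np.array([q0 + (-q1*gx - q2*gy - q3*gz) * halfT,
                      q1 + ( q0*gx - q3*gy + q2*gz) * halfT,
                      q2 + ( q3*gx + q0*gy - q1*gz) * halfT,
                      q3 + (-q2*gx + q1*gy + q0*gz) * halfT])
        q /= np.linalg.norm(q)                    # step 7: renormalize

        # Step 7: quaternion -> Rg.
        q0, q1, q2, q3 = q
        Rg = np.array([
            [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2+q0*q3),          2*(q1*q3-q0*q2)],
            [2*(q1*q2-q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3,  2*(q2*q3+q0*q1)],
            [2*(q1*q3+q0*q2),         2*(q2*q3-q0*q1),          q0*q0-q1*q1-q2*q2+q3*q3]])
        return q, Rg, state

Feeding each new set of gyro/accelerometer/magnetometer readings back through this function, and carrying q and state between calls, reproduces the step-8 loop.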
6. The method according to any one of claims 1-5, characterized in that the attitude matrix R_g is corrected, the corrected attitude matrix being
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$$
where R = ΔR × R_g;
ΔR is the fixed-direction conversion between the camera attitude matrix R and the intelligent machine attitude matrix R_g. When the camera direction is exactly opposite to the intelligent machine's screen direction, ΔR = diag(1, −1, −1); when the camera direction is exactly the same as the screen direction, ΔR = diag(1, 1, 1); otherwise ΔR is calibrated by photogrammetric space resection, as follows:
Establish a control-field coordinate system, i.e. a local coordinate system whose origin is located at the GPS or BeiDou positioning point of the intelligent machine;
Photograph the point set of the control-field coordinate system with the intelligent machine camera; measure R_g at the moment the intelligent machine photographs the point set, by the method of any one of claims 1-5;
From the correspondence between the known three-dimensional control-field coordinates of each point in the control-field point set and its image coordinates in the photographed image, use photogrammetric space resection to measure the attitude matrix R_A of the intelligent machine camera relative to the control-field coordinate system and the coordinate ΔT of the camera optical center relative to the control-field coordinate system; ΔT is simultaneously the offset of the camera optical center relative to the local coordinate system of the intelligent machine's GPS or BeiDou positioning point; calculate ΔR = R_A R_g^(−1).
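The final calibration line admits a one-line implementation, since the inverse of a rotation matrix is its transpose. A sketch, assuming R_A and R_g are given as 3×3 NumPy rotation matrices:

    import numpy as np

    def calibrate_delta_R(RA, Rg):
        # ΔR = RA · Rg^(-1); for rotation matrices the inverse is the transpose.
        return RA @ Rg.T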
7. A camera multi-vision visual localization method, characterized by comprising the following steps:
Step 1:
Photograph the same object X from N geographic positions, where N ≥ 2; at the i-th observation point, i ∈ (1, 2, ..., N), measure the following quantities:
(1) the position (u_i, v_i) at which the object appears in the video frame image;
(2) the geographic position of the shooting point: the longitude, latitude, and height of the i-th shooting point;
(3) the attitude matrix R_i of the i-th shooting point, recorded by the method of claim 6;
(4) the internal parameters of the i-th shooting point: the focal length f_i and the principal point coordinates (c_xi, c_yi);
Step 2:
Calculate the N attitude matrices relative to the geocenter: R_0i = R_i R_vi, i = 1 ~ N,
where R_vi is the rotation determined by the longitude and latitude of the i-th shooting point;
Step 3:
Using the conversion formulas, obtain the geocentric coordinates T_0i = {T_xi, T_yi, T_zi}^T, i = 1 ~ N, of the camera optical centers of the N shooting points, where {ΔT_xi, ΔT_yi, ΔT_zi}^T = R_vi^T ΔT_i and ΔT_i is the camera optical-center offset vector calculated by the method of claim 6; the conversion functions turn longitude, latitude, and height into the X, Y, and Z coordinates of the geocentric coordinate system respectively (a sketch of a standard conversion follows);
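The claim leaves the longitude/latitude/height-to-geocentric conversion functions unnamed; the standard WGS-84 ellipsoid formulas below are one common reading, shown as a sketch rather than as the claimed definition:

    import math

    WGS84_A = 6378137.0            # semi-major axis (m)
    WGS84_E2 = 6.69437999014e-3    # first eccentricity squared

    def geodetic_to_ecef(lon_deg, lat_deg, h):
        # (longitude, latitude, height) -> geocentric (X, Y, Z) on WGS-84.
        lon, lat = math.radians(lon_deg), math.radians(lat_deg)
        N = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
        X = (N + h) * math.cos(lat) * math.cos(lon)
        Y = (N + h) * math.cos(lat) * math.sin(lon)
        Z = (N * (1.0 - WGS84_E2) + h) * math.sin(lat)
        return X, Y, Z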
Step 4
The attitude matrix R_0N of shooting point N, its position [T_xN, T_yN, T_zN], the position (u_N, v_N) shown in the video frame image, the camera focal length in pixels f_N, and the principal point coordinates (c_xN, c_yN) all satisfy the following equations:

$$u_i = c_{xi} + f_i \frac{r_{0i11}(X_x - T_{xi}) + r_{0i12}(X_y - T_{yi}) + r_{0i13}(X_z - T_{zi})}{r_{0i31}(X_x - T_{xi}) + r_{0i32}(X_y - T_{yi}) + r_{0i33}(X_z - T_{zi})}$$

$$v_i = c_{yi} + f_i \frac{r_{0i21}(X_x - T_{xi}) + r_{0i22}(X_y - T_{yi}) + r_{0i23}(X_z - T_{zi})}{r_{0i31}(X_x - T_{xi}) + r_{0i32}(X_y - T_{yi}) + r_{0i33}(X_z - T_{zi})}$$

where r_0iab denotes the element in row a, column b of the attitude matrix R_0i. For i = 1 ~ N this yields 2N equations in total, from which the three unknowns (X_x, X_y, X_z) are solved; alternatively, the photogrammetric forward-intersection method is used to calculate the geocentric coordinates (X_x, X_y, X_z) of object X;
Finally, the longitude and latitude corresponding to the geocentric coordinates (X_x, X_y, X_z) are obtained by the geodesy formulas (a least-squares sketch of the intersection follows).
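The 2N collinearity equations become linear in (X_x, X_y, X_z) once each is cross-multiplied by its denominator. A minimal least-squares sketch; the cross-multiplied rearrangement and the data layout are assumptions of this sketch, not spelled out in the claim:

    import numpy as np

    def intersect(observations):
        # observations: iterable of (R0i, Ti, ui, vi, fi, cxi, cyi), with R0i a
        # 3x3 attitude matrix and Ti the geocentric photocenter of shot i.
        A, b = [], []
        for R, T, u, v, f, cx, cy in observations:
            r1, r2, r3 = R                        # rows of R0i
            for row in ((u - cx) * r3 - f * r1,   # u-equation, cross-multiplied
                        (v - cy) * r3 - f * r2):  # v-equation, cross-multiplied
                A.append(row)
                b.append(row @ T)
        # N >= 2 shots give 2N >= 4 equations for the 3 unknowns.
        X, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return X                                   # (Xx, Xy, Xz), geocentric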
8. A monocular single-point ground positioning method, characterized by comprising the steps:
Step 1
Calculate the attitude matrix of the camera by the method of claim 6:
$$R = \Delta R \times R_g = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix};$$
Step 2
Choose a point on the ground plane in the camera's captured video and record its position (u, v) in the video frame image; the camera's internal parameters, the focal length f and the principal point coordinates (c_x, c_y), are obtained in advance; the coordinates (X, Y) of this point in the local coordinate system are then computed from (u, v), the internal parameters, R, and the camera's height h above the ground (an inferred sketch follows at the end of this claim);
Step 3
Convert the local coordinates (X, Y, −h) to longitude and latitude by the geodetic method:
T_geo = T_s + R_v^T([X, Y, −h]^T + ΔT)
where T_s is the geocentric coordinate of the intelligent machine,
ΔT is the displacement vector of the intelligent machine camera optical center relative to the GPS positioning point, recorded by the method of claim 6;
θ_s is the intelligent machine's latitude and φ_s its longitude;
Finally, from the geocentric coordinate T_geo, the longitude and latitude corresponding to T_geo are obtained by the geodesy formulas.
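The step-2 computing formula of claim 8 is not reproduced in this text; the sketch below reconstructs the ground-plane intersection from the identical quantities z_⊥, p_⊥, q_⊥ defined in claim 9, so it should be read as an inferred rendering rather than the literal claimed formula:

    import numpy as np

    def ground_point(R, u, v, f, cx, cy, h):
        # R: corrected 3x3 camera attitude; h: camera height above the ground.
        x, y = (u - cx) / f, (v - cy) / f
        z = -h / (R[0, 2] * x + R[1, 2] * y + R[2, 2])   # ray scale to the ground
        X = z * (R[0, 0] * x + R[1, 0] * y + R[2, 0])
        Y = z * (R[0, 1] * x + R[1, 1] * y + R[2, 1])
        return X, Y          # local coordinates; the ground point is (X, Y, -h)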
9. A monocular two-point method for measuring the position and height of an object perpendicular to the ground, characterized by comprising the following steps:
Step 1
Photograph one image of the object perpendicular to the ground with the intelligent machine camera, and measure or calculate the following quantities:
(1) the attitude matrix R = ΔR × R_g of the camera, by the method of claim 6;
(2) the position (u_⊥, v_⊥) in the video frame image of the object's intersection point with the ground;
(3) the position (u_h, v_h) in the video frame image of the top of the object perpendicular to the ground;
(4) the camera's internal parameters: the focal length f and the principal point coordinates (c_x, c_y);
(5) the camera's height h above the ground;
Step 2
Calculate the following quantities:

$$z_\perp = \frac{-h}{r_{13}\frac{u_\perp - c_x}{f} + r_{23}\frac{v_\perp - c_y}{f} + r_{33}}$$

$$p_\perp = r_{11}\frac{u_\perp - c_x}{f} + r_{21}\frac{v_\perp - c_y}{f} + r_{31}$$

$$q_\perp = r_{12}\frac{u_\perp - c_x}{f} + r_{22}\frac{v_\perp - c_y}{f} + r_{32}$$

$$p_h = r_{11}\frac{u_h - c_x}{f} + r_{21}\frac{v_h - c_y}{f} + r_{31}$$

$$q_h = r_{12}\frac{u_h - c_x}{f} + r_{22}\frac{v_h - c_y}{f} + r_{32}$$

$$r_h = r_{13}\frac{u_h - c_x}{f} + r_{23}\frac{v_h - c_y}{f} + r_{33}$$
Step 3
Solve the matrix equation below to obtain [X Y H]:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & -\frac{p_h}{r_h} \\ 0 & 1 & -\frac{q_h}{r_h} \end{bmatrix} \begin{bmatrix} X \\ Y \\ H \end{bmatrix} = \begin{bmatrix} z_\perp p_\perp \\ z_\perp q_\perp \\ 0 \\ 0 \end{bmatrix}$$

where (X, Y) is the coordinate of the object in the local coordinate system and H is the height of the object perpendicular to the ground (a least-squares sketch follows this claim);
Step 4
Convert the local coordinate (X, Y, H) of the object's highest vertex to longitude and latitude by the geodetic method: T_geo = T_s + R_v^T([X, Y, H]^T + ΔT)
where T_s is the intelligent machine's geocentric coordinate, and ΔT is the displacement vector of the intelligent machine camera optical center relative to the GPS positioning point, calculated by the method of claim 6;
θ_s is the intelligent machine's latitude and φ_s its longitude;
Finally, from the geocentric coordinate T_geo, the longitude and latitude corresponding to T_geo are obtained by the geodesy formulas.
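Step 3 is an overdetermined 4×3 linear system: the first two rows fix the footprint from the ground point, and the last two force the footprint onto the ray through the object's top. A minimal least-squares sketch, assuming the matrix as reconstructed above:

    import numpy as np

    def solve_position_height(z_perp, p_perp, q_perp, p_h, q_h, r_h):
        A = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [1.0, 0.0, -p_h / r_h],
                      [0.0, 1.0, -q_h / r_h]])
        b = np.array([z_perp * p_perp, z_perp * q_perp, 0.0, 0.0])
        (X, Y, H), *_ = np.linalg.lstsq(A, b, rcond=None)
        return X, Y, H           # footprint (X, Y) and object height H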
10. An intelligent machine panoramic image generation method, characterized by comprising the following steps:
Step 1: the intelligent machine shoots n images at a certain fixed point with different attitudes, and the attitude matrices, obtained by the method of claim 6, are R_i respectively, where i is an integer between 0 and n−1 and n is an integer greater than or equal to 1;
Step 2: allocate a panoramic image of width w and height h;
Step 3: scan each pixel of the panoramic image; for each pixel (x, y) of the panorama, carry out the following processing (1)(2):
(1) the horizontal coordinate x is converted to an azimuth angle and the vertical coordinate y to the angle θ,
where w and h are the width and height of the panoramic image;
(2) according to the pinhole imaging formula, with
$$K_i = \begin{bmatrix} f_i & 0 & c_{ix} \\ 0 & f_i & c_{iy} \\ 0 & 0 & 1 \end{bmatrix}$$
where f_i is the focal length of camera i in pixels and (c_ix, c_iy) is the principal point of camera i, use R_i to map the spherical direction corresponding to (x, y) to the image coordinates (u_i, v_i) of each camera i. If (u_i, v_i) lies within the image range, take the color at image coordinates (u_i, v_i) of camera i and render it, weighted, to position (x, y) of the panorama; the weight depends on the shortest straight-line distance from (u_i, v_i) to the image border — the shorter the distance, the smaller the weight. If no camera's (u_i, v_i) lies within its image range, render the transparent color or the background color to (x, y);
Once the scan has filled every point of the panoramic image with its corresponding color, the rendering of the panoramic image is complete;
If the R_i matrices are inaccurate, use R_i as initial values and optimize them with the Levenberg-Marquardt (LM) adjustment algorithm from computer vision, so that the gray-value differences of corresponding pixels in the overlap regions are minimized;
Alternatively, first clear all color values of the panoramic image to 0 in step 2, then in step 3 scan the respective regions of the panoramic image in n rounds, one round for each of the n images, accumulating the weighted color values into the panorama pixels round by round (a condensed sketch of the scan follows this step);
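A condensed sketch of the step-3 scan. The equirectangular x→azimuth and y→elevation mapping and the unit-sphere direction convention are assumptions (the claim gives the conversion formulas only as images); R_i maps a world direction into camera i, as in the collinearity equations of claim 7:

    import numpy as np

    def render_panorama(w, h, cameras):
        # cameras: list of (Ri, Ki, img); Ki = [[f,0,cx],[0,f,cy],[0,0,1]];
        # img: HxWx3 float array.
        pano = np.zeros((h, w, 3))
        weight = np.zeros((h, w, 1))
        for y in range(h):
            for x in range(w):
                phi = 2 * np.pi * x / w - np.pi       # assumed azimuth mapping
                theta = np.pi * y / h - np.pi / 2     # assumed elevation mapping
                d = np.array([np.cos(theta) * np.sin(phi),
                              np.sin(theta),
                              np.cos(theta) * np.cos(phi)])
                for Ri, Ki, img in cameras:
                    p = Ki @ (Ri @ d)
                    if p[2] <= 0:                     # direction behind camera i
                        continue
                    ui, vi = p[0] / p[2], p[1] / p[2]
                    H, W = img.shape[:2]
                    if 0 <= ui < W and 0 <= vi < H:
                        # Weight grows with distance from the image border.
                        wgt = min(ui, vi, W - 1 - ui, H - 1 - vi)
                        pano[y, x] += wgt * img[int(vi), int(ui)]
                        weight[y, x] += wgt
        return np.where(weight > 0, pano / np.maximum(weight, 1e-9), 0)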
Step 4: three-dimensional navigation and retrieval-based localization of the panoramic image
(A) browsing the three-dimensional panoramic image under intelligent machine attitude control
Take the sphere center as the observation point in the virtual scene, and render a ball on the sphere's inner wall with the panorama as texture; set the intelligent machine camera's viewing direction to {r_31, r_32, r_33} and its own upward direction to {−r_21, −r_22, −r_23}, where r_ij is the element in row i, column j of the attitude matrix R;
(B) panorama database building, retrieval, and localization
To use storage space effectively, besides the formula adopted in step 3 (1), the Mercator projection formula may also be used to convert the panoramic image; the conversion mode must be kept consistent between database building and retrieval;
Database building:
Extract the feature point set of each panoramic image and build a feature-point database associated with the panoramic images; each feature point in the database points to the panoramic image it belongs to;
Retrieval:
Composite c ≥ 1 images taken at the same location into the panoramic image to be retrieved, and extract the feature points of its rendered region; for each feature point, look up in the database the panoramic image pointed to by its most similar feature point, and cast a vote for that panoramic image weighted by feature-point similarity; rank the database panoramas by their summed votes (a sketch of this voting follows at the end of this claim);
Perform correlation matching between the rendered region of the panorama to be retrieved and the pixel sets of the corresponding regions of the database panoramas obtained by voting — the larger the correlation coefficient, the more similar; output the results that rank high in the voting and whose correlation coefficients exceed a preset value; if c = 1, the database panoramas can also be mapped, according to the single-shot attitude R, to the single-shot image region and correlation-matched against the single-shot image;
Localization:
When the GPS position is known, retrieve within the sub-library of features composed of the panoramic images near the GPS position;
The server database stores each panoramic image bound together with its geographic position coordinates; if the retrieval result is the image numbered i*, the geographic position corresponding to i* can be found in the database.
CN201410157539.8A 2013-12-29 2014-04-18 Intelligent machine attitude matrix calculation method and its applied to photogrammetric method Active CN104748728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410157539.8A CN104748728B (en) 2013-12-29 2014-04-18 Intelligent machine attitude matrix calculation method and its applied to photogrammetric method

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201310736179 2013-12-29
CN2013107361792 2013-12-29
CN201410157539.8A CN104748728B (en) 2013-12-29 2014-04-18 Intelligent machine attitude matrix calculation method and its applied to photogrammetric method

Publications (2)

Publication Number Publication Date
CN104748728A true CN104748728A (en) 2015-07-01
CN104748728B CN104748728B (en) 2019-02-22

Family

ID=53588774

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201410043385.XA Active CN104750969B (en) 2013-12-29 2014-01-29 The comprehensive augmented reality information superposition method of intelligent machine
CN201410157539.8A Active CN104748728B (en) 2013-12-29 2014-04-18 Intelligent machine attitude matrix calculation method and its applied to photogrammetric method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201410043385.XA Active CN104750969B (en) 2013-12-29 2014-01-29 The comprehensive augmented reality information superposition method of intelligent machine

Country Status (1)

Country Link
CN (2) CN104750969B (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106705837B (en) 2015-11-17 2019-12-06 华为技术有限公司 Object measuring method and device based on gestures
CN105654466B (en) * 2015-12-21 2018-06-29 大连新锐天地传媒有限公司 The position and posture detection method and its device of tellurion
WO2017147826A1 (en) * 2016-03-02 2017-09-08 武克易 Image processing method for use in smart device, and device
CN107896315B (en) * 2017-11-22 2019-09-10 中国民用航空总局第二研究所 Multisensor video fusion system and method based on A-SMGCS
CN109814704B (en) * 2017-11-22 2022-02-11 腾讯科技(深圳)有限公司 Video data processing method and device
CN109639337B (en) * 2018-11-23 2020-10-02 南京控维通信科技有限公司 Graphic auxiliary satellite alignment method suitable for satellite communication equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6867753B2 (en) * 2002-10-28 2005-03-15 University Of Washington Virtual image registration in augmented display field
US20060050070A1 (en) * 2004-09-07 2006-03-09 Canon Kabushiki Kaisha Information processing apparatus and method for presenting image combined with virtual image
CN1303501C (en) * 2004-09-30 2007-03-07 清华大学 Interdynamic information perception method and smart game platform embedded in cell phone
CN102446048B (en) * 2010-09-30 2014-04-02 联想(北京)有限公司 Information processing device and information processing method
CN102538820B (en) * 2011-12-13 2015-05-20 中国测绘科学研究院 Calibration method of aerial remote sensing integrated system
CN103369135B (en) * 2013-06-20 2014-12-31 清华大学 Method and system for recognizing position of mobile equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
林绮 (LIN Qi): "Research on Key Technologies of a Hand-held Target-based Trinocular Vision Coordinate Measurement System", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105004342A (en) * 2015-07-15 2015-10-28 成都万维图新信息技术有限公司 Internet-based travel data processing method
CN106973214A (en) * 2015-11-18 2017-07-21 卡西欧计算机株式会社 Data processing equipment and data processing method
CN106973214B (en) * 2015-11-18 2019-12-03 卡西欧计算机株式会社 Data processing equipment and data processing method
CN105509716A (en) * 2015-11-26 2016-04-20 武大吉奥信息技术有限公司 Geographic information collection method based on augmented reality technology and device
CN105937878B (en) * 2016-06-13 2018-10-26 歌尔科技有限公司 A kind of interior distance measuring method
CN105937878A (en) * 2016-06-13 2016-09-14 歌尔科技有限公司 Indoor distance measuring method
US10769802B2 (en) 2016-06-13 2020-09-08 Goertek Technology Co., Ltd. Indoor distance measurement method
CN107493311B (en) * 2016-06-13 2020-04-24 腾讯科技(深圳)有限公司 Method, device and system for realizing control equipment
CN107493311A (en) * 2016-06-13 2017-12-19 腾讯科技(深圳)有限公司 Realize the methods, devices and systems of controlling equipment
CN106352897A (en) * 2016-08-26 2017-01-25 杨百川 Silicon MEMS (micro-electromechanical system) gyroscope error estimating and correcting method based on monocular visual sensor
CN107976692A (en) * 2016-10-24 2018-05-01 财团法人工业技术研究院 Positioning method and image capturing device thereof
CN107462244A (en) * 2017-04-24 2017-12-12 北京航空航天大学 A kind of air remote sensing platform attitude angle high-precision measuring method matched based on GPS location and aerial map picture
CN107493531B (en) * 2017-08-04 2019-11-08 歌尔科技有限公司 A kind of head pose detection method, device and earphone
CN107493531A (en) * 2017-08-04 2017-12-19 歌尔科技有限公司 A kind of head pose detection method, device and earphone
CN110279420A (en) * 2019-07-18 2019-09-27 郑州轻工业学院 Portable falling detection device and detection method based on extreme learning machine
CN111693019A (en) * 2020-05-20 2020-09-22 西安交通大学 Attitude sensing device and data fusion and attitude calculation method
CN111693019B (en) * 2020-05-20 2021-04-20 西安交通大学 Attitude sensing device and data fusion and attitude calculation method
TWI742751B (en) * 2020-07-07 2021-10-11 國立陽明交通大學 Drone flight training system and method
CN113674342A (en) * 2021-08-30 2021-11-19 民航成都物流技术有限公司 Method for quickly identifying and positioning luggage basket based on area-array 3D camera
CN115265398A (en) * 2022-07-28 2022-11-01 成都理工大学 Method for monitoring slope damage accumulation under action of multiple-period earthquake
CN115265398B (en) * 2022-07-28 2024-06-25 成都理工大学 Slope damage accumulation monitoring method under multi-period earthquake action

Also Published As

Publication number Publication date
CN104750969B (en) 2018-01-26
CN104748728B (en) 2019-02-22
CN104750969A (en) 2015-07-01

Similar Documents

Publication Publication Date Title
CN104748728B (en) Intelligent machine attitude matrix calculation method and its applied to photogrammetric method
CN105474033B (en) Intelligent machine attitude determination, full-view image generation and target identification method
CN106327573B (en) A kind of outdoor scene three-dimensional modeling method for urban architecture
CN105627991B (en) A kind of unmanned plane image real time panoramic joining method and system
CN102435188B (en) Monocular vision/inertia autonomous navigation method for indoor environment
Zhang et al. A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection
Manweiler et al. Satellites in our pockets: an object positioning system using smartphones
CN108168521A (en) One kind realizes landscape three-dimensional visualization method based on unmanned plane
Xie et al. Study on construction of 3D building based on UAV images
CN104501779A (en) High-accuracy target positioning method of unmanned plane on basis of multi-station measurement
CN109443359B (en) Geographical positioning method of ground panoramic image
CN105279750A (en) Equipment display guiding system based on IR-UWB and image moment
CN103047985A (en) Rapid positioning method for space target
CN104155765A (en) Method and equipment for correcting three-dimensional image in tiled integral imaging display
CN106780321A (en) A kind of overall tight orientation of the satellite HR sensors images of CBERS 02 and correction joining method
CN111815765A (en) Heterogeneous data fusion-based image three-dimensional reconstruction method
CN110986888A (en) Aerial photography integrated method
CN108253942B (en) Method for improving oblique photography measurement space-three quality
Al-Hamad et al. Smartphones based mobile mapping systems
RU2571300C2 (en) Method for remote determination of absolute azimuth of target point
CN104463956A (en) Construction method and device for virtual scene of lunar surface
CN116883604A (en) Three-dimensional modeling technical method based on space, air and ground images
CN110009740A (en) Geology based on exercise recovery structure is appeared quick three-dimensional reconstructing method
CN104539926B (en) Distance determines method and apparatus
CN110046430A (en) The accurate positioning for ground method of optical satellite image based on the refraction of ellipsoid stratified atmosphere

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210712

Address after: 430050 Building 2, Wuke dormitory, Shilipu, Hanyang District, Wuhan City, Hubei Province

Patentee after: WUHAN CHUXIONG GAOJING TECHNOLOGY Co.,Ltd.

Address before: 430000 room 801, building 6, luogui community, Chenjiawan, Xiongchu Avenue, Wuhan City, Hubei Province

Patentee before: Liu Jin

TR01 Transfer of patent right