CN103745474A - Image registration method based on inertial sensor and camera - Google Patents
- Publication number: CN103745474A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Abstract
The invention provides an image registration method based on an inertial sensor and a camera. The camera and the inertial sensor are controlled synchronously so as to simultaneously acquire the color image and depth image of the same scene and the three-axis angular velocity and three-axis acceleration of the camera. Kalman filtering is applied to the three-axis angular velocity and three-axis acceleration; the Euler dynamics equations convert the filtered three-axis angular velocity into Euler angles, and the uniformly accelerated motion formula converts the filtered three-axis acceleration into a three-axis displacement, which together yield the extrinsic matrix of the camera. Using the affine transformation between the color image and the depth image and the proportional relation between depth-image gray value and pixel depth, the depth of each corner point is obtained from the depth image. An OFCM (Optical Flow and Camera Motion) model then computes the background optical flow of the image, and the RANSAC (Random Sample Consensus) algorithm completes the registration. The method has a small computational load, high real-time performance and high precision, and is applicable to hand-held and mobile devices.
Description
Technical field
The invention belongs to the field of digital image processing, and specifically relates to an image registration method based on an inertial sensor and a camera.
Background technology
The key issue in image registration is the estimation of camera motion, which is generally accomplished through camera calibration. Existing camera self-calibration methods include the following:

Li Lichun et al. proposed a three-dimensional reconstruction algorithm that solves the fundamental matrix by the weighted eight-point method based on a matching measure [Li Lichun, Qiu Zhiqiang, Wang Peng, et al. Three-dimensional reconstruction algorithm solving the fundamental matrix by weighting based on a matching measure [J]. Computer Applications, 2007, 27(10): 2530-2533]. The method is complex and computationally expensive, making real-time image processing difficult.

Zhang Yang et al. proposed an FOE model to verify the translational motion of the camera [Zhang Yang et al. A moving-object detection algorithm based on a mobile camera in the visible range. IEEE International Workshop on Computer Vision and Pattern Recognition, 2006, 130-131]. The method considers neither the rotational motion of the camera nor the camera's intrinsic matrix, so its precision is limited.

Zheng Hong et al. proposed a calibration method based on active vision, which mainly uses hardware platforms of known motion, such as a pan-tilt unit, to provide the camera motion information [Zheng Hong, Liu Zhenqiang. Self-calibration of a pan-tilt camera based on an accurate model [J]. Robot, 2013.5, 35(3): 1994-2013]. Because equipment such as a pan-tilt unit is bulky, the method achieves good real-time performance but is difficult to spread to hand-held and mobile devices, so its range of application is limited.
Summary of the invention
The present invention proposes an image registration method based on an inertial sensor and a camera; the method has a small computational load, high real-time performance and high precision, and can be applied to hand-held and mobile devices.

In order to solve the above technical problems, the invention provides an image registration method based on an inertial sensor and a camera, comprising the following steps:
Step 1: synchronously control the camera and the inertial sensor so as to simultaneously acquire the color images I and J and depth images I' and J' of the same scene, together with the camera's three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z];
Step 2: extract the corner points m_1(x_1, y_1) of color image I;
Step 3: apply Kalman filtering to the camera's three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z] to obtain the Kalman-filtered estimates of both; use the Euler dynamics equations to convert the filtered angular velocity into the Euler angles [α_k, β_k, γ_k], and use the uniformly accelerated motion formula to convert the filtered acceleration into the three-axis displacement [t_xk, t_yk, t_zk], thereby obtaining the rotation matrix R and translation matrix T of the camera's extrinsic matrix [R, T];
Step 4: using the affine transformation between the color image and the depth image, solve for the coordinate positions m_1ir(x_1, y_1) and m_2ir(x_1, y_1) of the corner point m_1(x_1, y_1) of color image I in depth images I' and J'; using the proportional relation between depth-image gray value and pixel depth, solve for the depth values u_1 and u_2 of corner point m_1(x_1, y_1) in depth images I' and J';
Step 5: substitute the rotation matrix R and translation matrix T obtained in step 3 and the depth values u_1, u_2 obtained in step 4 into the OFCM model to compute the background optical flow m_2(x_2, y_2) of image J. The OFCM model is shown in formula (1):

u_2·m_2 = u_1·K·R·K^(-1)·m_1 + K·T (1)

In formula (1), K is the intrinsic matrix of the camera and K^(-1) is the inverse of K;
Step 6: apply RANSAC robust estimation to the background optical flow m_2 of color image I, rejecting from m_2 the flow produced by moving objects and the mismatched flow, thereby completing the registration of color image I and color image J; then use the retained correct background flow of color image I to initialize color image J and perform the registration of subsequent images.
Main inventive principles:

1. Combining the rigid-body transformation relations of Euclidean space, the present invention derives a relational model between image background optical flow and camera motion (OFCM). The derivation proceeds as follows:

Assume the color CCD is a pinhole model and that only the background of the scene moves relative to the color CCD. The camera coordinate system is established with the optical center of the color CCD as origin O and the line of the optical axis as the Z axis; the X and Y axes are then established according to the left-hand rule.

Assume that before the camera moves, the camera coordinate system coincides with the world coordinate system; then the coordinates of any point P in the space background satisfy:
P_W = P_C (2)

In formula (2), P_W is the world-coordinate position of the space point and P_C is its camera-coordinate position. The space background point P and the homogeneous coordinates m_1(x_1, y_1, 1) of its image point on the imaging plane satisfy formula (3):
u_1·m_1 = K·P_C (3)

In formula (3), m_1(x_1, y_1) is the image point on the imaging plane; when the model is evaluated, Harris corner points are used in its place. u_1 can be approximated as the depth of the space point corresponding to m_1(x_1, y_1). K is the intrinsic matrix of the camera; considering the relation between the pixel coordinate system and the physical coordinate system of the image, the five-parameter model of the intrinsic matrix can be obtained, as shown in formula (4):

K = [f/dx, s, u_0; 0, f/dy, v_0; 0, 0, 1] (4)
In formula (4), dx is the physical size of a pixel in the x direction, dy is the physical size of a pixel in the y direction, [u_0, v_0] is the coordinate of the principal point in the pixel coordinate system, and s is the camera skew (distortion) coefficient.
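As a concrete illustration, the five-parameter intrinsic matrix described above can be assembled as below; the focal length, pixel sizes and principal point are illustrative assumptions, not calibration values from the patent:

```python
import numpy as np

def intrinsic_matrix(f, dx, dy, u0, v0, s=0.0):
    """Five-parameter pinhole intrinsic matrix K of formula (4).

    f      : focal length (same physical unit as dx, dy)
    dx, dy : physical size of one pixel in the x and y directions
    u0, v0 : principal point in pixel coordinates
    s      : skew (distortion) coefficient
    """
    return np.array([
        [f / dx, s,      u0],
        [0.0,    f / dy, v0],
        [0.0,    0.0,    1.0],
    ])

# assumed values: 4 mm lens, 8 um square pixels, VGA principal point
K = intrinsic_matrix(f=4e-3, dx=8e-6, dy=8e-6, u0=320.0, v0=240.0)
# focal length expressed in pixel units: f/dx ≈ 500
```

Because K stays constant while the focal length is fixed, it can be computed once from a calibration and reused, exactly as the description notes for the color CCD.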
Assume the camera undergoes a translation in an arbitrary direction and a rotation. The translation can be represented by the translation vector T = [t_x, t_y, t_z]^T and the rotation by a 3*3 rotation matrix R; the coordinates of the space background point P in the camera coordinate system now satisfy:

P'_C = R×P_W + T = R×P_C + T (5)

In formula (5), the rotation matrix R can be expressed with the Euler angles α, β, γ of Euclidean space, as shown in formula (6).
After the camera moves, the space point P and the homogeneous coordinates m_2(x_2, y_2, 1) of its image point on the imaging plane satisfy:

u_2·m_2 = K·P'_C = K(R×P_C + T) (7)
From formulas (3) and (5), the motion optical flow of the space point P on the imaging plane before and after the camera motion, i.e. the final form of the OFCM model, can be obtained:

u_2·m_2 = u_1·K·R·K^(-1)·m_1 + K·T

Here u_1 and u_2 can be approximated as the depth of the space point P in the camera coordinate system, computed from the depth images acquired by the camera; the extrinsic matrix [R, T] is obtained through processing on the hardware platform; m_1 is obtained by Harris corner detection. Assume the depth CCD and the color CCD of the camera both operate at a fixed focal length f, so that the image plane and the object plane correspond one-to-one. The intrinsic matrix K of the color CCD is therefore invariant throughout image acquisition; it can be obtained by a single calibration and used thereafter as an empirical matrix.
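The derivation above, combining formulas (3) and (7), can be sketched for a single corner point; the intrinsic matrix and motion values below are illustrative assumptions:

```python
import numpy as np

def ofcm_flow(m1, u1, u2, K, R, T):
    """Background optical flow of one corner under the OFCM relation
    u2*m2 = K(R*P_C + T) with P_C = u1*K^-1*m1 (formulas (3), (7));
    m1, m2 are homogeneous coordinates [x, y, 1]."""
    P_c = u1 * np.linalg.inv(K) @ m1   # back-project the corner into camera space
    m2 = K @ (R @ P_c + T) / u2        # re-project it after the camera motion
    return m2

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
m1 = np.array([320.0, 240.0, 1.0])     # corner at the principal point
R = np.eye(3)                          # assumed: no rotation
T = np.array([0.0, 0.0, 0.0])          # assumed: no translation
m2 = ofcm_flow(m1, u1=2.0, u2=2.0, K=K, R=R, T=T)
# with identity motion the corner does not move
```

Substituting the R, T estimated by the inertial sensor and the depths u_1, u_2 read from the depth images gives the predicted position of each background corner in the next frame.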
Advantages: existing camera models include the FOE model and the MOF model. Neither considers the intrinsic matrix of the camera, and the FOE model supports only camera translation, so neither can maintain high precision under complex camera motion. The OFCM model proposed by the present invention includes the intrinsic matrix of the camera, supports three-axis translation and three-axis rotation of the camera, and can withstand abrupt changes in scene brightness. The OFCM model can therefore maintain high-precision image registration under complex camera motion and abrupt brightness changes.
2. Hardware platform based on the inertial sensor and the camera:

The present invention uses the OFCM model to compute the background optical flow of the image. As the OFCM model shows, this computation requires the camera's extrinsic matrix [R, T] and the depth values u_1, u_2 corresponding to the image pixels. The present invention uses a hardware platform to obtain the camera's extrinsic matrix and the pixel depth information.
The hardware platform of the present invention comprises three parts: an FPGA controller, an inertial sensor, and a camera. The camera may be a Kinect camera; the inertial sensor comprises a gyroscope and an accelerometer; the Kinect camera comprises a depth-image CCD and a color-image CCD. The FPGA minimum system and the inertial sensor are both fixed onto the depth-image CCD, with the inertial sensor located directly above the Kinect camera. The gyroscope acquires the camera's three-axis angular velocity [w_x, w_y, w_z], the accelerometer acquires the camera's three-axis acceleration [a_x, a_y, a_z], and the Kinect camera captures the color images I, J and depth images I', J'. After the platform powers on, the FPGA controls the shutter of the Kinect camera with gating pulses to acquire the color and depth images, and synchronizes the Kinect camera and the inertial sensor over the SPI protocol so that the acquired image sequence corresponds one-to-one with the collected camera motion.
Advantages: the image registration algorithm based on this model has a small computational load and good real-time performance. Because the inertial sensor is small and precise, the platform can spread to hand-held and mobile devices far more easily than motion-acquisition equipment such as a pan-tilt unit.
3. Establishment of the Kalman filter model based on the inertial sensor:

The camera motion collected by the inertial sensor contains zero offset, scale-factor error, random noise and other errors, so the collected camera motion must be filtered for noise reduction.
To use the classical Kalman filtering formulas, the error model and the state-transition relation of the inertial sensor must be established from its performance and parameters:

z'_k = z_k(M+1) + B_f + U (9)

z_k = Ψ_k,k-1·z_k-1 + V (10)

where z'_k is the observed value of the system at time k, z_k is the true value of the system at time k, M is the scale-factor error, B_f is the zero drift, V and U are random Gaussian white noise, and Ψ_k,k-1 is the state-transition matrix.
From formula (9) combined with time-series analysis, solve for the scale-factor error M, the zero drift B_f, and the Gaussian white noise V and U of the inertial sensor; from formula (10) combined with time-series analysis, solve for the state-transition matrix Ψ_k,k-1 between the parameters.

Substitute the results of formulas (9) and (10) into the classical Kalman filtering formulas (11) and filter the parameters at time k to obtain their estimated values.

Here Q is the noise variance of the system, K_k is the Kalman gain, P_k is the state covariance, and z_k,k-1 is the state prediction.
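A minimal scalar sketch of this filtering, assuming a one-dimensional state per inertial channel and illustrative noise variances; the patent's classical formulas (11) are not reproduced in the text, so the standard predict/update recursion is used here in their place:

```python
def kalman_1d(measurements, psi=1.0, q=1e-3, r=1e-2, M=0.0, Bf=0.0):
    """Scalar Kalman filter for one inertial channel.

    Raw readings follow z' = z*(M+1) + Bf + U (formula (9)), so each
    reading is first corrected for scale factor and bias, then smoothed
    with the classical predict/update recursion built on the
    state-transition relation z_k = psi*z_{k-1} + V (formula (10))."""
    x, P = 0.0, 1.0                     # initial state and covariance
    out = []
    for zp in measurements:
        z = (zp - Bf) / (M + 1.0)       # remove zero drift and scale error
        x_pred = psi * x                # state prediction
        P_pred = psi * psi * P + q
        Kk = P_pred / (P_pred + r)      # Kalman gain
        x = x_pred + Kk * (z - x_pred)  # filtered estimate
        P = (1.0 - Kk) * P_pred
        out.append(x)
    return out

est = kalman_1d([1.02, 0.98, 1.01, 0.99, 1.00])
# estimates converge toward the underlying constant value 1.0
```

In the patent each of the six channels (three angular rates, three accelerations) is filtered this way before being converted to Euler angles and displacements.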
Compared with the prior art, the remarkable advantages of the present invention are:

(1) The camera parameters are estimated in real time by the inertial sensor and the camera: the gyroscope and accelerometer in the inertial sensor acquire the rotational and translational motion parameters of the camera respectively, while the camera comprises a depth-image CCD (depth CCD) and a color-image CCD (color CCD) that together provide the depth information of the color image. Although the method relies on a hardware platform, the inertial sensor is small and precise, so once bound to the camera the method can spread well to hand-held and mobile devices.

(2) From the viewpoint of projective geometry, combined with the rigid-body transformation relations of Euclidean space, the relational model between image background optical flow and camera motion (Optical Flow and Camera Motion) is derived and denoted the OFCM model. The model includes the intrinsic matrix of the camera and supports three-axis translation and three-axis rotation of the camera; at the same time, it retains high precision under abrupt brightness changes, a significant improvement over the FOE model and optical-flow matching methods.
Brief description of the drawings

Fig. 1 is the flow chart of the image registration method based on an inertial sensor and a camera of the present invention.

Fig. 2 is the image registration result obtained with the method of the invention under an abrupt change of scene brightness.

Fig. 3 is the image registration result obtained with the method of the invention when the camera undergoes a translational motion.

Fig. 4 is the image registration result obtained with the method of the invention when the camera undergoes a rotational motion.

Fig. 5 is the image registration result obtained with the method of the invention when the camera undergoes both translational and rotational motion.
Embodiments

As shown in Fig. 1, the image registration method based on an inertial sensor and a camera of the present invention comprises the following steps.

Step 1: on the hardware platform, synchronously control the camera and the inertial sensor so as to simultaneously acquire the color images I and J and depth images I' and J' of the same scene, together with the motion parameters of the camera, i.e. the three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z]. This is implemented as follows:
The hardware platform of the present invention comprises three parts: an FPGA controller, an inertial sensor, and a camera. The inertial sensor comprises a gyroscope and an accelerometer; the camera comprises a depth-image CCD and a color-image CCD and captures the color images I, J and depth images I', J'. The FPGA minimum system and the inertial sensor are both fixed onto the depth-image CCD. The gyroscope acquires the camera's three-axis angular velocity [w_x, w_y, w_z] and the accelerometer acquires the camera's three-axis acceleration [a_x, a_y, a_z]. After the platform powers on, the FPGA controls the camera shutter with gating pulses to acquire the color and depth images, and synchronizes the camera and the inertial sensor over the SPI protocol so that the acquired image sequence corresponds one-to-one with the collected camera motion.
Step 2: extract the corner points m_1(x_1, y_1) of color image I by Harris corner detection, implemented as follows:

2.1 For each pixel p[x, y] in color image I, compute the correlation matrix M as shown in formula (1):

M = Σ w(x,y)·[I_x^2, I_x·I_y; I_x·I_y, I_y^2] (1)

In formula (1), (x, y) is the image coordinate of pixel p, w(x, y) is a Gaussian weighting function over a w*w window, I_x is the derivative in the row direction, and I_y is the derivative in the column direction;
2.2 Compute the corner response R(x, y) of pixel m_1(x_1, y_1) as shown in formula (2):

R(x, y) = det M − k·(trace M)^2 (2)

In formula (2), det M is the determinant of the correlation matrix M, trace M is the trace of M, and k is a constant, generally 0.04-0.06;
2.3 Judge whether pixel m_1(x_1, y_1) is a corner point according to formula (3): if its corner response R(x, y) satisfies formula (3), pixel m_1(x_1, y_1) is a corner point; if its corner response R(x, y) does not satisfy formula (3), m_1(x_1, y_1) is discarded;

[R(x,y) ≥ R(x',y')] ∩ [R(x,y) ≥ 0.01·R_max] (3)

In formula (3), R(x', y') is the corner response of every pixel other than m_1(x_1, y_1) within the w*w window, and R_max is the maximum corner response in the image.
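Steps 2.1-2.3 can be sketched as follows; for brevity the Gaussian window w(x, y) is replaced by a 3x3 box window, which is a simplification of the patent's scheme:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    with M accumulated from gradient products over a 3x3 box window
    (the patent uses a w*w Gaussian window instead)."""
    Ix = np.gradient(img, axis=1)      # derivative in the row direction
    Iy = np.gradient(img, axis=0)      # derivative in the column direction

    def box(a):                        # 3x3 box sum with zero padding
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    A, B, C = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return A * B - C * C - k * (A + B) ** 2

img = np.zeros((9, 9))
img[4:, 4:] = 1.0                      # a bright quadrant: one true corner
R = harris_response(img)
# the response peaks at the quadrant corner; it is negative along edges
```

The thresholding of formula (3) would then keep only local maxima of R that exceed 0.01 times the global maximum.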
Step 3: apply Kalman filtering to the camera's three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z] to obtain the Kalman-filtered estimates of both; use the Euler dynamics equations to convert the filtered angular velocity into the Euler angles [α_k, β_k, γ_k], and use the uniformly accelerated motion formula to convert the filtered acceleration into the three-axis displacement [t_xk, t_yk, t_zk], thereby obtaining the rotation matrix R and translation matrix T of the camera's extrinsic matrix [R, T];
3.1 Apply Kalman filtering to the motion parameters of the camera, i.e. the three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z], to obtain their Kalman-filtered estimates. The concrete computation is as follows:
3.1.1 Establish the error model and the state-transition relation of the inertial sensor from its performance and motion parameters, and solve by time-series analysis for the scale-factor error M, the zero drift B_f, the Gaussian white noise U, the state-transition matrix Ψ_k,k-1 between the motion parameters, and the Gaussian white noise V.

The error model is shown in formula (4) and the state-transition relation in formula (5):

z'_k = z_k(M+1) + B_f + U (4)

z_k = Ψ_k,k-1·z_k-1 + V (5)

In formulas (4) and (5), z represents one of the motion parameters among the three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z]; z'_k is the value of that motion parameter measured by the inertial sensor at time k; z_k is its true value at time k; M is the scale-factor error; B_f is the zero drift; V and U are random Gaussian white noise; and Ψ_k,k-1 is the state-transition matrix.

From formula (4) combined with time-series analysis, solve for the scale-factor error M, the zero drift B_f, and the Gaussian white noise V and U of the inertial sensor; from formula (5) combined with time-series analysis, solve for the state-transition matrix Ψ_k,k-1 between the motion parameters;
3.1.2 Substitute the results of step 3.1.1 into the classical Kalman filtering formula (6) and filter the motion parameter z_k at time k to obtain its estimated value.

In formula (6), Q is the noise variance of the system, K_k is the Kalman gain, P_k is the state covariance, and z_k,k-1 is the state prediction.

Apply Kalman filtering as above to the three-axis angular velocity [w_x, w_y, w_z] and the three-axis acceleration [a_x, a_y, a_z] respectively, obtaining their filtered estimates.
3.2 Use the Euler dynamics equations to convert the filtered three-axis angular velocity w_xk, w_yk, w_zk at time k into the Euler angles α_k, β_k, γ_k in the Euclidean coordinate system, and substitute the resulting α_k, β_k, γ_k into the rotation matrix R, completing the computation of the rotation matrix R of the color CCD.

The Euler dynamics equations are shown in formula (7). In formula (7), the left-hand side contains the first-order derivatives of the Euler angles α_k, β_k, γ_k, and the parameters α_k-1, β_k-1, γ_k-1 obtained at time k-1 serve as the initial values of formula (7).

The rotation matrix R is as shown in formula (8):
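The Euler-angle-to-rotation-matrix conversion of formula (8) is not reproduced in the text above; the sketch below assumes the common Z-Y-X composition R = Rz(γ)·Ry(β)·Rx(α), which may differ from the convention actually used in the patent:

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """Rotation matrix from Euler angles, assumed Z-Y-X convention
    (an assumption; the patent's formula (8) is not shown here)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # roll about X
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # pitch about Y
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # yaw about Z
    return Rz @ Ry @ Rx

R = rotation_from_euler(0.0, 0.0, np.pi / 2)  # 90 degrees about Z
```

Whatever the convention, the resulting R must be orthonormal, which is a useful sanity check on the integrated Euler angles.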
3.3 Use the velocity-displacement formula shown in formula (9) to convert the three-axis acceleration a_xk, a_yk, a_zk into the three-axis displacement t_xk, t_yk, t_zk. Because the camera motion is sampled at a high frequency, the translation of the camera within one period can be approximated as uniformly accelerated motion; the computation is as shown in formula (9):

t_k = v_k-1·T + (1/2)·a_k-1·T^2, v_k = v_k-1 + a_k-1·T (9)

In formula (9), t_k is the displacement along any one coordinate axis among the three-axis displacements t_xk, t_yk, t_zk; T is the shooting period of the camera; a_k-1 is the acceleration along the corresponding axis at time k-1; and v_k-1 and v_k are the initial velocities at times k-1 and k respectively. The initial velocity of the system at start-up is fixed to zero, i.e. v_0 = 0. Substituting the three-axis accelerations a_xk, a_yk, a_zk into formula (9) yields the translation vector T = [t_xk, t_yk, t_zk] of the three-axis translation.
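The per-axis integration of step 3.3 can be sketched as follows; the sampling period value is an illustrative assumption matching the 40 frames/s mentioned in the experiment:

```python
def integrate_axis(accels, T):
    """Per-axis displacement under piecewise uniformly accelerated
    motion: over each shooting period T,
        t_k = v_{k-1}*T + 0.5*a_{k-1}*T**2,   v_k = v_{k-1} + a_{k-1}*T,
    with v_0 = 0 at system start-up."""
    v, disps = 0.0, []
    for a in accels:
        disps.append(v * T + 0.5 * a * T * T)  # displacement within this period
        v = v + a * T                          # initial velocity of next period
    return disps

# constant 1 m/s^2 over four 0.025 s periods (40 frames/s)
d = integrate_axis([1.0, 1.0, 1.0, 1.0], T=0.025)
# the cumulative displacement equals 0.5 * 1 * 0.1^2 = 0.005 m
```

Running this once per axis on the filtered accelerations gives the translation vector [t_xk, t_yk, t_zk].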
Step 4: using the affine transformation between the color image and the depth image, solve respectively for the coordinate positions m_1ir(x_1, y_1) and m_2ir(x_1, y_1) of the corner point m_1(x_1, y_1) of color image I in depth images I' and J'; using the proportional relation between gray value and pixel depth in the depth image, solve for the depth values u_1 and u_2 of corner point m_1(x_1, y_1) in depth images I' and J'. This is implemented as follows:
4.1 Using the affine transformation between the color image and the depth image, solve respectively for the coordinate positions m_1ir(x_1, y_1) and m_2ir(x_1, y_1) of the corner point m_1(x_1, y_1) of color image I in depth images I' and J', as follows:

Compute the affine transformation between the color image and the depth image with the direct linear transformation (DLT) algorithm, as shown in formula (10):

m_ir = H·m (10)

In formula (10), m_ir is a pixel coordinate in the depth image and m is a pixel coordinate in the color image. Substituting the corner point m_1(x_1, y_1) of color image I into formula (10) yields its coordinate positions m_1ir(x_1, y_1) and m_2ir(x_1, y_1) in depth images I' and J'.
4.2 Using the proportional relation between gray value and pixel depth in the depth image, solve for the depth values u_1 and u_2 of the color-image corner point m_1(x_1, y_1) in depth images I' and J'. The concrete computation is as follows:

The proportional relation between gray value and pixel depth is an empirical relation obtained by reverse engineering, as shown in formula (11):

u = l×m (11)

In formula (11), m(x, y) is the gray value at pixel coordinate (x, y), u is the corresponding depth value at (x, y), and l is the scale factor. Substituting the m_1ir(x_1, y_1) and m_2ir(x_1, y_1) obtained in step 4.1 into formula (11) yields the corresponding depth values u_1 and u_2.
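Steps 4.1 and 4.2 combine into a short depth lookup; the matrix H, the scale factor l and the image values below are illustrative assumptions standing in for the DLT-calibrated transformation and the reverse-engineered proportionality:

```python
import numpy as np

def corner_depth(m, H, depth_img, l):
    """Map a color-image corner into the depth image with the
    transformation m_ir = H*m (formula (10)) and read its depth via
    the proportional relation u = l*gray (formula (11))."""
    m_ir = H @ np.array([m[0], m[1], 1.0])        # homogeneous mapping
    x, y = m_ir[0] / m_ir[2], m_ir[1] / m_ir[2]   # normalize coordinates
    gray = depth_img[int(round(y)), int(round(x))]
    return l * gray

H = np.eye(3)                      # assumed: color and depth already aligned
depth_img = np.full((4, 4), 50.0)  # uniform gray level 50
u = corner_depth((2.0, 1.0), H, depth_img, l=0.02)
# u = 0.02 * 50 = 1.0 under the assumed scale factor
```

Applying this lookup to m_1ir in I' and m_2ir in J' gives the two depths u_1 and u_2 needed by the OFCM model in step 5.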
Step 5: substitute the extrinsic matrices R and T obtained in step 3 and the depth values u_1, u_2 obtained in step 4 into the OFCM model to compute the background optical flow m_2(x_2, y_2) of image J. This is implemented as follows:

Substitute the extrinsic matrices R and T obtained in step 3 and the depth values u_1, u_2 obtained in step 4 into the OFCM model shown in formula (12), computing the background flow m_2 in frame k+1 of the frame-k corner point m_1:

u_2·m_2 = u_1·K·R·K^(-1)·m_1 + K·T (12)

In formula (12), K is the intrinsic matrix of the camera and K^(-1) is the inverse of K.
Step 6: apply RANSAC robust estimation to the background optical flow m_2 of color image I, rejecting from m_2 the flow produced by moving objects and the mismatched flow, thereby completing the registration of color image I and color image J; then use the retained correct background flow of color image I to initialize color image J and perform the registration of subsequent images. This is implemented as follows:

The RANSAC computation for color image I is:

6.1 Initialize all current corner points m_1(x_1, y_1), m_2(x_2, y_2) as inliers;

6.2 Compute the affine transformation matrix H_1 with the DLT algorithm, and estimate the corner estimate m_2'(x_2', y_2') from the corner point m_1(x_1, y_1) according to formula (13):

m_2' = H_1·m_1 (13)
6.3 Compute the Euclidean distance d between the corner point m_2(x_2, y_2) and the corner estimate m_2'(x_2', y_2') according to formula (14). If d is less than the threshold d', set the corner points m_1(x_1, y_1), m_2(x_2, y_2) as inliers (background optical flow); if d is greater than or equal to the threshold d', set the corner points m_1(x_1, y_1), m_2(x_2, y_2) as outliers, the outliers being moving-object flow or mismatched flow. After one RANSAC computation completes, initialize the current inlier points as outlier points and repeat steps 6.2 and 6.3; when no new inliers are produced or the prescribed number of repetitions is reached, the current RANSAC algorithm terminates.

d = sqrt((x_2−x_2')^2 + (y_2−y_2')^2) (14)
The process of initializing color image J is: first traverse all corner points of the frame; if a corner point coincides with a correct background flow retained from color image I, take that corner point as an initial inlier of the RANSAC computation. Repeat steps 6.1, 6.2 and 6.3 to complete the RANSAC estimation frame by frame.

Following the above steps, the moving-object flow and mismatched flow in the images are continually rejected, and the corner-point optical flow extracted by the RANSAC algorithm completes the registration of the images.
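The inlier/outlier loop of steps 6.1-6.3 can be sketched with a deliberately simplified motion model (a pure 2D translation instead of the DLT affine matrix H_1); the point sets below are illustrative:

```python
import random

def ransac_translation(pts1, pts2, thresh=1.0, iters=30, seed=0):
    """RANSAC loop in the spirit of steps 6.1-6.3, simplified: the motion
    model is a 2D translation, so one correspondence is a minimal sample.
    Pairs whose Euclidean distance d to the prediction (formula (14)) is
    below the threshold are inliers (background flow); the rest are
    outliers (moving-object or mismatched flow)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        i = rng.randrange(len(pts1))           # draw a minimal sample
        tx = pts2[i][0] - pts1[i][0]           # hypothesized translation
        ty = pts2[i][1] - pts1[i][1]
        inliers = [
            j for j, (p, q) in enumerate(zip(pts1, pts2))
            if ((q[0] - p[0] - tx) ** 2 + (q[1] - p[1] - ty) ** 2) ** 0.5 < thresh
        ]
        if len(inliers) > len(best_inliers):   # keep the best consensus set
            best_inliers = inliers
    return best_inliers

pts1 = [(0, 0), (1, 0), (2, 1), (5, 5)]
pts2 = [(1, 0), (2, 0), (3, 1), (9, 2)]        # last pair: a moving object
inliers = ransac_translation(pts1, pts2)
# the three background points share the translation (1, 0); pair 3 is rejected
```

The patent's version fits an affine H_1 by DLT from a larger minimal sample, but the consensus logic, threshold test and iteration cap (30 in the experiment) are the same.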
The beneficial effects of the present invention can be further illustrated by the following experiment:

The main controller used in the experiment is an ALTERA EP2C5Q208C6 FPGA. The sensors comprise two parts: an ADIS16405 inertial sensor and a Kinect camera. The gyroscope and accelerometer in the inertial sensor acquire the rotational and translational motion parameters of the camera respectively; the Kinect camera comprises a depth-image CCD (depth CCD) and a color-image CCD (color CCD), which together provide the depth information of the color image. Because the inertial sensor measures only 35mm*30mm, the hardware platform is compact and can spread to mobile devices.

The Kinect camera acquires color and depth images of size 640*480. Based on a real-time analysis of the PC-side algorithm, the present embodiment sets the image acquisition frequency of the hardware platform and the parameter acquisition frequency of the inertial sensor to 40 frames/s. The Harris corner detection window is 5*5, and the number of RANSAC iterations is set to 30.

To better verify the effect of the OFCM model proposed by the inventive method, the experiment performed image registration of the same scene with the Kinect camera under abrupt brightness change, random translation, random rotation, and combined random translation-rotation conditions, producing the results shown in Fig. 2, Fig. 3, Fig. 4 and Fig. 5 respectively. In each of these figures, the left half is the left image and the right half is the right image; the two images are 10 frames apart, the left being frame k and the right frame k+10. As Figs. 2-5 show, the inventive method completes image registration accurately.

Table 1 gives the PC-side image registration performance statistics under the above camera motion conditions. Comparing the indices of each motion mode longitudinally, the inventive method obtains good registration results under both translational and rotational motion of the Kinect camera, with an average processing precision of 69.5%. Comparing the indices across motion modes laterally, the registration accuracy for single motions of the Kinect camera is higher than for the combined motion; this error is mainly caused by the stacking of gyroscope and accelerometer errors. As for real-time performance, the average processing time on the PC is about 22.92ms; the timing differences arise mainly because combined motion of the Kinect camera increases the probability of mismatched points, making the RANSAC estimation relatively time-consuming. Based on the PC-side time estimate and the practical difficulty of hardware acquisition, the synchronous acquisition frequency of the Kinect camera's image frames and the inertial sensor's parameters is set to 30Hz.

Table 1. PC-side image registration performance statistics
Claims (8)
1. An image registration method based on an inertial sensor and a camera, characterized by comprising the following steps:

Step 1: synchronously control the camera and the inertial sensor so as to simultaneously acquire the color images I and J and depth images I' and J' of the same scene, together with the camera's three-axis angular velocity [w_x, w_y, w_z] and three-axis acceleration [a_x, a_y, a_z];

Step 2: extract the corner points m_1(x_1, y_1) of color image I;
Step 3, utilize the three axis angular rate [ws of Kalman filtering to video camera
x, w
y, w
z] and 3-axis acceleration [a
x, a
y, a
z] carry out filtering, obtain Kalman Filter Estimation three axis angular rates
with Kalman Filter Estimation 3-axis acceleration
; Use euler dynamical equations formula will
convert Eulerian angle [α to
k, β
k, γ
k], use equation of uniformly accelerated motion will
convert three-shaft displacement [t to
xk, t
yk, t
zk], thereby obtain video camera, join rotation matrix R and the translation matrix T in matrix [R, T] outward;
Step 4, utilize the affine transformation relationship of coloured image and depth image, solve the angle point m of coloured image I
1(x
1, y
1) coordinate position m in depth image I ' and J '
1ir(x
1, y
1) and m
2ir(x
1, y
1); Utilize the proportionate relationship of depth image gray-scale value and pixel depth information to solve angle point m
1(x
1, y
1) depth information u in depth image I ' and J '
1and u
2;
The depth information u of the coloured image that step 5, the rotation matrix R that step 3 is obtained and translation matrix T and step 4 obtain
1, u
2bring OFCM model into, calculate the background light stream m that obtains image J
2(x
2, y
2), OFCM model as shown in Equation (1),
In formula (1), the internal reference matrix that K is video camera, K
-1for the inverse matrix of K;
The background light stream m of step 6, use RANSAC color image I
2carry out Robust estimation, by background light stream m
2the light stream being produced by moving object of middle existence and mismatch quasi-optical stream and weed out, thereby complete the image registration of coloured image I and coloured image J, then utilize the correct background light stream of the coloured image I retaining to carry out initialization to coloured image J, carry out the registration of successive image.
2. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 1, an FPGA synchronously controls the camera and the inertial sensor over the SPI protocol; the inertial sensor comprises a gyroscope and an accelerometer; the camera comprises a depth-image acquisition CCD and a color-image acquisition CCD; and the FPGA minimal system and the inertial sensor are both fixed on the depth-image acquisition CCD.
3. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 2, Harris corner detection is used to extract the corner points m1(x1, y1) of color image I, computed as follows:
3.1 Compute the correlation matrix M of each pixel p[x, y] in color image I, as shown in formula (2);
In formula (2), (x, y) is the image coordinate of pixel p, w(x, y) is a Gaussian weighting function with window size w*w, Ix is the derivative in the row direction, and Iy is the derivative in the column direction;
3.2 Compute the corner response R(x, y) of pixel m1(x1, y1), as shown in formula (3),
R(x, y) = detM - k*(traceM)^2 (3)
In formula (3), detM is the value of the determinant of the correlation matrix M, traceM is the trace of M, and k is a constant, generally 0.04-0.06;
3.3 Judge whether pixel m1(x1, y1) is a corner point using formula (4): if its corner response R(x, y) satisfies formula (4), m1(x1, y1) is a corner point; if it does not satisfy formula (4), discard m1(x1, y1);
[R(x, y) ≥ R(x', y')] ∩ [R(x, y) ≥ 0.01Rmax] (4)
In formula (4), R(x', y') is the corner response of any other pixel within the w*w window, and Rmax is the maximum corner response in feature frame If.
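The procedure of steps 3.1-3.3 can be sketched as follows. This is an illustration, not the patent's implementation: a box window stands in for the Gaussian weighting w(x, y), and the window size `win` is an assumed parameter; the 0.01 factor follows formula (4).

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Corner response R = det(M) - k*(trace M)^2 per pixel (formulas (2)-(3)).
    img: 2-D float array. A box window replaces the Gaussian weighting."""
    # Derivatives in the row (Ix) and column (Iy) directions.
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Windowed sum via shifted copies (wraps at borders; fine for a sketch).
        out = np.zeros_like(a)
        r = win // 2
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

def is_corner(R, y, x, win=3, frac=0.01):
    """Formula (4): local maximum within the window AND >= 1% of global max."""
    r = win // 2
    patch = R[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return R[y, x] >= patch.max() and R[y, x] >= frac * R.max()
```

On a synthetic bright square, the square's corners produce a positive response that dominates their neighborhood, while flat regions score zero.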
4. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 3, the Kalman-filtered estimates of the three-axis angular velocity and the three-axis acceleration are computed as follows:
4.1 Establish the error model and the state-transition relation of the inertial sensor according to its characteristics and motion parameters, and, combined with time-series analysis, solve for the scale-factor error M, the zero drift Bf, the Gaussian white noise U, the state-transition matrix Ψk,k-1 between motion parameters, and the Gaussian white noise V;
The error model is shown in formula (4) and the state-transition relation in formula (5),
z'k = zk(M + 1) + Bf + U (4)
zk = Ψk,k-1 zk-1 + V (5)
In formulas (4) and (5), z denotes any one motion parameter among the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az], z'k is the measured value of that parameter obtained by the inertial sensor at time k, and zk is its true value at time k;
4.2 Substitute the results of step 4.1 into the Kalman filtering formula to filter the motion parameter zk at time k and obtain the estimate of that parameter; the Kalman filtering formula is shown in formula (6);
In formula (6), Q is the system noise variance, Kk is the Kalman gain, Pk is the state covariance, and zk,k-1 is the state prediction.
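Formula (6) is not reproduced in the source text. The sketch below assumes the standard scalar predict/update Kalman recursion applied per motion parameter, with the error model of formula (4) inverted on each raw reading; the parameter names (`psi`, `scale_err`, `drift`, `q`, `r`) and their defaults are illustrative.

```python
def kalman_1d(measurements, psi=1.0, scale_err=0.0, drift=0.0,
              q=1e-3, r=1e-2):
    """Scalar Kalman filter over one motion parameter z (a sketch standing
    in for formula (6)). Raw readings follow the error model of formula (4),
    z' = z*(M+1) + Bf + U, so each is first corrected; the state follows
    formula (5), z_k = psi * z_{k-1} + V."""
    x, p = 0.0, 1.0  # state estimate and its covariance
    estimates = []
    for z_meas in measurements:
        z = (z_meas - drift) / (1.0 + scale_err)  # undo formula (4)
        # Predict step (formula (5)).
        x_pred = psi * x
        p_pred = psi * p * psi + q
        # Update step: Kalman gain, state correction, covariance shrink.
        k_gain = p_pred / (p_pred + r)
        x = x_pred + k_gain * (z - x_pred)
        p = (1.0 - k_gain) * p_pred
        estimates.append(x)
    return estimates
```

Fed a constant true rate corrupted by a known scale error and drift, the estimate converges to the true value within a few dozen samples.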
5. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 3, the rotation matrix R and translation matrix T of the camera are computed as follows:
5.1 Use the Euler dynamics equation to convert the filtered three-axis angular velocity wxk, wyk, wzk at time k into the Euler angles αk, βk, γk of the Euclidean coordinate system, then substitute the obtained αk, βk, γk into the rotation matrix R, completing the computation of the color-CCD rotation matrix R;
The Euler dynamics equation is shown in formula (7); in formula (7), the left-hand side is the first derivative of the Euler angles, and the parameters αk-1, βk-1, γk-1 obtained at time k-1 are used as its initial values;
The rotation matrix R is shown in formula (8);
5.2 Use the velocity-displacement formula shown in formula (9) to convert the three-axis acceleration axk, ayk, azk into the three-axis displacement txk, tyk, tzk;
In formula (9), tk is the displacement along any one coordinate axis among txk, tyk, tzk, T is the camera's frame period, ak-1 is the acceleration along the corresponding axis at time k-1, and vk-1 and vk are the velocities at times k-1 and k respectively; the initial velocity of the system at its initial time is taken to be 0.
6. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 4, the affine transformation relationship between the color images and the depth images is computed according to the direct linear transformation (DLT) algorithm, as shown in formula (10):
mir = Hm (10)
In formula (10), mir is the pixel coordinate in the depth image and m is the pixel coordinate in the color image.
7. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 4, the proportional relationship between gray value and pixel depth is obtained from an empirical relation derived by reverse engineering, as shown in formula (11):
u = l × m (11)
In formula (11), m(x, y) is the gray value at pixel coordinate (x, y), u is the corresponding depth value at (x, y), and l is the scale factor.
8. The image registration method based on an inertial sensor and a camera according to claim 1, characterized in that, in step 6,
the robust estimation on the background optical flow m2 of color image I using RANSAC is computed as follows:
8.1 Initialize all current corner points m1(x1, y1), m2(x2, y2) as inliers;
8.2 Compute the affine transformation matrix H1 with the DLT algorithm, and estimate the corner-point estimate m2'(x2', y2') from the corner point m1(x1, y1) according to formula (12):
m2' = H1m1 (12)
8.3 Compute the Euclidean distance d between the corner point m2(x2, y2) and the estimate m2'(x2', y2') according to formula (13); if d is less than the threshold d', set the corner points m1(x1, y1), m2(x2, y2) as inliers (i.e. background optical flow); if d is greater than or equal to d', set them as outliers; after one RANSAC computation completes, initialize the current inliers as outliers and repeat steps 8.2 and 8.3; when no new inliers are produced, or the prescribed number of repetitions is reached, terminate the current RANSAC algorithm,
d = sqrt((x2 - x2')^2 + (y2 - y2')^2) (13)
The process of initializing color image J is: first traverse all corner points of that frame; if a corner point coincides with the correct background optical flow retained from color image I, take that corner point as an initial inlier for the RANSAC computation; repeat the method described in steps 8.1, 8.2 and 8.3 to complete the RANSAC estimation frame by frame.
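Steps 8.1-8.3 can be sketched as follows. For brevity this illustration fits a pure translation from a single sampled correspondence instead of the DLT-estimated affine matrix H1 (a simplification; the patent fits a full affine transform); the inlier test is the Euclidean-distance check of formula (13).

```python
import math
import random

def ransac_background_flow(pts1, pts2, thresh=2.0, iters=100, seed=0):
    """Return indices of correspondences kept as inliers (background flow).
    pts1, pts2: matched (x, y) corner lists from frames k and k+1."""
    rng = random.Random(seed)
    best_inliers = []
    n = len(pts1)
    for _ in range(iters):
        i = rng.randrange(n)               # minimal sample for a translation
        dx = pts2[i][0] - pts1[i][0]       # candidate motion model
        dy = pts2[i][1] - pts1[i][1]
        inliers = []
        for j in range(n):
            # Estimate m2' from m1 (stand-in for formula (12)), then apply
            # the Euclidean-distance inlier test of formula (13).
            ex, ey = pts1[j][0] + dx, pts1[j][1] + dy
            d = math.hypot(pts2[j][0] - ex, pts2[j][1] - ey)
            if d < thresh:
                inliers.append(j)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

With ten background points sharing one translation and two independently moving outliers, the consensus set recovers exactly the background correspondences.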
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410027349.4A CN103745474B (en) | 2014-01-21 | 2014-01-21 | Image registration method based on inertial sensor and camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103745474A true CN103745474A (en) | 2014-04-23 |
CN103745474B CN103745474B (en) | 2017-01-18 |
Family
ID=50502489
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104717426A (en) * | 2015-02-28 | 2015-06-17 | 深圳市德赛微电子技术有限公司 | Multi-camera video synchronization device and method based on external sensor |
CN105096337A (en) * | 2014-05-23 | 2015-11-25 | 南京理工大学 | Image global motion compensation method based on hardware platform of gyroscope |
CN105303518A (en) * | 2014-06-12 | 2016-02-03 | 南京理工大学 | Region feature based video inter-frame splicing method |
CN106251342A (en) * | 2016-07-26 | 2016-12-21 | 杭州好好开车科技有限公司 | Camera calibration method based on sensing technology |
CN106447766A (en) * | 2016-09-28 | 2017-02-22 | 成都通甲优博科技有限责任公司 | Scene reconstruction method and apparatus based on mobile device monocular camera |
CN106485245A (en) * | 2015-08-24 | 2017-03-08 | 南京理工大学 | A kind of round-the-clock object real-time tracking method based on visible ray and infrared image |
CN107507235A (en) * | 2017-08-31 | 2017-12-22 | 山东大学 | A kind of method for registering of coloured image and depth image based on the collection of RGB D equipment |
CN107729847A (en) * | 2017-10-20 | 2018-02-23 | 阿里巴巴集团控股有限公司 | A kind of certificate verification, auth method and device |
CN108225309A (en) * | 2016-12-21 | 2018-06-29 | 波音公司 | Enhance the method and apparatus of multiple raw sensor images by geographic registration |
CN108232451A (en) * | 2017-12-28 | 2018-06-29 | 无锡人人行拍网络科技有限公司 | A kind of double-movement terminal data transmission antenna automatic bidirectional tracking system |
CN108388830A (en) * | 2018-01-09 | 2018-08-10 | 中国农业大学 | Animal shaped detection method and device based on convolutional Neural net |
CN108668087A (en) * | 2017-03-27 | 2018-10-16 | 展讯通信(上海)有限公司 | image alignment method and device, mobile terminal |
WO2019051961A1 (en) * | 2017-09-18 | 2019-03-21 | 深圳大学 | Pipeline detection method and apparatus, and storage medium |
CN109655057A (en) * | 2018-12-06 | 2019-04-19 | 深圳市吉影科技有限公司 | A kind of six push away the filtering optimization method and its system of unmanned plane accelerator measured value |
CN109683710A (en) * | 2018-12-20 | 2019-04-26 | 北京字节跳动网络技术有限公司 | A kind of palm normal vector determines method, apparatus, equipment and storage medium |
CN109902725A (en) * | 2019-01-31 | 2019-06-18 | 北京达佳互联信息技术有限公司 | Mobile mesh object detection method, device and electronic equipment and storage medium |
CN109900338A (en) * | 2018-12-25 | 2019-06-18 | 西安中科天塔科技股份有限公司 | A kind of road surface pit slot volume measuring method and device |
CN110414392A (en) * | 2019-07-15 | 2019-11-05 | 北京天时行智能科技有限公司 | A kind of determination method and device of obstacle distance |
CN110602377A (en) * | 2019-03-18 | 2019-12-20 | 上海立可芯半导体科技有限公司 | Video image stabilizing method and device |
CN111462177A (en) * | 2020-03-14 | 2020-07-28 | 华中科技大学 | Multi-clue-based online multi-target tracking method and system |
CN111540003A (en) * | 2020-04-27 | 2020-08-14 | 浙江光珀智能科技有限公司 | Depth image generation method and device |
CN111750850A (en) * | 2019-03-27 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Angle information acquisition method, device and system |
CN111899305A (en) * | 2020-07-08 | 2020-11-06 | 深圳市瑞立视多媒体科技有限公司 | Camera automatic calibration optimization method and related system and equipment |
WO2021035524A1 (en) * | 2019-08-27 | 2021-03-04 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
WO2021035525A1 (en) * | 2019-08-27 | 2021-03-04 | Oppo广东移动通信有限公司 | Image processing method and apparatus, and electronic device and computer-readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101630406A (en) * | 2008-07-14 | 2010-01-20 | 深圳华为通信技术有限公司 | Camera calibration method and camera calibration device |
CN102999759A (en) * | 2012-11-07 | 2013-03-27 | 东南大学 | Light stream based vehicle motion state estimating method |
WO2013084782A1 (en) * | 2011-12-09 | 2013-06-13 | 株式会社日立国際電気 | Image processing device |
Non-Patent Citations (2)
Title |
---|
YUAN Mengdi et al.: "Research and Implementation of Real-Time Image Registration in an Infrared Search and Track System", Infrared Technology *
CHEN Chen et al.: "Drift Compensation of a MEMS Gyroscope Based on Kalman Filtering", Journal of Mechanical & Electrical Engineering *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103745474B (en) | Image registration method based on inertial sensor and camera | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
CN109029433B (en) | Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform | |
CN104167016B (en) | A kind of three-dimensional motion method for reconstructing based on RGB color and depth image | |
CN109671120A (en) | A kind of monocular SLAM initial method and system based on wheel type encoder | |
CN109166149A (en) | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU | |
CN106525049B (en) | A kind of quadruped robot ontology posture tracking method based on computer vision | |
CN107767425A (en) | A kind of mobile terminal AR methods based on monocular vio | |
CN103761737A (en) | Robot motion estimation method based on dense optical flow | |
CN109859266A (en) | Vision positions and drawing practice simultaneously under a kind of big visual angle change based on pre-transform | |
CN112907631B (en) | Multi-RGB camera real-time human body motion capture system introducing feedback mechanism | |
CN103994765A (en) | Positioning method of inertial sensor | |
CN102607535A (en) | High-precision real-time stereoscopic visual positioning method utilizing parallax space bundle adjustment | |
CN114693754B (en) | Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion | |
CN108253962A (en) | New energy pilotless automobile localization method under a kind of low light environment | |
CN105303518A (en) | Region feature based video inter-frame splicing method | |
CN112179373A (en) | Measuring method of visual odometer and visual odometer | |
Wang et al. | Temporal and spatial online integrated calibration for camera and LiDAR | |
Tian et al. | Research on multi-sensor fusion SLAM algorithm based on improved gmapping | |
CN103236053B (en) | A kind of MOF method of moving object detection under mobile platform | |
CN113916221B (en) | Self-adaptive pedestrian dead reckoning method integrating visual odometer and BP network | |
Garro et al. | Fast Metric Acquisition with Mobile Devices. | |
CN114690230A (en) | Automatic driving vehicle navigation method based on visual inertia SLAM | |
Deng et al. | Robust 3D-SLAM with tight RGB-D-inertial fusion | |
Li et al. | A real-time indoor visual localization and navigation method based on tango smartphone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170118; Termination date: 20200121 |