CN115690910A - Helmet pose tracking system and method for assisting visual feature point capture by IMU (inertial measurement Unit) - Google Patents
- Publication number
- CN115690910A (application number CN202211336959.3A)
- Authority
- CN
- China
- Prior art keywords
- helmet
- imu
- camera
- visual
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a helmet pose tracking system and method in which an IMU (inertial measurement unit) assists visual feature point capture. The system comprises a camera unit, a marker point unit, an IMU unit and a tracking processing unit. The marker point unit comprises a plurality of groups of marker points arranged at different positions on the helmet, each group being either lit or unlit. The camera unit comprises a plurality of cameras arranged in the vehicle cabin; each camera is aimed at the helmet's range of motion within the cabin so that at least one camera images a lit group of marker points. The IMU unit comprises a helmet IMU and a vehicle IMU. The tracking processing unit performs visual pose measurement of the helmet using the features of the lit marker points in the images captured by the camera unit; it establishes a combined visual-inertial Kalman filter and filters the visual pose measurements using the IMU data; from the filter output it predicts the position of the lit marker points in the next captured frame, which is used for fast matching in the next frame's visual pose measurement. The invention satisfies both large-range and high-precision visual measurement.
Description
Technical Field
The invention relates to the technical field of pose tracking, and in particular to a helmet pose tracking system and method in which an IMU (inertial measurement unit) assists the capture of visual feature points.
Background
Currently, in applications that track the relative position and attitude of a helmet inside a moving vehicle, an infrared lamp group mounted on the helmet is generally used together with infrared cameras mounted in the vehicle cabin to compute the position and attitude of the helmet. In such applications, accurately calibrated high-speed cameras are matched with the lamp group: the cameras acquire real-time images, the vehicle computer performs image processing and extracts the lamp feature point information, and after the feature points are matched, the position and attitude of the helmet relative to the cockpit are solved by a computer-vision PnP method.
The PnP pose measurement method relies on accurate feature point matching. When the helmet moves slowly, the matching result of the previous camera frame can be used as the initial value for iterating the feature point matching of the current frame. However, when the helmet moves too fast relative to the cameras, or the lamp group is switched, the positions of the feature points in the image change greatly, and the previous frame's data can no longer seed the iteration. If the iteration on the previous frame has not finished when the next set of image data arrives, feature point matching cannot be completed and helmet tracking fails. Such a scheme therefore only measures accurately under low-dynamic conditions.
To solve the problem of dynamic helmet pose tracking, an IMU is usually introduced as a supplementary means: a Kalman filter is designed, and the vision measurement result is used as the observation that corrects the integration drift of the IMU. The delay between the vision computation and the IMU computation is essentially constant, so when the vision measurement is accurate the filter obtains a good result. However, when the helmet moves over too large a range, vision measurements are missing, which leads to an excessive feature-matching computation load and poor tracking performance.
Disclosure of Invention
In view of the foregoing analysis, the present invention aims to provide a helmet pose tracking system and method for assisting the capture of visual feature points by an IMU, so as to realize head pose tracking and improve the dynamic performance of tracking.
The technical scheme provided by the invention is as follows:
the invention discloses a helmet pose tracking system in which an IMU (inertial measurement unit) assists visual feature point capture, comprising: a camera unit, a marker point unit, an IMU unit and a tracking processing unit;
the marker point unit comprises a plurality of groups of marker points arranged at different positions on the helmet; each group of marker points is either lit or unlit;
the camera unit comprises a plurality of cameras arranged at different positions in the vehicle cabin; each camera is aimed at the helmet's range of motion within the cabin, so that at least one camera images a lit group of marker points;
the IMU unit comprises a helmet IMU and a vehicle IMU, which measure the IMU data of the helmet and of the vehicle respectively;
the tracking processing unit performs visual pose measurement of the helmet using the features of the lit marker points in the images captured by the camera unit; establishes a combined visual-inertial Kalman filter and filters the visual pose measurement data using the IMU data; and, from the filter output, predicts the position of the lit marker points in the next captured frame for fast matching in the next frame's visual pose measurement.
Further, the camera unit, the marker point unit and the IMU unit are controlled synchronously, specifically as follows:
all cameras of the camera unit are numbered, and one camera serves as the master camera responsible for generating the synchronization signal Cam SYNC; upon receiving Cam SYNC, the other cameras capture synchronously and send each frame, tagged with its camera number, to the tracking processing unit;
the groups of marker points of the marker point unit are numbered; Cam SYNC controls the lighting of each group of marker points, and the number of the currently lit group is sent to the tracking processing unit;
Cam SYNC is also sent to the IMU unit to make the helmet IMU and the vehicle IMU measure synchronously.
Further, the tracking processing unit comprises an IMU difference module, a visual pose measurement module, a Kalman filter and a position prediction module; wherein
the IMU difference module differences the measurement data of the helmet IMU and the vehicle IMU to obtain the acceleration and angular velocity of the helmet relative to the cockpit;
the visual pose measurement module matches the feature points of the identified images against the complete lit group of marker points in the camera's field of view, performs the PnP solution after feature point matching, obtains the visual pose measurement data of the helmet relative to the cockpit, and outputs them to the Kalman filter in real time; the visual pose data comprise position and attitude data;
the Kalman filter establishes a Kalman filter state vector based on the acceleration and angular velocity of the helmet relative to the vehicle, constructs the propagation equation, and updates the filter using the visual pose measurement data as the observation; after filtering, it outputs the position and attitude of the helmet relative to the cockpit;
the position prediction module pre-integrates the filtered position and attitude of the helmet relative to the cockpit together with the newest relative acceleration and angular velocity received from the IMU difference module, and estimates the likely position of the marker points in the next frame.
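As a rough illustration of the IMU difference module, the following sketch differences the two IMU streams under simplifying assumptions: the rotation `R_bv` from the vehicle frame into the helmet frame is taken from the current filter attitude, and lever-arm, Coriolis and gravity-alignment terms are neglected. It is not the patent's exact formulation.

```python
import numpy as np

def imu_difference(w_h, a_h, w_v, a_v, R_bv):
    """Sketch of the IMU difference module: angular velocity and
    acceleration of the helmet relative to the vehicle, expressed in the
    helmet IMU frame. R_bv rotates vehicle-frame vectors into the helmet
    frame (assumed known from the current filter attitude). Lever-arm
    and Coriolis terms are neglected in this simplified sketch."""
    w_rel = np.asarray(w_h) - R_bv @ np.asarray(w_v)
    a_rel = np.asarray(a_h) - R_bv @ np.asarray(a_v)
    return w_rel, a_rel
```

With identical readings from both IMUs and an identity attitude, the relative motion is zero, as expected.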
Furthermore, the visual pose measurement module supports a monocular working mode:
when a complete lit group of marker points appears in a camera's field of view, the group number is confirmed and feature point matching is performed; after matching, the PnP solution yields the position and attitude, and the vision measurement result is output in real time for subsequent filter processing;
when the lit marker points move out of the capturing camera's field of view, the marker point unit lights each group of marker points in turn according to the synchronization signal Cam SYNC; each camera continuously checks whether a complete group of marker points appears in its field of view; if so, the group switching stops, and the images from the camera capturing that group are used for feature point matching and the PnP solution.
Further, in the PnP solution, a spatial orthogonal iteration algorithm is used for the visual pose measurement.
Further, the Kalman filter constructs the state vector
$$X = \big[(p^v_b)^T,\ (v^v_b)^T,\ (q^v_b)^T,\ b_{b\omega}^T,\ b_{ba}^T,\ b_{v\omega}^T,\ b_{va}^T,\ \lambda\big]^T$$
from the relative acceleration and relative angular velocity obtained by differencing the helmet IMU and the vehicle IMU, and constructs the propagation equation; the filter is updated using the visually observed position and attitude measurement data as observations; after filtering, the position and attitude of the helmet relative to the cockpit are output;
wherein $p^v_b$, $v^v_b$ and $q^v_b$ are the position, velocity and rotation quaternion of the helmet IMU in the vehicle IMU coordinate system; $b_{b\omega}$ and $b_{ba}$ are the angular velocity and acceleration measurement biases of the helmet IMU; $b_{v\omega}$ and $b_{va}$ are the angular velocity and acceleration measurement biases of the vehicle IMU; and $\lambda$ is the visual scale factor.
Further, during tracking, the fast matching procedure when the lit marker group or the capturing camera is switched comprises:
1) At the switching moment given by the synchronization signal Cam SYNC, acquiring the pose of the helmet relative to the cockpit output by the Kalman filter;
2) From that pose and the known spatial position of each marker group on the helmet, obtaining the three-dimensional spatial coordinates of each marker group in the cockpit;
3) Based on each camera's viewing angle, projecting the three-dimensional spatial coordinates to two dimensions and computing the two-dimensional coordinates at which each camera could image the marker points; meanwhile, determining from Cam SYNC the number of the group currently lit, thereby obtaining the predicted two-dimensional coordinates of the lit marker points in each camera's image;
4) At the switching moment, acquiring the actual two-dimensional coordinates of the lit marker points in each camera's captured image, and computing the distance between the centers of the actual and predicted two-dimensional coordinates; when the center distance computed for some camera is smaller than a set threshold, performing feature point matching between that camera and the lit marker group, then performing the PnP solution after matching to obtain the visual pose measurement of the helmet relative to the cockpit.
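The camera-selection logic of the projection and center-distance steps above could be sketched as follows; the function names, data layout and the exact distance rule are illustrative assumptions, not the patent's precise procedure.

```python
import numpy as np

def select_camera(cams, markers_3d, detections, threshold=10.0):
    """Sketch of fast-matching camera selection.
    cams: list of (K, R, t) per camera, mapping cockpit coordinates into
    that camera's frame. markers_3d: (n,3) cockpit coordinates of the
    lit marker group, computed from the filtered helmet pose and the
    marker layout. detections: per camera, an (n,2) array of detected
    centroids, or None if the group is not seen. Returns the index of
    the first camera whose predicted and detected centers lie within
    `threshold` pixels of each other, else None."""
    for idx, ((K, R, t), det) in enumerate(zip(cams, detections)):
        if det is None:
            continue
        p = (R @ np.asarray(markers_3d).T).T + t       # camera-frame points
        uv = (K @ (p / p[:, 2:3]).T).T[:, :2]          # predicted pixels
        dist = np.linalg.norm(uv.mean(axis=0) - np.asarray(det).mean(axis=0))
        if dist < threshold:
            return idx
    return None
```

Only the selected camera then runs the full feature point matching and PnP solution, which is what keeps the per-frame computation bounded.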
Further, each group of marker points comprises a plurality of light-emitting feature points, arranged on the helmet in a geometric configuration.
Furthermore, the light-emitting feature points within each group are arranged tetrahedrally or pyramidally; wherein
in the tetrahedral arrangement, three feature points lie in the same feature plane, with a central feature point raised above that plane;
in the pyramidal arrangement, four feature points lie in the same feature plane, with a central feature point raised above that plane.
The invention also discloses a helmet pose tracking method using the above helmet pose tracking system in which the IMU assists the capture of visual feature points, comprising the following steps:
S1, synchronously controlling the camera unit, the marker point unit and the IMU unit of the system;
S2, acquiring the acceleration and angular velocity of the vehicle and of the helmet from the IMU unit, and computing their inertial difference to obtain the acceleration and angular velocity of the helmet relative to the vehicle;
S3, performing feature matching according to the information of the lit marker points, performing the PnP solution after the feature points are matched, and obtaining the visually observed position and attitude of the helmet relative to the cockpit;
S4, establishing a Kalman filter state vector based on the acceleration and angular velocity of the helmet relative to the vehicle, constructing the propagation equation, and updating the filter using the visual pose measurement data as the observation; after filtering, outputting the position and attitude of the helmet relative to the cockpit;
S5, pre-integrating the filtered position and attitude of the helmet relative to the cockpit with the newest relative acceleration and angular velocity computed by the inertial difference, and estimating the likely position of the marker points in the next frame.
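Step S5's pre-integration could be sketched as a single constant-rate propagation step. The API is illustrative, and it assumes the relative acceleration `a_rel` is already expressed in the cockpit frame; the patent does not fix these details.

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of quaternions in [w, x, y, z] order.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def predict_next(p, v, q, a_rel, w_rel, dt):
    """One pre-integration step (sketch): propagate the filtered relative
    pose with the newest relative acceleration and angular velocity,
    assumed constant over the interval dt."""
    p_next = p + v * dt + 0.5 * a_rel * dt ** 2
    v_next = v + a_rel * dt
    angle = np.linalg.norm(w_rel) * dt
    if angle > 1e-12:
        axis = np.asarray(w_rel) / np.linalg.norm(w_rel)
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    else:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    q_next = quat_mul(q, dq)
    return p_next, v_next, q_next / np.linalg.norm(q_next)
```

The predicted pose is then reprojected into each camera to seed the next frame's feature point matching.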
The invention can realize at least one of the following beneficial effects:
the invention provides a helmet pose tracking system and method for assisting visual feature point capture by an IMU (inertial measurement Unit), which simultaneously meet the requirements of large-range and high-precision visual measurement by synchronously triggering and matching infrared feature point layout and a camera.
After an IMU auxiliary feature point matching method is introduced, when waiting for the next frame of image and lamp group switching, the position of a feature point can be pre-judged, and the calculation amount of feature point matching is reduced;
compared with the traditional relative pose measurement scheme, the method has the advantages that the synchronization characteristic of the equipment is more effectively utilized, the arrangement is simple and convenient, and the method is suitable for engineering application of various passenger head-mounted display systems and the like.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a schematic block diagram of a helmet pose tracking system in an embodiment of the invention;
FIG. 2 is a schematic view of a tetrahedral intra-group arrangement of the light-emitting feature points in an embodiment of the invention;
FIG. 3 is a schematic view of a pyramidal intra-group arrangement of the light-emitting feature points in an embodiment of the present invention;
FIG. 4 is a schematic view of a thin light-softening sheet covering an infrared LED lamp bead in the embodiment of the invention;
FIG. 5 is a schematic diagram of the inter-group arrangement of the light-emitting feature points in an embodiment of the present invention;
FIG. 6 is a diagram illustrating a synchronization method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the connections of the tracking processing unit in the embodiment of the present invention;
fig. 8 is a flowchart of a helmet pose tracking method in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and which together with the embodiments of the invention serve to explain the principles of the invention.
The embodiment discloses a helmet pose tracking system in which an IMU assists visual feature point capture, as shown in fig. 1, comprising: a camera unit, a marker point unit, an IMU unit and a tracking processing unit;
the marker point unit comprises a plurality of groups of marker points arranged at different positions on the helmet; each group of marker points is controlled to be lit or unlit;
specifically, each group of marker points is lit or unlit under the control of the marker point controller; the marker groups are numbered LED1, LED2, LED3, …, LEDm;
the camera unit comprises a plurality of cameras arranged at different positions in the vehicle cabin; each camera is aimed at the helmet's range of motion within the cabin, so that at least one camera images a lit group of marker points;
the cameras are numbered Cam1, Cam2, Cam3, …, Camn; high-speed cameras may be used to guarantee the image sampling frequency;
the IMU unit comprises a helmet IMU and a vehicle IMU, which measure the IMU data of the helmet and of the vehicle respectively;
the tracking processing unit performs visual pose measurement of the helmet using the features of the lit marker points in the images captured by the camera unit; establishes a combined visual-inertial Kalman filter and filters the visual pose measurement data using the IMU data; and, from the filter output, predicts the position of the lit marker points in the next captured frame for fast matching in the next frame's visual pose measurement.
Specifically, the groups of marker points are arranged at different positions on the helmet and are lit or unlit under the control of the marker point controller; the groups are numbered LED1, LED2, LED3, …, LEDm;
the IMU unit comprises a helmet IMU, a vehicle IMU and an IMU controller;
the vehicle IMU is arranged in the vehicle cabin, fixed to the cabin, and measures the acceleration and angular velocity of the moving vehicle;
preferably, the vehicle IMU is mounted on one of the cameras in the cockpit;
the helmet IMU is arranged on and fixed to the helmet, and measures the acceleration and angular velocity of the helmet.
Specifically, among the groups of marker points arranged at different positions on the helmet, each group comprises a plurality of light-emitting feature points, arranged on the helmet in a certain geometric configuration; the geometric configurations of the groups may be identical or different.
Preferably, the light-emitting feature points of the marker points are infrared LED lamp beads, and the corresponding cameras of the camera unit are infrared cameras.
The light-emitting feature points may also emit light in an active or a passive mode; the active mode is self-luminous, and the passive mode is illumination by the camera or by external fill light.
In a typical geometry, shown in fig. 2, the light-emitting feature points within a group are arranged tetrahedrally: three feature points lie in the same feature plane, with a central feature point raised above that plane.
In another typical geometry, shown in fig. 3, the arrangement of the light-emitting feature points within a group is pyramidal: four feature points lie in the same feature plane, with a central feature point raised above that plane.
Moreover, in both arrangements of figs. 2 and 3, the normal direction of each light-emitting feature point is perpendicular to the feature plane, which facilitates camera observation.
In a preferred scheme, as shown in fig. 4, a thin light-softening sheet covers each infrared LED lamp bead of the light-emitting feature points; the sheet controls the uniformity and the visible range of the bead's emission.
Because the surface area of the helmet is relatively small, arranging more groups of marker points on the small helmet surface improves its utilization. In a preferred scheme, shown in fig. 5, the groups of infrared LED lamp beads are arranged on the helmet in a staggered, interleaved manner.
In a preferred scheme, shown in fig. 6, to facilitate the system's data transmission and operation, the camera unit, the marker point unit and the IMU unit are controlled synchronously, specifically as follows:
1) All cameras of the camera unit are numbered, and one camera serves as the master camera responsible for generating the synchronization signal Cam SYNC; upon receiving Cam SYNC, the other cameras capture synchronously and send each frame, tagged with its camera number, to the tracking processing unit.
In a specific example, the camera numbered Cam1 is the master camera and generates the synchronization signal Cam SYNC; the other cameras capture images synchronously after receiving it, and the images captured synchronously by Cam1, Cam2, Cam3, …, Camn are sent to the tracking processing unit over Cam DATA. To reduce the computation load and transmission volume of the tracking processing unit, the cameras may send only the extracted infrared feature points.
2) The groups of marker points of the marker point unit are numbered; Cam SYNC drives the lighting of each group of marker points, and the number of the lit group is sent to the tracking processing unit.
In a specific example, the synchronization signal Cam SYNC generated by the master camera is also sent to the marker point controller (LED Controller), which controls the lighting of the LEDs; any camera may correspond to any group of LEDs, i.e. Cam1 may correspond to LED1, LED2, LED3, …, LEDm, and likewise Cam2, Cam3, …, Camn. After lighting a lamp group, the LED Controller also sends the number of the lit group to the tracking processing unit for subsequent computation.
3) The synchronization signal Cam SYNC is also sent to the IMU unit, making the helmet IMU and the vehicle IMU measure synchronously.
In a specific embodiment, after receiving the synchronization signal Cam SYNC, the IMU unit may sample the two IMUs at an integer multiple of the Cam SYNC frequency; for example, with a camera sampling frequency of 120 Hz, the IMUs may use a 960 Hz sampling frequency. The IMU controller sends the acquired data to the tracking processing unit.
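The integer-multiple sampling relationship in this example (cameras at 120 Hz, IMUs at 960 Hz) implies a simple alignment of IMU samples to camera frames, sketched below; the indexing scheme itself is an illustrative assumption.

```python
def imu_samples_for_frame(frame_idx, multiple=8):
    """With the IMUs sampled at an integer multiple of the Cam SYNC rate
    (e.g. cameras at 120 Hz, IMUs at 960 Hz, multiple = 8), the IMU
    samples belonging to camera frame k form a contiguous index block."""
    start = frame_idx * multiple
    return list(range(start, start + multiple))
```

This keeps every IMU sample unambiguously associated with exactly one camera frame interval, which simplifies the filter's time bookkeeping.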
Before position and attitude measurement starts, the intrinsic parameters of cameras Cam1, Cam2, Cam3, … of the camera unit are calibrated individually, and the extrinsic parameters are calibrated after each camera is installed; the three-dimensional spatial position coordinates of each marker point of the marker point unit are also calibrated.
Specifically, as shown in fig. 7, the tracking processing unit comprises an IMU difference module, a visual pose measurement module, a Kalman filter and a position prediction module; wherein
the IMU difference module differences the measurement data of the helmet IMU and the vehicle IMU to obtain the acceleration and angular velocity of the helmet relative to the cockpit;
the visual pose measurement module matches the feature points of the identified images against the complete group of marker points in the camera's field of view, performs the PnP solution after feature point matching, obtains the visual pose measurement data of the helmet relative to the cockpit, and outputs them to the Kalman filter in real time; the visual pose data comprise position and attitude data;
the Kalman filter establishes a Kalman filter state vector based on the acceleration and angular velocity of the helmet relative to the vehicle, constructs the propagation equation, and updates the filter using the visual pose measurement data as the observation; after filtering, it outputs the position and attitude of the helmet relative to the cockpit;
the position prediction module pre-integrates the filtered position and attitude of the helmet relative to the cockpit together with the newest relative acceleration and angular velocity received from the IMU difference module, and estimates the likely position of the marker points in the next frame.
The visual pose measurement module supports a monocular working mode. In the monocular mode,
when a complete lit group of marker points appears in a camera's field of view, feature point matching is performed according to the group number; after matching, the PnP solution yields the position and attitude measurement, and the vision measurement result is output to the filter for subsequent processing, updating the filter and predicting the position and attitude at the next moment.
The feature point matching can be realized by image matching based on the number of light-emitting feature points in the group corresponding to the group number and on their geometric configuration.
When the feature points move out of the camera's field of view, the marker point unit lights each group of marker points in turn according to the synchronization signal Cam SYNC; each camera continuously checks whether a complete group of LED marker points appears in its field of view; if so, the group switching stops, and that group is used for feature point matching and the PnP solution.
The visual pose measurement module may also adopt a multi-camera ("multi-view") working mode, which is a high-precision mode:
multiple cameras may observe the same group of marker points simultaneously; when at least two cameras observe the same group, the high-precision mode is entered, in which optimized values are obtained directly by multi-camera three-dimensional reconstruction. The multi-view measurement results are output to the filter in real time for subsequent processing;
the filter is updated and the position and attitude at the next moment are predicted, giving more accurate position and attitude measurements.
The multi-view working mode has higher precision, but it can only be used when several cameras see the same group of marker points simultaneously, and it occupies more computing resources.
In this embodiment, a helmet marker-point coordinate system h and a helmet IMU coordinate system b are defined; h and b remain fixed relative to each other as the helmet moves. A camera coordinate system c and a vehicle IMU coordinate system v are also defined; c and v remain fixed relative to each other as the helmet moves relative to the vehicle.
Specifically, in the PnP solution, a spatial orthogonal iteration algorithm is used for the visual pose measurement. The solution process comprises:
1) Establishing the camera model.
Define $P_i$ as a spatial point with three-dimensional coordinates $[X_i, Y_i, Z_i]^T$ in the helmet frame, in meters. The coordinates of point $P_i$ in the camera coordinate system are $p_i = [x_i, y_i, z_i]^T$. The extrinsic transformation from the spatial point $P_i$ to the camera-frame point $p_i$ is
$$p_i = R P_i + t \qquad (1)$$
where $R$ and $t$ are the 3×3 rotation matrix and 3×1 translation vector between the helmet coordinate system and the camera coordinate system. Physically, each row of the rotation matrix $R$ gives the coordinates of a helmet-frame coordinate-axis unit vector expressed in the camera coordinate system, and the translation vector $t$ gives the coordinates of the helmet-frame origin in the camera coordinate system.
The image coordinates of $p_i$ are $[u_i, v_i, 1]^T$, in pixels, related to the camera coordinates by the intrinsic transformation
$$z_i \, [u_i,\ v_i,\ 1]^T = K p_i, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (2)$$
where $f_x$, $f_y$, $c_x$ and $c_y$ are in pixels, and $K$ is the camera intrinsic matrix.
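A minimal sketch of this pinhole camera model, composing the extrinsic and intrinsic transformations to predict pixel coordinates (distortion is assumed already corrected):

```python
import numpy as np

def project_points(K, R, t, P):
    """Project helmet-frame points P (n,3) to pixel coordinates:
    camera frame p = R P + t, then pixel = K (p / z)."""
    p = (R @ np.asarray(P).T).T + t     # extrinsic transformation
    uv = (K @ (p / p[:, 2:3]).T).T      # intrinsic transformation
    return uv[:, :2]
```

A point on the optical axis at depth 2 m lands at the principal point $(c_x, c_y)$, as expected.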
2) Adopting a space orthogonal iterative algorithm meter to carry out visual attitude measurement;
definition ofFor line-of-sight projection matrix, when V i When acting on a certain vector, the vector can be projected perpendicularly to p i The above.
Defining a point p i At V i Projection on is q i Then there is
q i =V i (RP i +t) (3)
Under the ideal condition, three points of the object point, the image point and the camera origin point satisfy the space collinearity equation, namely p i At V i Should be regarded as p i Self-body
RP i +t=V i (RP i +t) (4)
The target space collinearity error obtained by deformation is
e i =(I-V i )(RP i +t) (5)
The sum of squares of errors of spatial collinear lines is taken as an objective function, and the optimal estimation of R and t is obtained by optimizing the objective function
The objective function may pass the partial derivative at a given rotation RObtaining an optimal solution of t with respect to R:
thus for a fixed R, the corresponding t can be obtained by the above formula. Next, an optimal solution of R is found, for the estimated value R of R at the k-th iteration (k) T of the kth iteration can be obtained (k) Calculating to obtain a spatial point P i Projection estimation of
k +1 rotation matrix estimation value R (k+1) Minimum resolution can be achieved by solving the following function
This equation can be viewed as a set of points P i To a set of points q i The absolute orientation problem can be solved by Singular Value Decomposition (SVD) method, which comprises the following stepsAndis the centroid of the point set, have
Define (1/n)M, where M=Σ i P i ′q i ′ T , as the covariance matrix of the point sets {P i } and {q i (k) }.
The R * and t * that minimize E(R,t) satisfy
R * =argmin R Σ i ‖RP i ′-q i ′‖ 2 , t * =q̄-R * P̄
Performing the SVD of M, i.e., M=UΣV T (equivalently U T MV=Σ), the optimal solution is
R (k+1) =VU T (13)
This algorithm has global convergence: for any initial rotation matrix R, repeating the above steps converges to the optimal value R * , and the corresponding t * is then obtained from formula (7).
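The iteration described above can be sketched as follows. This is an illustrative NumPy implementation of the standard orthogonal iteration scheme, not the patent's own code, and the synthetic marker geometry in the usage is assumed:

```python
import numpy as np

def orthogonal_iteration(P, p, n_iter=200):
    """Spatial orthogonal iteration: estimate R, t minimizing the
    object-space collinearity error sum ||(I - V_i)(R P_i + t)||^2.
    P: (n,3) marker points in the helmet frame;
    p: (n,3) corresponding points [u, v, 1] on the normalized image plane."""
    n = len(P)
    I3 = np.eye(3)
    # Line-of-sight projection matrices V_i = p_i p_i^T / (p_i^T p_i)
    V = np.array([np.outer(v, v) / (v @ v) for v in p])
    # Optimal t for a given R: t(R) = (1/n)(I - (1/n) sum V_j)^-1 sum (V_i - I) R P_i
    T_fact = np.linalg.inv(I3 - V.mean(axis=0)) / n
    t_of_R = lambda R: T_fact @ sum((V[i] - I3) @ R @ P[i] for i in range(n))

    R = I3  # any initial rotation works (globally convergent iteration)
    for _ in range(n_iter):
        t = t_of_R(R)
        q = np.array([V[i] @ (R @ P[i] + t) for i in range(n)])  # projections q_i
        # Absolute orientation {P_i} -> {q_i}: SVD of the cross-covariance M
        Pc, qc = P - P.mean(axis=0), q - q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ qc)           # M = U S V^T
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                            # R = V U^T, forcing det(R) = +1
    return R, t_of_R(R)
```

With noise-free synthetic data the iteration recovers the exact pose; the determinant correction D guards against reflection solutions from the SVD.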
Specifically, the variables in the kalman filter are listed below:
In the Kalman filter, the relative acceleration and relative angular velocity obtained by differencing the helmet IMU and the carrier IMU are used to construct the state vector and the propagation equation; the filter is updated using the position and attitude measurement data of the visual observations as the observation.
wherein the state vector comprises the position, velocity, and rotation quaternion of the helmet IMU in the cockpit IMU coordinate system; b bω , b ba are the angular-velocity and acceleration measurement zero offsets of the helmet IMU; b vω , b va are the angular-velocity and acceleration measurement zero offsets of the vehicle IMU; and λ is the visual scale factor.
The specific expansion of each term is as follows:
Among these, the terms with low dynamics can be obtained from linear equations, while the quaternion-related term must be obtained by a linearization process, in which the small-angle approximation of the quaternion error is used.
The following components are obtained through linearization treatment:
the recursive relationship of the system state error, namely the state error equation, is as follows:
F X is a state transition matrix; f N Is a noise transfer matrix;
U=[a h ,ω h ,a v ,ω v ] T ;
a h , ω h , a v , and ω v are the acceleration and angular-velocity outputs of the helmet IMU and of the motion-carrier IMU, respectively;
N is the state noise vector,
wherein the terms are, respectively, the variances of the helmet IMU acceleration noise, the motion-carrier IMU acceleration noise, the helmet IMU angular-velocity noise, the motion-carrier IMU angular-velocity noise, the helmet IMU acceleration zero-offset noise, the motion-carrier IMU acceleration zero-offset noise, the helmet IMU angular-velocity zero-offset noise, and the motion-carrier IMU angular-velocity zero-offset noise.
Further,
wherein the two rotation matrices are the error-free relative rotation matrix from the helmet to the motion carrier and the error-free relative rotation matrix from the motion carrier to the helmet;
i is an identity matrix;
Based on the above process, the process of updating the error-state covariance matrix of the Kalman filtering includes:
1) Acquiring IMU data of the helmet;
3) Updating the state transition matrix F X and the noise transfer matrix F N , and propagating the error-state covariance matrix as P←F X PF X T +F N NF N T ;
The observation equation of the pose-measurement Kalman filter applied in this scheme is as follows:
wherein H p is the position measurement matrix; H q is the attitude measurement matrix; z p , z q are the position vector and attitude vector of the visual observation of the Kalman filter; and the remaining terms are the position vector and attitude vector estimated by the Kalman filter.
During the observation process, the observation device can be used,
1) Writing out the position measurement model z p of the update part;
In the formula, the first term represents the displacement of the marker points relative to the camera, obtained from the vision measurement after the intrinsic-parameter transformation; the transformation matrix from the carrier coordinate system to the camera coordinate system can be obtained by calibration; the next term represents the displacement of the helmet coordinate system in the carrier coordinate system; the translation vector and rotation matrix between the helmet IMU and the carrier coordinates are state vectors in the filter; the extrinsic parameters of the helmet relative to the IMU can be obtained by calibration; and n p is the measurement noise.
Ignoring the second-order terms after expansion yields:
According to the observation equation Δz p =H p Δx, the position measurement matrix H p is written as follows:
In the formula, the first term is the position observation and the second is the corresponding cross-product operation matrix.
2) Writing out the attitude measurement model z q of the update part;
According to the observation equation Δz q =H q Δx, the attitude measurement matrix H q is written as follows:
the process of updating the state covariance matrix and the state vector of the present embodiment includes:
2) Calculating update matrix S = HPH T +R;
3) Calculation of kalman gain K = PH T S -1 ;
5) Computing the recursion of the state covariance matrix P←(I d -KH)P(I d -KH) T +KRK T .
6) Updating the state: the state correction and the original state vector are superposed to obtain the updated state vector.
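Steps 2)-6) above follow the standard Kalman measurement update; a generic sketch is given below, in which the state layout, h, H, and R are placeholders rather than the patent's specific matrices:

```python
import numpy as np

def kf_measurement_update(x, P, z, h, H, R):
    """Generic Kalman measurement update mirroring steps 2)-6):
    update matrix, Kalman gain, Joseph-form covariance recursion,
    and superposition of the correction on the state vector."""
    Id = np.eye(len(x))
    S = H @ P @ H.T + R                    # 2) update matrix S = HPH^T + R
    K = P @ H.T @ np.linalg.inv(S)         # 3) Kalman gain K = PH^T S^-1
    dz = z - h(x)                          # innovation: observation minus estimate
    P_new = (Id - K @ H) @ P @ (Id - K @ H).T + K @ R @ K.T  # 5) Joseph form
    x_new = x + K @ dz                     # 6) superpose correction on the state
    return x_new, P_new
```

The Joseph form of step 5) keeps the covariance symmetric and positive semidefinite even with round-off, which is why it is preferred over the shorter (I-KH)P form.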
The position pre-judging module pre-integrates the filtered position p and attitude q of the helmet relative to the cockpit, combined with the new relative acceleration a and angular velocity ω received from the IMU difference module, to estimate the possible position of the marker points in the next frame.
The camera unit performs fast matching using the possible next-frame positions of the marker points output by the position pre-judging module and outputs the visual attitude measurement result.
Furthermore, during tracking, the fast matching process when the lit marker-point group or the camera shooting the lit marker points is switched comprises the following steps:
1) At the moment the lamp group or camera is switched, acquiring, according to the synchronization signal Cam SYNC, the pose data of the helmet relative to the cockpit output by the Kalman filter;
2) Obtaining a space three-dimensional coordinate of each group of mark points in the cockpit according to the pose data of the helmet relative to the cockpit and the space position of each group of mark points on the helmet;
3) Based on the shooting angle of each camera, projecting the spatial three-dimensional coordinates to two-dimensional coordinates and calculating the two-dimensional coordinates, in each camera's picture, of the marker points that camera can capture; meanwhile, determining the numbers of the lit marker points according to the synchronization signal Cam SYNC and obtaining the calculated two-dimensional coordinates of the lit marker points in each camera's picture;
4) At the switching moment, acquiring the actual two-dimensional coordinates of the lit marker points in each camera's actually captured image, and calculating the center distance between the actual two-dimensional coordinates and the corresponding calculated two-dimensional coordinates; when the center distance calculated for a certain camera is smaller than a set threshold, performing feature point matching between that camera and the lit marker points, performing PnP (Perspective-n-Point) solution after the feature points are matched, and acquiring the visual attitude measurement data of the helmet relative to the cockpit;
The set threshold is σΔt, wherein σ is a measurement allowable-error coefficient set according to empirical values, and Δt is the interval time of the synchronization signal.
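The center-distance gate of step 4) can be sketched as follows; the values of σ and Δt are illustrative assumptions, since the patent leaves them to empirical tuning:

```python
import numpy as np

# Hypothetical numbers: sigma (allowable-error coefficient, set empirically)
# and dt (synchronization-signal interval) are illustrative, not from the patent.
sigma = 150.0           # allowed marker motion, pixels per second
dt = 0.02               # seconds between Cam SYNC pulses
threshold = sigma * dt  # gating threshold sigma * dt, in pixels

def within_gate(actual_uv, calculated_uv):
    """Compare the center distance between the actual and calculated 2-D
    marker coordinates against the set threshold sigma * dt."""
    d = np.linalg.norm(np.asarray(actual_uv) - np.asarray(calculated_uv))
    return d < threshold

# Only a camera passing the gate proceeds to feature matching and PnP solution.
```

A shorter synchronization interval Δt tightens the gate proportionally, since the marker can have moved less between frames.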
If multiple groups of correctly matched feature points exist, the group whose feature-point circumscribed circle is closest to the picture center and largest in area may be used as the vision measurement value to continue updating the filter (small computation amount); alternatively, all groups of vision measurement values may be used to continue updating the filter (higher precision).
In summary, the helmet pose tracking system with IMU-assisted visual feature point capture of the embodiment of the present invention simultaneously satisfies large-range and high-precision vision measurement through the layout of infrared feature points and the synchronous triggering and matching of the cameras. After the IMU-assisted feature point matching method is introduced, the positions of the feature points can be pre-judged while waiting for the next image frame and lamp-group switching, reducing the computation amount of feature point matching; compared with traditional relative pose measurement schemes, the synchronization characteristics of the equipment are effectively utilized, the arrangement is simple and convenient, and the method is suitable for engineering applications such as various passenger head-mounted display systems.
Embodiment Two
This embodiment discloses a helmet pose tracking method using the helmet pose tracking system with IMU-assisted visual feature point capture, as shown in fig. 8, comprising the following steps:
s1, synchronously controlling a camera unit, a mark point unit and an IMU unit in a system;
s2, acquiring the acceleration and the angular speed of the carrier and the helmet according to the IMU unit, and performing inertial differential calculation to obtain the acceleration and the angular speed information of the helmet relative to the carrier;
S3, performing feature matching according to the information of the lit marker points, performing PnP (Perspective-n-Point) solution after the feature points are matched, and obtaining the visually observed position and attitude measurement of the helmet relative to the cockpit;
s4, establishing a Kalman filtering state vector based on acceleration and angular speed information of the helmet relative to the carrier, constructing a propagation equation, and updating a filter by using visual attitude measurement data as observed quantity; outputting the position and posture information of the helmet relative to the cockpit after filtering;
and S5, pre-integrating the position and attitude information of the filtered helmet relative to the cockpit and the acceleration and angular velocity information of the new helmet relative to the cockpit, which are calculated by combining inertial difference, and estimating the possible position of the next frame of the mark point.
The specific technical details and advantageous effects of the present embodiment are the same as those described in the previous embodiment, and please refer to the previous embodiment, which is not repeated herein.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.
Claims (10)
1. A helmet pose tracking system with IMU assisted visual feature point capture, comprising: the device comprises a camera unit, a mark point unit, an IMU unit and a tracking processing unit;
the marking point unit comprises a plurality of groups of marking points arranged on different positions of the helmet; each group of mark points is in a lighting or extinguishing state;
a camera unit including a plurality of cameras disposed at different positions in the vehicle cabin; each camera shooting angle faces to the range of a helmet moving area in the cabin, so that at least one camera aims at one group of lighted mark points to shoot;
the IMU unit comprises a helmet IMU and a carrier IMU, and is used for measuring IMU data of the helmet and the carrier respectively;
the tracking processing unit is used for carrying out visual posture measurement on the helmet by utilizing the characteristics of the lighted mark points in the image shot by the camera unit; establishing a visual and inertial integrated Kalman filter, and performing filtering processing on the visual attitude measurement data by using IMU data; and pre-judging the position of the lighted mark point in the next frame of shot image according to the filtering result, and using the position for fast matching of the next frame of visual gesture detection.
2. The helmet pose tracking system of claim 1, wherein synchronous control of the camera unit, landmark unit and IMU unit is employed; the method specifically comprises the following steps:
numbering all cameras in the camera units, taking one camera as a master control camera and taking charge of generating a synchronizing signal Cam SYNC; after receiving the synchronizing signal Cam SYNC, the other cameras synchronously shoot and send each frame of image with the camera number to the tracking processing unit;
numbering a plurality of groups of mark points of the mark point unit; the synchronous signal Cam SYNC lights up each group of mark points and sends the number information of the lighted mark points to the tracking processing unit;
and a synchronization signal Cam SYNC is also sent to the IMU unit to control the helmet IMU and the cockpit IMU to synchronously measure.
3. The helmet pose tracking system of claim 1, wherein the tracking processing unit comprises an IMU difference module, a visual pose measurement module, a Kalman filter, and a position pre-judging module; wherein:
the IMU difference module is used for carrying out difference calculation on the measurement data of the helmet IMU and the carrier IMU to obtain the acceleration and angular speed information of the helmet relative to the cockpit;
the visual attitude measurement module is used for matching the characteristic points of the determined images with a group of complete lighted mark points in the visual field of the camera, performing PnP calculation after the characteristic points are matched, and acquiring visual attitude measurement data of the helmet relative to the cockpit and outputting the visual attitude measurement data to the Kalman filter in real time; the visual pose data comprises position and pose data;
the Kalman filter is used for establishing a Kalman filtering state vector based on acceleration and angular speed information of the helmet relative to the carrier, constructing a propagation equation and updating the filter by using visual attitude measurement data as observed quantity; outputting the position and posture information of the helmet relative to the cockpit after filtering;
and the position pre-judging module is used for pre-integrating the filtered position and attitude information of the helmet relative to the cockpit and combining the acceleration and angular velocity information of the new helmet relative to the cockpit received from the IMU difference module, and estimating the possible position of the next frame of the mark point.
4. The helmet pose tracking system of claim 3,
the vision gesture measuring module adopts a vision monocular working mode,
when a group of complete lighted mark points appear in a camera view, the mark point numbers are confirmed, the feature point matching is carried out, pnP calculation is carried out after the feature point matching, the position and the posture measurement of the mark points are obtained, and the vision measurement result is output in real time and is used for subsequent filter processing;
when the movement of the lit marker points exceeds the field of view of the shooting camera, the marker point unit lights each group of marker points in turn according to the synchronization signal Cam SYNC; each camera continuously judges whether a complete group of marker points appears in its field of view, stops marker-point switching if they appear, and performs feature point matching and PnP solution using images from the camera shooting that group of marker points.
5. The helmet pose tracking system of claim 4, wherein during the PnP solution, a spatial orthogonal iterative algorithm is used to perform visual pose measurement.
6. The helmet pose tracking system of claim 3, wherein the Kalman filter constructs the state vector using the relative acceleration and relative angular velocity obtained by differencing the helmet IMU and the vehicle IMU, and constructs the propagation equation; updates the filter using the position and attitude measurement data of the visual observation as the observation; and outputs the filtered position and attitude information of the helmet relative to the cockpit;
wherein the state vector comprises the position, velocity, and rotation quaternion of the helmet IMU in the cockpit IMU coordinate system; b bω , b ba are the angular-velocity and acceleration measurement zero offsets of the helmet IMU; b vω , b va are the angular-velocity and acceleration measurement zero offsets of the vehicle IMU; and λ is the visual scale factor.
7. The helmet pose tracking system of claim 6,
in the tracking process, a fast matching process under the condition that a camera for lighting a mark point or shooting the lighted mark point is switched comprises the following steps:
1) At the moment the lamp group or camera is switched, acquiring, according to the synchronization signal Cam SYNC, the pose data of the helmet relative to the cockpit output by the Kalman filter;
2) Obtaining the space three-dimensional coordinates of each group of mark points in the cockpit according to the pose data of the helmet relative to the cockpit and the space positions of each group of mark points on the helmet;
3) Based on the shooting angle of each camera, carrying out projection from the space three-dimensional coordinates to the two-dimensional coordinates, and calculating the two-dimensional coordinates of the mark points which can be shot by each camera in a camera shooting picture; meanwhile, according to the number of the lighted mark points determined by the synchronous signal Cam SYNC, the calculation two-dimensional coordinates of the lighted mark points on the shooting pictures of each camera are obtained;
4) At the switching moment, acquiring the actual two-dimensional coordinates of the lit marker points in each camera's actually captured image, and calculating the center distance between the actual two-dimensional coordinates and the corresponding calculated two-dimensional coordinates; when the center distance calculated for a certain camera is smaller than a set threshold, performing feature point matching between that camera and the lit marker points, performing PnP solution after the feature points are matched, and acquiring the visual attitude measurement data of the helmet relative to the cockpit.
8. The helmet pose tracking system of any of claims 1-7, wherein each set of marker points comprises a plurality of luminescent feature points; and the plurality of light-emitting feature points of each set of marker points are arranged on the helmet in a geometric configuration.
9. The helmet pose tracking system of claim 7, wherein the arrangement of the light-emitting feature points within each group of marker points is tetrahedral or pyramidal; wherein:
in the tetrahedron shape, three feature points are located on the same feature plane, and the central feature point is higher than the plane;
in the pyramid, the four feature points lie in the same feature plane, with a central feature point above that plane.
10. A helmet pose tracking method using the helmet pose tracking system with IMU-assisted visual feature point capture of any one of claims 1 to 9, comprising:
s1, synchronously controlling a camera unit, a mark point unit and an IMU unit in a system;
s2, acquiring the acceleration and the angular speed of the carrier and the helmet according to the IMU unit, and performing inertial differential calculation to obtain the acceleration and the angular speed information of the helmet relative to the carrier;
S3, performing feature matching according to the information of the lit marker points, performing PnP solution after the feature points are matched, and obtaining the visually observed position and attitude measurement of the helmet relative to the cockpit;
s4, establishing a Kalman filtering state vector based on acceleration and angular speed information of the helmet relative to the carrier, constructing a propagation equation, and updating a filter by using visual attitude measurement data as observed quantity; outputting the position and posture information of the helmet relative to the cockpit after filtering;
and S5, pre-integrating the position and attitude information of the filtered helmet relative to the cockpit and the acceleration and angular velocity information of the new helmet relative to the cockpit, which are calculated by combining inertial difference, and estimating the possible position of the next frame of the mark point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211336959.3A CN115690910A (en) | 2022-10-28 | 2022-10-28 | Helmet pose tracking system and method for assisting visual feature point capture by IMU (inertial measurement Unit) |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115690910A true CN115690910A (en) | 2023-02-03 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116645400A (en) * | 2023-07-21 | 2023-08-25 | 江西红声技术有限公司 | Vision and inertia mixed pose tracking method, system, helmet and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||