Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
An Inertial Measurement Unit (IMU) is a common sensor that can measure self-motion rapidly and at high frequency, but its error gradually accumulates over time, and the accumulated error needs to be corrected by the observations of other sensors. External sensors (such as radars and cameras) can measure information such as the contours and colors of the real environment, and tasks such as robot mapping, positioning and sensing can be completed in combination with the IMU. However, there is often a time offset between the IMU and the external sensors, which can have a large impact on the sensing result.
Based on this, the embodiments of the present invention provide a clock synchronization method and apparatus for multiple sensors, and a computing device, which can calculate time deviations among multiple sensors, thereby improving accuracy of sensing results.
Specifically, the embodiments of the present invention will be further explained below with reference to the drawings.
It should be understood that the following examples are provided by way of illustration and are not intended to limit the invention in any way to the particular embodiment disclosed.
Fig. 1 is a schematic flowchart illustrating a clock synchronization method for multiple sensors according to an embodiment of the present invention. The method is used for clock synchronization of a first sensor and a second sensor, wherein the first sensor and the second sensor are fixed on a required application device. The first sensor may include a radar sensor capable of generating a three-dimensional point cloud, including, but not limited to, a lidar, a millimeter wave radar, a microwave radar, a Flash radar, a MEMS radar, a phased array radar, and the like, or an image sensor that may be used for photographing, including, but not limited to, a monocular camera, a panoramic camera, a binocular camera, a structured light camera, and the like. The second sensor may comprise an IMU; embodiments of the invention can be used to calculate the time offset between the IMU and the radar sensor, or between the IMU and the image sensor.
In the present embodiment, the first sensor is a laser radar, and the second sensor is an IMU. As shown in fig. 1, the method includes:
step 110, a first angular velocity of the first sensor in a first sensor coordinate system is obtained.
The first sensor coordinate system may be a three-dimensional coordinate system established with the center of the laser radar as its origin. Since the lidar is in rotational motion, the first angular velocity refers to the average angular velocity of the lidar in the first sensor coordinate system.
Specifically, as shown in fig. 2, step 110 includes:
step 111, acquiring a first point cloud of the first sensor in the first sensor coordinate system at a first moment;
step 112, acquiring a second point cloud of the first sensor under the first sensor coordinate system at a second moment;
step 113, calculating a rotation matrix between the first point cloud and the second point cloud based on an iterative closest point algorithm;
and step 114, calculating a first angular speed according to the rotation matrix between the first point cloud and the second point cloud, the first time and the second time.
The first time and the second time are two different times. For example, point cloud data of the first sensor in the first sensor coordinate system may be acquired every preset time t_d, and the first time and the second time may be separated by the preset time; that is, first point cloud data P_i is acquired at the first time t_i, and second point cloud data P_{i+d} is acquired at the second time t_{i+d}. For example, if the preset time is 1 second, the first time and the second time are separated by 1 second: if the first time is i seconds, the second time is i+1 seconds.
The first point cloud and the second point cloud are matched using an Iterative Closest Point (ICP) algorithm to calculate the rotation matrix between the first point cloud and the second point cloud. The ICP algorithm may be embodied as follows. First, a point p_i is selected in the first point cloud P_i. Second, the point q_i corresponding to p_i is found in the second point cloud P_{i+d} such that the distance between p_i and q_i is minimal (for example, the second point cloud P_{i+d} may be traversed, the distance from p_i to each point of P_{i+d} calculated, and the distances continuously compared to find the point q_i nearest to p_i). Third, a rotation matrix R and a translation matrix T are calculated to minimize an error function. Fourth, p_i is subjected to rotation and translation transformation using the rotation matrix R and the translation matrix T obtained in the previous step, yielding a new corresponding point set p_i'. Fifth, the average distance d between the points p_i' and the points q_i is calculated. Sixth, the iterative calculation stops if the average distance d is smaller than a given threshold or the preset maximum number of iterations is exceeded; otherwise, the procedure returns to the second step until the convergence condition is met.
In the present embodiment, first, a point p_i ∈ P_i is selected in the first point cloud P_i, and the corresponding point q_i ∈ P_{i+d} is found in the second point cloud P_{i+d}. The rotation matrix R and the translation matrix T are then calculated by minimizing an error function E(R, T), wherein an error function threshold may be set, and when the error function is less than the error function threshold, the error function is considered to be minimal, wherein:

E(R, T) = (1/n) Σ_{i=1}^{n} ||q_i − (R·p_i + T)||²
Furthermore, p_i is subjected to rotation and translation transformation using the rotation matrix R and the translation matrix T to obtain the new corresponding point set p_i':

p_i' = R·p_i + T, p_i ∈ P_i
The average distance d between the points p_i' and the points q_i is calculated:

d = (1/n) Σ_{i=1}^{n} ||p_i' − q_i||
If d is smaller than the set threshold or the preset number of iterations is reached, the iteration stops; otherwise, the iteration is repeated until the convergence condition is met.
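The ICP iteration described above can be sketched as follows. This is a minimal, illustrative point-to-point ICP in Python using NumPy, in which the rotation and translation at each iteration are solved in closed form via SVD (the Kabsch method) — one common way to minimize the error function E(R, T). The function name and convergence test are assumptions for illustration, not part of the embodiment.

```python
import numpy as np

def icp_rotation(P_i, P_id, max_iter=50, tol=1e-6):
    """Illustrative point-to-point ICP: estimate the rotation R and translation T
    aligning the source cloud P_i (n x 3) to the target cloud P_id (m x 3)."""
    src = P_i.copy()
    R_total = np.eye(3)
    T_total = np.zeros(3)
    prev_d = np.inf
    for _ in range(max_iter):
        # Step 2: for each p_i, find the nearest point q_i in the target cloud
        dists = np.linalg.norm(src[:, None, :] - P_id[None, :, :], axis=2)
        q = P_id[np.argmin(dists, axis=1)]
        # Step 3: closed-form R, T minimizing E(R, T) via SVD (Kabsch method)
        src_c = src - src.mean(axis=0)
        q_c = q - q.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ q_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        T = q.mean(axis=0) - R @ src.mean(axis=0)
        # Step 4: apply the rotation and translation to the source points
        src = src @ R.T + T
        R_total = R @ R_total
        T_total = R @ T_total + T
        # Steps 5-6: average distance as the convergence test
        d = np.mean(np.linalg.norm(src - q, axis=1))
        if abs(prev_d - d) < tol:
            break
        prev_d = d
    return R_total, T_total
```

For two scans of the same scene separated by a small motion, the nearest-neighbor correspondences are mostly correct, and the accumulated R converges to the inter-frame rotation matrix used in step 114.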
In step 114, after the rotation matrix between the first point cloud and the second point cloud is obtained through calculation, the first angular velocity is calculated according to the rotation matrix between the first point cloud and the second point cloud, the first time, and the second time, and specifically, the method may include: converting a rotation matrix between the first point cloud and the second point cloud into an Euler angle; dividing the Euler angle by the difference between the first time and the second time to obtain the angular velocity of the first sensor at the intermediate time between the first time and the second time; the angular velocity of the first sensor at an intermediate time between the first time and the second time is determined as a first angular velocity.
The rotation matrix R between the first point cloud and the second point cloud is converted into Euler angles θ = (θ_x, θ_y, θ_z). The method specifically comprises the following steps. The rotation matrix R is represented as:

R = [ r11 r12 r13 ; r21 r22 r23 ; r31 r32 r33 ]

The Euler angles θ can then be expressed as:

θ_x = atan2(r32, r33)
θ_y = atan2(−r31, sqrt(r32² + r33²))
θ_z = atan2(r21, r11)

The angular velocity of the first sensor at the intermediate time between the first time t_i and the second time t_{i+d} is:

ω1 = θ / (t_{i+d} − t_i)

Then ω1 is determined as the first angular velocity.
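The Euler-angle conversion and angular-velocity computation can be illustrated with a short sketch. It assumes the common Z-Y-X (roll-pitch-yaw) Euler convention; for a different convention the formulas would change. The function names are illustrative, not part of the embodiment.

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (roll, pitch, yaw), Z-Y-X convention."""
    theta_x = np.arctan2(R[2, 1], R[2, 2])
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    theta_z = np.arctan2(R[1, 0], R[0, 0])
    return np.array([theta_x, theta_y, theta_z])

def first_angular_velocity(R, t_i, t_id):
    """Average angular velocity over [t_i, t_id], assigned to the midpoint."""
    theta = rotation_to_euler(R)
    omega = theta / (t_id - t_i)   # Euler angle divided by the time difference
    t_mid = 0.5 * (t_i + t_id)     # intermediate time between the two scans
    return t_mid, omega
```

For a pure rotation of 0.2 rad about z between scans taken 1 s apart, this yields ω1 = (0, 0, 0.2) rad/s, assigned to the midpoint of the interval.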
Step 120, acquiring a second angular velocity of the second sensor in a second sensor coordinate system and a transformation relation between the first sensor coordinate system and the second sensor coordinate system.
The second sensor coordinate system may be a three-dimensional coordinate system established with the center of the IMU as its origin. Since the IMU is also in rotational motion, the second angular velocity refers to the average angular velocity of the IMU in the second sensor coordinate system. The IMU can measure angular velocity and acceleration, so the second angular velocity can be obtained directly from the angular velocity measured by the IMU.
The transformation relation between the first sensor coordinate system and the second sensor coordinate system may include a rotation matrix and a translation matrix between the two coordinate systems. Before the first sensor and the second sensor are clock-synchronized, extrinsic calibration is performed on the first sensor and the second sensor, and the transformation relation between the first sensor coordinate system and the second sensor coordinate system can be obtained from the extrinsic calibration result.
Step 130, calculating a third angular velocity of the second sensor in the first sensor coordinate system according to the second angular velocity and the transformation relation between the first sensor coordinate system and the second sensor coordinate system.
The third angular velocity is the angular velocity obtained by converting the second angular velocity into the first sensor coordinate system. Calculating the third angular velocity of the second sensor in the first sensor coordinate system according to the second angular velocity and the transformation relation between the two coordinate systems may specifically be: multiplying the second angular velocity by the rotation matrix to obtain the third angular velocity. That is, the third angular velocity is:
ω3 = R′ · ω2

wherein ω3 is the third angular velocity of the second sensor in the first sensor coordinate system, ω2 is the second angular velocity of the second sensor in the second sensor coordinate system, and R′ is the rotation matrix between the first sensor coordinate system and the second sensor coordinate system.
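As a minimal illustration (function name assumed), the conversion is a single matrix-vector product applying the extrinsic rotation to the IMU measurement:

```python
import numpy as np

def third_angular_velocity(omega_imu, R_prime):
    """Rotate the IMU angular velocity into the lidar frame: omega3 = R' @ omega2."""
    return R_prime @ omega_imu
```

For example, with R′ a 90° rotation about z, an IMU angular velocity of (1, 0, 0) becomes (0, 1, 0) in the lidar frame.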
Step 140, calculating the angular velocity residual sum of the first sensor and the second sensor at each preset deviation time according to the first angular velocity and the third angular velocity.
The preset deviation time is a time deviation between the first sensor and the second sensor that is set in advance. There are several preset deviation times; for example, if the deviation range is set to −200 ms to 200 ms and the search step is 1 ms, the preset deviation times are −200 ms, −199 ms, −198 ms, …, 199 ms, 200 ms. The angular velocity residual sum of the first sensor and the second sensor is calculated from the angular velocities of the corresponding sampling points of the first sensor and the second sensor at each preset deviation time, so that the angular velocity residual sum of the first sensor and the second sensor at each preset deviation time is obtained.
Specifically, as shown in fig. 3, step 140 includes:
step 141, determining a first reference time and a first angular velocity corresponding to the first reference time;
step 142, adding a preset deviation time to the first reference time to obtain a second reference time;
step 143, determining a third angular velocity corresponding to the second reference moment;
step 144, determining an angular velocity pair according to the first angular velocity corresponding to the first reference time and the third angular velocity corresponding to the second reference time;
step 145, calculating the sum of the vector modular lengths of the differences of all the angular velocity pairs as the angular velocity residual sum of the first sensor and the second sensor at the preset deviation time.
Wherein, the first reference time refers to the time stamp of the first sensor, i.e. the time of the laser radar. For example, the laser radar acquires a group of point cloud data every 1 second, and acquires 4 groups of point cloud data, so that the first reference time may be 1s, 2s, 3s, and 4s, and the first angular velocity corresponding to the first reference time refers to the angular velocity corresponding to the laser radar at the time of 1s, 2s, 3s, and 4 s.
Wherein the second reference moment refers to a time stamp of the second sensor, i.e. the time of the IMU. For example, assuming that the first reference time is 1s, 2s, 3s, and 4s, if one of the preset deviation times is-200 ms, the second reference time is 0.8s, 1.8s, 2.8s, and 3.8s, and the angular velocity corresponding to the IMU at the time of 0.8s, 1.8s, 2.8s, and 3.8s is the third angular velocity corresponding to the second reference time; for another example, assuming that the first reference time is 1s, 2s, 3s, and 4s, and if one of the preset deviation times is 100ms, the second reference time is 1.1s, 2.1s, 3.1s, and 4.1s, and the angular velocity corresponding to the IMU at the time 1.1s, 2.1s, 3.1s, and 4.1s is the third angular velocity corresponding to the second reference time.
In steps 144 and 145, after the first angular velocity corresponding to the first reference time and the third angular velocity corresponding to the second reference time are determined, each first angular velocity is combined with the third angular velocity at the corresponding second reference time to form an angular velocity pair; the sum of the vector modular lengths of the differences of all angular velocity pairs is then calculated. For example, if the first reference times are 1 s, 2 s, 3 s, 4 s and the second reference times are 1.1 s, 2.1 s, 3.1 s, 4.1 s, then the first angular velocity of the laser radar at 1 s and the third angular velocity of the IMU at 1.1 s constitute a first angular velocity pair, the first angular velocity at 2 s and the third angular velocity at 2.1 s constitute a second angular velocity pair, the first angular velocity at 3 s and the third angular velocity at 3.1 s constitute a third angular velocity pair, and the first angular velocity at 4 s and the third angular velocity at 4.1 s constitute a fourth angular velocity pair, giving four angular velocity pairs in total.
The sum of the vector modular lengths of the differences of all angular velocity pairs is calculated as:

δ_t = Σ_{k=1}^{n} ||ω1,k − ω3,k||

where t is a preset deviation time, δ_t is the sum of the vector modular lengths of the differences of all angular velocity pairs at the preset deviation time t (i.e. the angular velocity residual sum of the first sensor and the second sensor at the preset deviation time t), n is the number of angular velocity pairs, k ∈ [1, n] with k an integer, and ||ω1,k − ω3,k|| is the vector modular length of the difference of the k-th angular velocity pair.
Step 150, determining the preset deviation time corresponding to the minimum angular velocity residual sum as the time deviation of the first sensor and the second sensor.

After the angular velocity residual sum at each preset deviation time is calculated, the angular velocity residual sums are compared, the minimum among them is identified, and the preset deviation time corresponding to that minimum is determined as the time deviation of the first sensor and the second sensor. For example, if the angular velocity residual sum is minimal when the preset deviation time is 12 ms, then 12 ms is determined as the time deviation of the first sensor and the second sensor.
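Steps 140 to 150 amount to a brute-force grid search over candidate offsets. The sketch below is an illustrative Python implementation (names and signature assumed), under the assumption that the third angular velocity at each shifted reference time is linearly interpolated from the IMU samples, consistent with the handling of missing timestamps in step 143.

```python
import numpy as np

def estimate_time_offset(lidar_times, omega1, imu_times, omega3,
                         search_ms=200, step_ms=1):
    """Grid-search the time offset minimizing the angular-velocity residual sum.

    lidar_times: (n,) first reference times in seconds
    omega1:      (n, 3) first angular velocities at those times
    imu_times:   (m,) IMU timestamps in seconds (sorted ascending)
    omega3:      (m, 3) third angular velocities (IMU, already in lidar frame)
    """
    offsets = np.arange(-search_ms, search_ms + 1, step_ms) * 1e-3
    best_offset, best_residual = 0.0, np.inf
    for t in offsets:
        # Second reference times: first reference times plus the candidate offset
        shifted = lidar_times + t
        # Interpolate omega3 at each shifted time (per-axis linear interpolation)
        interp = np.column_stack([
            np.interp(shifted, imu_times, omega3[:, a]) for a in range(3)])
        # Residual: sum of vector norms of the angular-velocity pair differences
        residual = np.sum(np.linalg.norm(omega1 - interp, axis=1))
        if residual < best_residual:
            best_offset, best_residual = t, residual
    return best_offset, best_residual
```

On synthetic data where the IMU stream lags the lidar stream by a known amount, the returned offset matches that amount to within the 1 ms search step.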
Step 160, performing clock synchronization on the first sensor and the second sensor according to the time deviation of the first sensor and the second sensor.
The clock of the first sensor or the second sensor is adjusted according to the time deviation of the first sensor and the second sensor, so that the clocks of the two sensors are synchronized.
In the embodiment of the invention, a first angular velocity of the first sensor in the first sensor coordinate system is obtained; a second angular velocity of the second sensor in the second sensor coordinate system and the transformation relation between the two coordinate systems are obtained; a third angular velocity of the second sensor in the first sensor coordinate system is calculated from the second angular velocity and the transformation relation; the angular velocity residual sum of the two sensors at each preset deviation time is calculated from the first and third angular velocities; the preset deviation time corresponding to the minimum angular velocity residual sum is determined as the time deviation of the two sensors; and the two sensors are clock-synchronized according to this time deviation. The time deviation between the first sensor and the second sensor can thus be calculated accurately, the time deviations among multiple sensors obtained, and the time synchronization problem of multiple sensors solved, thereby improving the accuracy of sensing results.
In some embodiments, in step 143, when the third angular velocity corresponding to the second reference time cannot be directly obtained from the measurement data, the third angular velocity corresponding to the second reference time needs to be calculated. Step 143 may specifically include:
step 1431, if the third angular velocity corresponding to the second reference time does not exist, acquiring a third angular velocity corresponding to a previous timestamp and a third angular velocity corresponding to a next timestamp of the second reference time;
step 1432, calculating a fourth angular velocity of the second sensor at the second reference time according to the third angular velocity corresponding to the previous timestamp and the third angular velocity corresponding to the next timestamp;
step 1433, determining the fourth angular velocity as the third angular velocity corresponding to the second reference time.
If no IMU data is acquired at the second reference time, the third angular velocity corresponding to the second reference time cannot be found directly.
The last timestamp of the second reference time refers to the time at which IMU data was last acquired before the second reference time, and the next timestamp of the second reference time refers to the time at which IMU data is next acquired after the second reference time. The fourth angular velocity of the second sensor at the second reference time is calculated from the third angular velocities at these two times. Assuming that the last timestamp of the second reference time t_2 is t_21 and the next timestamp is t_22 (so that t_21 < t_2 < t_22), the fourth angular velocity corresponding to the second reference time t_2 is calculated as:

ω4 = ω_21 + (ω_22 − ω_21) · (t_2 − t_21) / (t_22 − t_21)

where ω4 is the fourth angular velocity, ω_21 is the angular velocity of the IMU at time t_21, and ω_22 is the angular velocity of the IMU at time t_22.
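A minimal sketch of this interpolation step, assuming the linear form between the two bracketing timestamps (function name illustrative):

```python
import numpy as np

def interpolate_angular_velocity(t2, t21, omega21, t22, omega22):
    """Linearly interpolate the IMU angular velocity at t2, with t21 < t2 < t22."""
    alpha = (t2 - t21) / (t22 - t21)   # fractional position of t2 in [t21, t22]
    return omega21 + alpha * (omega22 - omega21)
```

For example, halfway between two samples the result is simply the average of the two angular velocities.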
Fig. 4 is a flowchart illustrating an application example of a clock synchronization method for multiple sensors according to an embodiment of the present invention. In this example, a 10 Hz multi-line laser radar and a 200 Hz IMU are fixed on a robot body, the robot is moved to a factory building with obvious features and no obvious scene degradation, the robot is controlled to move (including rotation and translation in all directions) while data recording starts, the movement ends after one minute, and the data are saved: 600 frames of laser radar data and 12000 frames of IMU data in total.
As shown in fig. 4, the method includes:
step 201, calculating a first angular velocity of the laser radar in a laser radar coordinate system.
A first point cloud P_i of the i-th frame of the laser radar and a second point cloud P_{i+1} of the (i+1)-th frame are acquired, and the rotation matrix R between P_i and P_{i+1} is calculated by the ICP algorithm. The method of calculating the rotation matrix between P_i and P_{i+1} by the ICP algorithm is described in detail in the above embodiments and is not repeated here. The rotation matrix R is converted into Euler angles θ, and the first angular velocity ω1 of the first sensor is calculated from the Euler angles θ as:

ω1 = θ / (t_{i+1} − t_i)
Step 202, acquiring a second angular velocity of the IMU in the IMU coordinate system and a transformation relation between the laser radar coordinate system and the IMU coordinate system.
The second angular velocity ω2 of the IMU in the IMU coordinate system is obtained, and the transformation relation between the laser radar coordinate system and the IMU coordinate system is the rotation matrix R′.
Step 203, calculating a third angular velocity of the IMU in the laser radar coordinate system according to the second angular velocity and the transformation relation between the laser radar coordinate system and the IMU coordinate system.
According to ω2 and R′, the third angular velocity of the IMU in the laser radar coordinate system is calculated as ω3 = R′ · ω2.
Step 204, calculating the angular velocity residual sum of the laser radar and the IMU at each preset deviation time according to the first angular velocity and the third angular velocity.
The deviation range is set to −200 ms to 200 ms with a search step of 1 ms, so the preset deviation times are −200 ms, −199 ms, −198 ms, …, 199 ms, 200 ms. The angular velocity residual sum of the laser radar and the IMU at each preset deviation time is calculated as shown in fig. 5.
Step 205, determining the preset deviation time corresponding to the minimum angular velocity residual sum as the time deviation of the laser radar and the IMU.

As shown in fig. 5, the angular velocity residual sum of the laser radar and the IMU is minimal when the preset deviation time is 12 ms, so 12 ms is determined as the time deviation of the laser radar and the IMU.
Step 206, adjusting the clock of the laser radar or the IMU according to the time deviation of the laser radar and the IMU, so that the clocks of the laser radar and the IMU are synchronized.
In the embodiment of the invention, the third angular velocity of the IMU in the laser radar coordinate system is calculated from the second angular velocity and the transformation relation between the laser radar coordinate system and the IMU coordinate system; the angular velocity residual sum of the laser radar and the IMU at each preset deviation time is calculated from the first and third angular velocities; the preset deviation time corresponding to the minimum angular velocity residual sum is determined as the time deviation of the laser radar and the IMU; and the laser radar and the IMU are clock-synchronized according to this time deviation. The time deviation between the laser radar and the IMU can thus be calculated accurately, the time deviations among multiple sensors obtained, and the time synchronization problem of multiple sensors solved, thereby improving the accuracy of sensing results.
Fig. 6 is a schematic structural diagram illustrating a clock synchronization apparatus for multiple sensors according to an embodiment of the present invention. As shown in fig. 6, the apparatus 300 includes: a first acquisition module 310, a second acquisition module 320, an angular velocity conversion module 330, an angular velocity residual sum calculation module 340, a time deviation determination module 350, and a clock synchronization module 360.
The first obtaining module 310 is configured to obtain a first angular velocity of the first sensor in a first sensor coordinate system; the second obtaining module 320 is configured to obtain a second angular velocity of the second sensor in a second sensor coordinate system and a transformation relationship between the first sensor coordinate system and the second sensor coordinate system; the angular velocity conversion module 330 is configured to calculate a third angular velocity of the second sensor in the first sensor coordinate system according to the second angular velocity and the transformation relationship between the first sensor coordinate system and the second sensor coordinate system; the angular velocity residual sum calculation module 340 is configured to calculate an angular velocity residual sum of the first sensor and the second sensor at each preset deviation time according to the first angular velocity and the third angular velocity; the time deviation determination module 350 is configured to determine the preset deviation time corresponding to the minimum angular velocity residual sum as the time deviation of the first sensor and the second sensor; and the clock synchronization module 360 is configured to perform clock synchronization on the first sensor and the second sensor according to the time deviation of the first sensor and the second sensor.
In an optional manner, the first obtaining module 310 is specifically configured to: acquiring a first point cloud of the first sensor under a first sensor coordinate system at a first moment; acquiring a second point cloud of the first sensor under the first sensor coordinate system at a second moment; calculating to obtain a rotation matrix between the first point cloud and the second point cloud based on an iterative closest point algorithm; and calculating the first angular speed according to a rotation matrix between the first point cloud and the second point cloud, the first time and the second time.
In an optional manner, the first obtaining module 310 is further specifically configured to: converting a rotation matrix between the first point cloud and the second point cloud into euler angles; dividing the Euler angle by the difference between the first time and the second time to obtain the angular velocity of the first sensor at the intermediate time between the first time and the second time; determining an angular velocity of the first sensor at a time intermediate the first time and the second time as the first angular velocity.
In an alternative manner, the transformation relationship of the first sensor coordinate system and the second sensor coordinate system includes a rotation matrix of the first sensor coordinate system and the second sensor coordinate system; the angular velocity conversion module 330 is specifically configured to: and multiplying the second angular velocity by the rotation matrix to calculate the third angular velocity.
In an optional manner, the angular velocity residual sum calculating module 340 is specifically configured to: determining a first reference moment and the first angular speed corresponding to the first reference moment; adding the preset deviation time to the first reference time to obtain a second reference time; determining the third angular velocity corresponding to the second reference moment; determining an angular velocity pair according to the first angular velocity corresponding to the first reference moment and the third angular velocity corresponding to the second reference moment; and calculating the sum of the vector modular lengths of the difference values of all the angular velocity pairs as the angular velocity residual sum of the first sensor and the second sensor under the preset deviation time.
In an optional manner, the angular velocity residual sum calculating module 340 is further specifically configured to: if the third angular velocity corresponding to the second reference moment does not exist, acquiring the third angular velocity corresponding to the last timestamp and the third angular velocity corresponding to the next timestamp of the second reference moment; calculating a fourth angular velocity of the second sensor at the second reference moment according to the third angular velocity corresponding to the previous timestamp and the third angular velocity corresponding to the next timestamp; and determining the fourth angular velocity as the third angular velocity corresponding to the second reference time.
In an alternative form, the first sensor includes at least one of a laser radar, a millimeter wave radar, a microwave radar, and an image sensor, and the second sensor includes an inertial measurement unit.
It should be noted that the multi-sensor clock synchronization apparatus provided in the embodiments of the present invention is an apparatus capable of executing the multi-sensor clock synchronization method, and all embodiments of the multi-sensor clock synchronization method are applicable to the apparatus and can achieve the same or similar beneficial effects.
With the apparatus of the embodiment of the invention, the first angular velocity of the first sensor in the first sensor coordinate system is obtained; the second angular velocity of the second sensor in the second sensor coordinate system and the transformation relation between the two coordinate systems are obtained; the third angular velocity of the second sensor in the first sensor coordinate system is calculated from the second angular velocity and the transformation relation; the angular velocity residual sum of the two sensors at each preset deviation time is calculated from the first and third angular velocities; the preset deviation time corresponding to the minimum angular velocity residual sum is determined as the time deviation of the two sensors; and the two sensors are clock-synchronized according to this time deviation. The time deviation between the first sensor and the second sensor can thus be calculated accurately, the time deviations among multiple sensors obtained, and the time synchronization problem of multiple sensors solved, thereby improving the accuracy of sensing results.
Embodiments of the present invention provide a computer-readable storage medium, where at least one executable instruction is stored, and the executable instruction causes a processor to execute the clock synchronization method of multiple sensors in any of the above method embodiments.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform a method of multi-sensor clock synchronization in any of the above-described method embodiments.
Fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and a specific embodiment of the present invention does not limit a specific implementation of the computing device.
The computing device comprises a processor and a memory. The memory is configured to store at least one executable instruction that causes the processor to perform the steps of the multi-sensor clock synchronization method according to any of the above method embodiments.
Alternatively, as shown in Fig. 7, the computing device may include: a processor 402, a communication interface 404, a memory 406, and a communication bus 408.
Wherein: the processor 402, the communication interface 404, and the memory 406 communicate with each other via the communication bus 408. The communication interface 404 is configured to communicate with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically execute the multi-sensor clock synchronization method in any of the above-described method embodiments.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is configured to store the program 410. The memory 406 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.