CN106023192B - Time reference real-time calibration method and system of an image-capturing platform - Google Patents


Publication number
CN106023192B
CN106023192B (granted publication of application CN201610327892.5A)
Authority
CN
China
Prior art keywords
frame image
feature point
first feature
current frame
image
Prior art date
Legal status
Active
Application number
CN201610327892.5A
Other languages
Chinese (zh)
Other versions
CN106023192A (en)
Inventor
晁志超
龙学军
周剑
陆宏伟
徐丹
徐一丹
Current Assignee
Chengdu Tongjia Youbo Technology Co Ltd
Original Assignee
Chengdu Tongjia Youbo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Tongjia Youbo Technology Co Ltd
Priority claimed from CN201610327892.5A
Publication of CN106023192A
Application granted
Publication of CN106023192B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details

Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a time reference real-time calibration method and system of an image-capturing platform. The method is applied in a mobile terminal and comprises the following steps: S1, obtain the current frame image and the preceding Nth frame image; S2, extract the first feature points in the current frame image to form a first feature point set, and the second feature points in the reference frame image to form a second feature point set; S3, match and filter the first feature points in the first feature point set against the second feature points in the second feature point set, to obtain a first feature point pair set; S4, obtain the average global displacement of the current frame image from the first feature point pair set; S5, extract the gyro data of the gyro sensor for the current frame image and obtain the posture global displacement of the current frame image from the gyro data; S6, obtain the time difference between the timestamp of the camera and the timestamp of the gyro sensor by the Kalman filter method, calibrate the time difference, and return to S1.

Description

Time reference real-time calibration method and system of an image-capturing platform
Technical field
The present invention relates to the field of image processing, and more particularly to a time reference real-time calibration method and system for a mobile-terminal image-capturing platform.
Background technique
With the rapid development of electronic technology, people increasingly use mobile platforms such as mobile terminals and unmanned aerial vehicles for video capture. However, video captured by a mobile terminal or UAV is prone to shake, which degrades the visual effect and often makes the footage unusable, so the captured video must be stabilized. According to the underlying principle, video stabilization methods for image-capturing platforms fall into three classes: mechanical stabilization, optical stabilization, and electronic image stabilization.
Mechanical stabilization passively cancels shake with a damped platform or mechanical gimbal. Its advantage is that it cancels large-amplitude shake well and achieves excellent stabilization, so it is widely used on professional video recording equipment; its drawbacks are a complex structure, large size and weight, and high cost. Optical stabilization actively compensates for external shake in the optical path using optical devices such as compensating lenses, so it can be made smaller and lighter, but its anti-shake ability is weaker and it places high demands on lens manufacturing. Compared with these two "hard" stabilization methods, electronic image stabilization removes shake by "soft" means: digital image processing techniques transform the image frames to achieve stabilization, so electronic image stabilization is structurally simple, small and light, low-cost, and has a wide stabilization range.
Current electronic image stabilization consists of three parts: shake estimation, smoothing filtering, and shake compensation. According to how the shake is estimated, it can be further subdivided into purely image-based electronic stabilization methods and gyro-based electronic stabilization methods.
Purely image-based electronic stabilization estimates the camera's shake using only the image's own feature information. Although it needs no additional hardware, precisely because it estimates motion from image features alone it imposes requirements on the imagery: the images must contain rich feature information, so it fails on uniform backgrounds such as sky or sea. Moreover, because it must continuously perform heavy feature extraction and feature matching on the images, it is computationally complex and energy-hungry, which is generally problematic for power-constrained mobile platforms.
Gyro-based electronic stabilization estimates the camera's shake using a gyro sensor. The size, weight, and cost of current micro-electro-mechanical-system (MEMS, Micro-Electro-Mechanical System) gyro sensors keep shrinking, and they are now widely used on all kinds of motion platforms such as mobile terminals and UAVs. For example, a single MPU6500 chip measures only 3 mm × 3 mm × 0.9 mm yet can simultaneously provide real-time sampled data from a 3-axis accelerometer and a 3-axis gyroscope at a rate of 8000 Hz.
In conclusion, the gyro-based electronic stabilization method is better suited to image capture tasks on mobile platforms. However, the posture information output directly by the gyro is not equal to the posture of the camera platform itself, for two reasons: a spatial-transform posture difference caused by the gyro coordinate frame not being parallel to the camera coordinate frame, and a time-offset posture difference caused by the gyro sensor's timestamps not being synchronized with the camera sensor's timestamps. Since the gyro sensor and the camera are rigidly mounted, and in most devices the gyro and camera coordinate frames are essentially parallel, the spatial-transform posture difference can be resolved by multiplying by a rotation matrix; for the time-offset posture difference, however, no specific solution exists.
Summary of the invention
To overcome the time-offset posture difference that the image capture systems of existing mobile platforms cannot handle, a time reference real-time calibration method and system of an image-capturing platform are now provided, aiming to obtain in real time the time difference between the timestamp of the camera and the timestamp of the gyro sensor.
Specific technical solution is as follows:
A time reference real-time calibration method of an image-capturing platform, applied to a mobile terminal provided with a camera and a gyro sensor, comprises the following steps:
S1. obtaining the current frame image and the preceding Nth frame image, the preceding Nth frame image being the Nth frame before the current frame image, and taking the preceding Nth frame image as the reference frame image;
S2. extracting the first feature points in the current frame image to form a first feature point set, and the second feature points in the reference frame image to form a second feature point set;
S3. matching and filtering the first feature points in the first feature point set against the second feature points in the second feature point set, to obtain a first feature point pair set;
S4. obtaining the average global displacement of the current frame image from the first feature point pair set;
S5. extracting the gyro data of the gyro sensor for the current frame image, and obtaining the posture global displacement of the current frame image from the gyro data;
S6. obtaining, from the average global displacement and the posture global displacement, the time difference between the timestamp of the camera and the timestamp of the gyro sensor by the Kalman filter method, calibrating the time difference, and returning to step S1.
Preferably, step S3 comprises:
S31. using the random sample consensus algorithm to perform one-by-one bi-directional matching between the first feature points in the first feature point set of the current frame image and the second feature points in the second feature point set of the reference frame image, to obtain a second feature point pair set; the second feature point pair set comprises a plurality of feature point pairs, each feature point pair comprising one first feature point and a second feature point in one-to-one correspondence with that first feature point;
S32. computing the homography matrix for the feature point pairs in the second feature point pair set, and judging whether its determinant meets a preset condition; all unmarked first feature point pairs form the first feature point pair set.
Preferably, the preset condition is: the determinant is greater than or equal to 0.7 and less than or equal to 1.3.
Preferably, step S4 comprises:
S41. extracting from the first feature point pair set the first feature points of the current frame image, and computing the center position coordinates of the current frame image from these first feature points;
S42. extracting from the first feature point pair set the second feature points of the reference frame image, and computing the center position coordinates of the reference frame image from these second feature points;
S43. computing the average global displacement of the current frame image from the center position coordinates of the current frame image and the center position coordinates of the reference frame image.
Preferably, step S5 comprises:
S51. taking the time interval from the reference frame image to the current frame image as a first time parameter, obtaining the corresponding pitch-angle change and yaw-angle change from the gyro data, and obtaining a first global displacement from the pitch-angle change and the yaw-angle change;
S52. adding the time interval from the reference frame image to the current frame image as an offset to the first time parameter, obtaining the corresponding pitch-angle change and yaw-angle change, and obtaining a second global displacement from them;
the posture global displacement comprises the first global displacement and the second global displacement.
Preferably, in step S6, the Kalman filter method uses gradient estimation, filtering-gain estimation, filtering calculation, and variance correction to obtain the time difference between the timestamp of the camera and the timestamp of the gyro sensor.
A time reference real-time calibration system of an image-capturing platform, applied to a mobile terminal provided with a camera and a gyro sensor, comprises:
an acquiring unit, for obtaining the current frame image and the reference frame image, the preceding Nth frame image being the Nth frame before the current frame image and serving as the reference frame image;
an extraction unit, connected to the acquiring unit, for extracting the first feature points in the current frame image to form a first feature point set, and the second feature points in the reference frame image to form a second feature point set;
a matching unit, connected to the extraction unit, for matching and filtering the first feature points in the first feature point set against the second feature points in the second feature point set, to obtain a first feature point pair set;
a first processing unit, connected to the matching unit, for obtaining the average global displacement of the current frame image from the first feature point pair set;
a second processing unit, for extracting the gyro data of the gyro sensor for the current frame image and obtaining the posture global displacement of the current frame image from the gyro data;
a control unit, connected to the first processing unit and the second processing unit respectively, for obtaining, from the average global displacement and the posture global displacement, the time difference between the timestamp of the camera and the timestamp of the gyro sensor by the Kalman filter method, and calibrating the time difference.
Preferably, the matching unit uses the random sample consensus algorithm to perform one-by-one bi-directional matching between the first feature points in the first feature point set of the current frame image and the second feature points in the second feature point set of the reference frame image, to obtain a second feature point pair set, each feature point pair comprising one first feature point and a second feature point in one-to-one correspondence with it; the homography matrix is computed for the feature point pairs in the second feature point pair set and its determinant judged against a preset condition; all unmarked first feature point pairs form the first feature point pair set.
Preferably, the preset condition is: the determinant is greater than or equal to 0.7 and less than or equal to 1.3.
Preferably, the first processing unit extracts from the first feature point pair set the first feature points of the current frame image and computes the center position coordinates of the current frame image from them; extracts the second feature points of the reference frame image and computes the center position coordinates of the reference frame image from them; and computes the average global displacement of the current frame image from the two sets of center position coordinates.
Preferably, the second processing unit takes the time interval from the reference frame image to the current frame image as a first time parameter, obtains the corresponding pitch-angle change and yaw-angle change from the gyro data, and obtains a first global displacement from them; it then adds the time interval from the reference frame image to the current frame image as an offset to the first time parameter, obtains the corresponding pitch-angle change and yaw-angle change, and obtains a second global displacement from them; the posture global displacement comprises the first global displacement and the second global displacement.
Preferably, the Kalman filter method uses gradient estimation, filtering-gain estimation, filtering calculation, and variance correction to obtain the time difference between the timestamp of the camera and the timestamp of the gyro sensor.
The above technical solutions have the following beneficial effects:
1) The time reference real-time calibration method of the image-capturing platform can obtain in real time the offset between the timestamp of the camera and the timestamp of the gyro sensor and calibrate it in real time. Matching and filtering the first feature point set against the second feature point set eliminates erroneous matches and improves matching accuracy, and the Kalman filter method guarantees convergence of the offset between the timestamp of the camera and the timestamp of the gyro sensor.
2) The time reference real-time calibration system of the image-capturing platform performs feature point matching between the current frame image and the reference frame image through the matching unit, which improves matching precision and eliminates erroneous matches. Through the control unit, the offset between the timestamp of the camera and the timestamp of the gyro sensor can be obtained from the average global displacement and the posture global displacement, and it converges, which guarantees the accuracy of the time difference between the timestamp of the camera and the timestamp of the gyro sensor.
Brief description of the drawings
Fig. 1 is a flow chart of an embodiment of the time reference real-time calibration method of an image-capturing platform of the present invention;
Fig. 2 is a plot of the determinant of the homography matrix between consecutive frames;
Fig. 3 shows the average global displacement between consecutive frames;
Fig. 4 is a flow chart of obtaining the average global displacement of the current frame image;
Fig. 5 is a flow chart of obtaining the posture global displacement;
Fig. 6 is a block diagram of an embodiment of the time reference real-time calibration system of an image-capturing platform of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that, in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other.
The present invention will be further explained below with reference to the drawings and specific embodiments, but not by way of limitation of the invention.
As shown in Fig. 1, a time reference real-time calibration method of an image-capturing platform, applied to a mobile terminal provided with a camera and a gyro sensor, comprises the following steps:
S1. obtaining the current frame image and the preceding Nth frame image, the preceding Nth frame image being the Nth frame before the current frame image, and taking the preceding Nth frame image as the reference frame image;
S2. extracting the first feature points in the current frame image to form a first feature point set, and the second feature points in the reference frame image to form a second feature point set;
S3. matching and filtering the first feature points in the first feature point set against the second feature points in the second feature point set, to obtain a first feature point pair set;
S4. obtaining the average global displacement of the current frame image from the first feature point pair set;
S5. extracting the gyro data of the gyro sensor for the current frame image, and obtaining the posture global displacement of the current frame image from the gyro data;
S6. obtaining, from the average global displacement and the posture global displacement, the time difference between the timestamp of the camera and the timestamp of the gyro sensor by the Kalman filter method, calibrating the time difference, and returning to step S1.
Further, in step S2 a feature point detection algorithm such as SURF (Speeded Up Robust Features) may be used to perform feature point detection on the current frame image and the reference frame image, obtaining the coordinate positions of a large number of feature points: the first feature point set of the current frame image and the second feature point set of the reference frame image. The motion of these feature points largely reflects the overall motion of the corresponding image frame.
In this embodiment, the time reference real-time calibration method of the image-capturing platform can obtain in real time the offset between the timestamp of the camera and the timestamp of the gyro sensor and calibrate it in real time. Matching and filtering the first feature point set against the second feature point set eliminates erroneous matches and improves matching accuracy, and the Kalman filter method guarantees convergence of the offset between the timestamp of the camera and the timestamp of the gyro sensor.
In a preferred embodiment, step S3 comprises:
S31. using the random sample consensus algorithm (RANSAC) to perform one-by-one bi-directional matching between the first feature points in the first feature point set of the current frame image and the second feature points in the second feature point set of the reference frame image, to obtain a second feature point pair set; the second feature point pair set comprises a plurality of feature point pairs, each feature point pair comprising one first feature point and a second feature point in one-to-one correspondence with that first feature point;
S32. computing the homography matrix for the feature point pairs in the second feature point pair set, and judging whether its determinant meets the preset condition; all unmarked first feature point pairs form the first feature point pair set.
Further, the reference frame image may be the previous frame image.
In this embodiment, step S31 performs symmetric bi-directional matching between the feature points of the current frame image and the feature points of the previous frame image, in order to reject unmatched feature points. Feature point matching between the two adjacent frame images uses the RANSAC algorithm; to minimize the mismatch rate, the matching is symmetric and bi-directional: first find, for each feature point in the current frame image, its best match in the previous frame image; then find, for each feature point in the previous frame image, its best match in the current frame. Only when the two points in a "feature point pair" are each other's best match is the pair accepted; otherwise the pair is rejected.
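As an illustration, the symmetric bi-directional test can be sketched in a few lines of Python. This is a minimal sketch, not the patent's implementation: the function names are invented, and a plain nearest-neighbour distance on toy coordinates stands in for SURF descriptor matching and RANSAC model fitting.

```python
import math

def mutual_best_matches(feats_a, feats_b):
    """Symmetric bi-directional matching: keep only index pairs (i, j)
    where feats_b[j] is the best match of feats_a[i] AND vice versa."""
    def nearest(src, pool):
        # index of the closest point/descriptor in `pool` to `src`
        return min(range(len(pool)), key=lambda k: math.dist(src, pool[k]))

    a_to_b = [nearest(fa, feats_b) for fa in feats_a]
    b_to_a = [nearest(fb, feats_a) for fb in feats_b]
    # accept a pair only when the two directions agree (best match of each other)
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

A real implementation would run this test on descriptor distances and then estimate the inter-frame model from the surviving pairs with RANSAC.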
Even so, in certain cases all feature points of adjacent frame images are matched incorrectly, so step S32 is used to improve the accuracy of feature point pair matching. Because the interval between adjacent frame images is extremely short (for example, for a capture device running at 30 fps the frame interval is only 33 ms), the overall homographic transformation between adjacent frames is small; that is, the homography matrix between adjacent frame images is normally close to the identity matrix, and its determinant should be near 1. Based on this criterion, mismatched image frames can be marked conveniently and efficiently.
In step S32, the homography matrix H is computed from the matched feature point pairs, and the deviation of its determinant det(H) from 1 determines whether the match is correct: frame numbers with det(H) < 0.7 or det(H) > 1.3 are marked, and all feature point pairs meeting the preset condition 0.7 <= det(H) <= 1.3 form the first feature point pair set.
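The determinant gate of step S32 is simple enough to show directly. This is an illustrative sketch under assumed names: a real implementation would first estimate H from the matched point pairs with a homography solver, which is not shown here.

```python
def det3(H):
    """Determinant of a 3x3 homography matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = H
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def frame_match_is_valid(H, lo=0.7, hi=1.3):
    """Between adjacent frames H should be close to the identity matrix,
    so det(H) should be near 1; frames outside [lo, hi] are flagged."""
    return lo <= det3(H) <= hi
```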
In a preferred embodiment, step S4 comprises:
S41. extracting from the first feature point pair set the first feature points of the current frame image, and computing the center position coordinates of the current frame image from them;
S42. extracting from the first feature point pair set the second feature points of the reference frame image, and computing the center position coordinates of the reference frame image from them;
S43. computing the average global displacement of the current frame image from the center position coordinates of the current frame image and the center position coordinates of the reference frame image.
In this embodiment, when the reference frame image is the previous frame image, step S41 averages all matched feature point coordinates in the current frame image to obtain the center position coordinates rCenter_now of the current frame image; step S42 averages all matched feature point coordinates in the previous frame image to obtain the center position coordinates rCenter_last of the previous frame image; step S43 obtains the average global displacement rPixel of the current frame image from the change in the center position coordinates between the adjacent frames, i.e. rPixel = rCenter_last - rCenter_now.
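A minimal sketch of steps S41 to S43 follows. The function names are hypothetical, and feature coordinates are assumed to be (x, y) tuples.

```python
def centroid(points):
    """Mean (x, y) of the matched feature coordinates in one frame."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def average_global_displacement(curr_pts, ref_pts):
    """rPixel = rCenter_last - rCenter_now, following steps S41-S43."""
    cx, cy = centroid(curr_pts)   # rCenter_now (step S41)
    rx, ry = centroid(ref_pts)    # rCenter_last (step S42)
    return (rx - cx, ry - cy)     # step S43
```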
Fig. 2 shows the determinant curve of the homography matrix between consecutive frames in step S3; as can be seen, frames with mismatched feature points can be detected simply and quickly from the determinant of the homography matrix. Fig. 3 shows the average global displacement between consecutive frames, in which the average global displacement of mismatched frames shows obvious spikes; the frame number denotes the serial number of each image frame.
Fig. 4 shows the flow chart, from step S1 through step S4, of obtaining the average global displacement of the current frame image. In a preferred embodiment, step S5 comprises:
S51. taking the time interval from the reference frame image to the current frame image as a first time parameter, obtaining the corresponding pitch-angle change and yaw-angle change from the gyro data, and obtaining a first global displacement from the pitch-angle change and the yaw-angle change;
S52. adding the time interval from the reference frame image to the current frame image as an offset to the first time parameter, obtaining the corresponding pitch-angle change and yaw-angle change, and obtaining a second global displacement from them;
the posture global displacement comprises the first global displacement and the second global displacement.
Fig. 5 shows the flow chart of obtaining the posture global displacement in step S5.
In this embodiment, when the reference frame image is the previous frame image, step S51 performs a running integral of the gyro sensor's data over the frame interval, obtaining the pitch-angle change dA_pitch(t) and the yaw-angle change dA_yaw(t) over the time interval from the previous frame image to the current frame image (the first time parameter t); the first global displacement of the image frame is then rGyro = f * (dA_pitch(t) + dA_yaw(t)), where f is the equivalent focal length of the camera. In step S52, the time variable of step S51 (the first time parameter t) is offset by an amount dt (the time interval between adjacent frame images), and the pitch-angle change dA_pitch(t + dt) and the yaw-angle change dA_yaw(t + dt) are recomputed to obtain the second global displacement of the image frame rGyro2 = f * (dA_pitch(t + dt) + dA_yaw(t + dt)), where f is again the equivalent focal length of the camera.
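The running integral of steps S51 and S52 can be sketched as follows. This is a crude rectangle-rule integral over assumed (timestamp, angular_rate) samples; the sampling format and function names are illustrative, not from the patent.

```python
def integrate_rate(samples, t0, t1):
    """Crude rectangle-rule running integral of (timestamp, angular_rate)
    samples whose timestamp falls inside [t0, t1)."""
    total = 0.0
    for (ta, rate), (tb, _r) in zip(samples, samples[1:]):
        if t0 <= ta < t1:
            total += rate * (tb - ta)
    return total

def gyro_global_displacement(f, pitch_samples, yaw_samples, t0, t1):
    """rGyro = f * (dA_pitch + dA_yaw) over the interval [t0, t1);
    calling it again with the interval shifted by dt gives rGyro2."""
    d_pitch = integrate_rate(pitch_samples, t0, t1)
    d_yaw = integrate_rate(yaw_samples, t0, t1)
    return f * (d_pitch + d_yaw)
```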
In a preferred embodiment, in step S6 the Kalman filter method uses gradient estimation, filtering-gain estimation, filtering calculation, and variance correction to obtain the time difference between the timestamp of the camera and the timestamp of the gyro sensor.
In this embodiment, let the state vector be X = (f, td)^T, where f denotes the equivalent focal length of the camera and td the time difference between the timestamp of the camera and the timestamp of the gyro sensor. With the average global displacement rPixel of the current frame image obtained in step S43, the first global displacement rGyro of the image frame obtained in step S51, and the second global displacement rGyro2 of the image frame obtained in step S52 as inputs, the initial state value is iteratively updated by the Kalman filter method, whose specific steps are as follows:
Measurement-equation gradient estimation:
Hx = -[rGyro; f*(rGyro - rGyro2)/dt],
Filtering gain estimation:
K = (P + Q)*Hx^T / (Hx*(P + Q)*Hx^T + R),
Filtering calculation:
X = X - K*(rPixel - f*rGyro),
Variance correction:
P = (I - K*Hx)*(P + Q),
where Q is the dynamic-equation noise variance and R is the measurement-equation noise variance.
In this technical scheme, every time a current frame image and a reference frame image are acquired, steps S1 to S6 are executed, until the changes in the time difference td between the camera timestamp and the gyro sensor timestamp and in the equivalent focal length f of the camera are both less than a set threshold; the filtering result is then considered to have converged, and the time difference td between the timestamp of the camera and the timestamp of the gyro sensor and the equivalent focal length f of the camera are output.
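One iteration of the four filter steps above can be written out directly for the two-dimensional state X = (f, td). This sketch transcribes the equations literally under stated assumptions: P and Q are 2x2 covariance matrices as nested lists, R is a scalar, and the function name and data layout are illustrative, not from the patent.

```python
def kalman_update(X, P, rPixel, rGyro, rGyro2, dt, Q, R):
    """One iteration of the four steps in the text for state X = (f, td)."""
    f, td = X
    # gradient estimation: Hx = -[rGyro; f*(rGyro - rGyro2)/dt]
    Hx = (-rGyro, -f * (rGyro - rGyro2) / dt)
    PQ = [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
    # filtering gain estimation: K = (P+Q)*Hx^T / (Hx*(P+Q)*Hx^T + R)
    PQHt = [PQ[i][0] * Hx[0] + PQ[i][1] * Hx[1] for i in range(2)]
    S = Hx[0] * PQHt[0] + Hx[1] * PQHt[1] + R
    K = [PQHt[0] / S, PQHt[1] / S]
    # filtering calculation: X = X - K*(rPixel - f*rGyro)
    innovation = rPixel - f * rGyro
    X_new = (f - K[0] * innovation, td - K[1] * innovation)
    # variance correction: P = (I - K*Hx)*(P+Q)
    IKH = [[(1.0 if i == j else 0.0) - K[i] * Hx[j] for j in range(2)]
           for i in range(2)]
    P_new = [[IKH[i][0] * PQ[0][j] + IKH[i][1] * PQ[1][j] for j in range(2)]
             for i in range(2)]
    return X_new, P_new
```

Iterating this update frame after frame, as the scheme describes, drives td and f toward stable values; convergence is declared when their changes fall below the set threshold.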
As shown in Fig. 6, a time reference real-time calibration system of an image-capturing platform, applied to a mobile terminal provided with a camera and a gyro sensor, comprises:
an acquiring unit 2, for obtaining the current frame image and the reference frame image, the preceding Nth frame image being the Nth frame before the current frame image and serving as the reference frame image;
an extraction unit 1, connected to the acquiring unit 2, for extracting the first feature points in the current frame image to form a first feature point set, and the second feature points in the reference frame image to form a second feature point set;
a matching unit 4, connected to the extraction unit 1, for matching and filtering the first feature points in the first feature point set against the second feature points in the second feature point set, to obtain a first feature point pair set;
a first processing unit 5, connected to the matching unit 4, for obtaining the average global displacement of the current frame image from the first feature point pair set;
a second processing unit 3, for extracting the gyro data of the gyro sensor for the current frame image and obtaining the posture global displacement of the current frame image from the gyro data;
a control unit 6, connected to the first processing unit 5 and the second processing unit 3 respectively, for obtaining, from the average global displacement and the posture global displacement, the time difference between the timestamp of the camera and the timestamp of the gyro sensor by the Kalman filter method, and calibrating the time difference.
Further, the extraction unit 1 may use the Speeded Up Robust Features (SURF) feature point detection algorithm to perform feature point detection on the current frame image and the reference frame image, so as to obtain the coordinate positions of a large number of feature points, namely the first feature point set of the current frame image and the second feature point set of the reference frame image. The motion of these feature points can basically reflect the magnitude of the overall motion of the corresponding image frame.
In the present embodiment, the time reference real-time calibration system of the image-capturing platform performs feature point matching between the current frame image and the reference frame image through the matching unit 4, which improves the matching precision and eliminates erroneous matches. Using the control unit 6, the deviation between the timestamp of the camera and the timestamp of the gyroscope sensor can be obtained from the average global displacement and the attitude global displacement, and the estimate converges, ensuring the accuracy of the time difference between the timestamp of the camera and the timestamp of the gyroscope sensor.
In a preferred embodiment, the matching unit 4 performs one-by-one bidirectional matching between the first feature points in the first feature point set of the current frame image and the second feature points in the second feature point set of the reference frame image using the random sample consensus (RANSAC) algorithm, so as to obtain a second feature point pair set; the second feature point pair set includes a plurality of feature point pairs, each feature point pair including one first feature point and one second feature point in one-to-one correspondence with the first feature point. A homography matrix is then calculated for the feature point pairs in the second feature point pair set, and whether the calculated determinant meets a preset condition is judged; all unmarked first feature point pairs form the first feature point pair set.
Further, the reference frame image is the previous frame image.
In the present embodiment, the matching unit 4 performs symmetric bidirectional matching between the feature points of the current frame image and the feature points of the previous frame image, so as to reject unmatched feature points. The RANSAC algorithm is used to match feature points between two adjacent frame images; in order to minimize the false-match rate as far as possible, the matching performed is symmetric and bidirectional: first, the best match in the previous frame image is found for each feature point in the current frame image, and then the best match in the current frame is found for each feature point in the previous frame image. Only when the two feature points of a "feature point pair" are each other's best match is the pair accepted; otherwise, the pair is rejected.
In some cases, all feature points of adjacent frame images may still be matched incorrectly, so a homography matrix calculation is used to improve the accuracy of feature point pair matching. Since the interval between two adjacent frame images is extremely short (for an image capture device with a frame rate of 30 fps, the frame interval is only 33 ms), the overall homographic transformation between adjacent frames is small; that is, the homography matrix of adjacent frame images is normally close to the identity matrix, and its determinant should be near 1. Based on this criterion, image frames with matching errors can be marked conveniently and efficiently.
The matched feature point pairs are used to calculate the homography matrix H, and the matching correctness is then determined from the deviation of the determinant det(H) from 1: the frame numbers with det(H) < 0.7 or det(H) > 1.3 are marked, and all feature point pairs meeting the preset condition (determinant greater than or equal to 0.7 and less than or equal to 1.3) form the first feature point pair set.
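The determinant threshold test above can be sketched as follows. Estimating H itself (e.g. with a RANSAC-based fit) is outside this sketch; only the accept/reject decision on det(H) is shown.

```python
# Determinant threshold check for an adjacent-frame homography: H should
# be near the identity matrix, so frames with det(H) outside [0.7, 1.3]
# are marked as mismatched, per the criterion in the text.

def det3(H):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
            - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
            + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))

def frame_match_ok(H, lo=0.7, hi=1.3):
    """True when det(H) lies in [lo, hi], i.e. H is near the identity."""
    return lo <= det3(H) <= hi

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```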
In a preferred embodiment, the first processing unit 5 extracts the first feature points of the current frame image from the first feature point pair set and calculates the center position coordinates of the current frame image according to the first feature points; it extracts the second feature points of the reference frame image from the first feature point pair set and calculates the center position coordinates of the reference frame image according to the second feature points; and it calculates the average global displacement of the current frame image according to the center position coordinates of the current frame image and the center position coordinates of the reference frame image.
In the present embodiment, when the reference frame image is the previous frame image, the coordinates of all matched feature points in the current frame image are averaged to obtain the center position coordinates rCenterNow of the current frame image, and the coordinates of all matched feature points in the previous frame image are averaged to obtain the center position coordinates rCenterLast of the previous frame image; the average global displacement rPixel of the current frame image is then obtained from the change of the center position coordinates between adjacent frames, i.e. rPixel = rCenterLast - rCenterNow.
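The average-global-displacement computation above is simple enough to sketch directly: average the matched feature point coordinates of each frame to get its center, then subtract, following rPixel = rCenterLast - rCenterNow.

```python
# Average global displacement between adjacent frames, computed from the
# centroids of matched feature point coordinates, per the formula above.

def centroid(points):
    """Mean (x, y) of a list of 2D points."""
    n = float(len(points))
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def average_global_displacement(points_now, points_prev):
    """rPixel = rCenterLast - rCenterNow, componentwise."""
    cx_now, cy_now = centroid(points_now)
    cx_prev, cy_prev = centroid(points_prev)
    return (cx_prev - cx_now, cy_prev - cy_now)
```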
In a preferred embodiment, the second processing unit 3 takes the time interval from the reference frame image to the current frame image as a first time parameter, obtains the corresponding pitch angle variation and yaw angle variation according to the gyroscope data, and obtains the first global displacement from the pitch angle variation and the yaw angle variation; it then adds the time interval from the reference frame image to the current frame image as an offset to the first time parameter, obtains the corresponding pitch angle variation and yaw angle variation, and obtains the second global displacement from them. The attitude global displacement includes the first global displacement and the second global displacement.
In the present embodiment, when the reference frame image is the previous frame image, the gyroscope data of the gyroscope sensor is integrated over the frame interval, i.e. over the time interval from the previous frame image to the current frame image (the first time parameter t), to obtain the pitch angle variation dA_Pitch(t) and the yaw angle variation dA_Yaw(t); the first global displacement of the image frame is then rGyro = f*(dA_Pitch(t) + dA_Yaw(t)), where f is the equivalent focal length of the camera. The time variable of step S51 (the first time parameter t) is then offset by dt (the time interval between adjacent frame images), and the pitch angle variation dA_Pitch(t+dt) and yaw angle variation dA_Yaw(t+dt) are recalculated, giving the second global displacement of the image frame, rGyro2 = f*(dA_Pitch(t+dt) + dA_Yaw(t+dt)).
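The gyro-based displacements above can be sketched as a running integral of angular rates. The sample format (timestamp, pitch_rate, yaw_rate) and the rectangle-rule integration are illustrative assumptions; a real implementation would interpolate gyro samples rather than sum rectangles.

```python
# rGyro and rGyro2 from integrated gyro angular rates: integrate pitch and
# yaw rates over [0, t) and [0, t + dt), then scale by the equivalent
# focal length f, per rGyro = f*(dA_Pitch + dA_Yaw) in the text.

def angle_change(samples, t0, t1):
    """Rectangle-rule integral of (pitch_rate, yaw_rate) over [t0, t1)."""
    d_pitch = d_yaw = 0.0
    for (ts, p, y), (ts_next, _, _) in zip(samples, samples[1:]):
        if t0 <= ts < t1:
            step = ts_next - ts
            d_pitch += p * step
            d_yaw += y * step
    return d_pitch, d_yaw

def gyro_displacements(samples, t, dt, f):
    """First and second global displacements (rGyro, rGyro2)."""
    dp1, dy1 = angle_change(samples, 0.0, t)
    dp2, dy2 = angle_change(samples, 0.0, t + dt)
    return f * (dp1 + dy1), f * (dp2 + dy2)
```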
In a preferred embodiment, the Kalman filter method uses gradient estimation, filter gain estimation, filter update, and variance correction to obtain the time difference between the timestamp of the camera and the timestamp of the gyroscope sensor.
In this technical scheme, the SURF feature point detection method and the RANSAC algorithm (with symmetric bidirectional feature point matching) are fully utilized, which considerably increases the robustness of feature point extraction and matching. For the situation in practical applications where all matches of a very few frames are wrong due to reasons such as large distortion in the field of view or a small number of feature points, a method of threshold judgment using the homography matrix determinant is proposed, which further eliminates erroneous matches and provides a convergence guarantee for the subsequent Kalman-filter-based time alignment method.
The present invention constructs the Kalman filter state equation and measurement equation by using the linear relationship between the global displacement between adjacent frames of a sequence of images and the pitch and yaw changes of the camera attitude. Compared with traditional offline calibration methods, the present invention can not only perform calibration in real time, but also has low requirements on initial values, a simple principle, and a small amount of calculation, so that it can be widely applied in electronic image stabilization applications on a large number of mobile platforms.
The above are merely preferred embodiments of the present invention and are not intended to limit the embodiments or the protection scope of the present invention. Those skilled in the art should appreciate that all schemes obtained by equivalent replacement or obvious variation based on the description and drawings of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A time reference real-time calibration method of an image-capturing platform, applied to a mobile terminal, a camera and a gyroscope sensor being provided in the mobile terminal, characterized by comprising the following steps:
S1. acquiring a current frame image and a preceding Nth frame image, the preceding Nth frame image being the Nth frame image before the current frame image, and taking the preceding Nth frame image as a reference frame image;
S2. extracting first feature points in the current frame image to form a first feature point set, and second feature points in the reference frame image to form a second feature point set;
S3. matching and filtering the first feature points in the first feature point set against the second feature points in the second feature point set, so as to obtain a first feature point pair set;
S4. obtaining an average global displacement of the current frame image according to the first feature point pair set;
S5. extracting gyroscope data of the gyroscope sensor for the current frame image, and obtaining an attitude global displacement of the current frame image according to the gyroscope data;
S6. obtaining, according to the average global displacement and the attitude global displacement, a time difference between a timestamp of the camera and a timestamp of the gyroscope sensor by a Kalman filter method, calibrating the time difference, and returning to step S1;
wherein step S3 comprises:
S31. performing one-by-one bidirectional matching between the first feature points in the first feature point set of the current frame image and the second feature points in the second feature point set of the reference frame image using a random sample consensus (RANSAC) algorithm, so as to obtain a second feature point pair set, the second feature point pair set comprising a plurality of feature point pairs, each feature point pair comprising one first feature point and one second feature point in one-to-one correspondence with the first feature point;
S32. calculating a homography matrix for each feature point pair in the second feature point pair set, and judging whether the calculated determinant meets a preset condition; all unmarked first feature point pairs forming the first feature point pair set.
2. The time reference real-time calibration method of an image-capturing platform as claimed in claim 1, wherein the preset condition is that the determinant is greater than or equal to 0.7 and less than or equal to 1.3.
3. The time reference real-time calibration method of an image-capturing platform as claimed in claim 1, wherein step S4 comprises:
S41. extracting the first feature points of the current frame image from the first feature point pair set, and calculating center position coordinates of the current frame image according to the first feature points;
S42. extracting the second feature points of the reference frame image from the first feature point pair set, and calculating center position coordinates of the reference frame image according to the second feature points;
S43. calculating the average global displacement of the current frame image according to the center position coordinates of the current frame image and the center position coordinates of the reference frame image.
4. The time reference real-time calibration method of an image-capturing platform as claimed in claim 1, wherein step S5 comprises:
S51. taking a time interval from the reference frame image to the current frame image as a first time parameter, obtaining a corresponding pitch angle variation and yaw angle variation according to the gyroscope data, and obtaining a first global displacement according to the pitch angle variation and the yaw angle variation;
S52. adding the time interval from the reference frame image to the current frame image as an offset to the first time parameter, obtaining a corresponding pitch angle variation and yaw angle variation, and obtaining a second global displacement according to the pitch angle variation and the yaw angle variation;
wherein the attitude global displacement comprises the first global displacement and the second global displacement.
5. The time reference real-time calibration method of an image-capturing platform as claimed in claim 1, wherein, in step S6, the Kalman filter method uses gradient estimation, filter gain estimation, filter update, and variance correction to obtain the time difference between the timestamp of the camera and the timestamp of the gyroscope sensor.
6. A time reference real-time calibration system of an image-capturing platform, applied to a mobile terminal, a camera and a gyroscope sensor being provided in the mobile terminal, characterized by comprising:
an acquiring unit, configured to acquire a current frame image and a preceding Nth frame image, the preceding Nth frame image being the Nth frame image before the current frame image, and to take the preceding Nth frame image as a reference frame image;
an extraction unit, connected to the acquiring unit, configured to extract first feature points in the current frame image to form a first feature point set, and second feature points in the reference frame image to form a second feature point set;
a matching unit, connected to the extraction unit, configured to match and filter the first feature points in the first feature point set against the second feature points in the second feature point set, so as to obtain a first feature point pair set;
a first processing unit, connected to the matching unit, configured to obtain an average global displacement of the current frame image according to the first feature point pair set;
a second processing unit, configured to extract gyroscope data of the gyroscope sensor for the current frame image, and obtain an attitude global displacement of the current frame image according to the gyroscope data;
a control unit, connected to the first processing unit and the second processing unit respectively, configured to obtain, according to the average global displacement and the attitude global displacement, a time difference between a timestamp of the camera and a timestamp of the gyroscope sensor by a Kalman filter method, and to calibrate the time difference;
wherein the matching unit performs one-by-one bidirectional matching between the first feature points in the first feature point set of the current frame image and the second feature points in the second feature point set of the reference frame image using a random sample consensus (RANSAC) algorithm, so as to obtain a second feature point pair set, the second feature point pair set comprising a plurality of feature point pairs, each feature point pair comprising one first feature point and one second feature point in one-to-one correspondence with the first feature point; a homography matrix is calculated for each feature point pair in the second feature point pair set, whether the calculated determinant meets a preset condition is judged, and all unmarked first feature point pairs form the first feature point pair set.
7. The time reference real-time calibration system of an image-capturing platform as claimed in claim 6, wherein the preset condition is that the determinant is greater than or equal to 0.7 and less than or equal to 1.3.
8. The time reference real-time calibration system of an image-capturing platform as claimed in claim 6, wherein the first processing unit extracts the first feature points of the current frame image from the first feature point pair set and calculates center position coordinates of the current frame image according to the first feature points; extracts the second feature points of the reference frame image from the first feature point pair set and calculates center position coordinates of the reference frame image according to the second feature points; and calculates the average global displacement of the current frame image according to the center position coordinates of the current frame image and the center position coordinates of the reference frame image.
9. The time reference real-time calibration system of an image-capturing platform as claimed in claim 6, wherein the second processing unit takes a time interval from the reference frame image to the current frame image as a first time parameter, obtains a corresponding pitch angle variation and yaw angle variation according to the gyroscope data, and obtains a first global displacement according to the pitch angle variation and the yaw angle variation; adds the time interval from the reference frame image to the current frame image as an offset to the first time parameter, obtains a corresponding pitch angle variation and yaw angle variation, and obtains a second global displacement according to the pitch angle variation and the yaw angle variation; the attitude global displacement comprising the first global displacement and the second global displacement.
10. The time reference real-time calibration system of an image-capturing platform as claimed in claim 6, wherein the Kalman filter method uses gradient estimation, filter gain estimation, filter update, and variance correction to obtain the time difference between the timestamp of the camera and the timestamp of the gyroscope sensor.
CN201610327892.5A 2016-05-17 2016-05-17 A kind of time reference real-time calibration method and system of Image-capturing platform Active CN106023192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610327892.5A CN106023192B (en) 2016-05-17 2016-05-17 A kind of time reference real-time calibration method and system of Image-capturing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610327892.5A CN106023192B (en) 2016-05-17 2016-05-17 A kind of time reference real-time calibration method and system of Image-capturing platform

Publications (2)

Publication Number Publication Date
CN106023192A CN106023192A (en) 2016-10-12
CN106023192B true CN106023192B (en) 2019-04-09

Family

ID=57097606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610327892.5A Active CN106023192B (en) 2016-05-17 2016-05-17 A kind of time reference real-time calibration method and system of Image-capturing platform

Country Status (1)

Country Link
CN (1) CN106023192B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038736B (en) * 2017-03-17 2021-07-06 腾讯科技(深圳)有限公司 Animation display method based on frame rate and terminal equipment
CN107770437B (en) * 2017-09-08 2020-03-17 温州大学 Unmanned aerial vehicle photography and camera system and displacement compensation mechanism thereof
CN112129317B (en) * 2019-06-24 2022-09-02 南京地平线机器人技术有限公司 Information acquisition time difference determining method and device, electronic equipment and storage medium
CN112514363A (en) * 2019-12-17 2021-03-16 深圳市大疆创新科技有限公司 Image transmission system and method, control device and movable platform
CN112215899B (en) * 2020-09-18 2024-01-30 深圳市瑞立视多媒体科技有限公司 Frame data online processing method and device and computer equipment
CN114399555B (en) * 2021-12-20 2022-11-11 禾多科技(北京)有限公司 Data online calibration method and device, electronic equipment and computer readable medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306804A (en) * 2014-07-31 2016-02-03 北京展讯高科通信技术有限公司 Intelligent terminal and video image stabilizing method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306804A (en) * 2014-07-31 2016-02-03 北京展讯高科通信技术有限公司 Intelligent terminal and video image stabilizing method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Non-Linear Filter for Gyroscope-Based Video Stabilization; Steven Bell et al.; Proceedings of the 13th European Conference on Computer Vision; 2014-12-31; pp. 294-308
Digital Video Stabilization and Rolling Shutter Correction using Gyroscopes; Alexandre Karpenko et al.; Stanford Tech Report CTSR 2011; 2011-12-31; pp. 1-7
Image target matching and localization based on the SIFT algorithm; Fu Weiping et al.; Chinese Journal of Scientific Instrument; 2011-01-31; Vol. 32, No. 1, pp. 163-169

Also Published As

Publication number Publication date
CN106023192A (en) 2016-10-12

Similar Documents

Publication Publication Date Title
CN106023192B (en) A kind of time reference real-time calibration method and system of Image-capturing platform
CN107747941B (en) Binocular vision positioning method, device and system
CN105698765B (en) Object pose method under double IMU monocular visions measurement in a closed series noninertial systems
WO2017164479A1 (en) A device and method for determining a pose of a camera
US11223764B2 (en) Method for determining bias in an inertial measurement unit of an image acquisition device
EP2640057A1 (en) Image processing device, image processing method and program
EP2915139B1 (en) Adaptive scale and gravity estimation
KR101950359B1 (en) Method for position estimation of hybird motion capture system
WO2014022664A2 (en) Method and apparatus for data fusion of a three axis magnetometer and three axis accelerometer
EP2906909A1 (en) Gyroscope conditioning and gyro-camera alignment
CN110411476A (en) Vision inertia odometer calibration adaptation and evaluation method and system
WO2020228453A1 (en) Pose tracking method, pose tracking device and electronic device
CN109669533B (en) Motion capture method, device and system based on vision and inertia
CN107942090B (en) A kind of spacecraft Attitude rate estimator method for extracting Optic flow information based on fuzzy star chart
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
EP3786891A1 (en) Method and system for visual localization based on dual dome cameras
CN111723624A (en) Head motion tracking method and system
CN112414400A (en) Information processing method and device, electronic equipment and storage medium
CN109391755A (en) Picture pick-up device and the method wherein executed
JP2000213953A (en) Navigation device for flying object
KR20130022078A (en) Mobile mapping system and control method
Huai Collaborative slam with crowdsourced data
CN112129287B (en) Method and related device for processing based on visual inertial odometer
CN109462717A (en) Electronic image stabilization method and terminal
CN106441282B (en) A kind of star sensor star tracking method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant