CN105701827A - Method and device for jointly calibrating parameters of visible light camera and infrared camera - Google Patents


Info

Publication number: CN105701827A
Application number: CN201610028981.XA
Authority: CN (China)
Prior art keywords: visible light camera, infrared camera, intrinsic parameters
Legal status: Granted; currently active
Other languages: Chinese (zh)
Other versions: CN105701827B (en)
Inventors: 李波, 汪洋, 耿莹, 蔡宇, 黄艳金
Current assignee: Sino Forest Xinda (Beijing) Science And Technology Information Co Ltd
Original assignee: Sino Forest Xinda (Beijing) Science And Technology Information Co Ltd
Application filed by Sino Forest Xinda (Beijing) Science And Technology Information Co Ltd
Priority to CN201610028981.XA; published as CN105701827A; granted and published as CN105701827B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10032 — Satellite or aerial image; Remote sensing
    • G06T2207/10036 — Multispectral image; Hyperspectral image
    • G06T2207/10048 — Infrared image

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and device for jointly calibrating the parameters of a visible light camera and an infrared camera. The method comprises: performing edge detection on a visible light image and an infrared image to obtain a visible light edge image and an infrared edge image; obtaining at least four matched point pairs between the visible light edge image and the infrared edge image with the scale-invariant feature transform (SIFT) algorithm; and determining an extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera. Because the matched point pairs in the visible light edge image and the infrared edge image are acquired, and the extrinsic parameter matrix determined, without a calibration board, the cost of camera parameter calibration is reduced. Furthermore, when the focal length of a camera changes, the extrinsic parameter matrix can be determined without adjusting the camera position, which increases the efficiency of parameter calibration.

Description

Method and device for jointly calibrating the parameters of a visible light camera and an infrared camera
Technical field
Embodiments of the present invention relate to camera parameter calibration technology, and in particular to a method and device for jointly calibrating the parameters of a visible light camera and an infrared camera.
Background art
In applications such as multispectral photogrammetry, remote sensing, and target surveillance, a visible light camera can capture rich texture information and a thermal infrared imager can capture temperature information, so combined measurement with a visible light camera and an infrared camera is widely used.
In the prior art, a planar calibration board with a heating device is first designed and manufactured, the calibration board is then used to collect corresponding information from visible light images and infrared images, and finally a conventional calibration method is used to obtain the camera parameters.
However, obtaining the camera parameters in this way requires a specially designed and manufactured calibration board, which increases the labor and material costs of obtaining the camera parameters.
Summary of the invention
The present invention provides a method and device for jointly calibrating the parameters of a visible light camera and an infrared camera, so that the camera parameters can be calibrated without a calibration board, reducing the labor and material costs of camera parameter calibration.
In a first aspect, an embodiment of the present invention provides a method for jointly calibrating the parameters of a visible light camera and an infrared camera, including:
performing edge detection on a visible light image and an infrared image to obtain a visible light edge image and an infrared edge image;
using the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs between the visible light edge image and the infrared edge image; and
determining an extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
In a second aspect, an embodiment of the present invention further provides a device for jointly calibrating the parameters of a visible light camera and an infrared camera, including:
an edge detection unit, configured to perform edge detection on a visible light image and an infrared image to obtain a visible light edge image and an infrared edge image;
a matched point pair acquisition unit, configured to use the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs between the visible light edge image and the infrared edge image obtained by the edge detection unit; and
an extrinsic parameter matrix determination unit, configured to determine an extrinsic parameter matrix according to the at least four matched point pairs obtained by the matched point pair acquisition unit, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
The present invention performs edge detection on the visible light image and the infrared image and uses the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs between the resulting visible light edge image and infrared edge image, so the matched point pairs in the visible light edge image and the infrared edge image can be obtained without a calibration board. The extrinsic parameter matrix is then determined according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera, so the extrinsic parameter matrix is determined without using a calibration board, which reduces the cost of camera parameter calibration. In addition, because in the prior art the position of the calibration board must match the camera focal length, the positions of the calibration board and the camera must be adjusted whenever the camera focal length changes, which is cumbersome. In the present invention, when the focal length of a camera (the visible light camera or the infrared camera) changes, the extrinsic parameter matrix can be determined without adjusting the camera position, which improves the efficiency of parameter calibration.
Brief description of the drawings
Fig. 1 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera in embodiment one of the present invention;
Fig. 2 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera in embodiment two of the present invention;
Fig. 3 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera in embodiment three of the present invention;
Fig. 4 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera in embodiment four of the present invention;
Fig. 5 is a schematic structural diagram of the device for jointly calibrating the parameters of a visible light camera and an infrared camera in embodiment five of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera provided by embodiment one of the present invention. This embodiment is applicable to calibrating the parameters of a visible light camera and an infrared camera. The method can be performed by a terminal with data processing capability, such as a personal computer (PC), a notebook computer, or a tablet computer, and specifically includes the following steps:
S110, performing edge detection on the visible light image and the infrared image to obtain a visible light edge image and an infrared edge image.
When performing edge detection on the visible light image, either search-based or zero-crossing-based edge detection can be used. Search-based edge detection methods detect boundaries by finding the maxima and minima of the first derivative of the image, usually locating the boundary in the direction of maximum gradient. Zero-crossing-based edge detection methods find boundaries by looking for zero crossings of the second derivative of the image, usually defining the boundary as the zero crossings of the Laplacian or of a nonlinear difference expression.
Edge detection of the infrared image can be performed with methods such as infrared image edge detection based on multiscale morphology. The image obtained by edge detection represents the edge contours of the original image, and each contour line consists of the pixels lying on it.
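As a concrete illustration of S110, the following is a minimal sketch that applies OpenCV's Canny detector to both images. The patent does not prescribe a specific operator (it mentions search-based, zero-crossing-based, and multiscale-morphology methods), so the choice of Canny, the file names, and the threshold values are assumptions for illustration only.

```python
# Minimal edge-detection sketch for S110 (Canny is an assumption, not the patent's operator).
import cv2

visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file name
infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

visible_edges = cv2.Canny(visible, 50, 150)    # visible light edge image
infrared_edges = cv2.Canny(infrared, 30, 90)   # infrared edge image (lower-contrast thresholds)
```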
S120, using the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs in the overlapping region of the visible light edge image and the infrared edge image.
Scale-invariant feature transform (SIFT) is a computer vision algorithm for detecting and describing local features in an image; it searches for extreme points in scale space and extracts their position, scale, and rotation invariants. Specifically, it can be carried out by the following operations:
(1) Scale-space extremum detection: image positions are searched over all scales, and potential interest points invariant to scale and rotation are identified with a Gaussian differential function.
(2) Keypoint localization: at each candidate position, the location and scale are determined by fitting a fine model. The keypoints are selected according to their stability.
(3) Orientation assignment: one or more orientations are assigned to each keypoint position based on the local gradient directions of the image. All subsequent operations on the image data are performed relative to the orientation, scale, and position of the keypoints, providing invariance to these transformations.
(4) Keypoint description: local image gradients are measured at the selected scale in a neighborhood around each keypoint and transformed into a representation that tolerates relatively large local shape deformation and illumination changes.
First, the overlapping part of the visible light edge image and the infrared edge image is obtained. Then, according to the SIFT algorithm, scale-invariant feature transformation is applied to the overlapping parts of the visible light edge image and of the infrared edge image respectively to obtain the keypoint description information of the visible light edge image and of the infrared edge image. Matched point pairs are determined from the keypoint description information obtained after the transformation. Each matched point pair consists of one point in the visible light edge image and one point in the infrared edge image, and the two points have a common orientation, scale, and position.
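The S120 matching step can be sketched as follows with OpenCV's SIFT implementation and a brute-force matcher; the ratio test and its 0.8 threshold are assumptions not taken from the patent, which only requires that at least four matched point pairs be obtained. The variables visible_edges and infrared_edges are the edge images from the sketch after S110.

```python
# SIFT keypoint detection and matching between the two edge images (sketch for S120).
import cv2

sift = cv2.SIFT_create()
kp_vis, des_vis = sift.detectAndCompute(visible_edges, None)
kp_ir, des_ir = sift.detectAndCompute(infrared_edges, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des_ir, des_vis, k=2)
# Lowe-style ratio test (the 0.8 value is an illustrative assumption).
good = [pair[0] for pair in raw
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance]

# Pixel coordinates of the matched point pairs:
# (x_i, y_i) in the infrared edge image, (x_i', y_i') in the visible light edge image.
ir_pts = [kp_ir[m.queryIdx].pt for m in good]
vis_pts = [kp_vis[m.trainIdx].pt for m in good]
assert len(good) >= 4, "the method needs at least four matched point pairs"
```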
S130, determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
For convenience of description, let the infrared camera be C1 and the visible light camera be C2, let R and t be the rotation matrix and translation vector of C2 relative to C1, let the world coordinates of a point P on the photographed object be (Xw, Yw, Zw), and let P form image points P1 and P2 on the image planes of the two cameras, with pixel coordinates (u1, v1) and (u2, v2) respectively.
Assuming that the camera coordinate system of C1 coincides with the world coordinate system, the relationship between the camera coordinate system of C2 and the world coordinate system can be expressed by the rotation matrix R and the translation vector t of C2 relative to C1 (formula one). Formula one gives the geometric transformation between two images of the same scene acquired from different viewpoints, also called the image transformation model based on camera motion:
$$\begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = \frac{Z_{c1}}{Z_{c2}} K_2 \begin{bmatrix} R & t \end{bmatrix} K_1^{-1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} \qquad \text{(formula one)}$$
Here K1 is the intrinsic parameter matrix of the infrared camera C1, K2 is the intrinsic parameter matrix of the visible light camera C2, R is the rotation matrix of camera C2 relative to camera C1, t is the translation vector of camera C2 relative to camera C1, and [R t] is the extrinsic parameter matrix; R is a 3 × 3 matrix and t is a 1 × 3 matrix. Zc1 and Zc2 are the distances from the point P to the image planes of cameras C1 and C2 respectively. Each distance is the sum of the focal length and the object distance, where the object distance can be measured with a measuring tool (such as a tape measure) or obtained with a distance-measuring sensor such as an ultrasonic sensor.
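The following sketch shows one way to apply formula one to map a single infrared pixel into the visible light image, interpreting K2 [R t] K1^(-1) as: normalize the infrared pixel with K1^(-1), apply R and t, project with K2, and divide by the third component (which absorbs the Zc1/Zc2 scale, consistent with the expansion in formula two below). All numeric values are illustrative assumptions.

```python
# Mapping one infrared pixel (u1, v1) to the visible image (u2, v2) via formula one (sketch).
import numpy as np

K1 = np.array([[800.0, 0, 320.0], [0, 800.0, 256.0], [0, 0, 1.0]])     # infrared intrinsics (assumed)
K2 = np.array([[1200.0, 0, 960.0], [0, 1200.0, 540.0], [0, 0, 1.0]])   # visible intrinsics (assumed)
R = np.eye(3)                      # rotation of C2 relative to C1 (assumed)
t = np.array([0.1, 0.0, 0.0])      # translation of C2 relative to C1 (assumed)

p1 = np.array([400.0, 300.0, 1.0])           # pixel (u1, v1) in the infrared image
m = np.linalg.inv(K1) @ p1                   # normalized infrared coordinates
p2 = K2 @ (R @ m + t)                        # apply [R t], then project with K2
u2, v2 = p2[:2] / p2[2]                      # corresponding pixel (u2, v2) in the visible image
```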
Assume that the coordinates of the four matched feature point pairs obtained in S120 are (x1, y1) and (x1', y1'), (x2, y2) and (x2', y2'), (x3, y3) and (x3', y3'), (x4, y4) and (x4', y4'), with x1 ≠ x2 ≠ x3 ≠ x4, y1 ≠ y2 ≠ y3 ≠ y4, x1' ≠ x2' ≠ x3' ≠ x4', and y1' ≠ y2' ≠ y3' ≠ y4'.
The intrinsic parameter matrix K1 of the infrared camera and the intrinsic parameter matrix K2 of the visible light camera are known and are respectively:
$$K_1 = \begin{bmatrix} f_{cx1} & 0 & c_{x1} \\ 0 & f_{cy1} & c_{y1} \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} f_{cx2} & 0 & c_{x2} \\ 0 & f_{cy2} & c_{y2} \\ 0 & 0 & 1 \end{bmatrix}$$
Here fcx1 is the scale factor along the u axis of the infrared camera image plane, fcy1 is the scale factor along the v axis of the infrared camera image plane, and (cx1, cy1) are the coordinates of the center point of the infrared camera image plane.
fcx2 is the scale factor along the u axis of the visible light camera image plane, fcy2 is the scale factor along the v axis of the visible light camera image plane, and (cx2, cy2) are the coordinates of the center point of the visible light camera image plane.
Substituting K1 and K2 into formula one, the expansion of formula one is:
$$\begin{cases}
x_i' = \left[ \dfrac{1}{f_{cx1}}\left(f_{cx2} r_1 + c_{x2} r_7\right) - \dfrac{c_{x1}}{f_{cx1}}\left(f_{cx2} r_3 + c_{x2} r_9\right) + f_{cx2} t_1 + c_{x2} t_3 \right] x_i + \left[ \dfrac{1}{f_{cy1}}\left(f_{cx2} r_2 + c_{x2} r_8\right) - \dfrac{c_{y1}}{f_{cy1}}\left(f_{cx2} r_3 + c_{x2} r_9\right) + f_{cx2} t_1 + c_{x2} t_3 \right] y_i + f_{cx2} r_3 + c_{x2} r_9 + f_{cx2} t_1 + c_{x2} t_3 \\[2ex]
y_i' = \left[ \dfrac{1}{f_{cx1}}\left(f_{cy2} r_4 + c_{y2} r_7\right) - \dfrac{c_{x1}}{f_{cx1}}\left(f_{cy2} r_6 + c_{y2} r_9\right) + f_{cy2} t_2 + c_{y2} t_3 \right] x_i + \left[ \dfrac{1}{f_{cy1}}\left(f_{cy2} r_5 + c_{y2} r_8\right) - \dfrac{c_{y1}}{f_{cy1}}\left(f_{cy2} r_6 + c_{y2} r_9\right) + f_{cy2} t_2 + c_{y2} t_3 \right] y_i + f_{cy2} r_6 + c_{y2} r_9 + f_{cy2} t_2 + c_{y2} t_3 \\[2ex]
1 = \left( \dfrac{r_7}{f_{cx1}} - \dfrac{c_{x1}}{f_{cx1}} r_9 + t_3 \right) x_i + \left( \dfrac{r_8}{f_{cy1}} - \dfrac{c_{y1}}{f_{cy1}} r_9 + t_3 \right) y_i + r_9 + t_3
\end{cases}
\qquad (i = 1, 2, 3, \ldots, N) \quad \text{(formula two)}$$
N is an integer greater than or equal to 4. Since R is a 3 × 3 matrix and t is a 1 × 3 matrix, there are 12 unknowns. Each matched point pair provides a group of three equations in the three dimensions, so the minimum of 12 equations provided by four matched point pairs is enough to solve for the 12 unknowns.
The coordinates of the four matched point pairs are substituted into the system of linear equations shown in formula two, and R and t are solved. The elements of R are then orthogonalized to obtain the extrinsic parameter matrix.
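A sketch of this solving step is given below: each matched point pair contributes three linear equations in the twelve unknowns r1...r9, t1...t3 (the rows of K2 applied to R K1^(-1)[xi, yi, 1]^T + t, set equal to [xi', yi', 1]^T, consistent with formula two), the stacked system is solved by least squares, and the rotation block is then orthogonalized with an SVD. The function name and the use of NumPy are assumptions for illustration, not the patent's prescribed implementation.

```python
# Solve the linear system of formula two and orthogonalize the rotation block (sketch).
import numpy as np

def solve_extrinsics(ir_pts, vis_pts, K1, K2):
    """ir_pts, vis_pts: (N, 2) arrays of matched pixel coordinates, N >= 4."""
    K1_inv = np.linalg.inv(K1)
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(ir_pts, vis_pts):
        m = K1_inv @ np.array([x, y, 1.0])                  # normalized infrared coordinates
        A = np.hstack([np.kron(np.eye(3), m[None, :]),      # acts on [r1..r9] (row-major R)
                       np.eye(3)])                          # acts on [t1, t2, t3]
        rows.append(K2 @ A)                                 # three equations per matched pair
        rhs.append(np.array([xp, yp, 1.0]))
    M = np.vstack(rows)
    b = np.concatenate(rhs)
    theta, *_ = np.linalg.lstsq(M, b, rcond=None)           # twelve unknowns
    R_raw, t = theta[:9].reshape(3, 3), theta[9:]
    U, _, Vt = np.linalg.svd(R_raw)                         # orthogonalize the rotation block
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R, t
```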
It should be noted that the rotation matrix R in formula one can be expressed by three Euler angles, namely the angle α of rotation about the X axis, the angle β of rotation about the Y axis, and the angle γ of rotation about the Z axis. The expression of R is
$$R = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix} = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}$$
The translation vector t in formula one is expressed as t = [t1 t2 t3]^T, where t1, t2, and t3 are the displacements of C2 relative to C1 along the X, Y, and Z axes respectively.
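For reference, the rotation matrix can be assembled from the three Euler angles exactly as the matrix product written above; this short sketch follows that product (including its sign convention for the middle factor) and returns a matrix whose elements are r1...r9 in row-major order.

```python
# Rotation matrix from the three Euler angles, following the product written in the patent (sketch).
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    Rz = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                   [np.sin(alpha),  np.cos(alpha), 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(beta), 0.0, -np.sin(beta)],
                   [0.0, 1.0, 0.0],
                   [np.sin(beta), 0.0, np.cos(beta)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(gamma), -np.sin(gamma)],
                   [0.0, np.sin(gamma),  np.cos(gamma)]])
    return Rz @ Ry @ Rx   # elements r1..r9 in row-major order
```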
In the prior art, the angle α of rotation about the X axis, the angle β of rotation about the Y axis, the angle γ of rotation about the Z axis, and the displacements t1, t2, and t3 along the X, Y, and Z axes are obtained by means of a calibration board, and the extrinsic parameter matrix [R t] is then obtained by calculation. Because a calibration board must be designed and manufactured, this is relatively costly. The embodiment of the present invention therefore does not measure these physical quantities directly but obtains them by calculation, so no calibration board is needed and the cost of obtaining the extrinsic parameter matrix is reduced.
The technical solution provided by this embodiment performs edge detection on the visible light image and the infrared image and uses the SIFT algorithm to obtain at least four matched point pairs between the resulting visible light edge image and infrared edge image, so the matched point pairs can be obtained without a calibration board. The extrinsic parameter matrix is then determined according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera, so the extrinsic parameter matrix is determined without using a calibration board, which reduces the cost of camera parameter calibration. In addition, because in the prior art the position of the calibration board must match the camera focal length, the positions of the calibration board and the camera must be adjusted whenever the camera focal length changes, which is cumbersome. In this embodiment, when the focal length of a camera (the visible light camera or the infrared camera) changes, the extrinsic parameter matrix can be determined without adjusting the camera position, which improves the efficiency of parameter calibration.
Embodiment two
Fig. 2 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera provided by embodiment two of the present invention. Preferably, after S120 (using the SIFT algorithm to obtain at least four matched point pairs in the overlapping region of the visible light edge image and the infrared edge image), the method further includes:
S140, screening the at least four matched point pairs with the random sample consensus (RANSAC) algorithm.
The random sample consensus (RANSAC) algorithm achieves its goal by repeatedly selecting a random subset of the data. The selected subset is assumed to consist of inliers and is verified by the following method (a concrete realization is sketched after this list):
S1, construct a prediction model adapted to the assumed inliers. The prediction model can be generated from the at least four matched point pairs obtained in S120.
S2, test all the other data against the model obtained in S1; if a point fits the estimated model, it is also considered an inlier. The other data are the matched point pairs other than those used to construct the model.
S3, if enough points are classified as assumed inliers, the estimated model is considered reasonable.
S4, re-estimate the model with all the assumed inliers, because the model was initially estimated only from the initial assumed inliers.
S5, finally, evaluate the model by the error rate of the inliers with respect to the model.
S1 to S5 are repeated a fixed number of times; each model produced is either rejected because it has too few inliers or selected because it is better than the existing model.
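One concrete way to realize the S1 to S5 loop of S140 is OpenCV's RANSAC-based homography estimation, whose inlier mask screens the matched point pairs; the sketch below assumes the matched pixel lists from the SIFT sketch after S120 and an illustrative reprojection threshold of 5 pixels. The patent describes RANSAC generically, so this is only one possible realization.

```python
# Screening the matched point pairs with RANSAC via a homography model (sketch for S140).
import cv2
import numpy as np

def ransac_screen(ir_pts, vis_pts, threshold=5.0):
    """ir_pts, vis_pts: lists of matched (x, y) pixel coordinates, e.g. from the SIFT step."""
    ir_arr = np.float32(ir_pts).reshape(-1, 1, 2)
    vis_arr = np.float32(vis_pts).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(ir_arr, vis_arr, cv2.RANSAC, threshold)
    keep = mask.ravel().astype(bool)
    ir_inliers = [p for p, k in zip(ir_pts, keep) if k]
    vis_inliers = [p for p, k in zip(vis_pts, keep) if k]
    return ir_inliers, vis_inliers
```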
Accordingly, S130 (determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera) can be carried out by the following operation:
S130', determining the extrinsic parameter matrix according to the at least four matched point pairs obtained by the screening, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
With the technical solution provided by this embodiment, pixels outside the edge lines that are misidentified as edge points can be excluded from the edge images by the random sample consensus algorithm, which avoids the inaccurate calculation of the extrinsic parameter matrix caused by pixels outside the edge lines being mistaken for points on the edge lines and improves the accuracy of the extrinsic parameter matrix.
Embodiment three
Fig. 3 is a flowchart of the method for jointly calibrating the parameters of a visible light camera and an infrared camera provided by embodiment three of the present invention. Optionally, before S130 (determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera), the method further includes:
S101a, determining the size factor of the visible light camera according to the focal length of the visible light camera.
The size factor of the visible light camera includes the u-axis size factor cx2 and the v-axis size factor cy2. The u-axis size factor cx2 is the u-axis focal length fcx2 divided by the v-axis focal length fcy2, and the v-axis size factor cy2 is the v-axis focal length fcy2 divided by the u-axis focal length fcx2.
S102a, determining the intrinsic parameter matrix of the visible light camera according to the size factor of the visible light camera and the image center coordinates of the visible light camera.
From the u-axis size factor cx2, the v-axis size factor cy2, the u-axis focal length fcx2, and the v-axis focal length fcy2 of the visible light camera, the intrinsic parameter matrix K2 of the visible light camera is generated:
$$K_2 = \begin{bmatrix} f_{cx2} & 0 & c_{x2} \\ 0 & f_{cy2} & c_{y2} \\ 0 & 0 & 1 \end{bmatrix}$$
S101b, determining the size factor of the infrared camera according to the focal length of the infrared camera.
The size factor of the infrared camera includes the u-axis size factor cx1 and the v-axis size factor cy1. The u-axis size factor cx1 is the u-axis focal length fcx1 divided by the v-axis focal length fcy1, and the v-axis size factor cy1 is the v-axis focal length fcy1 divided by the u-axis focal length fcx1.
S102b, determining the intrinsic parameter matrix of the infrared camera according to the size factor of the infrared camera and the image center coordinates of the infrared camera.
From the u-axis size factor cx1, the v-axis size factor cy1, the u-axis focal length fcx1, and the v-axis focal length fcy1 of the infrared camera, the intrinsic parameter matrix K1 of the infrared camera is generated:
$$K_1 = \begin{bmatrix} f_{cx1} & 0 & c_{x1} \\ 0 & f_{cy1} & c_{y1} \\ 0 & 0 & 1 \end{bmatrix}$$
With the technical solution provided by this embodiment, the intrinsic parameter matrix of the visible light camera is determined from the focal length of the visible light camera and the intrinsic parameter matrix of the infrared camera from the focal length of the infrared camera, so the intrinsic parameter matrices can be obtained quickly and processing efficiency is improved.
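A minimal sketch of assembling the intrinsic parameter matrices from the axis scale factors and the image center coordinates is given below; the numeric values are illustrative assumptions, not values from the patent.

```python
# Assembling K1 and K2 from axis scale factors and image-plane centers (sketch).
import numpy as np

def intrinsic_matrix(fcx, fcy, cx, cy):
    """3x3 intrinsic matrix from the u/v axis scale factors and the image center."""
    return np.array([[fcx, 0.0, cx],
                     [0.0, fcy, cy],
                     [0.0, 0.0, 1.0]])

# Illustrative values only: scale factors in pixels and image centers in pixels.
K2 = intrinsic_matrix(fcx=1200.0, fcy=1200.0, cx=960.0, cy=540.0)   # visible light camera
K1 = intrinsic_matrix(fcx=800.0,  fcy=800.0,  cx=320.0, cy=256.0)   # infrared camera
```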
Embodiment four
In actual shooting, the center of the photographed object in the image plane does not coincide with the geometric center of the image plane, so an intrinsic parameter matrix determined only from the camera specifications cannot accurately represent the actual center, which introduces error. For this reason, an embodiment of the present invention further provides a method for jointly calibrating the parameters of a visible light camera and an infrared camera as a further elaboration of the above embodiments. As shown in Fig. 4, after S130 (determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera), the method further includes:
S150, performing a preset number of position changes on the visible light camera or the infrared camera.
The preset number (M − 1) is greater than or equal to 1. The position of either the visible light camera or the infrared camera can be changed, or the positions of both cameras can be changed at the same time. Each position change produces a position relationship, and the rotation matrix R corresponding to each position relationship is different.
Further, when changing the position of the visible light camera or the infrared camera, the image plane center of the camera can be aligned with the center of the photographed object, so that the x-axis component t1 of the translation vector t = [t1 t2 t3]^T is zero, which reduces the amount of calculation and improves computational efficiency.
S160, obtaining at least six matched point pairs corresponding to each position change.
After the (M − 1) position changes, the matched point pairs obtained after each change, together with those obtained at the original position, form M sets of matched point pairs, and each set consists of at least six matched point pairs.
S170, optimizing the intrinsic parameter matrix of the visible light camera, the intrinsic parameter matrix of the infrared camera, and the extrinsic parameter matrix according to all the matched point pairs corresponding to the preset number of position changes.
Further, S170 can be carried out by the following operations:
S171, determining the intrinsic parameter matrix of the visible light camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the visible light camera.
S172, determining the intrinsic parameter matrix of the infrared camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the infrared camera.
S173, determining the extrinsic parameter matrix for which the difference between the visible light image and the infrared image is minimal as the optimized extrinsic parameter matrix.
Assume that the M acquisition positions yield M sets of matched point pairs in total, with the visible light image feature points and the corresponding infrared image feature points recorded for each set, and that the homography matrix between the two cameras is expressed in terms of K2, the extrinsic parameters [R, t], and K1. Since the visible light camera and the infrared camera are usually mounted on the same wall, the optical axes of the two cameras are parallel and the actual distance between the cameras is much smaller than the distance to the photographed object, so the homography can be simplified accordingly. An optimization objective function is designed over all the matched point pairs; by solving this minimization problem, the visible light camera intrinsic parameters K2, the infrared camera intrinsic parameters K1, and the extrinsic parameter matrix [R, t] can be obtained with higher precision.
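Since the exact objective function is not reproduced in the text above, the following sketch assumes a plausible form of the joint refinement: pack K1, K2, the Euler angles of R, and t into one parameter vector, map every infrared feature point into the visible light image through the relation of formula one, and minimize the pixel residuals over all M position groups with a least-squares solver. The parameterization, the residual form, and the use of SciPy are assumptions for illustration, not the patent's exact objective.

```python
# Joint refinement of K1, K2, R and t over all M groups of matched point pairs (sketch).
import numpy as np
from scipy.optimize import least_squares

def unpack(theta):
    fcx1, fcy1, cx1, cy1, fcx2, fcy2, cx2, cy2 = theta[:8]
    a, b, g = theta[8:11]                      # Euler angles of R (the patent's alpha, beta, gamma)
    t = theta[11:14]
    K1 = np.array([[fcx1, 0, cx1], [0, fcy1, cy1], [0, 0, 1.0]])
    K2 = np.array([[fcx2, 0, cx2], [0, fcy2, cy2], [0, 0, 1.0]])
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1.0]])
    Ry = np.array([[np.cos(b), 0, -np.sin(b)], [0, 1.0, 0], [np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1.0, 0, 0], [0, np.cos(g), -np.sin(g)], [0, np.sin(g), np.cos(g)]])
    return K1, K2, Rz @ Ry @ Rx, t

def residuals(theta, ir_pts_sets, vis_pts_sets):
    K1, K2, R, t = unpack(theta)
    K1_inv = np.linalg.inv(K1)
    res = []
    for ir_pts, vis_pts in zip(ir_pts_sets, vis_pts_sets):
        ones = np.ones((len(ir_pts), 1))
        m = (K1_inv @ np.hstack([ir_pts, ones]).T).T     # normalized infrared coordinates
        proj = (K2 @ (m @ R.T + t).T).T                  # map into the visible image
        proj = proj[:, :2] / proj[:, 2:3]
        res.append((proj - vis_pts).ravel())
    return np.concatenate(res)

# ir_pts_sets / vis_pts_sets: lists of (N_k x 2) arrays, one per camera position (M groups).
# theta0: initial guess built from the coarse K1, K2, R (as Euler angles) and t.
# result = least_squares(residuals, theta0, args=(ir_pts_sets, vis_pts_sets))
# K1_opt, K2_opt, R_opt, t_opt = unpack(result.x)
```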
With the technical solution provided by this embodiment, the visible light camera intrinsic parameters K2, the infrared camera intrinsic parameters K1, and the extrinsic parameter matrix [R, t] can be optimized by solving the optimization objective function, yielding a more accurate extrinsic parameter matrix and more accurate intrinsic parameter matrices.
Embodiment five
Fig. 5 is a schematic structural diagram of the device 1 for jointly calibrating the parameters of a visible light camera and an infrared camera provided by embodiment five of the present invention, which is used to implement the methods described in the above embodiments. The device 1 is arranged in a terminal with data processing capability, and the device 1 for jointly calibrating the parameters of a visible light camera and an infrared camera includes:
an edge detection unit 11, configured to perform edge detection on a visible light image and an infrared image to obtain a visible light edge image and an infrared edge image;
a matched point pair acquisition unit 12, configured to use the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs in the overlapping region of the visible light edge image and the infrared edge image obtained by the edge detection unit 11; and
an extrinsic parameter matrix determination unit 13, configured to determine the extrinsic parameter matrix according to the at least four matched point pairs obtained by the matched point pair acquisition unit 12, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
Further, the device 1 also includes:
a screening unit 14, configured to screen, with the random sample consensus (RANSAC) algorithm, the at least four matched point pairs obtained by the matched point pair acquisition unit 12.
Accordingly, the extrinsic parameter matrix determination unit 13 is specifically configured to:
determine the extrinsic parameter matrix according to the at least four matched point pairs obtained by the screening of the screening unit 14, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
Further, the device 1 also includes an intrinsic parameter determination unit 15 configured to:
determine the size factor of the visible light camera according to the focal length and pixel size of the visible light camera;
determine the intrinsic parameter matrix of the visible light camera according to the size factor of the visible light camera and the image center coordinates of the visible light camera;
determine the size factor of the infrared camera according to the focal length and pixel size of the infrared camera; and
determine the intrinsic parameter matrix of the infrared camera according to the size factor of the infrared camera and the image center coordinates of the infrared camera.
Further, the device 1 also includes:
a position change unit 16, configured to perform a preset number of position changes on the visible light camera or the infrared camera;
a matched point pair change acquisition unit 17, configured to obtain at least six matched point pairs corresponding to each position change; and
an optimization unit 18, configured to optimize the intrinsic parameter matrix of the visible light camera, the intrinsic parameter matrix of the infrared camera, and the extrinsic parameter matrix according to all the matched point pairs corresponding to the preset number of position changes.
Further, the optimization unit 18 is specifically configured to:
determine the intrinsic parameter matrix of the visible light camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the visible light camera;
determine the intrinsic parameter matrix of the infrared camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the infrared camera; and
determine the extrinsic parameter matrix for which the difference between the visible light image and the infrared image is minimal as the optimized extrinsic parameter matrix.
The above device 1 can perform the methods provided by embodiments one to four of the present invention and has the corresponding functional modules and beneficial effects for performing those methods. For technical details not described in detail in this embodiment, refer to the methods provided by embodiments one to four of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments and may include other equivalent embodiments without departing from the concept of the present invention; the scope of the present invention is determined by the appended claims.

Claims (10)

1. A method for jointly calibrating the parameters of a visible light camera and an infrared camera, characterized by comprising:
performing edge detection on a visible light image and an infrared image to obtain a visible light edge image and an infrared edge image;
using the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs in the overlapping region of the visible light edge image and the infrared edge image; and
determining an extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
2. The method for jointly calibrating the parameters of a visible light camera and an infrared camera according to claim 1, characterized in that, after using the scale-invariant feature transform (SIFT) algorithm to obtain the at least four matched point pairs in the overlapping region of the visible light edge image and the infrared edge image, the method further comprises:
screening the at least four matched point pairs with the random sample consensus (RANSAC) algorithm;
accordingly, determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera comprises:
determining the extrinsic parameter matrix according to the at least four matched point pairs obtained by the screening, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
3. The method for jointly calibrating the parameters of a visible light camera and an infrared camera according to claim 1, characterized in that, before determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera, the method further comprises:
determining the size factor of the visible light camera according to the focal length and pixel size of the visible light camera;
determining the intrinsic parameter matrix of the visible light camera according to the size factor of the visible light camera and the image center coordinates of the visible light camera;
determining the size factor of the infrared camera according to the focal length and pixel size of the infrared camera; and
determining the intrinsic parameter matrix of the infrared camera according to the size factor of the infrared camera and the image center coordinates of the infrared camera.
4. The method for jointly calibrating the parameters of a visible light camera and an infrared camera according to any one of claims 1 to 3, characterized in that, after determining the extrinsic parameter matrix according to the at least four matched point pairs, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera, the method further comprises:
performing a preset number of position changes on the visible light camera or the infrared camera;
obtaining at least six matched point pairs corresponding to each position change; and
optimizing the intrinsic parameter matrix of the visible light camera, the intrinsic parameter matrix of the infrared camera, and the extrinsic parameter matrix according to all the matched point pairs corresponding to the preset number of position changes.
5. The method for jointly calibrating the parameters of a visible light camera and an infrared camera according to claim 4, characterized in that optimizing the intrinsic parameter matrix of the visible light camera, the intrinsic parameter matrix of the infrared camera, and the extrinsic parameter matrix according to all the matched point pairs corresponding to the preset number of position changes comprises:
determining the intrinsic parameter matrix of the visible light camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the visible light camera;
determining the intrinsic parameter matrix of the infrared camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the infrared camera; and
determining the extrinsic parameter matrix for which the difference between the visible light image and the infrared image is minimal as the optimized extrinsic parameter matrix.
6. A device for jointly calibrating the parameters of a visible light camera and an infrared camera, characterized by comprising:
an edge detection unit, configured to perform edge detection on a visible light image and an infrared image to obtain a visible light edge image and an infrared edge image;
a matched point pair acquisition unit, configured to use the scale-invariant feature transform (SIFT) algorithm to obtain at least four matched point pairs in the overlapping region of the visible light edge image and the infrared edge image obtained by the edge detection unit; and
an extrinsic parameter matrix determination unit, configured to determine an extrinsic parameter matrix according to the at least four matched point pairs obtained by the matched point pair acquisition unit, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
7. The device for jointly calibrating the parameters of a visible light camera and an infrared camera according to claim 6, characterized by further comprising:
a screening unit, configured to screen, with the random sample consensus (RANSAC) algorithm, the at least four matched point pairs obtained by the matched point pair acquisition unit;
accordingly, the extrinsic parameter matrix determination unit is specifically configured to:
determine the extrinsic parameter matrix according to the at least four matched point pairs obtained by the screening of the screening unit, the intrinsic parameter matrix of the visible light camera, and the intrinsic parameter matrix of the infrared camera.
8. The device for jointly calibrating the parameters of a visible light camera and an infrared camera according to claim 6, characterized by further comprising an intrinsic parameter determination unit configured to:
determine the size factor of the visible light camera according to the focal length and pixel size of the visible light camera;
determine the intrinsic parameter matrix of the visible light camera according to the size factor of the visible light camera and the image center coordinates of the visible light camera;
determine the size factor of the infrared camera according to the focal length and pixel size of the infrared camera; and
determine the intrinsic parameter matrix of the infrared camera according to the size factor of the infrared camera and the image center coordinates of the infrared camera.
9. The device for jointly calibrating the parameters of a visible light camera and an infrared camera according to any one of claims 6 to 8, characterized by further comprising:
a position change unit, configured to perform a preset number of position changes on the visible light camera or the infrared camera;
a matched point pair change acquisition unit, configured to obtain at least six matched point pairs corresponding to each position change; and
an optimization unit, configured to optimize the intrinsic parameter matrix of the visible light camera, the intrinsic parameter matrix of the infrared camera, and the extrinsic parameter matrix according to all the matched point pairs corresponding to the preset number of position changes.
10. The device for jointly calibrating the parameters of a visible light camera and an infrared camera according to claim 9, characterized in that the optimization unit is specifically configured to:
determine the intrinsic parameter matrix of the visible light camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the visible light camera;
determine the intrinsic parameter matrix of the infrared camera for which the difference between the visible light image and the infrared image is minimal as the optimized intrinsic parameter matrix of the infrared camera; and
determine the extrinsic parameter matrix for which the difference between the visible light image and the infrared image is minimal as the optimized extrinsic parameter matrix.
CN201610028981.XA 2016-01-15 2016-01-15 Method and device for jointly calibrating parameters of visible light camera and infrared camera Active CN105701827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610028981.XA CN105701827B (en) 2016-01-15 2016-01-15 Method and device for jointly calibrating parameters of visible light camera and infrared camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610028981.XA CN105701827B (en) 2016-01-15 2016-01-15 Method and device for jointly calibrating parameters of visible light camera and infrared camera

Publications (2)

Publication Number Publication Date
CN105701827A true CN105701827A (en) 2016-06-22
CN105701827B CN105701827B (en) 2019-04-02

Family

ID=56227444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610028981.XA Active CN105701827B (en) 2016-01-15 2016-01-15 Method and device for jointly calibrating parameters of visible light camera and infrared camera

Country Status (1)

Country Link
CN (1) CN105701827B (en)


Also Published As

Publication number Publication date
CN105701827B (en) 2019-04-02


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant