Tunnel image stitching method and system
Technical field
The invention belongs to the technical field of computer vision, and in particular to tunnel image stitching, by which a wide-view-angle image of a tunnel scene can be obtained.
Background art
Because a tunnel lining cross-section is generally large, full coverage of the lining surface must be achieved with multiple area-array cameras in order to reach a high crack-detection precision. A certain overlapping region exists between every two adjacent cameras. In order to display the defects on the tunnel lining surface intuitively, the image data of the multiple area-array cameras need to be stitched, that is, several images with overlapping regions and different viewing angles are combined into one large seamless high-definition image.
Tunnel image stitching takes multiple images of the same tunnel cross-section, captured simultaneously and sharing overlapping regions, and registers, fuses, and stitches them into one wide-view-angle picture covering the cross-section. Current image stitching techniques mainly include time-domain methods and frequency-domain methods such as phase correlation; among them, feature-based matching is currently the most common approach. The parametric solution method proposed by G.P.Stein achieves high precision, but the computational load of detecting textural features between consecutive frames cannot meet the requirement of fast tunnel image stitching. Feature-point matching, for example with detectors such as SIFT, is also computationally expensive in the matching process, so the stitching speed requirement cannot be met; moreover, the fire-resistant coating or cement surface of a tunnel has a uniform texture with few features, so feature-point-based methods often fail on this class of tunnel image.
In reality the tunnel wall surface is generally smooth and its texture rather uniform: apart from regions containing defects such as cracks and water stains, and facilities mounted on the tunnel wall, other tunnel regions contain few features. A method relying only on SIFT-like feature-point matching therefore cannot complete the stitching of a whole tunnel, nor can it meet the stitching requirement for the huge data volume of tunnel imagery.
In order to guarantee a high resolution, tunnel images are acquired with different cameras and different focal lengths. The images to be stitched therefore come from numerous camera and lens combinations, which poses a considerable challenge to the robustness of the stitching.
Summary of the invention
In view of the defects of the prior art, the present invention provides a fast tunnel image stitching method and system.
To achieve the above object, the invention provides a tunnel image stitching method for stitching the images collected by the cameras of a tunnel image acquisition system. The acquisition system comprises multiple cameras and a laser scanner; while the collection vehicle is driving, all cameras expose simultaneously to obtain the tunnel images of the corresponding cross-section, and the tunnel images obtained by a single exposure of all cameras at a given cross-section are stitched. The method comprises the following steps:
Step 1, image normalization: based on the data of the laser scanner and the relative position of camera and laser scanner obtained by calibration in advance, calculate the object distance corresponding to each camera, derive normalization coefficients, and normalize the image collected by each camera to obtain images to be stitched with a consistent resolution.
The normalization coefficients are derived, and the image collected by each camera normalized, as follows.
First, the resolution of the image shot by each camera is obtained with the following formula:
r = dpi × s / f
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots.
Then, after the resolution of each camera's image is obtained, a reference resolution is chosen and the corresponding camera is designated the reference camera; the ratio of each other camera's resolution to the reference resolution is the normalization coefficient.
Finally, according to the computed normalization coefficients, the width and height of each other camera's image are multiplied by the corresponding coefficient, yielding the normalized images.
Step 2, cylindrical projection: according to the cylindrical projection formula, project all images to be stitched obtained in Step 1 onto a cylinder, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera.
The inverse of the cylindrical projection formula is as follows:
x = f·tan(x′/s),  y = (y′/s)·√(f² + x²)
where (x, y) is a point on the original image, (x′, y′) is the corresponding point after projection (both measured from the image centre), f is the focal length of the camera lens, and s is the projection radius.
Step 3, feature matching: obtain matching points by feature matching; for the parts where feature matching is invalid, obtain matching points from calibration; and reject the interference of mismatched point pairs.
Step 4, transformation matrix calculation: from the matching points obtained in Step 3, calculate an initial 3×3 transformation matrix, simplify it to extract the translation parameters, and obtain the final transformation matrix between images.
Step 5, coordinate unification: using the final transformation matrices of Step 4, transform the images to be stitched from Step 2 into the reference image coordinate system in turn, forming one wide-view-angle image.
Step 6, image fusion: fuse the overlapping regions of the images to be stitched to obtain the smooth and seamless stitched tunnel image.
Furthermore, in Step 3, matching points are obtained from calibration as follows: a tunnel model is built in advance to provide a calibration field, calibration sheets are pasted on the tunnel model, images of it are collected, and feature points are extracted from the collected adjacent images to obtain calibration points. When a part with invalid feature matching appears, the calibration points of that part are processed with the corresponding normalization coefficients from Step 1 and then transformed onto the cylindrically projected image in the manner of Step 2, yielding the final matching points.
Furthermore, in Step 4, the initial transformation matrix is a homography matrix.
Furthermore, in Step 6, the image fusion adopts Laplacian pyramid fusion.
Furthermore, according to the stitching results of the single-cross-section images, the overlapping parts between adjacent cross-sections are removed, yielding the stitching result of the whole tunnel image.
The invention also provides a tunnel image stitching system for stitching the images collected by the cameras of a tunnel image acquisition system. The acquisition system comprises multiple cameras and a laser scanner; while the collection vehicle is driving, all cameras expose simultaneously to obtain the tunnel images of the corresponding cross-section, and the tunnel images obtained by a single exposure of all cameras at a given cross-section are stitched. The system comprises the following modules:
An image normalization module, for calculating, based on the data of the laser scanner and the relative position of camera and laser scanner obtained by calibration in advance, the object distance corresponding to each camera, deriving normalization coefficients, and normalizing the image collected by each camera to obtain images to be stitched with a consistent resolution.
The normalization coefficients are derived, and the image collected by each camera normalized, as follows.
First, the resolution of the image shot by each camera is obtained with the following formula:
r = dpi × s / f
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots.
Then, after the resolution of each camera's image is obtained, a reference resolution is chosen and the corresponding camera is designated the reference camera; the ratio of each other camera's resolution to the reference resolution is the normalization coefficient.
Finally, according to the computed normalization coefficients, the width and height of each other camera's image are multiplied by the corresponding coefficient, yielding the normalized images.
A cylindrical projection module, for projecting, according to the cylindrical projection formula, all images to be stitched obtained by the image normalization module onto a cylinder, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera.
The inverse of the cylindrical projection formula is as follows:
x = f·tan(x′/s),  y = (y′/s)·√(f² + x²)
where (x, y) is a point on the original image, (x′, y′) is the corresponding point after projection (both measured from the image centre), f is the focal length of the camera lens, and s is the projection radius.
A feature matching module, for obtaining matching points by feature matching, obtaining matching points from calibration for the parts where feature matching is invalid, and rejecting the interference of mismatched point pairs.
A transformation matrix module, for calculating from the matching points obtained by the feature matching module an initial 3×3 transformation matrix, simplifying it to extract the translation parameters, and obtaining the final transformation matrix between images.
A coordinate unification module, for transforming, using the final transformation matrices of the transformation matrix module, the images to be stitched obtained by the cylindrical projection module into the reference image coordinate system in turn, forming one wide-view-angle image.
An image fusion module, for fusing the overlapping regions of the images to be stitched to obtain the smooth and seamless stitched tunnel image.
Furthermore, in the feature matching module, matching points are obtained from calibration as follows: a tunnel model is built in advance to provide a calibration field, calibration sheets are pasted on the tunnel model, images of it are collected, and feature points are extracted from the collected adjacent images to obtain calibration points. When a part with invalid feature matching appears, the calibration points of that part are processed with the corresponding normalization coefficients from the image normalization module and then transformed onto the cylindrically projected image in the manner of the cylindrical projection module, yielding the final matching points.
Furthermore, in the transformation matrix module, the initial transformation matrix is a homography matrix.
Furthermore, in the image fusion module, the image fusion adopts Laplacian pyramid fusion.
Furthermore, an output module is provided for removing, according to the stitching results of the single-cross-section images, the overlapping parts between adjacent cross-sections, yielding the stitching result of the whole tunnel image.
In view of the actual conditions in China, the present invention discloses a tunnel image stitching scheme that combines feature matching with photogrammetric geometry. It has the advantages of fast stitching speed, high stitching precision, high robustness and strong practicality; the stitched tunnel image retains the resolution of the raw data, which facilitates subsequent tunnel defect detection.
Brief description of the drawings
Fig. 1 is the flowchart of an embodiment of the present invention.
Fig. 2 is the normalization schematic diagram of the embodiment of the present invention.
Fig. 3 is the transformation-parameter extraction schematic diagram of the embodiment of the present invention.
Fig. 4 is the translation-parameter extraction schematic diagram of the embodiment of the present invention.
Detailed description of the embodiments
To make the object, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with an embodiment and with reference to the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the invention.
The present invention is a fast tunnel image stitching method combining feature-point matching with artificial calibration-point matching. As shown in Fig. 1, the implementation of the embodiment can be summarized as the following steps, each implemented as follows.
Step 1: image normalization. Based on the data of the laser scanner and the relative position of camera and scanner obtained by calibration in advance, calculate the object distance corresponding to each camera, derive normalization coefficients, and normalize the images to obtain images to be stitched with a consistent resolution.
The tunnel image acquisition system used as the data source of the present invention is a vehicle-mounted system comprising multiple cameras (for example 16) and a laser scanner. The cameras are mounted on a roughly semicircular arc frame placed on the vehicle perpendicular to the driving direction. While the collection vehicle is driving, the cameras are triggered at a fixed spacing; the tunnel images obtained by one simultaneous exposure of all cameras constitute one cross-section and contain the information of that section of the tunnel. The laser scanner is generally placed below the middle of the arc frame. In order to guarantee that the collected tunnel images have a high resolution and to facilitate subsequent defect detection, cameras of several models and lenses of several focal lengths are used; the matched combinations of camera and lens form the acquisition system, so the collected source data contain tunnel images of different resolutions. For the subsequent stitching, the images of different resolutions are normalized, i.e. brought to a unified resolution.
Referring to Fig. 2, to obtain the resolution of the image collected by each camera, the object distance at collection time is needed. The extrinsic calibration between camera and laser is carried out in advance, giving the extrinsic matrix of camera and laser, i.e. the relative position of camera and scanner obtained by calibration. From the laser scanner data, the distance from the scanner to the tunnel wall at collection time is known; through the extrinsic matrix and the camera intrinsic matrix, the laser coordinate system is transformed into the camera coordinate system and projected, and the object distance corresponding to each camera at collection time (the distance from the camera to the tunnel wall) is resolved, one object distance per camera. Taking the camera index as abscissa and the object distance as ordinate, a rectangular coordinate system is established and all its points (one point per camera) are fitted so that the fitted points lie on a smooth curve; fitting the resolved object distances in this way, i.e. fitting the laser data, eliminates the interference of noise. The intrinsic parameters of the cameras having been calibrated in advance, the focal length of each camera is known, and the resolution of the image shot by each camera is obtained with the following formula:
r = dpi × s / f
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD (charge-coupled device), f is the focal length of the camera, and s is the object distance when the camera shoots.
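The object-distance smoothing and resolution formula above can be sketched as follows; this is a minimal illustration assuming a cubic fit and invented pixel size, focal length and distances, not calibration values of the actual acquisition system.

```python
# Minimal sketch of the object-distance smoothing and resolution formula
# described above. Pixel size, focal length and distances are assumed
# illustrative values, not calibration data from the real system.
import numpy as np

def fit_object_distances(raw_distances, degree=3):
    """Fit a smooth polynomial over (camera index, object distance) points
    to suppress noise in the distances resolved from the laser scan."""
    idx = np.arange(len(raw_distances))
    coeffs = np.polyfit(idx, raw_distances, degree)
    return np.polyval(coeffs, idx)

def image_resolution(pixel_size, focal_length, object_distance):
    """r = dpi * s / f: ground footprint of one pixel on the tunnel wall."""
    return pixel_size * object_distance / focal_length

# 16 cameras on the arc frame; distances in mm with simulated noise
rng = np.random.default_rng(0)
raw = 5000 + 300 * np.sin(np.linspace(0, np.pi, 16)) + rng.normal(0, 5, 16)
smoothed = fit_object_distances(raw)
res_cam0 = image_resolution(0.005, 50.0, smoothed[0])  # mm of wall per pixel
```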
After the resolution of each camera's image is obtained, one of them is chosen as the reference resolution: for example the lowest, or the highest, or any one of them. The embodiment takes the lowest of all resolutions as the reference, and the camera whose image has the lowest resolution is the reference camera. The ratio of each other image's resolution to this reference resolution is its normalization coefficient. The width and height of each image are multiplied by its normalization coefficient to obtain the normalized image; the reference camera itself can be considered to have a normalization coefficient of 1, so the multiplication is applied to the images of the cameras other than the reference camera.
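A hedged sketch of the normalization step described above: the lowest-resolution camera (largest ground footprint per pixel) serves as the reference, and each other image's width and height are scaled by the ratio of its resolution to the reference. The resolutions and image size below are assumptions for illustration.

```python
# Sketch of the normalization-coefficient computation; values are assumed.
import numpy as np

def normalization_coefficients(resolutions):
    """resolutions[i]: ground size of one pixel for camera i (mm/pixel);
    a larger value means a lower-resolution image."""
    resolutions = np.asarray(resolutions, dtype=float)
    reference = resolutions.max()   # lowest resolution is the reference
    return resolutions / reference  # reference camera gets coefficient 1

def normalized_size(width, height, coeff):
    """Scale image dimensions by the normalization coefficient."""
    return (int(round(width * coeff)), int(round(height * coeff)))

res = [0.50, 0.40, 0.45, 0.50]          # mm per pixel for four cameras
coeffs = normalization_coefficients(res)
sizes = [normalized_size(4096, 3072, c) for c in coeffs]
```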
Step 2: cylindrical projection. According to the cylindrical projection formula, project all images to be stitched onto a cylinder so that the captured images satisfy visual consistency.
In order to maintain the spatial constraints of the actual scene, the normalized tunnel images are uniformly mapped onto a standard cylindrical projection surface, yielding a cylindrical image sequence. For example, the 16 cameras of the tunnel image acquisition system are labelled from left to right as camera 1, camera 2, ..., camera 16; after normalization and projection, the images they capture at a given cross-section become, in turn, the 1st, 2nd, ..., 16th image of the cylindrical image sequence.
Since the resolutions were unified in Step 1, the focal length and projection radius of the cylindrical projection here are those corresponding to the reference camera used when resolving the normalization coefficients; the projection radius of the reference camera equals its focal length. Using the inverse of the cylindrical projection formula,
x = f·tan(x′/s),  y = (y′/s)·√(f² + x²)
the cylindrically projected version of every original image can be obtained, where (x, y) is a point on the original image, (x′, y′) is the corresponding point after projection (both measured from the image centre), f is the focal length of the camera lens, and s is the projection radius.
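A numpy sketch of backward warping with the inverse cylindrical formula above: for every pixel of the cylinder image the source pixel of the original image is computed (coordinates taken from the image centre) and sampled with nearest-neighbour interpolation. The focal length and projection radius used in the demo are arbitrary assumptions.

```python
# Backward-mapping sketch of the cylindrical projection described above.
import numpy as np

def cylindrical_warp(img, f, s):
    """Map each cylinder pixel (x', y') back to the original image pixel
    (x, y) with x = f*tan(x'/s), y = (y'/s)*sqrt(f^2 + x^2)."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    yp, xp = np.mgrid[0:h, 0:w].astype(float)
    xc, yc = (w - 1) / 2.0, (h - 1) / 2.0   # image centre
    x = f * np.tan((xp - xc) / s)
    y = ((yp - yc) / s) * np.sqrt(f * f + x * x)
    xi = np.rint(x + xc).astype(int)         # nearest-neighbour sampling
    yi = np.rint(y + yc).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[valid] = img[yi[valid], xi[valid]]
    return out

demo = np.arange(65 * 65).reshape(65, 65)
warped = cylindrical_warp(demo, f=64.0, s=64.0)
```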
Step 3: feature matching. On the results of Step 2, obtain matching points by feature matching; for the parts without obvious features or where feature matching is invalid, obtain matching points from calibration; and reject the interference of mismatched point pairs.
The embodiment adopts SIFT feature matching; other feature matching schemes can be used in concrete implementations. The embodiment adopts the random sample consensus (RANSAC) algorithm to reject the interference of mismatched point pairs; other gross-error elimination schemes can likewise be used.
An actual tunnel image contains regions of uniform texture without obvious features as well as regions with obvious features such as cracks and water stains. Tunnel images are therefore divided into two classes, one with obvious features and one without, and two different matching procedures are applied to them.
For the parts with obvious features, which include defects such as cracks, the SIFT algorithm is selected because of its good resistance to illumination change, image rotation, scaling, affine deformation, blur and compression. However, the feature-vector description of SIFT is complex and its computation very time-consuming, which cannot meet the requirement of fast tunnel image stitching, so GPU-CPU or other parallel SIFT implementations are used for acceleration; the accelerated algorithm is 10-20 times more efficient. Moreover, when tunnel images are captured, the images of adjacent cameras necessarily overlap to a certain degree, and this degree does not exceed 50% of the image size; the embodiment therefore treats the 50% region of an image as its overlap with the adjacent image. With the rough overlap region known in advance, SIFT only needs to detect features inside it, which greatly accelerates feature-point detection.
For the parts without obvious features, SIFT cannot extract enough useful feature points, so calibration points are used instead. A tunnel model is built in advance to provide a calibration field, calibration sheets are pasted on the tunnel model and images of it are collected; feature points are extracted from the collected adjacent images, giving a number of matching points on the original images, generally around 40-50 pairs. When, during execution of this flow, Step 3 encounters a part without obvious features, the matching points calibrated in advance for that part are transformed in the manner of the image normalization: the coordinates of each matching point on the original image are multiplied by the corresponding normalization coefficient (obtained in Step 1), giving the matching point on the normalized image, which is then transformed onto the projected image in the manner of Step 2, giving the final matching point. For example, calibration points of cameras 1 and 2, of cameras 2 and 3, ..., of cameras 15 and 16 are collected in advance; when the SIFT matching of the images of cameras 3 and 4 turns out to be invalid, the matching points calibrated in advance for that pair are normalized and cylindrically projected to obtain the required matching points.
Feature-point matching of one cross-section's tunnel images is carried out in the above two ways. However, the matching points obtained contain some spurious pairs, and fitting on them directly would produce a large error; therefore, after the matching points of one cross-section are obtained, they are filtered with the random sample consensus (RANSAC) algorithm. This algorithm is prior art: from a sample set containing abnormal data it computes the parameters of a mathematical model of the set, thereby obtaining the valid samples. Its basic assumption is that the samples contain correct data as well as abnormal data (outliers), and that for a given set of correct data the model parameters fitting them can be computed. The principle is roughly as follows: two points are randomly selected from the sample set to determine a line, and the points whose distance to this line is below a defined threshold form the support set of the line. The random selection is repeated many times, and the line with the largest support set is taken as the fit of the sample set. Points within the fitting distance threshold are valid points; the others are noise.
By applying the RANSAC algorithm to the matched points (both the matching points extracted by SIFT and the calibrated matching points) for fine matching, the final set of matching points is obtained.
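The two-point RANSAC procedure described above can be sketched as follows, used here to separate inlier matches from gross mismatches; the line model, iteration count, threshold and sample data are illustrative assumptions.

```python
# Compact sketch of two-point RANSAC line fitting as described in the text.
import numpy as np

def ransac_line(points, n_iters=200, threshold=1.0, seed=0):
    """Randomly pick 2 points to define a line; keep the line whose
    support set (points closer than `threshold`) is largest."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm == 0.0:
            continue
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0])) / norm
        mask = dist < threshold
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# 40 correct matches on a line plus 10 gross mismatches
rng = np.random.default_rng(1)
good = np.stack([np.linspace(0, 40, 40), 2 * np.linspace(0, 40, 40) + 1], axis=1)
bad = rng.uniform(-50, 50, size=(10, 2))
inlier_mask = ransac_line(np.vstack([good, bad]))
```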
Step 4: transformation matrix calculation. From the matching points obtained in Step 3, calculate an initial 3×3 transformation matrix. This matrix comprises scaling, rotation, translation and other transformation parameters. The computed parameters are then simplified to obtain the final transformation parameters between images, i.e. the final transformation matrix between images.
The image points of a spatial point in two camera planes correspond one to one, and the correspondence is linear and homogeneous, so it can be described by a third-order matrix, called the homography matrix. Taking any four pairs of correctly matched points, no three of which are collinear, a homography matrix can be computed by the direct linear transformation. Since mismatched points may still exist and the located matching points carry a certain deviation, it is advisable, when computing the homography, to first estimate an initial homography with the RANSAC algorithm and then minimize the reprojection error between the two images with the Levenberg-Marquardt (L-M) method; the optimal homography, i.e. the transformation matrix, is the one minimizing the error function. The concrete computation adopts existing algorithms and is not detailed in the present invention.
After the transformation matrices are obtained as above, note that a cross-section contains multiple images: each matrix carries a certain error, and stitching multiple images propagates and accumulates this error, which would make the stitching result unacceptable. Therefore, after the pairwise transformation matrices are obtained by the above method, they are simplified. A transformation matrix contains the pairwise relations between images, namely rotation, scaling and translation. Since the resolutions of all images to be stitched were unified with the normalization coefficients in the earlier step, the scaling factors here are very small and can be rejected from the matrix with little effect on the stitching. The tunnel image acquisition system is rigid, so the rotation between images is a fixed value, and the rotation factors in the matrix are likewise constant. The final transformation matrix is therefore simplified to a matrix containing only translation. The vertical translation is related to the vertical displacement between the optical centres of the cameras in the acquisition system; the relative vertical translations of all cameras and of the tunnel model can be derived from the model calibration, and during real stitching the pairwise vertical translations between images can be computed from the real-time laser data. The horizontal displacement can be extracted from the transformation matrix obtained above and equals the actual pairwise overlap of the images. As shown in Fig. 4, the left matrix is the general transformation matrix
[ m11  m12  m13 ]
[ m21  m22  m23 ]
[ m31  m32   1  ]
comprising the rotation, scaling and translation factors of the image: m11, m12, m21, m22 are the scaling and rotation factors, m13 and m23 the horizontal and vertical displacements, and m31 and m32 the deformation in the horizontal and vertical directions. After the simplification of the rotation and scaling factors, the matrix reduces to the right matrix
[ 1  0  Δx ]
[ 0  1  Δy ]
[ 0  0  1  ]
containing only the translation of the image in the X and Y directions, where Δx and Δy are the displacements in the horizontal and vertical directions respectively. Through the above simplification, the final transformation matrix is in fact a matrix containing only translation parameters.
Although this simplification has a certain effect on the stitched image, the effect is acceptable in real engineering applications, and the simplification greatly improves the stitching efficiency while eliminating many adjustment and optimization operations on the transformation matrix.
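The simplification can be sketched as follows: only the m13/m23 translation terms of the estimated 3×3 matrix are kept. The sample homography values below are invented for illustration.

```python
# Sketch of reducing a full 3x3 homography to a pure translation matrix.
import numpy as np

def simplify_to_translation(H):
    """Drop the rotation/scale (m11..m22) and deformation (m31, m32)
    factors and keep only the translations m13 and m23."""
    H = np.asarray(H, dtype=float)
    H = H / H[2, 2]                      # normalise so that m33 = 1
    return np.array([[1.0, 0.0, H[0, 2]],
                     [0.0, 1.0, H[1, 2]],
                     [0.0, 0.0, 1.0]])

H_est = np.array([[1.01, 0.002, 512.3],   # near-identity rotation/scale
                  [-0.003, 0.99, 4.7],
                  [1e-6, -2e-6, 1.0]])    # tiny deformation terms
T = simplify_to_translation(H_est)
```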
Step 5: coordinate unification. Using the transformation matrices resolved in Step 4, transform the images to be stitched from Step 2 into the reference image coordinate system in turn, forming one wide-view-angle image.
Because the tunnel image of one cross-section comprises several images, all of them must be unified under one reference coordinate system, so the transformation and position of every image under the unified coordinate system must be computed. What Step 4 yields is the transformation between two images; to build the stitched picture, the homography of every image within the whole stitched image must be determined. In the tunnel image stitching, following the cylindrical image sequence of Step 2 and taking a preset reference image as the datum, let the homography of the reference image be H0; each image of the cylindrical sequence is transformed in turn, through the pairwise transformations, into the coordinate system determined by the reference image, forming an overall homography. According to the formula
H′n = H0 · H1 · ... · Hn
where Hi denotes the i-th transformation matrix, the homography H′n of the n-th image coordinate system is the product of the transformation matrices from the 0th, H0, to the n-th, Hn. For example, when 16 cameras are provided in the tunnel image acquisition system, n is at most 16.
Through the above unification of image coordinate systems, one image containing all the information of the images to be stitched can be formed.
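The chaining of pairwise transforms into one reference frame can be sketched as below; with the translation-only matrices of Step 4 the product simply accumulates the offsets. The offsets used are assumptions for illustration.

```python
# Sketch of accumulating pairwise transforms into the reference frame.
import numpy as np

def translation(dx, dy):
    return np.array([[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]])

def cumulative_transforms(pairwise):
    """pairwise[i] maps image i+1 into image i; returns the transform of
    each image into the reference (image 0) coordinate system."""
    out = [np.eye(3)]
    for H in pairwise:
        out.append(out[-1] @ H)   # H'_n = H0 * H1 * ... * Hn
    return out

chain = cumulative_transforms([translation(800, 2), translation(790, -1)])
```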
Step 6: image fusion. Fuse the overlapping regions of the images to be stitched to obtain the smooth and seamless stitched tunnel image.
Image fusion synthesizes several images of the same scene into a new image according to certain rules, so that the fused image expresses the information more comprehensively while reducing the redundant information of the images as far as possible. This tunnel image stitching adopts Laplacian pyramid fusion; other fusion methods such as Poisson fusion are also applicable. Laplacian pyramid fusion of an image divides into two steps: Gaussian pyramid decomposition and Laplacian pyramid decomposition. The original image is defined as layer 0; Gaussian low-pass filtering and downsampling of every other row and column yield the first layer of the Gaussian pyramid, and repeating this step until no further downsampling is possible constructs the Gaussian pyramid. Each layer of the constructed Gaussian pyramid is then expanded by interpolation to the size of the layer above it; the difference between a layer's Gaussian image and the interpolation-expanded image, of the same size, of the layer below is that layer's Laplacian image, and in this way the Laplacian pyramid is constructed. The corresponding layers of the two images' Laplacian pyramids are merged, and upsampling then yields the final fused image, which eliminates the redundant information of the two original images while containing all their information. Applying Laplacian pyramid fusion to all tunnel images of a cross-section forms one complete cross-section tunnel image without redundant information. Owing to illumination and contrast differences between images, the stitched image may be unevenly bright; dodging methods such as the Poisson equation can be applied to obtain a stitched image of uniform illumination.
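A minimal numpy sketch of Laplacian pyramid fusion, assuming 2× block-average downsampling and pixel-repeat upsampling in place of the Gaussian filtering and interpolation described above; because each Laplacian layer is the exact difference against the re-expanded layer below, reconstruction is exact by construction.

```python
# Laplacian pyramid decompose / merge / reconstruct sketch.
import numpy as np

def down(img):                       # 2x downsample by block averaging
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):                         # 2x upsample by pixel repetition
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr, g = [], img.astype(float)
    for _ in range(levels):
        smaller = down(g)
        pyr.append(g - up(smaller))  # detail lost by downsampling
        g = smaller
    pyr.append(g)                    # coarsest Gaussian level
    return pyr

def reconstruct(pyr):
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = up(g) + lap
    return g

def blend_halves(a, b, levels=3):
    """Keep the left half of a and the right half of b at every level."""
    out = []
    for la, lb in zip(laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)):
        merged = la.copy()
        merged[:, la.shape[1] // 2:] = lb[:, la.shape[1] // 2:]
        out.append(merged)
    return reconstruct(out)

rng = np.random.default_rng(0)
tile = rng.random((32, 32))
fused = blend_halves(tile, tile)
```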
The above is the stitching scheme for a single cross-section image. To obtain the stitching result of the whole tunnel, single cross-sections are cropped so that the overlapping parts between adjacent cross-sections are removed and consecutive sections form one continuous non-overlapping tunnel image. The object distance of each camera at shooting time was obtained in Step 1. Taking camera 1 as an example, its object distance determines its swath, i.e. the width of the actual scene corresponding to one image collected by camera 1; combined with the trigger spacing of the cameras, the overlap between consecutive images of camera 1 in the driving direction can be determined. Since the acquisition system is fixed, the overlap of camera 1's consecutive images is also the overlap between two adjacent cross-sections. According to the computed overlap, after a cross-section is stitched, the height of the resulting cross-section image is cropped, yielding the final cross-section image.
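The cross-section cropping can be sketched as follows; image height, pixel size, focal length, object distance and trigger spacing are all illustrative assumptions.

```python
# Along-track footprint of one exposure = image rows x ground resolution;
# the overlap with the next cross-section is footprint minus trigger
# spacing, converted back to rows to crop.
def section_overlap_rows(img_rows, pixel_size, focal_length, object_distance,
                         trigger_spacing):
    res = pixel_size * object_distance / focal_length  # mm of wall per pixel
    footprint = img_rows * res                         # along-track coverage (mm)
    overlap = max(footprint - trigger_spacing, 0.0)    # mm shared with next section
    return int(round(overlap / res))                   # rows to crop

rows_to_crop = section_overlap_rows(3000, 0.005, 50.0, 5000.0, 1200.0)
```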
In concrete implementation, the above flow can be realized in computer software, and a corresponding system can also be provided in modular form. The embodiment of the present invention provides a tunnel image splicing system for splicing the images collected by each camera in a tunnel image acquisition system. The tunnel image acquisition system comprises multiple cameras and one laser scanner; during the travel of the collection vehicle, all cameras expose simultaneously to obtain tunnel images of the corresponding section. For splicing the tunnel images obtained by a single exposure of all cameras at a certain section, the system comprises the following modules:
An image normalization module, for calculating, based on the data of the laser scanner and the relative position relationship between the cameras and the laser scanner calibrated in advance, the object distance corresponding to each camera, extracting normalization coefficients, and normalizing the image collected by each camera to obtain images to be spliced with consistent resolution;
Said extraction of normalization coefficients and normalization of the image collected by each camera are implemented as follows.
First, the resolution of the image shot by each camera is obtained with the following formula:

r = dpi × s / f

wherein r is the resolution of the image (scene size per pixel), dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots;
Then, after the resolution of each camera's image is obtained, the minimum of all resolutions is taken as the reference resolution, and the corresponding camera is denoted the reference camera. The ratio of each other camera's image resolution to this reference resolution is calculated; the ratios obtained are the normalization coefficients;
Finally, according to the calculated normalization coefficients, the width and height of each other camera's image are multiplied by the corresponding normalization coefficient, obtaining the corresponding normalized image;
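The three steps above can be condensed into a few lines. This is a sketch under the stated pinhole assumption r = dpi·s/f, with hypothetical function names; the reference camera is the one with the finest (smallest) resolution, so every coefficient is ≥ 1 and coarser images are enlarged to match it.

```python
def normalization_coeffs(pixel_size, focals, object_dists):
    """Per-camera resolution r = dpi * s / f, reference camera, and coefficients r_i / r_ref."""
    res = [pixel_size * s / f for f, s in zip(focals, object_dists)]
    r_ref = min(res)               # finest resolution is the reference
    ref_idx = res.index(r_ref)     # its camera is the reference camera
    coeffs = [r / r_ref for r in res]
    return ref_idx, coeffs

def normalized_size(width, height, coeff):
    """Pixel size of an image after scaling width and height by its coefficient."""
    return round(width * coeff), round(height * coeff)
```

With equal focal lengths, a camera at twice the object distance has twice the resolution value (coarser sampling), so its image is scaled up by a factor of 2 to match the reference.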
A cylindrical projection module, for projecting all images to be spliced obtained by the image normalization module onto a cylinder according to the cylindrical projection formula, wherein the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera;
The inverse formula of said cylindrical projection is as follows:

x = f·tan(x′/s)
y = y′·√(x² + f²)/s

wherein (x, y) represents the coordinates of a point on the original image and (x′, y′) the coordinates of the point after projection, both measured from the image center; f represents the focal length of the camera lens, and s represents the projection radius;
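A backward-mapping sketch of the standard cylindrical projection is shown below, assuming the usual forward model x′ = s·atan(x/f), y′ = s·y/√(x² + f²) with coordinates relative to the image center (function names are illustrative, not the patent's). The inverse map is what a warping loop actually evaluates: for each destination pixel on the cylinder, it finds the source pixel on the original image.

```python
import math

def cylinder_to_image(xp, yp, f, s):
    """Inverse cylindrical projection: point (xp, yp) on the cylinder back to the image plane."""
    x = f * math.tan(xp / s)
    y = yp * math.sqrt(x * x + f * f) / s
    return x, y

def image_to_cylinder(x, y, f, s):
    """Forward cylindrical projection (used here to check the inverse)."""
    xp = s * math.atan2(x, f)
    yp = s * y / math.sqrt(x * x + f * f)
    return xp, yp
```

A round trip through both functions should return the original point, which is a convenient sanity check when wiring the mapping into an image warp.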
A feature matching module, for obtaining matching points by SIFT feature matching, obtaining matching points from calibration in the parts where SIFT feature matching fails, and rejecting the interference of mismatched point pairs with the RANSAC algorithm;
A transformation matrix module, for calculating a 3 × 3 initial transformation matrix from the matching points obtained by the feature matching module, simplifying it to extract the translation parameters, and obtaining the final transformation matrix between images;
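The simplification to translation parameters can be illustrated as follows. This sketch replaces full RANSAC with a lightweight median-based outlier rejection (an assumption made here for self-containment, not the patent's algorithm) and packs the surviving mean offset into a translation-only 3 × 3 matrix.

```python
import numpy as np

def translation_matrix(src_pts, dst_pts, inlier_tol=3.0):
    """Reduce the transform between two images to pure translation.

    On texture-poor tunnel surfaces a full homography from few matches is
    unstable, so only the translation is kept: pairs whose displacement is far
    from the median displacement are discarded (stand-in for RANSAC rejection),
    and the mean of the remaining displacements fills the 3x3 matrix.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    d = dst - src                                   # per-pair displacement
    med = np.median(d, axis=0)
    inliers = np.linalg.norm(d - med, axis=1) < inlier_tol
    tx, ty = d[inliers].mean(axis=0)
    H = np.eye(3)
    H[0, 2], H[1, 2] = tx, ty                       # translation-only transform
    return H
```

A gross mismatch (e.g. one pair displaced by a hundred pixels while the rest agree) is simply excluded, leaving the consistent translation.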
A coordinate system unification module, for transforming the images to be spliced obtained by the cylindrical projection module into the reference image coordinate system in turn, using the final transformation matrices obtained by the transformation matrix module, so as to form one wide-view image;
An image fusion module, for fusing the overlapping regions of the images to be spliced to obtain the smooth and seamless tunnel image of the splicing reconstruction.
Further, an output module can be provided, for removing, according to the splicing results of the images of single sections, the overlap between adjacent sections to obtain the splicing result of the whole tunnel image.
For the specific implementation of each module, see the corresponding steps; the present invention does not go into further detail here.
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art can make various modifications or supplements to the described specific embodiments, or substitute them in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.