CN105550995A - Tunnel image splicing method and system - Google Patents

Tunnel image splicing method and system

Info

Publication number
CN105550995A
CN105550995A (application CN201610056749.7A)
Authority
CN
China
Prior art keywords
image
camera
tunnel
resolution
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610056749.7A
Other languages
Chinese (zh)
Other versions
CN105550995B (en)
Inventor
李清泉 (Li Qingquan)
曹民 (Cao Min)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Optical Valley excellence Technology Co.,Ltd.
Original Assignee
WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co Ltd filed Critical WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co Ltd
Priority to CN201610056749.7A priority Critical patent/CN105550995B/en
Publication of CN105550995A publication Critical patent/CN105550995A/en
Application granted granted Critical
Publication of CN105550995B publication Critical patent/CN105550995B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 — Geometric image transformation in the plane of the image
    • G06T 3/40 — Scaling the whole image or part thereof
    • G06T 3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 3/12
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20212 — Image combination
    • G06T 2207/20221 — Image fusion; Image merging

Abstract

The invention provides a tunnel image splicing (stitching) method and system for splicing the images collected by each camera in a tunnel image acquisition system. The tunnel image acquisition system comprises multiple cameras and a laser scanner. When splicing the tunnel images obtained by a single simultaneous exposure of all cameras at a given cross section, the method comprises the following steps: calculating the object distance corresponding to each camera based on the data of the laser scanner and normalizing the image collected by each camera; projecting all images to be spliced onto a cylindrical surface; obtaining matching points by SIFT (Scale Invariant Feature Transform) feature matching; calculating an initial transformation matrix, simplifying it and extracting the translation parameters to obtain the final transformation matrix between images; unifying the coordinate system; and fusing the images to obtain a smooth, seamless reconstructed tunnel image. The technical scheme disclosed by the invention has the advantages of fast splicing speed, relatively high splicing precision, high robustness and strong practicability, and facilitates subsequent tunnel defect detection.

Description

Tunnel image splicing method and system
Technical field
The invention belongs to the technical field of computer vision, and particularly to the technical field of tunnel image splicing, by which a wide-view tunnel scene image can be obtained.
Background art
Because a tunnel lining cross section is generally large, multiple area-array cameras are required to cover the tunnel lining surface in order to reach a high crack-detection precision, and a certain overlapping region exists between every two adjacent cameras. In order to display the defects of the tunnel lining surface intuitively, the image data of the multiple area-array cameras need to be spliced, that is, several images with overlapping parts and different viewing angles are combined into one large seamless high-definition picture.
Tunnel image splicing takes multiple images with overlapping parts, shot simultaneously of the same tunnel cross section, and registers and fuses them into one wide-view picture covering the whole cross section. Current image mosaic techniques mainly include time-domain methods and frequency-domain methods such as phase correlation, among which feature-based matching is currently the most commonly used approach. The parametric solution method proposed by G.P. Stein, although of high precision, requires detecting textural features between consecutive frames, and the resulting computation cannot meet the requirement of fast tunnel image splicing. Feature-point matching, for example with feature point detection operators such as SIFT, also involves a huge amount of computation during matching and cannot satisfy the required tunnel image splicing speed; moreover, the fireproof coating or cement surface of a tunnel has a uniform texture with few features, so feature-point matching methods often fail on this class of tunnel images.
In reality the tunnel wall surface is generally smooth and its texture is uniform; apart from regions containing defects such as cracks and water stains and the facilities attached to the tunnel wall, the other regions of the tunnel contain few features. Methods relying only on feature-point matching such as SIFT cannot complete the splicing of a whole tunnel and cannot meet the splicing requirements of the very large data volumes involved.
In order to ensure that the tunnel images have a sufficiently high resolution, different cameras and different focal lengths are adopted for the collection; the tunnel images to be spliced therefore come from numerous different cameras and lenses of different focal lengths, which poses a considerable challenge to the robustness of the splicing.
Summary of the invention
In view of the defects of the prior art, the present invention provides a fast tunnel image splicing method and system.
To achieve the above object, the invention provides a tunnel image splicing method for splicing the images collected by each camera in a tunnel image acquisition system. The tunnel image acquisition system comprises multiple cameras and a laser scanner; during the driving of the collection vehicle, all cameras expose simultaneously to obtain the tunnel images of the corresponding cross section. Splicing the tunnel images obtained by a single exposure of all cameras at a given cross section comprises the following steps:
Step 1, image normalization: based on the data of the laser scanner and the relative position relation between camera and laser scanner obtained by calibration in advance, calculate the object distance corresponding to each camera, extract the normalization coefficients, and normalize the image collected by each camera to obtain images to be spliced with consistent resolution;
the extraction of the normalization coefficients and the normalization of the image collected by each camera being implemented as follows,
first, the resolution of the image shot by each camera is obtained with the following formula,
r = (dpi × f / s) × 1000
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots;
then, after obtaining the resolution of the image shot by each camera, a reference resolution is chosen and the corresponding camera is designated as the reference camera; the ratio of the resolution of every other camera's image to this reference resolution is calculated, and the ratio obtained is the normalization coefficient;
finally, according to the calculated normalization coefficients, the width and height of every other camera's image are multiplied by the corresponding normalization coefficient to obtain the corresponding normalized image;
Step 2, cylindrical projection: according to the cylindrical projection formula, project all images to be spliced obtained in step 1 onto a cylindrical surface, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera;
the inverse of the cylindrical projection formula being as follows,
x = f·tan(x′/s)
y = (y′·f/s)·√(1 + tan²(x′/s))
where (x, y) denotes the coordinates of a point on the original image, (x′, y′) denotes the coordinates of the point after projection, f denotes the focal length of the camera lens, and s denotes the projection radius;
Step 3, feature matching: obtain matching points by feature matching, obtain matching points from calibration for the parts where feature matching is invalid, and reject the interference of mismatched point pairs;
Step 4, transformation matrix calculation: calculate a 3 × 3 initial transformation matrix from the matching points obtained in step 3, simplify it and extract the translation parameters to obtain the final transformation matrix between images;
Step 5, coordinate system unification: using the final transformation matrices obtained in step 4, transform the images to be spliced obtained in step 2 successively into the reference image coordinate system to form one wide-view image;
Step 6, image fusion: fuse the overlapping regions of the images to be spliced to obtain the smooth and seamless reconstructed tunnel image.
Moreover, in step 3, obtaining matching points from calibration is implemented as follows: a tunnel model is established in advance to provide a calibration field, calibration sheets are pasted on the tunnel model and images are collected, and feature points are extracted from the collected adjacent images to obtain calibration points; when a part where feature matching is invalid appears, the calibration points corresponding to this part are processed with the corresponding normalization coefficients obtained in step 1 and then transformed onto the cylindrically projected image in the manner of step 2 to obtain the final matching points.
Moreover, in step 4, the initial transformation matrix is a homography matrix.
Moreover, in step 6, image fusion adopts Laplacian pyramid fusion.
Moreover, according to the splicing results of the images of single cross sections, the overlapping parts between adjacent cross sections are removed to obtain the splicing result of the whole tunnel image.
The invention also provides a tunnel image splicing system for splicing the images collected by each camera in a tunnel image acquisition system. The tunnel image acquisition system comprises multiple cameras and a laser scanner; during the driving of the collection vehicle, all cameras expose simultaneously to obtain the tunnel images of the corresponding cross section. The system splices the tunnel images obtained by a single exposure of all cameras at a given cross section and comprises the following modules:
Image normalization module, configured to calculate, based on the data of the laser scanner and the relative position relation between camera and laser scanner obtained by calibration in advance, the object distance corresponding to each camera, extract the normalization coefficients, and normalize the image collected by each camera to obtain images to be spliced with consistent resolution;
the extraction of the normalization coefficients and the normalization of the image collected by each camera being implemented as follows,
first, the resolution of the image shot by each camera is obtained with the following formula,
r = (dpi × f / s) × 1000
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots;
then, after obtaining the resolution of the image shot by each camera, a reference resolution is chosen and the corresponding camera is designated as the reference camera; the ratio of the resolution of every other camera's image to this reference resolution is calculated, and the ratio obtained is the normalization coefficient;
finally, according to the calculated normalization coefficients, the width and height of every other camera's image are multiplied by the corresponding normalization coefficient to obtain the corresponding normalized image;
Cylindrical projection module, configured to project all images to be spliced obtained by the image normalization module onto a cylindrical surface according to the cylindrical projection formula, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera;
the inverse of the cylindrical projection formula being as follows,
x = f·tan(x′/s)
y = (y′·f/s)·√(1 + tan²(x′/s))
where (x, y) denotes the coordinates of a point on the original image, (x′, y′) denotes the coordinates of the point after projection, f denotes the focal length of the camera lens, and s denotes the projection radius;
Feature matching module, configured to obtain matching points by feature matching, obtain matching points from calibration for the parts where feature matching is invalid, and reject the interference of mismatched point pairs;
Transformation matrix module, configured to calculate a 3 × 3 initial transformation matrix from the matching points obtained by the feature matching module, simplify it and extract the translation parameters to obtain the final transformation matrix between images;
Coordinate system unification module, configured to transform the images to be spliced obtained by the cylindrical projection module successively into the reference image coordinate system using the final transformation matrices obtained by the transformation matrix module, forming one wide-view image;
Image fusion module, configured to fuse the overlapping regions of the images to be spliced to obtain the smooth and seamless reconstructed tunnel image.
Moreover, in the feature matching module, obtaining matching points from calibration is implemented as follows: a tunnel model is established in advance to provide a calibration field, calibration sheets are pasted on the tunnel model and images are collected, and feature points are extracted from the collected adjacent images to obtain calibration points; when a part where feature matching is invalid appears, the calibration points corresponding to this part are processed with the corresponding normalization coefficients obtained by the image normalization module and then transformed onto the cylindrically projected image in the manner of the cylindrical projection module to obtain the final matching points.
Moreover, in the transformation matrix module, the initial transformation matrix is a homography matrix.
Moreover, in the image fusion module, image fusion adopts Laplacian pyramid fusion.
Moreover, an output module is provided, configured to remove, according to the splicing results of the images of single cross sections, the overlapping parts between adjacent cross sections, to obtain the splicing result of the whole tunnel image.
In combination with the actual conditions in China, the present invention discloses a tunnel image splicing technical scheme in which feature matching is combined with photographic geometry. It has the advantages of fast splicing speed, high splicing precision, high robustness and strong practicability; the spliced tunnel image retains the resolution of the raw data, which facilitates subsequent tunnel defect detection.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of normalization in an embodiment of the present invention.
Fig. 3 is a schematic diagram of transformation parameter extraction in an embodiment of the present invention.
Fig. 4 is a schematic diagram of translation parameter extraction in an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with embodiments and with reference to the accompanying drawings. It should be appreciated that these descriptions are exemplary only and are not intended to limit the scope of the invention.
The present invention is a fast tunnel image splicing method combining feature point matching with matching of artificial calibration points. As shown in Fig. 1, the implementation procedure of the embodiment can be summarized in the following steps, each of which is implemented as follows.
Step 1: image normalization. Based on the data of the laser scanner and the relative position relation between camera and scanner obtained by calibration in advance, the object distance corresponding to each camera is calculated, the normalization coefficients are extracted, and the images are normalized to obtain images to be spliced with consistent resolution.
The tunnel image acquisition system used as the data source of the present invention is a vehicle-mounted system comprising multiple cameras (for example 16) and one laser scanner. The cameras are mounted on an approximately semicircular arc frame, and the arc frame is placed on the vehicle perpendicular to the driving direction. While the collection vehicle is driving, the cameras are triggered at a fixed spacing; the tunnel images obtained by one simultaneous exposure of all cameras constitute one cross section and contain the information of that section of the tunnel. The laser scanner is generally placed under the middle of the arc frame. In order to ensure that the collected tunnel images have a sufficiently high resolution and to facilitate subsequent defect detection, cameras of several different models and lenses of different focal lengths are used for the collection; the different camera and lens combinations form the tunnel image capturing system, so the collected source data contain tunnel images of different resolutions. For the subsequent splicing, the images of different resolutions are normalized, i.e. their resolutions are unified.
Referring to Fig. 2, in order to obtain the resolution of the image collected by each camera, the object distance corresponding to each camera at collection time is needed. The extrinsic calibration between camera and laser is carried out in advance, giving the extrinsic parameter matrix between camera and laser, i.e. the relative position relation between camera and scanner obtained by calibration. From the laser scanner data, the distance from the laser scanner to the tunnel wall at collection time is known; through the extrinsic matrix and the camera intrinsic matrix, the laser coordinate system is transformed and projected into each camera coordinate system, and the object distance corresponding to each camera at collection time (the distance from the camera to the tunnel wall) is resolved, one object distance per camera. Taking the camera serial number as the abscissa and the object distance as the ordinate, a rectangular coordinate system is established and all points (one point per camera) are fitted so that the fitted points lie on a smooth curve; fitting the resolved object distances in this way, i.e. fitting the laser data, eliminates the interference of noise. The intrinsic parameters of the cameras are calibrated in advance, so the focal length of each camera is known, and the resolution of the image shot by each camera is obtained with the following formula (a sketch of the object-distance computation is given after the formula below).
r = (dpi × f / s) × 1000
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD (charge-coupled device), f is the focal length of the camera, and s is the object distance when the camera shoots.
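As mentioned above, the per-camera object distance s comes from projecting the laser data into each camera frame and smoothing over the camera index. A minimal sketch of this computation, assuming Python with NumPy, 4 × 4 camera-from-laser extrinsic matrices, a median depth per camera and a polynomial fit for the smoothing (the function name object_distances and these choices are illustrative assumptions, not the patent's prescribed implementation):

```python
import numpy as np

def object_distances(laser_points_xyz, extrinsics, smooth_degree=3):
    """Estimate one object distance per camera from laser scanner data.

    laser_points_xyz : (N, 3) laser hits on the tunnel wall, in the laser frame.
    extrinsics       : list of 4x4 matrices T_cam_from_laser, one per camera
                       (from the prior camera-laser calibration).
    Returns the per-camera object distances after smoothing with a polynomial
    fit over the camera index, as the text describes (noise suppression).
    """
    homog = np.hstack([laser_points_xyz, np.ones((len(laser_points_xyz), 1))])
    raw = []
    for T in extrinsics:
        pts_cam = (T @ homog.T).T[:, :3]        # laser points in this camera frame
        in_front = pts_cam[pts_cam[:, 2] > 0]    # keep points the camera can see
        raw.append(np.median(in_front[:, 2]))    # robust depth = object distance
    idx = np.arange(len(raw))
    coeffs = np.polyfit(idx, raw, smooth_degree)  # smooth curve over camera index
    return np.polyval(coeffs, idx)
```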
After obtaining the resolution of the image shot by each camera, one of the resolutions can be chosen as the reference resolution, for example the lowest, the highest, or any other one. The embodiment takes the lowest resolution among all resolutions as the reference, and the camera corresponding to the lowest-resolution image is the reference camera. The ratio of every other image resolution to this reference resolution is calculated, and the ratio obtained is the normalization coefficient. According to the calculated normalization coefficients, the width and height of each image are multiplied by its corresponding coefficient to obtain the normalized image. The reference camera itself can be considered to have a normalization coefficient of 1, and the multiplication is applied to the images shot by the other cameras.
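A minimal sketch of the resolution normalization of step 1, assuming Python with OpenCV and NumPy and reading the resolution formula above as r = dpi × f / s × 1000 (the function name normalize_images and the per-camera input lists are illustrative):

```python
import cv2
import numpy as np

def normalize_images(images, pixel_sizes, focal_lengths, object_dists):
    """Unify the resolution of the per-camera images (sketch of step 1).

    The per-image resolution is computed with the patent's formula as
    reconstructed above; the lowest resolution is taken as the reference, and
    the width and height of every other image are scaled by the ratio of its
    resolution to the reference resolution (the normalization coefficient).
    """
    r = np.array([dpi * f / s * 1000.0
                  for dpi, f, s in zip(pixel_sizes, focal_lengths, object_dists)])
    ref = int(r.argmin())                  # reference camera = lowest resolution
    coeffs = r / r[ref]                    # normalization coefficient per camera
    out = []
    for img, k in zip(images, coeffs):
        h, w = img.shape[:2]
        out.append(cv2.resize(img, (int(round(w * k)), int(round(h * k)))))
    return out, ref, coeffs
```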
Step 2: cylindrical projection. According to the cylindrical projection formula, all images to be spliced are projected onto a cylindrical surface so that the shot images satisfy visual consistency.
In order to maintain the spatial constraint relations of the actual scene, the normalized tunnel images are uniformly mapped onto one standard cylindrical projection surface to obtain a cylindrical image sequence. For example, if the 16 cameras of the tunnel image acquisition system are labelled from left to right as camera No. 1, camera No. 2, ..., camera No. 16, then the images shot by these cameras at a given cross section, after normalization and projection, form the 1st, 2nd, ..., 16th image of the cylindrical image sequence.
Since the image resolutions were unified in step 1, the focal length and projection radius of the cylindrical projection here are those corresponding to the reference camera used when resolving the normalization coefficients, and the projection radius of the reference camera equals its focal length. The inverse of the cylindrical projection formula is as follows,
x = f·tan(x′/s)
y = (y′·f/s)·√(1 + tan²(x′/s))
from which the cylindrically projected image of every original image can be obtained,
where (x, y) denotes the coordinates of a point on the original image, (x′, y′) denotes the coordinates of the point after projection, f denotes the focal length of the camera lens, and s denotes the projection radius.
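A sketch of this backward-mapping cylindrical projection, assuming Python with OpenCV, coordinates measured from the image centre, and bilinear resampling with cv2.remap (the function name cylindrical_project is illustrative):

```python
import cv2
import numpy as np

def cylindrical_project(img, f, s):
    """Cylindrical projection by inverse mapping (sketch of step 2).

    For each pixel (x', y') of the output cylindrical image, the inverse
    formulas x = f*tan(x'/s) and y = (y'*f/s)*sqrt(1 + tan(x'/s)**2) give the
    source pixel on the original image; cv2.remap then resamples it.
    Coordinates relative to the image centre are an assumption of this sketch.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xp, yp = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    t = np.tan(xp / s)
    map_x = (f * t + cx).astype(np.float32)
    map_y = ((yp * f / s) * np.sqrt(1.0 + t ** 2) + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```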
Step 3: feature matching. From the results of step 2, matching points are obtained by feature matching; for the parts without obvious features or where feature matching is invalid, matching points are obtained from calibration; and the interference of mismatched point pairs is rejected.
The embodiment adopts SIFT feature matching; other feature matching methods can also be adopted in specific implementations. The embodiment adopts the RANSAC (random sample consensus) algorithm to reject the interference of mismatched point pairs; other gross-error elimination methods can also be adopted in specific implementations.
An actual tunnel image contains regions of uniform texture without obvious features, as well as regions with obvious features such as cracks and water stains. The tunnel images in reality are therefore divided into two classes, one with obvious features and one without, and two different matching processes are adopted for the two classes.
For the parts with obvious features, which include the features of tunnel defects such as cracks, the SIFT algorithm is selected because of its good resistance to illumination variation, image rotation, scaling, affine deformation, blur and compression. However, the SIFT feature vector description is complex and its computation is very time-consuming, which cannot meet the requirement of fast tunnel image splicing, so GPU-CPU or other parallel SIFT implementations are used for acceleration; the efficiency of the accelerated algorithm can be improved by 10 to 20 times. When performing feature point matching in this way, since the images shot by adjacent cameras during tunnel image acquisition must have a certain degree of overlap that does not exceed 50% of the image size, the embodiment takes the 50% region of each image as the overlapping region with the adjacent image. Since the approximate overlapping region is known in advance, the SIFT algorithm only needs to detect the overlapping region, which greatly accelerates feature point detection; a sketch of this restricted matching is given below, after the calibration-point discussion.
For the parts without obvious features, the SIFT algorithm cannot extract enough useful feature points. For these regions, calibration points are selected: a tunnel model is established in advance to provide a calibration field, calibration sheets are pasted on the tunnel model and images are collected, and feature points are extracted from the collected adjacent images to obtain matching points on the original images, generally about 40 to 50 pairs. When a part without obvious features appears in step 3 of the actual flow, the matching points calibrated in advance for this part are converted in the same manner as the image normalization: the coordinates of the matching points on the original image are multiplied by the corresponding normalization coefficients (obtained in step 1) to obtain the matching points on the normalized images, which are then transformed onto the projected images in the manner of step 2 to obtain the final matching points. For example, calibration points of cameras No. 1 and No. 2, cameras No. 2 and No. 3, ..., cameras No. 15 and No. 16 are collected in advance; when the SIFT feature matching of the images of cameras No. 3 and No. 4 is invalid, the matching points calibrated in advance for this pair are retrieved and put through normalization and cylindrical projection to obtain the required matching points.
Feature point matching is carried out on the tunnel images of a cross section by the above two means. However, a portion of false point pairs exists among the obtained matching points, and fitting them directly would produce a larger error. Therefore, after obtaining the matching points of one tunnel cross section, the RANSAC algorithm is used to filter them. This algorithm is prior art: its principle is to calculate the mathematical model parameters of a sample data set containing abnormal data, thereby obtaining the valid samples. Its basic assumption is that the samples contain correct data as well as abnormal data (outliers), and that for a given set of correct data a model satisfying them can be computed. The principle, illustrated on line fitting, is roughly as follows: randomly select 2 points from the sample set to determine a straight line; the points whose distance to this line is less than a defined threshold form the support set of the line. The random selection is repeated many times, and the line with the largest support set is taken as the fit of the sample point set. Points within the fitting distance threshold are valid points; the others are noise.
By using the RANSAC algorithm to precisely match the points obtained after feature matching (including the matching points extracted based on SIFT features and the calibrated matching points), all final matching points are obtained.
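The restricted SIFT matching and RANSAC filtering described above can be sketched as follows, assuming Python with OpenCV, equally sized adjacent images whose facing halves overlap, Lowe's ratio test for the coarse matching, and cv2.findHomography with the RANSAC flag standing in for the outlier rejection (the function match_overlap and its parameters are illustrative):

```python
import cv2
import numpy as np

def match_overlap(img_left, img_right, overlap_ratio=0.5, ransac_thresh=3.0):
    """SIFT matching restricted to the known overlap, then RANSAC filtering."""
    h, w = img_left.shape[:2]
    ow = int(w * overlap_ratio)
    # Detect only inside the overlapping strips to speed up detection.
    mask_l = np.zeros(img_left.shape[:2], np.uint8);  mask_l[:, w - ow:] = 255
    mask_r = np.zeros(img_right.shape[:2], np.uint8); mask_r[:, :ow] = 255
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, mask_l)
    kp2, des2 = sift.detectAndCompute(img_right, mask_r)
    # Lowe ratio test to discard ambiguous matches.
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC keeps only geometrically consistent pairs (the inlier mask).
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    keep = inliers.ravel().astype(bool)
    return src[keep], dst[keep], H
```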
Step 4: transformation matrix calculation. From the matching points obtained in step 3, a 3 × 3 initial transformation matrix is calculated; this matrix contains scaling, rotation, translation and other transformation parameters. The calculated transformation parameters are simplified to obtain the final transformation parameters between images, i.e. the final transformation matrix between images.
The image points of a spatial point in two camera planes have a one-to-one correspondence; this correspondence is linear and homogeneous and can be described by a third-order matrix, which can be called a homography matrix. Taking any four pairs of correctly matched points that do not lie on the same line, a homography matrix can be calculated by the direct linear transformation. Since mismatched points may still exist and there is a certain deviation in the location of the matching points, it is suggested to first estimate an initial homography matrix with the RANSAC algorithm when calculating the homography, and then to use the L-M (Levenberg-Marquardt) method to minimize the reprojection error between the two images. The optimal homography matrix, i.e. the transformation matrix, is obtained by minimizing the error function. The concrete computation process adopts existing algorithms and is not detailed in the present invention.
After the transformation matrices are obtained as above, note that one cross section contains multiple images; each obtained transformation matrix contains a certain error, the splicing of multiple images propagates this error and makes it grow, and such a splicing result would be unacceptable. Therefore, after the pairwise transformation matrices between images are obtained by the above method, the transformation matrices need to be simplified to some extent. A transformation matrix contains the pairwise transformation relations between images, namely rotation, scaling and translation. In the preceding steps the resolutions of all images to be spliced have already been unified using the normalization coefficients, so the scaling factor here is very small and can be rejected from the transformation matrix with little effect on the spliced image. The tunnel image acquisition system is rigid and the rotation between images is a fixed value, so the rotation factor in the matrix is also constant. The final transformation matrix is therefore simplified to a matrix containing only translation. The translation in the vertical direction is related to the vertical displacement of the optical centres between the cameras of the acquisition system; the relative vertical translation between all cameras and the tunnel model is obtained in the model calibration, and in the actual splicing the pairwise vertical translation between images can be calculated from the real-time laser data. The displacement in the horizontal direction is extracted from the transformation matrix obtained above and corresponds to the actual pairwise overlap of the images. As shown in Fig. 4, the matrix on the left is a general transformation matrix containing the rotation, scaling and translation factors of the image; after the simplification of the rotation and scaling factors it reduces to the matrix on the right, which contains only the translation of the image in the X direction and in the Y direction. In these matrices m11, m12, m21, m22 are the scaling and rotation factors, m13, m23 are the horizontal and vertical displacements, m31, m32 are the deformation quantities in the horizontal and vertical directions, and Δx, Δy are respectively the displacements in the horizontal and vertical directions. After the above simplification, the final transformation matrix is in fact a matrix containing only translation parameters.
Although this simplification has a certain impact on the spliced image, the impact is acceptable in practical engineering applications; the simplification greatly improves the efficiency of the splicing and eliminates much of the adjustment and optimization work on the transformation matrices.
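A minimal sketch of this simplification, assuming Python with NumPy: the estimated homography is reduced to the translation-only matrix of Fig. 4 by keeping only the terms m13 and m23 (the function name simplify_to_translation is illustrative):

```python
import numpy as np

def simplify_to_translation(H):
    """Reduce a 3x3 homography to the translation-only matrix of Fig. 4.

    After resolution normalization and given the rigid camera rig, the
    scaling/rotation terms (m11, m12, m21, m22) are frozen to the identity
    and the perspective terms (m31, m32) to zero, keeping only the
    translations dx = m13 and dy = m23.
    """
    H = np.asarray(H, dtype=float)
    H = H / H[2, 2]                     # normalize so the bottom-right entry is 1
    dx, dy = H[0, 2], H[1, 2]
    return np.array([[1.0, 0.0, dx],
                     [0.0, 1.0, dy],
                     [0.0, 0.0, 1.0]])
```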
Step 5: coordinate system unification. Using the transformation matrices resolved in step 4, the images to be spliced obtained in step 2 are transformed successively into the reference image coordinate system, forming one wide-view image.
Since the tunnel image of one cross section consists of several images, all images need to be unified under one reference coordinate system, and the transformation and position relation of every image under the unified coordinate system must therefore be calculated. What step 4 calculates is the transformation relation between two images; when composing the spliced picture, the homography of every image within the whole spliced image has to be determined. Therefore, in the tunnel image mosaic, taking the cylindrical image sequence obtained in step 2 and a preset reference image as the basis, and denoting the homography matrix of the reference image by H_0, each image of the cylindrical image sequence is transformed successively into the coordinate system determined by the reference image according to the pairwise transformation relations, thus forming an overall homography relation. According to the formula H′_n = H_0·H_1·…·H_n, where H_i denotes the i-th transformation matrix, the homography matrix H′_n of the n-th image coordinate system is the product of the transformation matrices from the 0th transformation matrix H_0 to the n-th transformation matrix H_n. For example, when 16 cameras are provided in the tunnel image acquisition system, n is at most 16.
By the above unification of the image coordinate systems, one image containing the information of all images to be spliced can be formed.
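The chaining of the pairwise transformation matrices into per-image transforms can be sketched as follows, assuming Python with NumPy and that pairwise[i] maps image i+1 into the coordinate system of image i (the function name chain_transforms is illustrative):

```python
import numpy as np

def chain_transforms(pairwise):
    """Accumulate pairwise transforms into per-image transforms (step 5).

    pairwise[i] maps image i+1 into the coordinate system of image i, so the
    transform of image n into the reference (image 0) coordinate system is
    the product H'_n = H_0 · H_1 · ... · H_n, with H_0 the identity here.
    Returns one 3x3 matrix per image of the cylindrical sequence.
    """
    H = np.eye(3)
    out = [H.copy()]
    for Hi in pairwise:
        H = H @ Hi
        out.append(H.copy())
    return out
```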
Step 6: image fusion. The overlapping regions of the images to be spliced are fused to obtain the smooth and seamless reconstructed tunnel image.
Image fusion refers to synthesizing several images of the same scene into one new image according to certain rules, so that the fused image expresses the information more comprehensively while reducing the redundant information of the images as far as possible. Laplacian pyramid fusion is adopted in this tunnel image mosaic; other image fusion methods such as Poisson fusion are also applicable. Laplacian pyramid fusion is mainly divided into two steps: Gaussian pyramid decomposition of the image and Laplacian pyramid decomposition. The original image is defined as layer 0; Gaussian low-pass filtering and downsampling by every other row and column give layer 1 of the Gaussian pyramid, and this step is repeated until no further downsampling is possible, thereby constructing the Gaussian pyramid. Each layer of the constructed Gaussian pyramid is then expanded by interpolation to the size of the layer above it; the difference between the Gaussian pyramid image of a layer and the interpolation-expanded image of the next layer, which has the same size, is the Laplacian image of that layer, and the Laplacian pyramid is constructed in this way. The corresponding layers of the obtained Laplacian pyramids are fused, and upsampling then yields the final fused image. The fused image eliminates the redundant information of the two original images while containing all the information of both. Applying Laplacian pyramid fusion to all tunnel images of a cross section yields one complete cross-section tunnel image without redundant information. Due to differences in illumination and contrast between the images, the spliced image can exhibit uneven brightness; dodging methods such as the Poisson equation can be used to even out the illumination and obtain a spliced image with uniform illumination.
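A sketch of the Laplacian pyramid fusion of two overlapping images, assuming Python with OpenCV, single-channel images already placed on a common canvas, and a blending mask whose Gaussian pyramid weights the two Laplacian bands at every level (the function laplacian_blend and its parameters are illustrative):

```python
import cv2
import numpy as np

def laplacian_blend(a, b, mask, levels=5):
    """Laplacian pyramid fusion of two overlapping images (sketch of step 6).

    a, b : single-channel images of identical size, already on the canvas.
    mask : float32 weights in [0, 1]; 1 where image a should dominate.
    """
    ga, gb, gm = [a.astype(np.float32)], [b.astype(np.float32)], [mask.astype(np.float32)]
    for _ in range(levels):                      # Gaussian pyramids of a, b and the mask
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    bands = []
    for i in range(levels):                      # Laplacian band = layer - expand(next layer)
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        bands.append(gm[i] * la + (1.0 - gm[i]) * lb)
    out = gm[levels] * ga[levels] + (1.0 - gm[levels]) * gb[levels]
    for i in range(levels - 1, -1, -1):          # collapse the blended pyramid
        size = (bands[i].shape[1], bands[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + bands[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```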
The above is the splicing scheme for a single cross-section image. In order to obtain the splicing result of the whole tunnel, each single cross section can be cropped to remove the overlapping part between adjacent cross sections, so that consecutive cross sections form one continuous, non-overlapping tunnel image. The object distance of each camera at shooting time was obtained in step 1. Taking camera No. 1 as an example, the object distance at which camera No. 1 shoots determines its swath, i.e. the width of the actual scene corresponding to one picture collected by camera No. 1; combined with the triggering spacing of the camera, the overlap between consecutive images shot by camera No. 1 in the driving direction can then be determined. Since the acquisition system is fixed, the overlap of the consecutive images of camera No. 1 is also the overlap between two adjacent cross sections. According to the calculated overlap, after the splicing of a cross section is finished, the image height of the cross-section result image is cropped to obtain the final cross-section image.
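The along-track cropping between adjacent cross sections can be sketched as follows, assuming the image rows correspond to the driving direction and that the swath and trigger spacing are given in the same unit (the function crop_section is illustrative):

```python
def crop_section(section_img, swath_m, trigger_spacing_m):
    """Remove the along-track overlap between adjacent cross sections (sketch).

    swath_m           : width of the real scene covered by one exposure of the
                        reference camera in the driving direction (from the
                        object distance of step 1).
    trigger_spacing_m : distance driven between two exposures.
    The fraction of the section image overlapping the next section is
    (swath_m - trigger_spacing_m) / swath_m; that many rows are cropped off.
    """
    h = section_img.shape[0]
    overlap = max(swath_m - trigger_spacing_m, 0.0) / swath_m
    keep = int(round(h * (1.0 - overlap)))
    return section_img[:keep]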
In specific implementation, the above flow can be realized in computer software, and the corresponding system can also be provided in a modular manner. The embodiment of the present invention provides a tunnel image splicing system for splicing the images collected by each camera in a tunnel image acquisition system; the tunnel image acquisition system comprises multiple cameras and a laser scanner, and during the driving of the collection vehicle all cameras expose simultaneously to obtain the tunnel images of the corresponding cross section. The system splices the tunnel images obtained by a single exposure of all cameras at a given cross section and comprises the following modules:
Image normalization module, configured to calculate, based on the data of the laser scanner and the relative position relation between camera and laser scanner obtained by calibration in advance, the object distance corresponding to each camera, extract the normalization coefficients, and normalize the image collected by each camera to obtain images to be spliced with consistent resolution;
the extraction of the normalization coefficients and the normalization of the image collected by each camera being implemented as follows,
first, the resolution of the image shot by each camera is obtained with the following formula,
r = (dpi × f / s) × 1000
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots;
then, after obtaining the resolution of the image shot by each camera, the lowest of all the resolutions is taken as the reference resolution and the corresponding camera is designated as the reference camera; the ratio of the resolution of every other camera's image to this reference resolution is calculated, and the ratio obtained is the normalization coefficient;
finally, according to the calculated normalization coefficients, the width and height of every other camera's image are multiplied by the corresponding normalization coefficient to obtain the corresponding normalized image;
Cylindrical projection module, configured to project all images to be spliced obtained by the image normalization module onto a cylindrical surface according to the cylindrical projection formula, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera;
the inverse of the cylindrical projection formula being as follows,
x = f·tan(x′/s)
y = (y′·f/s)·√(1 + tan²(x′/s))
where (x, y) denotes the coordinates of a point on the original image, (x′, y′) denotes the coordinates of the point after projection, f denotes the focal length of the camera lens, and s denotes the projection radius;
Feature matching module, configured to obtain matching points by SIFT feature matching, obtain matching points from calibration for the parts where SIFT feature matching is invalid, and reject the interference of mismatched point pairs with the RANSAC algorithm;
Transformation matrix module, configured to calculate a 3 × 3 initial transformation matrix from the matching points obtained by the feature matching module, simplify it and extract the translation parameters to obtain the final transformation matrix between images;
Coordinate system unification module, configured to transform the images to be spliced obtained by the cylindrical projection module successively into the reference image coordinate system using the final transformation matrices obtained by the transformation matrix module, forming one wide-view image;
Image fusion module, configured to fuse the overlapping regions of the images to be spliced to obtain the smooth and seamless reconstructed tunnel image.
Further, an output module can be provided, configured to remove, according to the splicing results of the images of single cross sections, the overlapping parts between adjacent cross sections, to obtain the splicing result of the whole tunnel image.
For the specific implementation of each module, reference can be made to the corresponding steps, which are not detailed again in the present invention.
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art can make various modifications, additions or substitutions to the described specific embodiments without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (10)

1. A tunnel image splicing method, for splicing the images collected by each camera in a tunnel image acquisition system, the tunnel image acquisition system comprising multiple cameras and a laser scanner, wherein during the driving of the collection vehicle all cameras expose simultaneously to obtain the tunnel images of the corresponding cross section, characterized in that the tunnel images obtained by a single exposure of all cameras at a given cross section are spliced, the method comprising the following steps:
Step 1, image normalization: based on the data of the laser scanner and the relative position relation between camera and laser scanner obtained by calibration in advance, calculate the object distance corresponding to each camera, extract the normalization coefficients, and normalize the image collected by each camera to obtain images to be spliced with consistent resolution;
the extraction of the normalization coefficients and the normalization of the image collected by each camera being implemented as follows,
first, the resolution of the image shot by each camera is obtained with the following formula,
r = (dpi × f / s) × 1000
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots;
then, after obtaining the resolution of the image shot by each camera, a reference resolution is chosen and the corresponding camera is designated as the reference camera; the ratio of the resolution of every other camera's image to this reference resolution is calculated, and the ratio obtained is the normalization coefficient;
finally, according to the calculated normalization coefficients, the width and height of every other camera's image are multiplied by the corresponding normalization coefficient to obtain the corresponding normalized image;
Step 2, cylindrical projection: according to the cylindrical projection formula, project all images to be spliced obtained in step 1 onto a cylindrical surface, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera;
the inverse of the cylindrical projection formula being as follows,
x = f·tan(x′/s)
y = (y′·f/s)·√(1 + tan²(x′/s))
where (x, y) denotes the coordinates of a point on the original image, (x′, y′) denotes the coordinates of the point after projection, f denotes the focal length of the camera lens, and s denotes the projection radius;
Step 3, feature matching, comprising obtaining matching points by feature matching, obtaining matching points from calibration for the parts where feature matching is invalid, and rejecting the interference of mismatched point pairs;
Step 4, transformation matrix calculation: calculate a 3 × 3 initial transformation matrix from the matching points obtained in step 3, simplify it and extract the translation parameters to obtain the final transformation matrix between images;
Step 5, coordinate system unification: using the final transformation matrices obtained in step 4, transform the images to be spliced obtained in step 2 successively into the reference image coordinate system to form one wide-view image;
Step 6, image fusion: fuse the overlapping regions of the images to be spliced to obtain the smooth and seamless reconstructed tunnel image.
2. The tunnel image splicing method according to claim 1, characterized in that: in step 3, obtaining matching points from calibration is implemented by establishing a tunnel model in advance to provide a calibration field, pasting calibration sheets on the tunnel model and collecting images, and extracting feature points from the collected adjacent images to obtain calibration points; when a part where feature matching is invalid appears, the calibration points corresponding to this part are processed with the corresponding normalization coefficients obtained in step 1 and then transformed onto the cylindrically projected image in the manner of step 2 to obtain the final matching points.
3. The tunnel image splicing method according to claim 1, characterized in that: in step 4, the initial transformation matrix is a homography matrix.
4. The tunnel image splicing method according to claim 1, characterized in that: in step 6, image fusion adopts Laplacian pyramid fusion.
5. The tunnel image splicing method according to claim 1, 2, 3 or 4, characterized in that: according to the splicing results of the images of single cross sections, the overlapping parts between adjacent cross sections are removed to obtain the splicing result of the whole tunnel image.
6. A tunnel image splicing system, for splicing the images collected by each camera in a tunnel image acquisition system, the tunnel image acquisition system comprising multiple cameras and a laser scanner, wherein during the driving of the collection vehicle all cameras expose simultaneously to obtain the tunnel images of the corresponding cross section, characterized in that the tunnel images obtained by a single exposure of all cameras at a given cross section are spliced, the system comprising the following modules:
an image normalization module, configured to calculate, based on the data of the laser scanner and the relative position relation between camera and laser scanner obtained by calibration in advance, the object distance corresponding to each camera, extract the normalization coefficients, and normalize the image collected by each camera to obtain images to be spliced with consistent resolution;
the extraction of the normalization coefficients and the normalization of the image collected by each camera being implemented as follows,
first, the resolution of the image shot by each camera is obtained with the following formula,
r = (dpi × f / s) × 1000
where r is the resolution of the image, dpi is the size of each pixel on the camera CCD, f is the focal length of the camera, and s is the object distance when the camera shoots;
then, after obtaining the resolution of the image shot by each camera, a reference resolution is chosen and the corresponding camera is designated as the reference camera; the ratio of the resolution of every other camera's image to this reference resolution is calculated, and the ratio obtained is the normalization coefficient;
finally, according to the calculated normalization coefficients, the width and height of every other camera's image are multiplied by the corresponding normalization coefficient to obtain the corresponding normalized image;
a cylindrical projection module, configured to project all images to be spliced obtained by the image normalization module onto a cylindrical surface according to the cylindrical projection formula, where the focal length and projection radius of the cylindrical projection are those corresponding to the reference camera;
the inverse of the cylindrical projection formula being as follows,
x = f·tan(x′/s)
y = (y′·f/s)·√(1 + tan²(x′/s))
where (x, y) denotes the coordinates of a point on the original image, (x′, y′) denotes the coordinates of the point after projection, f denotes the focal length of the camera lens, and s denotes the projection radius;
a feature matching module, configured to obtain matching points by feature matching, obtain matching points from calibration for the parts where feature matching is invalid, and reject the interference of mismatched point pairs;
a transformation matrix module, configured to calculate a 3 × 3 initial transformation matrix from the matching points obtained by the feature matching module, simplify it and extract the translation parameters to obtain the final transformation matrix between images;
a coordinate system unification module, configured to transform the images to be spliced obtained by the cylindrical projection module successively into the reference image coordinate system using the final transformation matrices obtained by the transformation matrix module, forming one wide-view image;
an image fusion module, configured to fuse the overlapping regions of the images to be spliced to obtain the smooth and seamless reconstructed tunnel image.
7. The tunnel image splicing system according to claim 6, characterized in that: in the feature matching module, obtaining matching points from calibration is implemented by establishing a tunnel model in advance to provide a calibration field, pasting calibration sheets on the tunnel model and collecting images, and extracting feature points from the collected adjacent images to obtain calibration points; when a part where feature matching is invalid appears, the calibration points corresponding to this part are processed with the corresponding normalization coefficients obtained by the image normalization module and then transformed onto the cylindrically projected image in the manner of the cylindrical projection module to obtain the final matching points.
8. The tunnel image splicing system according to claim 6, characterized in that: in the transformation matrix module, the initial transformation matrix is a homography matrix.
9. The tunnel image splicing system according to claim 6, characterized in that: in the image fusion module, image fusion adopts Laplacian pyramid fusion.
10. The tunnel image splicing system according to claim 6, 7, 8 or 9, characterized in that: an output module is provided, configured to remove, according to the splicing results of the images of single cross sections, the overlapping parts between adjacent cross sections, to obtain the splicing result of the whole tunnel image.
CN201610056749.7A 2016-01-27 2016-01-27 tunnel image splicing method and system Active CN105550995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610056749.7A CN105550995B (en) 2016-01-27 2016-01-27 tunnel image splicing method and system


Publications (2)

Publication Number Publication Date
CN105550995A true CN105550995A (en) 2016-05-04
CN105550995B CN105550995B (en) 2019-01-11

Family

ID=55830171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610056749.7A Active CN105550995B (en) 2016-01-27 2016-01-27 tunnel image splicing method and system

Country Status (1)

Country Link
CN (1) CN105550995B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101110122A (en) * 2007-08-31 2008-01-23 北京工业大学 Large cultural heritage picture pattern split-joint method based on characteristic
JP2014010522A (en) * 2012-06-28 2014-01-20 Nikon Corp Digital camera and image determination program
CN104463778A (en) * 2014-11-06 2015-03-25 北京控制工程研究所 Panoramagram generation method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JAERVE, INA 等: "Software automation impact on aerial triangulation accuracy and geometric quality of orthophoto mosaic based on UltraCAM D sample images", 《9TH INTERNATIONAL CONFERENCE ENVIRONMENTAL ENGINEERING》 *
ZENG LIN: "Research, Implementation and Application of Image Stitching Technology" (图像拼接技术的研究、实现与应用), China Doctoral Dissertations Full-text Database, Information Science and Technology *
QI CHI et al.: "Panorama Stitching of Camera Image Sequences" (摄像机图像序列的全景图拼接), Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报) *
ZOU CHAOYANG: "Research on Real-time Video Surveillance Technology Based on Multi-camera Panoramic Image Stitching" (基于多摄像头全景图像拼接的实时视频监控技术研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109478318A (en) * 2016-09-08 2019-03-15 三星电子株式会社 360-degree video stitching
CN109478318B (en) * 2016-09-08 2022-05-24 三星电子株式会社 360-degree video stitching
CN106897962A (en) * 2016-12-26 2017-06-27 中国科学院长春光学精密机械与物理研究所 Disc projection and splicing method for wide-field space Earth-observation images
CN106897962B (en) * 2016-12-26 2019-06-25 中国科学院长春光学精密机械与物理研究所 Disc projection and splicing method for wide-field space Earth-observation images
CN107330941A (en) * 2017-05-24 2017-11-07 西安应用光学研究所 Precision target system and method for near-field calibration of wide-field cameras
CN107423681A (en) * 2017-05-27 2017-12-01 上海骁达信息科技有限公司 Automatic vehicle identification method and system
CN109120886A (en) * 2017-06-26 2019-01-01 王芮 Complete picture fusion method, device and system
CN108109112B (en) * 2018-01-16 2021-07-20 上海同岩土木工程科技股份有限公司 Tunnel layout image splicing parameter processing method based on SIFT features
CN108109112A (en) * 2018-01-16 2018-06-01 上海同岩土木工程科技股份有限公司 Tunnel layout image splicing parameter processing method based on SIFT features
CN109087244B (en) * 2018-07-26 2023-04-18 深圳禾苗通信科技有限公司 Panoramic image splicing method, intelligent terminal and storage medium
CN109087244A (en) * 2018-07-26 2018-12-25 贵州火星探索科技有限公司 Panoramic image splicing method, intelligent terminal and storage medium
CN109741271B (en) * 2018-12-14 2021-11-19 陕西高速公路工程试验检测有限公司 Detection method and system
CN109741271A (en) * 2018-12-14 2019-05-10 陕西高速公路工程试验检测有限公司 Detection method and system
CN109801216A (en) * 2018-12-20 2019-05-24 武汉武大卓越科技有限责任公司 Fast splicing method for tunnel inspection images
CN109919839A (en) * 2019-01-18 2019-06-21 武汉武大卓越科技有限责任公司 Tunnel image splicing method
CN110033407B (en) * 2019-03-29 2020-10-30 华中科技大学 Shield tunnel surface image calibration method, splicing method and splicing system
CN110033407A (en) * 2019-03-29 2019-07-19 华中科技大学 Shield tunnel surface image calibration method, splicing method and splicing system
CN110097504A (en) * 2019-05-13 2019-08-06 招商局重庆交通科研设计院有限公司 Image vision acquisition system for a tunnel inspection robot
CN110276286B (en) * 2019-06-13 2022-03-04 中国电子科技集团公司第二十八研究所 Embedded panoramic video stitching system based on TX2
CN110276286A (en) * 2019-06-13 2019-09-24 中国电子科技集团公司第二十八研究所 Embedded panoramic video splicing system based on TX2
CN110381265A (en) * 2019-07-12 2019-10-25 武汉恒新动力科技有限公司 Method, device and computer-readable storage medium for acquiring images of the inner wall of a cylindrical workpiece
CN110381265B (en) * 2019-07-12 2021-08-31 武汉恒新动力科技有限公司 Method and device for acquiring images of inner wall of cylindrical workpiece and computer-readable storage medium
CN110880159A (en) * 2019-11-05 2020-03-13 浙江大华技术股份有限公司 Image splicing method and device, storage medium and electronic device
CN113310987A (en) * 2020-02-26 2021-08-27 保定市天河电子技术有限公司 Tunnel lining surface detection system and method
CN111429430B (en) * 2020-03-23 2022-03-25 同济大学 Tunnel detection vehicle size mapping relation calculation method based on machine vision
CN111429430A (en) * 2020-03-23 2020-07-17 同济大学 Tunnel detection vehicle size mapping relation calculation method based on machine vision
CN111507894A (en) * 2020-04-17 2020-08-07 浙江大华技术股份有限公司 Image splicing processing method and device
CN111507894B (en) * 2020-04-17 2023-06-13 浙江大华技术股份有限公司 Image stitching processing method and device
CN111524070A (en) * 2020-04-21 2020-08-11 南京航空航天大学 Tunnel image acquisition device based on preset area and splicing method
CN112017114A (en) * 2020-06-08 2020-12-01 武汉精视遥测科技有限公司 Method and system for splicing full image by using half image in tunnel detection
CN112017114B (en) * 2020-06-08 2023-08-04 武汉精视遥测科技有限公司 Method and system for splicing a full image from half images in tunnel detection
CN112381009A (en) * 2020-11-18 2021-02-19 安徽国钜工程机械科技有限公司 Tunnel image acquisition and image splicing method of linear array camera
CN112907540A (en) * 2021-02-22 2021-06-04 浙江大华技术股份有限公司 Splicing abnormality detection method, device, equipment and medium

Also Published As

Publication number Publication date
CN105550995B (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN105550995A (en) Tunnel image splicing method and system
Fathi et al. Automated as-built 3D reconstruction of civil infrastructure using computer vision: Achievements, opportunities, and challenges
CN110487216B (en) Fringe projection three-dimensional scanning method based on convolutional neural network
CN111462329B (en) Three-dimensional reconstruction method of unmanned aerial vehicle aerial image based on deep learning
CN100432836C (en) Method for realizing two-dimensional panoramic true imaging
KR100450469B1 (en) Image Combination Method and System Using Parallax-based Technology
WO2019100933A1 (en) Method, device and system for three-dimensional measurement
CN105160702A (en) Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
CN111080709B (en) Multispectral stereo camera self-calibration algorithm based on track feature registration
CN101840570A (en) Fast image splicing method
CN104159026A (en) System for realizing 360-degree panoramic video
CN105931185A (en) Automatic splicing method for multi-view images
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN104881869A (en) Real-time panorama tracking and splicing method for mobile platforms
CN115937288A (en) Three-dimensional scene model construction method for transformer substation
CN106856000A (en) Vehicle-mounted panoramic image seamless splicing processing method and system
CN105488777A (en) System and method for generating panoramic picture in real time based on moving foreground
Haala et al. High density aerial image matching: State-of-the-art and future prospects
CN105894448A (en) Mask matrix generation method and device and parking image synthesis method and device
CN107730558A (en) 360° vehicle driving recording system and method based on dual fisheye cameras
Liu et al. UAV image mosaic for road traffic accident scene
Xu et al. UAV-based bridge geometric shape measurement using automatic bridge component detection and distributed multi-view reconstruction
US11609332B2 (en) Method and apparatus for generating image using LiDAR
CN105335988A (en) Hierarchical processing based sub-pixel center extraction method
Li et al. Fusion of aerial, MMS and backpack images and point clouds for optimized 3D mapping in urban areas

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province
Patentee after: Wuhan Optical Valley excellence Technology Co.,Ltd.
Address before: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province
Patentee before: Wuhan Wuda excellence Technology Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address after: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province
Patentee after: Wuhan Wuda excellence Technology Co.,Ltd.
Address before: 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province
Patentee before: WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co.,Ltd.