CN107665479A - Feature extraction method, panorama stitching method, and apparatus, device, and computer-readable storage medium therefor - Google Patents

Feature extraction method, panorama stitching method, and apparatus, device, and computer-readable storage medium therefor

Info

Publication number
CN107665479A
CN107665479A
Authority
CN
China
Prior art keywords
image
characteristic point
point
matrix
calculated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710790764.9A
Other languages
Chinese (zh)
Inventor
王健宗
王义文
刘奡智
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201710790764.9A priority Critical patent/CN107665479A/en
Priority to PCT/CN2017/102871 priority patent/WO2019047284A1/en
Publication of CN107665479A publication Critical patent/CN107665479A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

Embodiments of the present invention provide a feature extraction method, a panorama stitching method, and an apparatus, device, and computer-readable storage medium therefor. The panorama stitching method includes: receiving input images to be stitched; computing feature points from the images and performing feature point matching; computing a transformation matrix from the feature points in the images; transforming the images with the transformation matrix to obtain transformed images; projecting the transformed images to complete the stitching; and blending the stitched images to obtain the blended panoramic image. Computing feature points from the images and performing feature point matching means computing keypoints and their feature vectors with the feature extraction method, the keypoints being the feature points, and matching feature points across images using the feature points in the images and their feature vectors. The embodiments of the present invention improve the speed of feature extraction, of feature point matching, and of panorama stitching.

Description

Feature extraction method, panorama stitching method, and apparatus, device, and computer-readable storage medium therefor
Technical field
The present invention relates to the field of information processing, and in particular to a feature extraction method, a panorama stitching method, and an apparatus, device, and computer-readable storage medium therefor.
Background technology
In many commercial exhibition projects, the use of panoramic virtual-reality technology has become a development trend and a new line of thinking. When a panorama is built by stitching, both the accuracy and the speed of the stitching matter greatly. Some existing panorama stitching algorithms are neither accurate nor fast enough, which limits their practicality.
The content of the invention
The embodiments of the invention provide a kind of feature extracting method, panorama mosaic method and its device, equipment and computer Readable storage medium storing program for executing, this feature extracting method is used in panorama mosaic method, on the premise of precision is ensured, improves panorama The speed of splicing, practicality are stronger.
In a first aspect, embodiments of the present invention provide the following methods:
A feature extraction method, the method including:
receiving an input image;
generating the scale space of the image;
detecting the extreme points in the scale space of the image;
computing keypoints from the extreme points;
computing the orientation parameter of each keypoint;
on the scale-space layer where each keypoint lies, obtaining a circular region centered on the keypoint's position;
dividing the circular region into N sector sub-regions, N being a natural number greater than 1;
computing the accumulated gradient values assigned to M directions in each sector sub-region, M being the number of directions in the orientation parameter and a natural number greater than 1; and
determining the feature vector of the keypoint from the N*M accumulated gradient values so computed.
A panorama stitching method, the method including:
receiving input images to be stitched;
computing feature points from the images and performing feature point matching;
computing a transformation matrix from the feature points in the images;
transforming the images with the transformation matrix to obtain transformed images;
projecting the transformed images to complete the stitching; and
blending the stitched images to obtain the blended panoramic image;
where computing feature points from the images and performing feature point matching means computing keypoints and their feature vectors with the feature extraction method above, the keypoints being the feature points, and matching feature points across images using the feature points in the images and their feature vectors.
In a second aspect, embodiments of the present invention provide an apparatus including units for performing the feature extraction method of the first aspect, or units for performing the panorama stitching method of the first aspect.
In a third aspect, embodiments of the present invention further provide a device including a memory and a processor connected to the memory;
the memory stores program data implementing feature extraction, and the processor runs the program data stored in the memory to perform the feature extraction method of the first aspect; or
the memory stores program data implementing panorama stitching, and the processor runs the program data stored in the memory to perform the panorama stitching method of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing one or more pieces of program data, executable by one or more processors to implement the feature extraction method or the panorama stitching method of the first aspect.
In embodiments of the present invention, input images to be stitched are received; feature points are computed from the images and matched; a transformation matrix is computed from the feature points in the images; the images are transformed with the transformation matrix to obtain transformed images; the transformed images are projected to complete the stitching; and the stitched images are blended to obtain the blended panoramic image. When the feature points of the images are computed, a circular region centered on each keypoint's position is obtained on the scale-space layer where the keypoint lies; the circular region is divided into N sector sub-regions; the accumulated gradient values over M directions are computed for each sector sub-region, M being the number of directions in the orientation parameter; and the keypoint's feature vector is determined from the N*M accumulated gradient values so computed. When computing a feature point's feature vector, the embodiments of the present invention work not in a square region around the feature point but in a circular region around it. On the one hand, within a circular region no coordinate axis needs to be rotated to the feature point's orientation in order to satisfy rotation invariance; on the other hand, the amount of computation is reduced and the efficiency of the computation improved. Computing the feature points and their feature vectors and then matching on both improves the speed of matching and, in turn, the speed of panorama stitching.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Clearly, the drawings described below show some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a feature extraction method provided by an embodiment of the present invention;
Fig. 2 is a schematic sub-flowchart of a feature extraction method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a computed feature vector provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a panorama stitching method provided by an embodiment of the present invention;
Fig. 5 is a schematic sub-flowchart of a panorama stitching method provided by an embodiment of the present invention;
Fig. 6 is a schematic sub-flowchart of a panorama stitching method provided by another embodiment of the present invention;
Fig. 7 is a schematic sub-flowchart of a panorama stitching method provided by a further embodiment of the present invention;
Fig. 8 is a schematic flowchart of a panorama stitching method provided by another embodiment of the present invention;
Fig. 9 is a schematic block diagram of a feature extraction apparatus provided by an embodiment of the present invention;
Fig. 10 is a schematic block diagram of a feature vector determining unit provided by an embodiment of the present invention;
Fig. 11 is a schematic block diagram of a panorama stitching apparatus provided by an embodiment of the present invention;
Fig. 12 is a schematic block diagram of a transformation matrix computing unit provided by an embodiment of the present invention;
Fig. 13 is a schematic block diagram of a transformation matrix computing unit provided by another embodiment of the present invention;
Fig. 14 is a schematic block diagram of a transformation matrix computing unit provided by a further embodiment of the present invention;
Fig. 15 is a schematic block diagram of a panorama stitching apparatus provided by another embodiment of the present invention;
Fig. 16 is a schematic block diagram of a feature extraction device provided by an embodiment of the present invention;
Fig. 17 is a schematic block diagram of a panorama stitching device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Clearly, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
It should be understood that the terms "comprising" and "including", when used in this specification and the appended claims, indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the term "and/or" used in the description of the present invention and in the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
It should further be understood that although the terms first, second, and so on may be used herein to describe various elements, these elements are not limited by the terms, which serve only to distinguish the elements from one another. For example, without departing from the scope of the invention, the first rotation matrix could be called the second rotation matrix and, similarly, the second rotation matrix could be called the first rotation matrix. The first rotation matrix and the second rotation matrix are both rotation matrices, but they are not the same rotation matrix.
Fig. 1 is a schematic flowchart of a feature extraction method provided by an embodiment of the present invention. The method includes S101-S109.
S101: receive an input image.
S102: generate the scale space of the image. A scale space is the sequence of representations of the image at multiple scales obtained by continuously varying a scale parameter. Further processing the image in its scale space makes it easier to capture the image's essential features. A scale space satisfies translation invariance, scale invariance, Euclidean invariance, and affine invariance. The scale space of an image is defined as the convolution of the original image with a Gaussian function of varying scale. In practice the scale space of an image is represented by a Gaussian pyramid, whose construction has two parts: applying Gaussian blur to the image at different scales, and downsampling the image. This yields the original image together with versions of it at different scales, i.e. the scale space of the image is generated.
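The two parts just described, repeated blurring followed by downsampling, can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: the octave count, scale count, and base sigma of 1.6 are conventional SIFT-style assumptions, and the separable blur is a minimal stand-in for a library filter.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    # Discrete 1-D Gaussian, normalized to sum to 1.
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Separable convolution: blur rows, then columns (same size, edge padding).
    k = gaussian_kernel_1d(sigma)
    pad = len(k) // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, mode='valid'), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, mode='valid'), 0, tmp)

def gaussian_pyramid(img, n_octaves=3, n_scales=3, sigma0=1.6):
    # Each octave: the image blurred at several scales; the next octave
    # starts from the image downsampled by a factor of 2.
    pyramid = []
    base = img.astype(float)
    for _ in range(n_octaves):
        octave = [gaussian_blur(base, sigma0 * (2 ** (s / n_scales)))
                  for s in range(n_scales)]
        pyramid.append(octave)
        base = base[::2, ::2]  # downsample for the next octave
    return pyramid
```

Subtracting adjacent layers inside one octave of such a pyramid yields the difference-of-Gaussian images used in the next step.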
S103: detect the extreme points in the scale space of the image. Within each octave of the Gaussian pyramid, vertically adjacent layers are subtracted to obtain difference-of-Gaussian images, on which the extreme points of the difference-of-Gaussian function are sought. Specifically, each pixel is compared with all of its neighbors, namely the adjacent points on the scale layer where the pixel lies and the corresponding points on the neighboring scale layers, to see whether the pixel is larger or smaller than all of them; the point whose pixel value is the maximum or minimum among its neighbors is taken as an extreme point. For example, in a 3*3 pixel region, with the middle pixel as the test point, the test point is compared with its 8 neighbors at the same scale and the 9 × 2 corresponding points at the neighboring scales, 26 points in total; if its pixel value is the maximum or minimum among the 26, it is chosen as an extreme point. Each extreme point carries the scale of the scale-space layer where it lies and its position.
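The 26-neighbor comparison can be expressed as a small predicate over a 3*3*3 block of the difference-of-Gaussian stack. A minimal sketch, under the assumption that the block is indexed (scale, row, column); the function name is illustrative, not from the patent.

```python
import numpy as np

def is_extremum(dog_cube):
    # dog_cube: 3x3x3 block taken from three adjacent DoG layers.
    # The center is a candidate extreme point if it is strictly larger
    # (or strictly smaller) than all 26 neighbors.
    center = dog_cube[1, 1, 1]
    others = np.delete(dog_cube.ravel(), 13)  # the 26 neighbors
    return bool(center > others.max() or center < others.min())
```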
S104: compute keypoints from the extreme points. An extreme point of the discrete space is not necessarily a true extremum. To improve stability, the true extrema must be located from the discrete extreme points, and unstable edge extrema rejected. Specifically, the scale-space difference-of-Gaussian function is curve-fitted.
S105: compute the orientation parameter of each keypoint. The orientation parameter comprises M directions, M being a natural number greater than 1. For each detected keypoint, the gradient magnitudes and directions of the pixels in a certain neighborhood of the Gaussian pyramid image where it lies are collected. Specifically, a histogram is used to gather the gradients and directions of the neighborhood pixels; the histogram divides the 0-360 degree direction range into 36 bins of 10 degrees each. The M bins with the largest gradient values are chosen, and the directions of those M bins become the directions of the keypoint. Preferably, M = 8.
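The 36-bin, magnitude-weighted histogram and the selection of the M strongest bins can be sketched as follows; a minimal sketch under the assumption that gradient magnitudes and angles (in degrees) have already been computed for the neighborhood, with the helper name chosen here for illustration.

```python
import numpy as np

def dominant_orientations(magnitudes, angles_deg, m=8, n_bins=36):
    # Histogram gradient directions into 36 bins of 10 degrees each,
    # weighted by gradient magnitude, and return the directions (bin-start
    # angles in degrees) of the M largest bins.
    bin_width = 360 // n_bins
    bins = (angles_deg.astype(int) % 360) // bin_width
    hist = np.bincount(bins.ravel(), weights=magnitudes.ravel(),
                       minlength=n_bins)
    top = np.argsort(hist)[::-1][:m]
    return sorted(int(b) * bin_width for b in top)
```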
S106: on the scale-space layer where each keypoint lies, obtain the circular region centered on the keypoint's position. Each keypoint carries the scale of the scale-space layer where it lies and its position. Because the keypoints found may lie on different scale-space layers, the keypoint's scale must be known so that, on the scale-space layer of that scale, the circular region centered on the keypoint's position can be obtained. The radius of the circular region is tied to the resolution of the input image, which reflects the image's quality: the higher the resolution of the input image, the smaller the radius; the lower the resolution, the larger the radius. This ensures that useful feature information is extracted. Preferably, the diameter of the circular region equals the diagonal length of the square 4*4 window, on the keypoint's scale-space layer, of the original SIFT algorithm (the algorithm proposed by David Lowe). In other embodiments, the radius of the circular region may also be some other fixed value.
S107: divide the circular region into N sector sub-regions, where N is a natural number greater than 1. Preferably, N = 8.
S108: compute the accumulated gradient values assigned to M directions in each sector sub-region, where M is the number of directions in the orientation parameter and a natural number greater than 1. Preferably, M = 8. The gradient magnitudes and directions of the pixels in each sector sub-region are computed; the gradient values of the pixels in the sub-region are assigned to the 8 directions, and the accumulated gradient value on each of the 8 directions is tallied. The specific formula for the accumulated gradient value is the same as in the original SIFT algorithm and is not repeated here. With N = 8 and M = 8, the extracted feature vector represents the keypoint's features very well.
S109: determine the keypoint's feature vector from the N*M accumulated gradient values so computed. Specifically, as shown in Fig. 2, S109 includes S201-S202. S201: sort the computed N*M accumulated gradient values in descending order; sorting the computed accumulated gradient values makes feature point (keypoint) matching easier. S202: normalize the sorted accumulated gradient values to obtain the keypoint's feature vector; the purpose of the normalization is to remove the influence of illumination. In a specific implementation the order of sorting and normalizing is not limited: the accumulated gradient values can be sorted first and then normalized, or normalized first and then sorted.
Fig. 3 is a schematic diagram of a computed feature vector. As shown in Fig. 3, the circular region 30 is divided into 8 sector sub-regions 31, where 32 is the length of the circular region's diameter, and the feature 33 of each sector sub-region is obtained by assigning the gradient values of the pixels in that sub-region to 8 directions and tallying the accumulated gradient value on each of the 8 directions. The keypoint's feature vector comprises the accumulated gradient values on the 8 directions of the 8 sector sub-regions; note that these accumulated gradient values have been normalized and sorted in descending order.
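The descriptor of S106-S109 can be sketched end to end: a circular window split into 8 sectors, 8 direction bins per sector, then descending sort and normalization as in S201-S202. This is a sketch under assumptions the text does not fix (the circular region is taken as the inscribed circle of a square patch, bins are uniform, normalization is L2); the helper name is illustrative.

```python
import numpy as np

def circular_descriptor(patch, n_sectors=8, n_dirs=8):
    # 64-dim descriptor: split the circular window around the patch center
    # into 8 sectors; per sector, accumulate gradient magnitudes over 8
    # direction bins; sort the 64 values descending and L2-normalize.
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = min(cy, cx)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    grad_dir = np.arctan2(gy, gx) % (2 * np.pi)            # gradient direction
    ys, xs = np.mgrid[0:h, 0:w]
    pos_ang = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)   # angle about center
    inside = np.hypot(xs - cx, ys - cy) <= radius
    sector = np.minimum((pos_ang / (2 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)
    dbin = np.minimum((grad_dir / (2 * np.pi) * n_dirs).astype(int),
                      n_dirs - 1)
    acc = np.zeros((n_sectors, n_dirs))
    np.add.at(acc, (sector[inside], dbin[inside]), mag[inside])
    vec = np.sort(acc.ravel())[::-1]                       # descending order
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

With n_sectors = n_dirs = 8 the result is the 64-dimensional vector described above, half the 128 dimensions of the original SIFT descriptor.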
In the method embodiment above, when computing a keypoint's feature vector, the 4*4 region of the original SIFT algorithm is simply replaced by a circular region; there is no need to determine the keypoint's principal direction within a 4*4 region, and no rotation of the original SIFT 4*4 region is needed to satisfy rotation invariance. The circular region is divided into 8 blocks, and the gradient values assigned to 8 directions are accumulated in each block. The feature vector of each keypoint thus computed shrinks from 4*4*8 (8 directions) dimensions to 8*8 (8 directions) dimensions, i.e. from the original 128 dimensions to 64, halving the dimensionality of each keypoint's feature vector. This greatly increases the speed of image feature extraction while preserving accuracy.
Fig. 4 is a schematic flowchart of a panorama stitching method provided by an embodiment of the present invention. The panorama stitching method includes S401-S406.
S401: receive the input images to be stitched. The images input to the panorama stitching method need no distortion correction before feature point extraction, for example images shot with an ordinary DSLR camera or mobile phone camera. Note that, depending on actual requirements, the input images to be stitched may receive some preprocessing, for example removal of interfering noise.
S402: compute feature points from the input images to be stitched and perform feature point matching. Specifically, the feature points and the feature vectors of the feature points are computed by the embodiments shown in Fig. 1 and Fig. 2, not repeated here. Feature point matching is then performed on the computed feature points and their feature vectors, so that the pairs of mutually matching images can be found; it is understood that mutually matching images share the same feature points. Because the embodiments shown in Fig. 1 and Fig. 2 greatly increase the speed of image feature extraction while preserving accuracy, computing each image's feature points and feature vectors with those embodiments and then matching feature points between images greatly increases the speed of image matching.
S403: compute the transformation matrix from the feature points on the images. The accuracy of the transformation matrix directly affects the result of the panorama stitching.
Specifically, as shown in Fig. 5, S403 includes S501-S503. S501: compute a first rotation matrix from the feature points on the images using the least squares method; the first rotation matrix includes rotation parameters. Specifically, the feature points of each pair of mutually matching images are input, and the rotation matrix between each such pair is computed with the least squares method; once the rotation matrices between all pairs of mutually matching images have been computed, they are adjusted to the same standard, and the unified rotation matrices after adjustment are called the first rotation matrix. A rotation matrix expresses the camera parameters between images. "Adjusted to the same standard" can be understood as follows: taking the scale parameter as an example, if the scale of the rotation matrix of the first pair of mutually matching images is 1 and that of the second pair is 1.5, then the scale of the second pair's rotation matrix can be adjusted to equal the first's, taking the first pair's scale as the reference; or, taking the second pair's scale as the reference, the first's can be changed to 1.5; or both scales can be adjusted to 3. Other parameters, such as the rotation parameters and the translation parameters, likewise need to be converted in this way. S502: compute the homography matrix from the feature points on the images. The parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters. Specifically, the feature points of each pair of mutually matching images are input and the homography matrix is computed by the principle of epipolar geometry; for the idea of the computation, refer to the description of the first rotation matrix. S503: replace the rotation parameters in the homography matrix with the rotation parameters of the computed first rotation matrix to obtain the transformation matrix. By substituting the rotation parameters of the first rotation matrix computed with the least squares method into the homography matrix, the resulting transformation matrix is more accurate.
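Epipolar-geometry details aside, a homography can be estimated from matched feature points by least squares; a common formulation is the direct linear transform (DLT), sketched here via SVD under the assumption of at least four non-degenerate correspondences. This is a generic sketch, not the patent's specific procedure of substituting rotation parameters.

```python
import numpy as np

def estimate_homography(src, dst):
    # Direct Linear Transform: least-squares 3x3 homography H mapping
    # src -> dst from >= 4 point correspondences; the null vector of the
    # stacked constraint matrix is found via SVD.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```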
S404: transform the images with the transformation matrix to obtain the transformed images. Because the camera parameters may change when each image is shot, the captured images differ correspondingly; if the images are stitched directly without the corresponding adjustment, ghosting appears and the final stitching result is directly degraded, so the input images to be stitched must be transformed with the transformation matrix. The transformed images share the same camera parameters, with no relative rotation, relative scale change, or relative translation.
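Applying a 3*3 transformation matrix to pixel coordinates uses homogeneous coordinates; a minimal sketch (warping a full image additionally requires resampling, which is omitted here, and the function name is illustrative).

```python
import numpy as np

def transform_points(H, pts):
    # Apply a 3x3 transformation matrix to Nx2 pixel coordinates:
    # lift to homogeneous coordinates, multiply, divide by the third
    # component to return to the image plane.
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```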
S405: project the transformed images to complete the stitching. Because each image is shot by the camera at a different angle, the images do not lie on the same projection plane; directly stitching overlapping images seamlessly would destroy the visual consistency of the actual scene, so the images must first undergo projective transformation and then be stitched. A 360-degree stitch requires cylindrical projection, and a 720-degree stitch requires spherical projection. A concrete method can be: taking the camera parameters of one image as the reference, apply a projective transformation with those camera parameters to each image that matches it, until all images have been projectively transformed. Finally all images lie in the same coordinate system and on the same projection surface, and the feature points between every pair of matching images correspond.
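One common cylindrical projection formula is sketched below. This is a generic convention, not one the patent specifies: the focal length f and the principal point (cx, cy) are assumed known, and the output is scaled back by f so it stays in pixel units.

```python
import numpy as np

def cylindrical_coords(x, y, f, cx, cy):
    # Map image-plane pixel (x, y) onto a cylinder of radius f around the
    # camera: theta = atan((x - cx) / f), h = (y - cy) / sqrt((x-cx)^2 + f^2),
    # then rescale by f and re-center so the output is in pixel units.
    theta = np.arctan2(x - cx, f)
    h = (y - cy) / np.hypot(x - cx, f)
    return f * theta + cx, f * h + cy
```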
S406: blend the stitched images to obtain the blended panoramic image. The blending specifically includes equalizing image brightness and color, among other processing, so that the stitched panoramic image looks more natural.
The method embodiment above computes and matches the feature points of the images to be stitched, transforms the images with the transformation matrix to obtain the transformed images, projects the transformed images to complete the stitching, and blends the stitched images to obtain the blended panoramic image. By computing and matching the image feature points with the method of the embodiments shown in Fig. 1 and Fig. 2, it speeds up image matching and improves the efficiency of panoramic image stitching. Moreover, when computing the transformation matrix, substituting the rotation parameters of the first rotation matrix computed with the least squares method into the homography matrix makes the resulting transformation matrix more accurate and improves the precision of the panorama stitching.
In other embodiments, as shown in Fig. 6, computing the transformation matrix from the feature points on the images, i.e. step S403, includes S601-S603. S601: from the feature points on the images, compute a second rotation matrix using the random sample consensus (RANSAC) algorithm; the second rotation matrix includes rotation parameters. The RANSAC algorithm takes as input the feature points of each pair of matching images and, using a model and some trusted parameters, finds the optimal second rotation matrix, namely the one consistent with the largest number of feature points. S602: compute the homography matrix from the feature points on the images; the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters. Specifically, the feature points of each pair of mutually matching images are input and the homography matrix is computed by the principle of epipolar geometry. S603: replace the rotation parameters in the homography matrix with the rotation parameters of the computed second rotation matrix to obtain the transformation matrix. Substituting the rotation parameters of the second rotation matrix computed with the RANSAC algorithm into the homography matrix makes the resulting transformation matrix more accurate and improves the precision of the panorama stitching.
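The RANSAC loop itself is generic: sample a minimal set of correspondences, fit a model, count the matches it explains, and keep the best-supported model. To keep the sketch self-contained it is shown here fitting a simple 2-D translation between matched points rather than a rotation matrix, with the iteration count and inlier threshold as assumed parameters.

```python
import random
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=2.0, seed=0):
    # RANSAC on the simplest motion model, a 2-D translation: repeatedly
    # fit the model to one randomly chosen correspondence, count the
    # correspondences within `thresh` of the model (inliers), and keep
    # the model supported by the most inliers.
    rng = random.Random(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_t, best_inliers = None, -1
    for _ in range(n_iters):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]                    # minimal-sample model
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

The same loop structure applies when the model is a rotation matrix or a homography; only the minimal sample size and the fitting step change.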
In other embodiments, as shown in Fig. 7, computing the transformation matrix from the feature points on the images, i.e. step S403, includes S701-S704. S701: compute a first rotation matrix from the feature points on the images using the least squares method; the first rotation matrix includes rotation parameters. S702: from the feature points on the images, compute a third rotation matrix using the RANSAC algorithm and the first rotation matrix; the third rotation matrix includes rotation parameters. Specifically, the rotation parameters in the first rotation matrix serve as the initial values of the parameters in the RANSAC algorithm, and the third rotation matrix is computed with the RANSAC algorithm starting from those initial values. S703: compute the homography matrix from the feature points on the images; the parameters in the homography matrix include rotation parameters, translation parameters, and scaling parameters. Specifically, the feature points of each pair of mutually matching images are input and the homography matrix is computed by the principle of epipolar geometry. S704: replace the rotation parameters in the homography matrix with the rotation parameters of the computed third rotation matrix to obtain the transformation matrix. Substituting the rotation parameters of the third rotation matrix, computed with the RANSAC algorithm and the first rotation matrix, into the homography matrix greatly improves the accuracy of the resulting transformation matrix and, in turn, greatly improves the precision of the panorama stitching.
Fig. 8 is a schematic flowchart of a panorama stitching method provided by another embodiment of the present invention. The method includes S801-S808. It differs from the embodiment described in Fig. 4 in that steps S802-S803 are added before the feature points of the images are calculated. For the other steps, refer to the description of the embodiment of Fig. 4.
S802: judge whether the input images to be stitched are fisheye images. Specifically, this can be judged from input parameters, such as a parameter giving the type of the input image or a parameter indicating whether fisheye distortion correction is required. If the type of the input image is fisheye, or fisheye distortion correction is required, the input image is judged to be a fisheye image. Compared with an ordinary SLR camera or mobile-phone camera, a fisheye camera does not need to shoot as many pictures, which improves the efficiency of image acquisition. In addition, with fewer images acquired, the probability of errors during panoramic stitching is reduced and the stitching precision is improved.
S803: if the input images are fisheye images, apply distortion correction to them. Distortion-correction methods include the spherical-coordinate orientation method, the longitude-latitude mapping method, and so on. The main idea of the longitude-latitude mapping method is: convert the fisheye image coordinates to normalised coordinates, i.e. convert the fisheye-image pixel coordinates (i, j) into normalised coordinates (u, v) in the range -1 to 1; calculate the r and Q values of the polar-coordinate plane, i.e. for any point P(u, v) in normalised coordinates, calculate the distance r from P to the polar origin and the angle Q between P and the U axis; convert the polar coordinates to spherical coordinates; then choose a spherical coordinate system and obtain the coordinates after longitude-latitude mapping. Feature points are then extracted from the distortion-corrected images and feature-point matching is performed.
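The first two steps of the longitude-latitude mapping described above (pixel coordinates to normalised coordinates, then to the polar r and Q) can be sketched as follows; the exact pixel-to-[-1, 1] normalisation formula is an assumption:

```python
import numpy as np

def fisheye_to_polar(i, j, width, height):
    """Map fisheye pixel coordinates (i, j) into normalised coordinates
    (u, v) in [-1, 1], then compute the polar radius r and angle Q of
    the point relative to the image centre (taken as the polar origin)."""
    u = 2.0 * i / (width - 1) - 1.0
    v = 2.0 * j / (height - 1) - 1.0
    r = np.hypot(u, v)              # distance from the polar origin
    q = np.arctan2(v, u)            # angle against the U axis
    return u, v, r, q
```

The remaining steps (polar to spherical coordinates, then the longitude-latitude re-mapping) operate on the (r, Q) values produced here.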
The above method embodiment judges whether the input images are fisheye images and, if so, applies distortion correction to them before performing feature-point matching. The method embodiment calculates and matches image feature points with the method of the embodiment shown in Fig. 1-Fig. 2, which speeds up image matching and improves the efficiency of panoramic stitching. Furthermore, the transformation matrix can be calculated in several ways, and the calculated transformation matrix is more accurate, improving the precision and accuracy of the panoramic stitching. In addition, fisheye images can be handled; since fewer fisheye images need to be acquired, the probability of errors during panoramic stitching is reduced and the stitching precision is improved.
Fig. 9 is a schematic block diagram of a feature extraction apparatus provided by an embodiment of the present invention. The apparatus 90 includes a first receiving unit 901, a generation unit 902, a detection unit 903, a keypoint calculation unit 904, a direction calculation unit 905, an acquisition unit 906, a partitioning unit 907, a calculation-and-assignment unit 908, and a feature-vector determination unit 909.
The first receiving unit 901 is used to receive an input image.
The generation unit 902 is used to generate the scale space of the image. Scale space refers to the multi-scale representation sequence obtained by continuously varying the scale parameter. Processing the image in scale space makes it easier to extract its essential features. Scale space satisfies translation invariance, scale invariance, Euclidean invariance and affine invariance. The scale space of an image is defined as the convolution of a variable-scale Gaussian function with the original image. The scale space of an image is represented with a Gaussian pyramid, whose construction has two parts: applying Gaussian blur at different scales to the image, and downsampling the image. This yields the original image together with copies of it at different scales, i.e. the scale space of the image is generated.
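The two-part Gaussian-pyramid construction described above (per-octave Gaussian blurs, then downsampling between octaves) can be sketched minimally as follows; the per-scale sigma schedule is an assumption, and `scipy.ndimage.gaussian_filter` stands in for the blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, n_octaves=3, n_scales=4, sigma0=1.6):
    """Scale-space sketch: within each octave, blur the image with
    progressively larger Gaussians; between octaves, downsample by 2.
    Returns a list of octaves, each a list of blurred images."""
    octaves = []
    base = image.astype(np.float64)
    for _ in range(n_octaves):
        scales = [gaussian_filter(base, sigma0 * (2 ** (s / n_scales)))
                  for s in range(n_scales)]
        octaves.append(scales)
        base = scales[-1][::2, ::2]   # downsample for the next octave
    return octaves
```

Adjacent images within one octave are what the detection unit later subtracts to form the difference-of-Gaussian images.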
The detection unit 903 is used to detect the extreme points in the scale space of the image. In each group of the Gaussian pyramid, adjacent layers are subtracted from one another to obtain difference-of-Gaussian images, on which the extreme points of the difference-of-Gaussian function are searched for. Specifically, each pixel is compared with all its neighbours, which include the neighbours at the same scale as the pixel and the corresponding neighbours at the adjacent scales above and below; a pixel whose value is larger or smaller than all of its neighbours, i.e. the maximum or minimum among them, is taken as an extreme point. For example, in a 3*3 pixel region, assume the middle pixel is the test point: it is compared with its 8 neighbours at the same scale and the 9×2 corresponding points at the adjacent scales above and below, 26 points in total, and if its value is the maximum or minimum of the 26 it is taken as an extreme point. Each extreme point includes the scale of the scale space where it lies and its position.
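The 26-neighbour extremum test can be sketched directly. This brute-force version assumes the difference-of-Gaussian stack for one octave is supplied as a (scales, H, W) array:

```python
import numpy as np

def dog_extrema(dog):
    """Find extrema in a difference-of-Gaussians stack `dog` of shape
    (scales, H, W): a pixel is an extremum when it is strictly larger
    (or strictly smaller) than all 26 neighbours -- its 8 neighbours at
    the same scale plus 9 at each adjacent scale."""
    extrema = []
    S, H, W = dog.shape
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]   # 3x3x3 neighbourhood
                v = dog[s, y, x]
                unique = (cube == v).sum() == 1         # strict comparison
                if unique and (v == cube.max() or v == cube.min()):
                    extrema.append((s, y, x))
    return extrema
```

Production code would vectorise this, but the triple loop mirrors the description above one-to-one.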
The keypoint calculation unit 904 is used to calculate keypoints from the extreme points. Extreme points of the discrete space are not necessarily true extreme points. To improve stability, true extreme points must be located from the discrete ones, and unstable edge extreme points must be rejected. Specifically, the scale-space difference-of-Gaussian function is curve-fitted.
The direction calculation unit 905 is used to calculate the direction parameter of each keypoint, where the direction parameter includes multiple directions. For each detected keypoint, the gradient and direction distribution of the pixels within a certain neighbourhood of it in the Gaussian-pyramid image are collected. Specifically, a histogram is used to count the gradients and directions of the pixels in the neighbourhood; the histogram divides the 0-360 degree direction range into 36 bins of 10 degrees each. The M bins with the largest gradient values are chosen, and the directions of those M bins are taken as the directions of the keypoint. Preferably, M=8.
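The 36-bin orientation histogram and the selection of the M strongest bins might look like the following sketch (returning the start angle of each selected 10-degree bin is an assumption):

```python
import numpy as np

def dominant_orientations(magnitudes, angles_deg, m=8, bins=36):
    """Build the 36-bin (10 degrees per bin) gradient-orientation
    histogram and return the directions of the M bins with the largest
    accumulated gradient magnitude, plus the histogram itself."""
    hist = np.zeros(bins)
    idx = (np.asarray(angles_deg) % 360 / (360 / bins)).astype(int)
    np.add.at(hist, idx, magnitudes)          # accumulate magnitudes per bin
    top = np.argsort(hist)[::-1][:m]          # M strongest bins
    return top * (360 / bins), hist           # bin start angle in degrees
```

Each keypoint thus carries up to M candidate directions rather than a single principal one.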
The acquisition unit 906 is used to obtain, in the scale space where each keypoint lies, a circular region centred on the position of the keypoint. Each keypoint includes the scale of the scale space where it lies and its position. Because the extracted keypoints may lie in different scale spaces, it is necessary to know the scale where each keypoint lies and, in the scale space corresponding to that scale, to obtain the circular region centred on the keypoint's position. The radius of the circular region is related to the resolution of the input image, which represents the image quality: the higher the resolution of the input image, the smaller the radius of the circular region; the lower the resolution, the larger the radius. This guarantees that useful feature information is extracted. Preferably, the diameter of the circular region equals the diagonal length of the 4*4 square window used by the original SIFT algorithm in the scale space where the keypoint lies. In other embodiments, the radius of the circular region may also be some fixed value, etc.
The partitioning unit 907 is used to divide the circular region into N sector sub-regions, where N is a natural number greater than 1. Preferably, N=8.
The calculation-and-assignment unit 908 is used to calculate the gradient accumulation values assigned to the M directions within each sector sub-region, where M is the number of directions in the direction parameter and a natural number greater than 1. Preferably, M=8. The gradients and directions of the pixels in each sector sub-region are calculated, the gradient values of the pixels in the sub-region are assigned to the 8 directions, and the gradient accumulation value in each of the 8 directions is counted. The specific formula for the gradient accumulation values is the same as in the original SIFT algorithm and is not repeated here. With N=8 and M=8, the extracted feature vector represents the features of the keypoint well.
The feature-vector determination unit 909 is used to determine the feature vector of the keypoint from the N*M calculated gradient accumulation values. Specifically, as shown in Fig. 10, the feature-vector determination unit includes a sorting unit 101 and a normalisation unit 102. The sorting unit 101 is used to arrange the calculated gradient accumulation values in descending order; sorting them enables better feature-point (keypoint) matching. The normalisation unit 102 is used to normalise the arranged gradient accumulation values to obtain the feature vector of the keypoint; the purpose of normalisation is to eliminate the influence of illumination. In specific implementations, the order of sorting and normalisation is not limited: the gradient accumulation values may be sorted first and then normalised, or normalised first and then sorted.
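Putting the pieces from units 907-909 together, here is a hedged sketch of the 8-sector, 8-direction descriptor with the descending sort and normalisation applied at the end. How pixels and their gradients are supplied (flat arrays over candidate pixels, angles in radians) is an assumption of this sketch:

```python
import numpy as np

def circular_descriptor(mag, ang, pos, centre, radius, n_sectors=8, n_dirs=8):
    """64-dimensional descriptor sketch: for pixels inside the circular
    region, pick the sector (by the pixel's angle about the keypoint)
    and the direction bin (by the gradient angle), accumulate the
    gradient magnitude, then sort the 8*8 values in descending order
    and L2-normalise.  `pos` holds (x, y) pixel positions."""
    rel = pos - centre
    dist = np.hypot(rel[:, 0], rel[:, 1])
    inside = dist <= radius
    sector = (np.arctan2(rel[inside, 1], rel[inside, 0]) % (2 * np.pi)
              / (2 * np.pi / n_sectors)).astype(int) % n_sectors
    dbin = (np.asarray(ang)[inside] % (2 * np.pi)
            / (2 * np.pi / n_dirs)).astype(int) % n_dirs
    acc = np.zeros((n_sectors, n_dirs))
    np.add.at(acc, (sector, dbin), np.asarray(mag)[inside])
    vec = np.sort(acc.ravel())[::-1]          # descending order
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

Sorting the 64 accumulated values is what removes the need for a principal direction: any rotation of the patch permutes the sectors, and the sorted vector is invariant to that permutation.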
In the above embodiment, when calculating the feature vector of a keypoint, the 4*4 region of the original SIFT algorithm is simply changed into a circular region; rotation invariance is satisfied without determining the principal direction of the keypoint in the 4*4 region and without rotating the original SIFT 4*4 region. The circular region is divided into 8 blocks, and the gradient accumulation values assigned to the 8 directions are calculated in each block. The feature vector finally calculated for each keypoint thus changes from the original 4*4*8 (8 directions) dimensions to 8*8 (8 directions), i.e. from 128 dimensions to 64 dimensions, halving the dimensionality of each keypoint's feature vector. This greatly increases the speed of image feature extraction while maintaining precision.
The above feature extraction apparatus may be implemented in the form of a computer program, which can run on the feature extraction device shown in Fig. 16.
Fig. 11 is a schematic block diagram of a panorama stitching apparatus provided by an embodiment of the present invention. The apparatus 110 includes a second receiving unit 111, a matching unit 112, a transformation-matrix calculation unit 113, a transformation unit 114, a projection unit 115, and a fusion unit 116.
The second receiving unit 111 is used to receive the input images to be stitched. The images are shot by, for example, an ordinary SLR camera or mobile-phone camera. It should be noted that the input images to be stitched may, depending on actual requirements, undergo some preprocessing, such as removal of interfering noise.
The matching unit 112 is used to calculate feature points from the input images to be stitched and to perform feature-point matching. Specifically, the feature points and the feature vectors of the feature points can be calculated by the feature extraction apparatus shown in Fig. 9-Fig. 10, and the matching unit 112 performs feature-point matching according to the calculated feature points and feature vectors. In other embodiments, the matching unit includes the first receiving unit 901, generation unit 902, detection unit 903, keypoint calculation unit 904, direction calculation unit 905, acquisition unit 906, partitioning unit 907, calculation-and-assignment unit 908 and feature-vector determination unit 909 of the feature extraction apparatus shown in Fig. 9, where the feature-vector determination unit 909 includes the sorting unit 101 and the normalisation unit 102; the matching unit 112 then performs feature-point matching according to the calculated feature points and feature vectors. In this way, pairs of mutually matching images can be found; it is to be understood that a pair of mutually matching images shares identical feature points. Since the feature extraction apparatus shown in Fig. 9-Fig. 10 can greatly increase the speed of image feature extraction while maintaining precision, using it (or the corresponding units contained in the matching unit) to calculate the feature points and feature vectors of every image and then matching feature points between images can greatly increase the speed of image matching.
The transformation-matrix calculation unit 113 is used to calculate the transformation matrix from the feature points on the images.
Specifically, as shown in Fig. 12, the transformation-matrix calculation unit includes a first rotation-matrix calculation unit 121, a homography-matrix calculation unit 122, and a first replacement unit 123. The first rotation-matrix calculation unit 121 is used to calculate a first rotation matrix from the feature points on the images using the least-squares method, the first rotation matrix including rotation parameters. Specifically, the feature points of each pair of mutually matched images are input, and the rotation matrix between each mutually matched pair is calculated with the least-squares method; after the rotation matrices between all mutually matched pairs have been calculated, they are adjusted to a common standard, and the uniform rotation matrices after adjustment are referred to as the first rotation matrix. A rotation matrix is an expression of the camera parameters between images. Adjusting to a common standard can be understood as follows, taking the scale parameter as an example: if the scale of the rotation matrix of the first matched image pair is 1 and that of the second pair is 1.5, the scale of the second pair's rotation matrix can be adjusted to equal the first's, taking the first as the reference; alternatively, the second pair's scale can be taken as the reference and the first's changed to 1.5; or both scales can be adjusted to 3. The other parameters, such as the rotation parameters and translation parameters, likewise need to be converted in this way. The homography-matrix calculation unit 122 is used to calculate the homography matrix from the feature points on the images. The parameters of the homography matrix include rotation parameters, translation parameters and scaling parameters. Specifically, the feature points on each pair of mutually matched images are input, and the homography matrix is calculated via the principles of epipolar geometry; for the specific calculation idea, refer to the description of the first rotation-matrix calculation. The first replacement unit 123 is used to replace the rotation parameters in the homography matrix with the rotation parameters of the calculated first rotation matrix to obtain the transformation matrix. Because the rotation parameters in the homography matrix are replaced by those of the first rotation matrix calculated with the least-squares method, the resulting transformation matrix is more accurate.
The transformation unit 114 is used to transform the images with the transformation matrix to obtain the transformed images. Since the camera parameters may change while each image is shot, and changed camera parameters cause corresponding differences in the resulting images, stitching the images directly without adjustment would produce ghosting and directly harm the final stitching result. The input images to be stitched therefore need to be transformed with the transformation matrix. The transformed images have identical camera parameters, with no relative rotation, relative scaling, or relative translation.
The projection unit 115 is used to project the transformed images to complete the stitching. Because each image is shot by the camera at a different angle, the images do not all lie on the same projection plane; directly stitching the overlapping images seamlessly would destroy the visual consistency of the actual scene, so the images must first undergo projective transformation and then be stitched. A 360-degree stitch requires cylindrical projection, while a 720-degree stitch requires spherical projection. Specifically, the projection unit may take the camera parameters corresponding to one image as the reference and apply projective transformation with those camera parameters to another image that matches it, until all images have been projectively transformed. Finally all images lie in the same coordinate system and on the same projection surface, and the feature points between each matched pair of images correspond.
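For the cylindrical projection mentioned above, a standard textbook forward mapping (not necessarily the exact formula the patent intends) maps each image coordinate onto a cylinder whose radius equals the focal length in pixels:

```python
import numpy as np

def cylindrical_warp_coords(x, y, f, cx, cy):
    """Map image coordinates (x, y) onto a cylinder of radius f (the
    focal length in pixels), with principal point (cx, cy): theta is
    the angle around the cylinder axis and h the height on it."""
    theta = np.arctan2(x - cx, f)
    h = (y - cy) / np.hypot(x - cx, f)
    return f * theta + cx, f * h + cy
```

Points at the principal point are left unchanged, while points further from the image centre are pulled inward, which is what lets horizontally rotated views line up on the cylinder.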
The fusion unit 116 is used to fuse the stitched images to obtain the fused panoramic image. The fusion method specifically includes equalising image brightness, colour, and so on, so that the stitched panoramic image looks more natural.
The above embodiment calculates and matches the feature points of the images to be stitched, transforms the images with the transformation matrix to obtain the transformed images, projects the transformed images to complete the stitching, and fuses the stitched images to obtain the fused panoramic image. It calculates and matches image feature points with the embodiment shown in Fig. 9-Fig. 10, which speeds up image matching and improves the efficiency of panoramic image stitching. In addition, when calculating the transformation matrix, the rotation parameters in the homography matrix are replaced by the rotation parameters of the first rotation matrix calculated with the least-squares method, so the resulting transformation matrix is more accurate and the precision of the panoramic stitching is improved.
In other embodiments, as shown in Fig. 13, the transformation-matrix calculation unit 113 includes a second rotation-matrix calculation unit 131, a homography-matrix calculation unit 132, and a second replacement unit 133. The second rotation-matrix calculation unit 131 is used to calculate a second rotation matrix from the feature points on the images using the RANSAC algorithm, the second rotation matrix including rotation parameters. The RANSAC algorithm takes as input the feature points on each pair of matched images and, using a model and some trusted parameters, searches for the optimal second rotation matrix, i.e. the one satisfied by the largest number of feature points. The homography-matrix calculation unit 132 is used to calculate the homography matrix from the feature points on the images; the parameters of the homography matrix include rotation parameters, translation parameters and scaling parameters. Specifically, the feature points on each pair of mutually matched images are input, and the homography matrix is calculated via the principles of epipolar geometry. The second replacement unit 133 is used to replace the rotation parameters in the homography matrix with the rotation parameters of the calculated second rotation matrix to obtain the transformation matrix. Because the rotation parameters in the homography matrix are replaced by those of the second rotation matrix calculated with the RANSAC algorithm, the resulting transformation matrix is more accurate, improving the precision of the panoramic stitching.
In other embodiments, as shown in Fig. 14, the transformation-matrix calculation unit 113 includes a first rotation-matrix calculation unit 141, a third rotation-matrix calculation unit 142, a homography-matrix calculation unit 143, and a third replacement unit 144. The first rotation-matrix calculation unit 141 is used to calculate a first rotation matrix from the feature points on the images using the least-squares method, the first rotation matrix including rotation parameters. The third rotation-matrix calculation unit 142 is used to calculate a third rotation matrix from the feature points on the images using the RANSAC algorithm together with the first rotation matrix, the third rotation matrix including rotation parameters. Specifically, the rotation parameters of the first rotation matrix are taken as the initial values of the parameters in the RANSAC algorithm, and the third rotation matrix is calculated with the RANSAC algorithm starting from those initial values. The homography-matrix calculation unit 143 is used to calculate the homography matrix from the feature points on the images; the parameters of the homography matrix include rotation parameters, translation parameters and scaling parameters. Specifically, the feature points on each pair of mutually matched images are input, and the homography matrix is calculated via the principles of epipolar geometry. The third replacement unit 144 is used to replace the rotation parameters in the homography matrix with the rotation parameters of the calculated third rotation matrix to obtain the transformation matrix. Replacing the rotation parameters in the homography matrix with those of the third rotation matrix, calculated with the RANSAC algorithm and the first rotation matrix, greatly improves the accuracy of the resulting transformation matrix and thus the precision of the panoramic stitching.
Fig. 15 is a schematic block diagram of a panorama stitching apparatus provided by another embodiment of the present invention. The apparatus 150 includes a second receiving unit 151, a judging unit 152, a distortion-correction unit 153, a matching unit 154, a transformation-matrix calculation unit 155, a transformation unit 156, a projection unit 157, and a fusion unit 158. This embodiment differs from the embodiment of Fig. 11 in that the judging unit 152 and the distortion-correction unit 153 are added; for the other units, refer to the corresponding units of the Fig. 11 embodiment, which are not repeated here.
The judging unit 152 is used to judge whether the input images to be stitched are fisheye images. Specifically, this can be judged from input parameters, such as a parameter giving the type of the input image or a parameter indicating whether fisheye distortion correction is required. If the type of the input image is fisheye, or fisheye distortion correction is required, the input image is judged to be a fisheye image. Compared with an ordinary SLR camera or mobile-phone camera, a fisheye camera does not need to shoot as many pictures, which improves the efficiency of image acquisition.
The distortion-correction unit 153 is used to apply distortion correction to the input fisheye images when the input images to be stitched are fisheye images. Distortion-correction methods include the spherical-coordinate orientation method, the longitude-latitude mapping method, and so on. The main idea of the longitude-latitude mapping method is: convert the fisheye image coordinates to normalised coordinates, i.e. convert the fisheye-image pixel coordinates (i, j) into normalised coordinates (u, v) in the range -1 to 1; calculate the r and Q values of the polar-coordinate plane, i.e. for any point P(u, v) in normalised coordinates, calculate the distance r from P to the polar origin and the angle Q between P and the U axis; convert the polar coordinates to spherical coordinates; then choose a spherical coordinate system and obtain the coordinates after longitude-latitude mapping. Feature points are then extracted from the distortion-corrected images and feature-point matching is performed.
The above embodiment judges whether the input images are fisheye images and, if so, applies distortion correction to them before performing feature-point matching. The above embodiment calculates and matches image feature points with the embodiment shown in Fig. 9-Fig. 10, which speeds up image matching and improves the efficiency of panoramic stitching. Moreover, the transformation matrix can be calculated in several ways, and the calculated transformation matrix is more accurate, improving the precision and accuracy of the panoramic stitching. Furthermore, fisheye images can be handled; since fewer fisheye images need to be acquired, the probability of errors during panoramic stitching is reduced and the stitching precision is improved.
The above panorama stitching apparatus may be implemented in the form of a computer program, which can run on the panorama stitching device shown in Fig. 17.
Fig. 16 is a schematic block diagram of a feature extraction device provided by an embodiment of the present invention. The feature extraction device 160 includes an input apparatus 161, an output apparatus 162, a memory 163 and a processor 164, connected by a bus 165.
The input apparatus 161 is used to input the images on which feature extraction is to be performed. In specific implementations, the input apparatus 161 of the embodiment of the present invention may include a keyboard, a mouse, a speech input apparatus, a touch input apparatus, etc.
The output apparatus 162 is used to output feature vectors, etc. In specific implementations, the output apparatus 162 of the embodiment of the present invention may include a speech output apparatus, a display, a display screen, a touch screen, etc.
The memory 163 is used to store the program data that implements feature extraction. In specific implementations, the memory 163 of the embodiment of the present invention may be system memory, e.g. non-volatile memory (such as ROM, flash memory, etc.), or external memory outside the system, e.g. a magnetic disk, an optical disc, a magnetic tape, etc.
The processor 164 is used to run the program data stored in the memory 163 to perform the following operations:
receive an input image; generate the scale space of the image; detect the extreme points in the scale space of the image; calculate keypoints from the extreme points; calculate the direction parameter of each keypoint; in the scale space where each keypoint lies, obtain the circular region centred on the position of the keypoint; divide the circular region into N sector sub-regions, where N is a natural number greater than 1; calculate the gradient accumulation values assigned to the M directions within each sector sub-region, where M is the number of directions in the direction parameter and a natural number greater than 1; determine the feature vector of the keypoint from the N*M calculated gradient accumulation values. The radius of the circular region is related to the resolution of the input image: the higher the resolution of the input image, the smaller the radius of the circular region; the lower the resolution, the larger the radius.
Preferably, N=8 and M=8.
The processor 164 also performs the following operations:
arrange the N*M calculated gradient accumulation values in descending order; normalise the arranged gradient accumulation values to obtain the feature vector of the keypoint.
Fig. 17 is a schematic block diagram of a panorama stitching device provided by an embodiment of the present invention. The panorama stitching device 170 includes an input apparatus 171, an output apparatus 172, a memory 173 and a processor 174, connected by a bus 175.
The input apparatus 171 is used to input the images on which panorama stitching is to be performed. In specific implementations, the input apparatus 171 of the embodiment of the present invention may include a keyboard, a mouse, a speech input apparatus, a touch input apparatus, etc.
The output apparatus 172 is used to output panoramic images, etc. In specific implementations, the output apparatus 172 of the embodiment of the present invention may include a display, a display screen, a touch screen, etc.
The memory 173 is used to store the program data that implements panorama stitching. In specific implementations, the memory 173 of the embodiment of the present invention may be system memory, e.g. non-volatile memory (such as ROM, flash memory, etc.), or external memory outside the system, e.g. a magnetic disk, an optical disc, a magnetic tape, etc.
The processor 174 is used to run the program data stored in the memory 173 to perform the following operations:
receive the input images to be stitched; calculate feature points from the images and perform feature-point matching; calculate the transformation matrix from the feature points on the images; transform the images with the transformation matrix to obtain the transformed images; project the transformed images to complete the stitching; fuse the stitched images to obtain the fused panoramic image. Calculating the feature points from the images and performing feature-point matching may be realised by obtaining the keypoints calculated by the feature extraction device 160 shown in Fig. 16 together with the feature vectors of those keypoints; the keypoints are the feature points, and feature-point matching is performed according to the feature points on the images and the feature vectors of the feature points. Understandably, in other embodiments, the relevant program data stored in the memory 163 of the above feature extraction device 160 may also be stored in the memory 173 to realise calculating the feature points from the images and performing feature-point matching according to the calculated feature points.
The processor 174 also performs the following operations:
calculate a first rotation matrix from the feature points on the images using the least-squares method, the first rotation matrix including rotation parameters; calculate a homography matrix from the feature points on the images, the homography matrix including rotation parameters; replace the rotation parameters in the homography matrix with the rotation parameters of the calculated first rotation matrix to obtain the transformation matrix.
The processor 174 also performs the following operations:
Calculate a second rotation matrix from the feature points in the images using a RANSAC algorithm, the second rotation matrix including rotation parameters; calculate a homography matrix from the feature points in the images, the homography matrix including rotation parameters; replace the rotation parameters in the homography matrix with the rotation parameters in the calculated second rotation matrix to obtain the transformation matrix.
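The RANSAC step can be illustrated with a minimal sketch that estimates a 2D rotation about the origin from matched points despite outlier matches: hypothesize an angle from one sampled pair, count inliers against it, and keep the best hypothesis. The one-pair sampling, iteration count, and threshold are assumptions of this toy example, not values from the patent:

```python
import numpy as np

def ransac_rotation(p, q, iters=200, thresh=0.05, seed=1):
    """RANSAC estimate of a 2D rotation about the origin from matched
    points p -> q, robust to a fraction of wrong (outlier) matches."""
    rng = np.random.default_rng(seed)
    best_theta, best_inliers = 0.0, 0
    for _ in range(iters):
        i = rng.integers(len(p))                      # minimal sample: one pair
        theta = np.arctan2(q[i, 1], q[i, 0]) - np.arctan2(p[i, 1], p[i, 0])
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        residuals = np.linalg.norm(q - p @ R.T, axis=1)
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:                    # keep best-supported angle
            best_theta, best_inliers = theta, inliers
    return best_theta

rng = np.random.default_rng(0)
p = rng.standard_normal((50, 2))
theta_true = np.deg2rad(40)
c, s = np.cos(theta_true), np.sin(theta_true)
q = p @ np.array([[c, -s], [s, c]]).T
q[:10] += 5.0                                         # corrupt 10 matches
est = ransac_rotation(p, q)
print(abs((est - theta_true + np.pi) % (2 * np.pi) - np.pi) < 1e-6)
```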
The processor 174 also performs the following operations:
Calculate a first rotation matrix from the feature points in the images using the least squares method, the first rotation matrix including rotation parameters; calculate a third rotation matrix from the feature points in the images using a RANSAC algorithm and the first rotation matrix, the third rotation matrix including rotation parameters; calculate a homography matrix from the feature points in the images, the homography matrix including rotation parameters; replace the rotation parameters in the homography matrix with the rotation parameters in the calculated third rotation matrix to obtain the transformation matrix.
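One plausible reading of combining the first rotation matrix with RANSAC is to use the initial least-squares estimate to select inlier matches and then re-fit on those inliers. The sketch below does exactly that for a 2D rotation; the combination strategy, threshold, and circular-mean re-fit are assumptions of this example, not the patent's stated algorithm:

```python
import numpy as np

def refine_rotation(p, q, theta_init, thresh=0.1):
    """Refine an initial rotation estimate (e.g. from least squares) by
    keeping only matches consistent with it (RANSAC-style inlier test),
    then re-fitting the angle on those inliers."""
    c, s = np.cos(theta_init), np.sin(theta_init)
    R = np.array([[c, -s], [s, c]])
    inliers = np.linalg.norm(q - p @ R.T, axis=1) < thresh
    d = (np.arctan2(q[inliers, 1], q[inliers, 0])
         - np.arctan2(p[inliers, 1], p[inliers, 0]))
    d = (d + np.pi) % (2 * np.pi) - np.pi        # wrap angle differences
    return d.mean(), int(inliers.sum())

rng = np.random.default_rng(0)
p = rng.standard_normal((50, 2))
theta_true = 0.5
c, s = np.cos(theta_true), np.sin(theta_true)
q = p @ np.array([[c, -s], [s, c]]).T
q[:10] += 5.0                                    # gross outlier matches
theta_ref, n_in = refine_rotation(p, q, theta_init=theta_true + 0.01)
print(n_in, abs(theta_ref - theta_true) < 1e-9)
```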
Before calculating feature points from the images and performing feature point matching, the processor 174 also performs the following operations:
Judge whether the images are fisheye images; if the images are fisheye images, perform distortion correction on the fisheye images. Calculating feature points from the images and performing feature point matching then includes: calculating feature points from the corrected images and performing feature point matching.
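Fisheye distortion correction generally means inverting a lens-distortion model before extracting features. The toy sketch below inverts a one-parameter radial model, r_d = r_u(1 + k1*r_u^2), by fixed-point iteration; real fisheye correction uses a fuller model (e.g. an equidistant projection with calibrated intrinsics), so treat this only as an illustration of the inversion idea, with the model and parameter being assumptions:

```python
import numpy as np

def undistort_points(pts, k1, iters=10):
    """Invert the radial distortion r_d = r_u * (1 + k1 * r_u^2) by
    fixed-point iteration: u <- d / (1 + k1 * |u|^2), starting at u = d."""
    pts = np.asarray(pts, dtype=float)
    und = pts.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1, keepdims=True)
        und = pts / (1.0 + k1 * r2)
    return und

# Distort known points with k1 = 0.1, then verify the inversion recovers them.
k1 = 0.1
true_pts = np.array([[0.3, 0.4], [-0.2, 0.5]])
r2 = (true_pts ** 2).sum(axis=1, keepdims=True)
distorted = true_pts * (1.0 + k1 * r2)
print(np.allclose(undistort_points(distorted, k1), true_pts, atol=1e-6))
```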
The present invention also provides a computer-readable storage medium. The computer-readable storage medium stores one or more pieces of program data, and the one or more pieces of program data can be executed by one or more processors to realize the following steps:
Receive an input image; generate the scale space of the image; detect extreme points in the scale space of the image; calculate key points from the extreme points; calculate the direction parameters of each key point; on the scale space where each key point is located, obtain a circular region centered on the position of the key point; divide the circular region into N sector sub-regions, where N is a natural number greater than 1; calculate the gradient accumulation values assigned to M directions in each sector sub-region, where M is the number of directions in the direction parameters and is a natural number greater than 1; and determine the feature vector of the key point according to the N*M calculated gradient accumulation values. The radius of the circular region is related to the resolution of the input image: the higher the resolution of the input image, the smaller the radius of the circular region; the lower the resolution, the larger the radius.
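The circular-region descriptor described above (N angular sectors around the keypoint, M orientation bins per sector, gradient magnitudes accumulated into an N*M vector) can be sketched as follows. This single-scale NumPy version deliberately omits the scale space, keypoint-orientation normalization, and any Gaussian weighting, so it is a simplified reading of the scheme, not the patent's implementation:

```python
import numpy as np

def sector_descriptor(img, cx, cy, radius, N=8, M=8):
    """Build an N*M descriptor for the keypoint at (cx, cy): split the disc
    of the given radius into N angular sectors, and within each sector
    accumulate gradient magnitudes into M orientation bins."""
    gy, gx = np.gradient(img.astype(float))     # image gradients (rows, cols)
    h, w = img.shape
    desc = np.zeros((N, M))
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            if dx * dx + dy * dy > radius * radius:
                continue                        # outside the circular region
            sector = int(((np.arctan2(dy, dx) + np.pi) / (2 * np.pi)) * N) % N
            mag = np.hypot(gx[y, x], gy[y, x])
            ori = int(((np.arctan2(gy[y, x], gx[y, x]) + np.pi) / (2 * np.pi)) * M) % M
            desc[sector, ori] += mag            # accumulate into sector/orientation bin
    return desc.ravel()                         # length N*M feature vector

img = np.add.outer(np.arange(16), np.arange(16)).astype(float)  # diagonal ramp
d = sector_descriptor(img, 8, 8, 5)
print(d.shape)  # (64,)
```

With N = M = 8 (the preferred values below) the descriptor has 64 components.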
Preferably, N=8, M=8.
In one of the embodiments, the following steps can also be realized when the program data is executed by the processor:
Arrange the N*M calculated gradient accumulation values in descending order; normalize the arranged gradient accumulation values to obtain the feature vector of the key point.
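The descending sort plus normalization step fits in a few lines. L2 normalization is an assumption of this sketch, since the text only says the values are normalized:

```python
import numpy as np

def descriptor_from_accumulators(acc):
    """Sort the N*M gradient accumulation values in descending order, then
    L2-normalize them to form the key point's feature vector."""
    v = np.sort(np.asarray(acc, dtype=float))[::-1]   # descending order
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

v = descriptor_from_accumulators([3.0, 4.0, 0.0])
print(v)  # [0.8 0.6 0. ]
```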
The present invention also provides another computer-readable storage medium. The computer-readable storage medium stores one or more pieces of program data, and the one or more pieces of program data can be executed by one or more processors to realize the following steps:
Receive input images to be stitched; calculate feature points from the images and perform feature point matching; calculate a transformation matrix according to the feature points in the images; transform the images using the transformation matrix to obtain transformed images; project the transformed images to complete the stitching; and blend the stitched images to obtain a blended panoramic image. Here, calculating feature points from the images and performing feature point matching may be realized by obtaining the key points, and the feature vectors of the key points, calculated by the aforementioned computer-readable storage medium; the key points serve as the feature points, and feature point matching is performed according to the feature points in the images and the feature vectors of the feature points. Understandably, in other embodiments, the related program data stored by the aforementioned computer-readable storage medium may also be stored on this computer-readable storage medium, so as to realize calculating feature points from the images and performing feature point matching according to the calculated feature points.
In one of the embodiments, the following steps can also be realized when the program data is executed by the processor:
Calculate a first rotation matrix from the feature points in the images using the least squares method, the first rotation matrix including rotation parameters; calculate a homography matrix from the feature points in the images, the homography matrix including rotation parameters; replace the rotation parameters in the homography matrix with the rotation parameters in the calculated first rotation matrix to obtain the transformation matrix.
In one of the embodiments, the following steps can also be realized when the program data is executed by the processor:
Calculate a second rotation matrix from the feature points in the images using a RANSAC algorithm, the second rotation matrix including rotation parameters; calculate a homography matrix from the feature points in the images, the homography matrix including rotation parameters; replace the rotation parameters in the homography matrix with the rotation parameters in the calculated second rotation matrix to obtain the transformation matrix.
In one of the embodiments, the following steps can also be realized when the program data is executed by the processor:
Calculate a first rotation matrix from the feature points in the images using the least squares method, the first rotation matrix including rotation parameters; calculate a third rotation matrix from the feature points in the images using a RANSAC algorithm and the first rotation matrix, the third rotation matrix including rotation parameters; calculate a homography matrix from the feature points in the images, the homography matrix including rotation parameters; replace the rotation parameters in the homography matrix with the rotation parameters in the calculated third rotation matrix to obtain the transformation matrix.
In one of the embodiments, before calculating feature points from the images and performing feature point matching, the following steps can also be realized when the program data is executed by the processor:
Judge whether the images are fisheye images; if the images are fisheye images, perform distortion correction on the fisheye images. Calculating feature points from the images and performing feature point matching then includes: calculating feature points from the corrected images and performing feature point matching.
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the equipment, devices, and units described above, reference may be made to the corresponding processes in the preceding method embodiments, which will not be repeated here. Those of ordinary skill in the art will appreciate that the units and algorithm steps of each example described with reference to the embodiments disclosed herein can be realized with electronic hardware, computer software, or a combination of the two. In order to clearly demonstrate the interchangeability of hardware and software, the composition and steps of each example have been generally described above according to function. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical scheme. Skilled persons may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed units and methods may be realized in other ways. For example, the device embodiments described above are only schematic; the division of the units is only a division of logical functions, and there may be other ways of division in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, or may be electrical, mechanical, or other forms of connection.
The units illustrated as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to realize the purpose of the scheme of the embodiments of the present invention.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be physically present individually, or two or more units may be integrated into one unit. The above integrated unit can be realized either in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such understanding, the technical scheme of the present invention in essence, or the part contributing to the prior art, or all or part of the technical scheme, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in each embodiment of the present invention. The aforementioned storage medium includes: a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disc, and other media that can store program codes.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions should all be included within the protection scope of the present invention. Therefore, the protection scope of the present invention should be defined by the protection scope of the claims.

Claims (10)

1. A feature extraction method, characterized in that the method comprises:
receiving an input image;
generating the scale space of the image;
detecting extreme points in the scale space of the image;
calculating key points from the extreme points;
calculating the direction parameters of each key point;
on the scale space where each key point is located, obtaining a circular region centered on the position of the key point;
dividing the circular region into N sector sub-regions, wherein N is a natural number greater than 1;
calculating the gradient accumulation values assigned to M directions in each sector sub-region, wherein M is the number of directions in the direction parameters and is a natural number greater than 1;
determining the feature vector of the key point according to the N*M calculated gradient accumulation values.
2. The method according to claim 1, characterized in that determining the feature vector of the key point according to the N*M calculated gradient accumulation values comprises:
arranging the N*M calculated gradient accumulation values in descending order;
normalizing the arranged gradient accumulation values to obtain the feature vector of the key point.
3. A panorama stitching method, characterized in that the method comprises:
receiving input images to be stitched;
calculating feature points from the images and performing feature point matching;
calculating a transformation matrix according to the feature points in the images;
transforming the images using the transformation matrix to obtain transformed images;
projecting the transformed images to complete the stitching;
blending the stitched images to obtain a blended panoramic image;
wherein calculating feature points from the images and performing feature point matching comprises: calculating key points and the feature vectors of the key points according to the method of any one of claims 1-2, the key points being the feature points, and performing feature point matching according to the feature points in the images and the feature vectors of the feature points.
4. The method according to claim 3, characterized in that calculating the transformation matrix according to the feature points in the images comprises:
calculating a first rotation matrix from the feature points in the images using the least squares method, the first rotation matrix including rotation parameters;
calculating a homography matrix from the feature points in the images, the homography matrix including rotation parameters;
replacing the rotation parameters in the homography matrix with the rotation parameters in the calculated first rotation matrix to obtain the transformation matrix.
5. The method according to claim 3, characterized in that calculating the transformation matrix according to the feature points in the images comprises:
calculating a second rotation matrix from the feature points in the images using a RANSAC algorithm, the second rotation matrix including rotation parameters;
calculating a homography matrix from the feature points in the images, the homography matrix including rotation parameters;
replacing the rotation parameters in the homography matrix with the rotation parameters in the calculated second rotation matrix to obtain the transformation matrix.
6. The method according to claim 3, characterized in that calculating the transformation matrix according to the feature points in the images comprises:
calculating a first rotation matrix from the feature points in the images using the least squares method, the first rotation matrix including rotation parameters;
calculating a third rotation matrix from the feature points in the images using a RANSAC algorithm and the first rotation matrix, the third rotation matrix including rotation parameters;
calculating a homography matrix from the feature points in the images, the homography matrix including rotation parameters;
replacing the rotation parameters in the homography matrix with the rotation parameters in the calculated third rotation matrix to obtain the transformation matrix.
7. The method according to claim 3, characterized in that before calculating feature points from the images and performing feature point matching, the method further comprises:
judging whether the images are fisheye images;
if the images are fisheye images, performing distortion correction on the fisheye images;
wherein calculating feature points from the images and performing feature point matching comprises: calculating feature points from the corrected images and performing feature point matching.
8. A device, characterized in that the device comprises units for performing the feature extraction method according to any one of claims 1-2, or comprises units for performing the panorama stitching method according to any one of claims 3-7.
9. An apparatus, characterized in that the apparatus comprises a memory, and a processor connected to the memory;
the memory is used to store program data for realizing feature extraction, and the processor is used to run the program data stored in the memory to perform the method according to any one of claims 1-2; or
the memory is used to store program data for realizing panoramic stitching, and the processor is used to run the program data stored in the memory to perform the method according to any one of claims 3-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more pieces of program data, and the one or more pieces of program data can be executed by one or more processors to realize the method according to any one of claims 1-2 or the method according to any one of claims 3-7.
CN201710790764.9A 2017-09-05 2017-09-05 A kind of feature extracting method, panorama mosaic method and its device, equipment and computer-readable recording medium Pending CN107665479A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710790764.9A CN107665479A (en) 2017-09-05 2017-09-05 A kind of feature extracting method, panorama mosaic method and its device, equipment and computer-readable recording medium
PCT/CN2017/102871 WO2019047284A1 (en) 2017-09-05 2017-09-22 Methods for feature extraction and panoramic stitching, and apparatus thereof, device, readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710790764.9A CN107665479A (en) 2017-09-05 2017-09-05 A kind of feature extracting method, panorama mosaic method and its device, equipment and computer-readable recording medium

Publications (1)

Publication Number Publication Date
CN107665479A true CN107665479A (en) 2018-02-06

Family

ID=61098406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710790764.9A Pending CN107665479A (en) 2017-09-05 2017-09-05 A kind of feature extracting method, panorama mosaic method and its device, equipment and computer-readable recording medium

Country Status (2)

Country Link
CN (1) CN107665479A (en)
WO (1) WO2019047284A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305281A (en) * 2018-02-09 2018-07-20 深圳市商汤科技有限公司 Calibration method, device, storage medium, program product and the electronic equipment of image
CN109102464A (en) * 2018-08-14 2018-12-28 四川易为智行科技有限公司 Panorama Mosaic method and device
CN109272442A (en) * 2018-09-27 2019-01-25 百度在线网络技术(北京)有限公司 Processing method, device, equipment and the storage medium of panorama spherical surface image
CN109600584A (en) * 2018-12-11 2019-04-09 中联重科股份有限公司 Observe method and apparatus, tower crane and the machine readable storage medium of tower crane
CN110223222A (en) * 2018-03-02 2019-09-10 株式会社理光 Image split-joint method, image splicing device and computer readable storage medium
CN110298817A (en) * 2019-05-20 2019-10-01 平安科技(深圳)有限公司 Object statistical method, device, equipment and storage medium based on image procossing
CN111797860A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Feature extraction method and device, storage medium and electronic equipment
CN112037130A (en) * 2020-08-27 2020-12-04 江苏提米智能科技有限公司 Adaptive image splicing and fusing method and device, electronic equipment and storage medium
CN112712518A (en) * 2021-01-13 2021-04-27 中国农业大学 Fish counting method, fish counting device, electronic equipment and storage medium
WO2022267287A1 (en) * 2021-06-25 2022-12-29 浙江商汤科技开发有限公司 Image registration method and related apparatus, and device and storage medium

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111287A (en) * 2019-04-04 2019-08-09 上海工程技术大学 A kind of fabric multi-angle image emerging system and its method
CN110232656B (en) * 2019-06-13 2023-03-28 上海倍肯智能科技有限公司 Image splicing optimization method for solving problem of insufficient feature points
CN110689485B (en) * 2019-10-14 2022-11-04 中国空气动力研究与发展中心超高速空气动力研究所 SIFT image splicing method applied to infrared nondestructive testing of large pressure container
CN111080525B (en) * 2019-12-19 2023-04-28 成都海擎科技有限公司 Distributed image and graphic primitive splicing method based on SIFT features
CN111223073A (en) * 2019-12-24 2020-06-02 乐软科技(北京)有限责任公司 Virtual detection system
CN111242221B (en) * 2020-01-14 2023-06-20 西交利物浦大学 Image matching method, system and storage medium based on image matching
CN111738920B (en) * 2020-06-12 2023-08-29 山东大学 FPGA chip oriented to panoramic stitching acceleration and panoramic image stitching method
CN111585535B (en) * 2020-06-22 2022-11-08 中国电子科技集团公司第二十八研究所 Feedback type digital automatic gain control circuit
CN111899158B (en) * 2020-07-29 2023-08-25 北京天睿空间科技股份有限公司 Image Stitching Method Considering Geometric Distortion
CN112163996B (en) * 2020-09-10 2023-12-05 沈阳风驰软件股份有限公司 Flat angle video fusion method based on image processing
CN112102169A (en) * 2020-09-15 2020-12-18 合肥英睿系统技术有限公司 Infrared image splicing method and device and storage medium
CN112419383B (en) * 2020-10-30 2023-07-28 中山大学 Depth map generation method, device and storage medium
CN112465702B (en) * 2020-12-01 2022-09-13 中国电子科技集团公司第二十八研究所 Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video
CN112837223B (en) * 2021-01-28 2023-08-29 杭州国芯科技股份有限公司 Super-large image registration splicing method based on overlapped subareas
CN112785505B (en) * 2021-02-23 2023-01-31 深圳市来科计算机科技有限公司 Day and night image splicing method
CN113034362A (en) * 2021-03-08 2021-06-25 桂林电子科技大学 Expressway tunnel monitoring panoramic image splicing method
CN113066012B (en) * 2021-04-23 2024-04-09 深圳壹账通智能科技有限公司 Scene image confirmation method, device, equipment and storage medium
CN113256492B (en) * 2021-05-13 2023-09-12 上海海事大学 Panoramic video stitching method, electronic equipment and storage medium
CN113724176A (en) * 2021-08-23 2021-11-30 广州市城市规划勘测设计研究院 Multi-camera motion capture seamless connection method, device, terminal and medium
CN114339157B (en) * 2021-12-30 2023-03-24 福州大学 Multi-camera real-time splicing system and method with adjustable observation area
CN114627262B (en) * 2022-05-11 2022-08-05 武汉大势智慧科技有限公司 Image generation method and system based on oblique grid data
CN115861927A (en) * 2022-12-01 2023-03-28 中国南方电网有限责任公司超高压输电公司大理局 Image identification method and device for power equipment inspection image and computer equipment
CN116452426B (en) * 2023-06-16 2023-09-05 广汽埃安新能源汽车股份有限公司 Panorama stitching method and device
CN117011147B (en) * 2023-10-07 2024-01-12 之江实验室 Infrared remote sensing image feature detection and splicing method and device
CN117168344B (en) * 2023-11-03 2024-01-26 杭州鲁尔物联科技有限公司 Monocular panorama looking around deformation monitoring method and device and computer equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354254A (en) * 2008-09-08 2009-01-28 北京航空航天大学 Method for tracking aircraft course
US20120148164A1 (en) * 2010-12-08 2012-06-14 Electronics And Telecommunications Research Institute Image matching devices and image matching methods thereof
CN105608667A (en) * 2014-11-20 2016-05-25 深圳英飞拓科技股份有限公司 Method and device for panoramic stitching
CN106558072A (en) * 2016-11-22 2017-04-05 重庆信科设计有限公司 A kind of method based on SIFT feature registration on remote sensing images is improved


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305281B (en) * 2018-02-09 2020-08-11 深圳市商汤科技有限公司 Image calibration method, device, storage medium, program product and electronic equipment
CN108305281A (en) * 2018-02-09 2018-07-20 深圳市商汤科技有限公司 Calibration method, device, storage medium, program product and the electronic equipment of image
CN110223222A (en) * 2018-03-02 2019-09-10 株式会社理光 Image split-joint method, image splicing device and computer readable storage medium
CN109102464A (en) * 2018-08-14 2018-12-28 四川易为智行科技有限公司 Panorama Mosaic method and device
CN109272442A (en) * 2018-09-27 2019-01-25 百度在线网络技术(北京)有限公司 Processing method, device, equipment and the storage medium of panorama spherical surface image
CN109600584A (en) * 2018-12-11 2019-04-09 中联重科股份有限公司 Observe method and apparatus, tower crane and the machine readable storage medium of tower crane
CN111797860A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Feature extraction method and device, storage medium and electronic equipment
CN111797860B (en) * 2019-04-09 2023-09-26 Oppo广东移动通信有限公司 Feature extraction method and device, storage medium and electronic equipment
CN110298817A (en) * 2019-05-20 2019-10-01 平安科技(深圳)有限公司 Object statistical method, device, equipment and storage medium based on image procossing
CN112037130A (en) * 2020-08-27 2020-12-04 江苏提米智能科技有限公司 Adaptive image splicing and fusing method and device, electronic equipment and storage medium
CN112037130B (en) * 2020-08-27 2024-03-26 江苏提米智能科技有限公司 Self-adaptive image stitching fusion method and device, electronic equipment and storage medium
CN112712518A (en) * 2021-01-13 2021-04-27 中国农业大学 Fish counting method, fish counting device, electronic equipment and storage medium
CN112712518B (en) * 2021-01-13 2024-01-09 中国农业大学 Fish counting method and device, electronic equipment and storage medium
WO2022267287A1 (en) * 2021-06-25 2022-12-29 浙江商汤科技开发有限公司 Image registration method and related apparatus, and device and storage medium

Also Published As

Publication number Publication date
WO2019047284A1 (en) 2019-03-14

Similar Documents

Publication Publication Date Title
CN107665479A (en) A kind of feature extracting method, panorama mosaic method and its device, equipment and computer-readable recording medium
JP2022173399A (en) Image processing apparatus, and image processing method
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN104169965B (en) For system, the method and computer program product adjusted during the operation of anamorphose parameter in more filming apparatus systems
Cyganek et al. An introduction to 3D computer vision techniques and algorithms
WO2015139574A1 (en) Static object reconstruction method and system
Pollefeys Visual 3D Modeling from Images.
US20130083966A1 (en) Match, Expand, and Filter Technique for Multi-View Stereopsis
CN107666606A (en) Binocular panoramic picture acquisition methods and device
CN107833181A (en) A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
EP3182371A1 (en) Threshold determination in for example a type ransac algorithm
CN110111248A (en) A kind of image split-joint method based on characteristic point, virtual reality system, camera
CN103971366B (en) A kind of solid matching method being polymerize based on double weights
CN106030661A (en) View independent 3d scene texturing
CN108470324A (en) A kind of binocular stereo image joining method of robust
CN107895377A (en) A kind of foreground target extracting method, device, equipment and storage medium
WO2022222077A1 (en) Indoor scene virtual roaming method based on reflection decomposition
JP4327919B2 (en) A method to recover radial distortion parameters from a single camera image
CN109840485A (en) A kind of micro- human facial feature extraction method, apparatus, equipment and readable storage medium storing program for executing
CN108093188B (en) A method of the big visual field video panorama splicing based on hybrid projection transformation model
CN109978760A (en) A kind of image split-joint method and device
CN109767381A (en) A kind of rectangle panoramic picture building method of the shape optimum based on feature selecting
CN107240126A (en) The calibration method of array image
CN109948575A (en) Eyeball dividing method in ultrasound image
Xu et al. Scalable image-based indoor scene rendering with reflections

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180206)