CN111445389A - Wide-view-angle rapid splicing method for high-resolution images - Google Patents

Wide-view-angle rapid splicing method for high-resolution images Download PDF

Info

Publication number
CN111445389A
CN111445389A (application CN202010112722.1A)
Authority
CN
China
Prior art keywords
image
images
matching
spliced
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010112722.1A
Other languages
Chinese (zh)
Inventor
李向春
张浩
贾欣鑫
王雷
王起维
程岩
王鑫
段利亚
蔡玉龙
李恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oceanographic Instrumentation Research Institute Shandong Academy of Sciences
Original Assignee
Oceanographic Instrumentation Research Institute Shandong Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oceanographic Instrumentation Research Institute, Shandong Academy of Sciences
Priority to CN202010112722.1A
Publication of CN111445389A
Priority to PCT/CN2020/122297 (published as WO2021169334A1)
Legal status: Withdrawn

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Abstract

The invention discloses a wide-view-angle fast splicing method for high-resolution images, aiming at improving the speed and precision of image matching. The method comprises the following steps: down-sampling a plurality of original images to be spliced; determining the overlapping area between the down-sampled images; detecting feature points in the overlapping area with the SURF algorithm; generating feature point descriptors for the detected feature points with the BRIEF algorithm; performing coarse matching of feature point pairs on the descriptors with the Brute-Force Matcher algorithm; calculating the Hamming distance between the two feature point descriptors of each pair and eliminating false matches; performing fine matching with the RANSAC algorithm and solving a homography matrix to obtain the relative position relationship between the images; and performing image fusion on the overlapping area of each original image with a fade-in/fade-out weighted average algorithm, so as to fuse and splice the images into a wide-view-angle image.

Description

Wide-view-angle rapid splicing method for high-resolution images
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for splicing a plurality of pictures of the same scene with mutually overlapped parts into an image with a wide visual angle.
Background
The image stitching technology is a technology for stitching a plurality of images with overlapped parts (which may be images obtained at different times, different viewing angles or different sensors) into a seamless panoramic image with a wide viewing angle.
The image stitching technology has become an important research subject in computer graphics and computer vision, and has wide application in military and civil fields. Image stitching technologies can be divided into real-time video image stitching technologies and static image stitching technologies according to different application occasions. The requirements for wide-field videos and panoramic images are embodied in daily used monitoring videos, and the dynamic sensing and monitoring capability of people on objects and scenes can be greatly improved.
The technical points in the image splicing process relate to the aspects of data acquisition, image registration, image re-projection, image fusion and the like, wherein the image registration is a key technology of image splicing. At present, the main technical problem of image registration is the speed and precision of image matching, and the matching time, matching precision and robustness of a matching algorithm are main indexes for measuring the quality of an image matching algorithm.
Disclosure of Invention
The invention aims to provide a wide-view-angle fast splicing method of high-resolution images so as to improve the speed and the precision of image matching.
In order to solve the technical problems, the invention adopts the following technical scheme:
a wide-view-angle fast splicing method for high-resolution images comprises the following steps: carrying out down-sampling processing on a plurality of original images to be spliced; determining an overlapping area between the down-sampled images; detecting characteristic points in the overlapping region by adopting an SURF algorithm; aiming at the detected feature points, generating feature point descriptors by adopting a BRIEF algorithm; aiming at the characteristic point descriptor, carrying out characteristic point pair rough matching by adopting a Brute-Froce Matcher algorithm; calculating the Hamming distance between the two feature point descriptors, and eliminating error matching; performing fine matching by using a RANSAC algorithm, solving a homography matrix to obtain a relative position relation between images; and carrying out image fusion processing on the overlapped area of each original image based on a weighted average algorithm of gradual-in and gradual-out so as to fuse and splice the overlapped area into a wide view angle image.
Preferably, the multiple original images are three same scene images which are continuous in space and have overlapping parts with each other, the middle image is defined as a reference image, the other two images are a left image to be spliced and a right image to be spliced, and the reference image, the left image to be spliced and the right image to be spliced are respectively subjected to the wide-view-angle fast splicing method so as to fuse and splice the three original images into a wide-view-angle image.
Preferably, the down-sampling process reduces the resolution of the plurality of images to be stitched to 10% of the original image.
Preferably, determining the overlapping area between the down-sampled images includes calculating the degree of overlap between the down-sampled reference image and each of the down-sampled left and right images to be spliced with the following formulas:

Overlap_1 = l_LM / W
Overlap_2 = l_MR / W

where Overlap_1 is the degree of overlap between the down-sampled left image to be spliced and the down-sampled reference image; Overlap_2 is the degree of overlap between the down-sampled right image to be spliced and the down-sampled reference image; l_LM is the width of the overlapping area between the down-sampled left image to be spliced and the down-sampled reference image; l_MR is the width of the overlapping area between the down-sampled right image to be spliced and the down-sampled reference image; W is the width of the down-sampled original images; and the parameters l_LM and l_MR are empirical values. The overlapping area A_L of the down-sampled left image to be spliced is the region whose width runs from W·(1 − Overlap_1) to W. The down-sampled reference image has two overlapping areas: the area A_LM, whose width runs from 0 to W·Overlap_1, and the area A_MR, whose width runs from W·(1 − Overlap_2) to W. The overlapping area A_R of the down-sampled right image to be spliced is the region whose width runs from 0 to W·Overlap_2.
Preferably, calculating the Hamming distance between two feature point descriptors and eliminating false matches includes: calculating the Hamming distance between any two feature point descriptors; screening out, from all the calculated Hamming distances, the minimum Hamming distance as the best matching value and the maximum Hamming distance as the worst matching value; and treating coarse-matched feature point pairs whose Hamming distance is greater than a threshold T as false matching pairs and removing them.
Preferably, the threshold T is set to 2.5 times the best match value.
Compared with the prior art, the invention has the following advantages and positive effects. Down-sampling the original images to be spliced avoids the large computation load caused by the many redundant feature points that a high-resolution image generates during splicing. Using the overlapping area of two images as the detection region for feature point detection shortens the matching time and increases the matching speed. Detecting feature points in the overlapping area with the SURF algorithm and describing them with the BRIEF algorithm reduces memory occupation and raises the rate of feature point generation. Three-level matching of feature point pairs with the Brute-Force Matcher algorithm, the Hamming distance and the RANSAC algorithm solves a relatively accurate homography matrix, from which the relative position relationship of the images is obtained. Finally, the overlapping regions of the images to be spliced are fused with a fade-in/fade-out weighted average algorithm and spliced into a wide-view-angle image, realizing fast, accurate and seamless splicing.
Other features and advantages of the present invention will become more apparent from the detailed description of the embodiments of the present invention when taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a general flowchart of an embodiment of a method for fast stitching a high-resolution image with a wide viewing angle according to the present invention;
FIG. 2 shows the three down-sampled images to be spliced;
FIG. 3 shows the result of brute-force matching on the overlapping regions of the left and middle images in FIG. 2;
FIG. 4 shows the matching result after Hamming-distance optimization on the overlapping regions of the left and middle images in FIG. 2;
FIG. 5 shows the wide-view-angle image formed by fusing and splicing the three original images;
FIG. 6 shows the feature point matching result when the whole of the left and middle images is used as the detection region;
fig. 7 shows the feature point matching result when the overlapping region of the left and middle images is used as the detection region.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
The wide-view-angle fast splicing method for high-resolution images completes the seamless splicing of multiple images by converting the images to be spliced into the coordinate system of a common reference image. It is especially suitable for splicing three or more high-resolution images, increasing the splicing speed and improving the splicing precision.
In this embodiment, three high-resolution original images are taken as an example to describe the wide-view-angle fast splicing method in detail; as shown in fig. 1, the method comprises the following steps:
S101: three spatially continuous images of the same scene with mutually overlapping portions are extracted as the original images to be spliced; the original image in the middle serves as the reference image M, and the other two, the left image L and the right image R, serve as the images to be spliced in the subsequent splicing process.
S102: the reference image M and the two images to be spliced L and R are down-sampled.
In this embodiment, the resolution of the reference image M and the two images to be spliced L and R is preferably reduced to about 10% of the original; for example, an original image with a resolution of 4912 × 3264 is down-sampled to a resolution of 480 × 318.
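Reducing the linear resolution to roughly one tenth can be sketched as follows (a minimal numpy sketch; plain decimation is an assumption here, since the patent does not specify the resampling kernel):

```python
import numpy as np

def downsample(img: np.ndarray, factor: int = 10) -> np.ndarray:
    """Crude decimation: keep every `factor`-th pixel along each axis.
    A production pipeline would low-pass filter first (e.g. area
    resampling) to avoid aliasing."""
    return img[::factor, ::factor]

hi_res = np.zeros((3264, 4912), dtype=np.uint8)  # resolution from the patent example
lo_res = downsample(hi_res, 10)
print(lo_res.shape)  # → (327, 492)
```

The exact 480 × 318 result in the text implies a proper resize rather than decimation; the sketch only illustrates the order-of-magnitude reduction.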
The three high-resolution original images are down-sampled to obtain the reference image M_J and the images to be spliced L_J and R_J, as shown in fig. 2.
S103: the overlapping areas between the down-sampled reference image M_J and the images to be spliced L_J and R_J are determined.
In this embodiment, the overlapping area between the reference image M_J and the left image to be spliced L_J, and that between M_J and the right image to be spliced R_J, are preferably determined by artificially predicting the degree of overlap.
As a preferred design of this embodiment, the degree of overlap between the reference image M_J and each image to be spliced L_J, R_J can be calculated with the following formulas:

Overlap_1 = l_LM / W
Overlap_2 = l_MR / W

where Overlap_1 is the degree of overlap between the left image to be spliced L_J and the reference image M_J; Overlap_2 is the degree of overlap between the right image to be spliced R_J and the reference image M_J; l_LM is the width of the overlapping region of L_J and M_J; l_MR is the width of the overlapping region of R_J and M_J; and W is the width of the down-sampled images M_J/L_J/R_J. The parameters l_LM and l_MR may be determined empirically.
The overlapping area A_L of the left image to be spliced L_J is determined as the region whose width runs from W·(1 − Overlap_1) to W.
The middle reference image M_J has two overlapping areas:
the area A_LM, whose width runs from 0 to W·Overlap_1; and
the area A_MR, whose width runs from W·(1 − Overlap_2) to W.
The overlapping area A_R of the right image to be spliced R_J is determined as the region whose width runs from 0 to W·Overlap_2.
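These region computations can be sketched in a few lines of numpy (the overlap widths 144 and 120 below are made-up illustrative values, not taken from the patent):

```python
import numpy as np

def overlap_regions(W, l_LM, l_MR):
    """Return the column ranges (start, stop) of A_L, A_LM, A_MR, A_R."""
    overlap1 = l_LM / W
    overlap2 = l_MR / W
    A_L  = (int(W * (1 - overlap1)), W)   # right strip of the left image
    A_LM = (0, int(W * overlap1))         # left strip of the reference image
    A_MR = (int(W * (1 - overlap2)), W)   # right strip of the reference image
    A_R  = (0, int(W * overlap2))         # left strip of the right image
    return A_L, A_LM, A_MR, A_R

A_L, A_LM, A_MR, A_R = overlap_regions(W=480, l_LM=144, l_MR=120)
print(A_L, A_LM, A_MR, A_R)  # → (336, 480) (0, 144) (360, 480) (0, 120)
```

Feature detection then runs only on image columns inside these ranges, e.g. `img[:, A_L[0]:A_L[1]]`.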
S104: detecting feature points in the overlapping areas with the SURF algorithm;
In this embodiment, the SURF (Speeded-Up Robust Features) algorithm, commonly used in current image splicing technology, is preferably used to detect the feature points in each overlapping area A_L, A_LM, A_MR and A_R. Since computing feature points in an image with the SURF algorithm is a mature existing technique, this embodiment does not describe it in detail.
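At its core, SURF scores candidate points with an approximated determinant of the Hessian; a toy numpy stand-in for that response (finite differences replace SURF's box-filter/integral-image approximations, an assumption made purely for illustration):

```python
import numpy as np

def hessian_response(img, w=0.9):
    """Determinant-of-Hessian blob response, the quantity SURF thresholds.
    The 0.9 weight compensates for the box-filter approximation of Dxy."""
    Dx  = np.gradient(img, axis=1)
    Dxx = np.gradient(Dx, axis=1)
    Dyy = np.gradient(np.gradient(img, axis=0), axis=0)
    Dxy = np.gradient(Dx, axis=0)
    return Dxx * Dyy - (w * Dxy) ** 2

# A Gaussian blob: its centre should give the strongest response.
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 4.0 ** 2))
r, c = np.unravel_index(np.argmax(hessian_response(blob)), blob.shape)
print(r, c)  # → 32 32
```

The real algorithm additionally searches this response across scales and applies non-maximum suppression.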
S105: for the detected feature points, feature description is performed with the BRIEF algorithm to generate feature point descriptors;
the specific process comprises the following steps:
(1) Each overlapping area A_L, A_LM, A_MR, A_R is smoothed with Gaussian filtering (variance 2, filter window size 9 × 9) to reduce sensitivity to noise.
(2) Let p be a neighborhood window centered on a feature point in the overlapping area A_L, A_LM, A_MR or A_R. A binary test τ is defined in the window: two points x and y are selected at random, their pixel values are compared, and a binary value is assigned as follows:

τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise,

where p(x) and p(y) are the gray values of the smoothed image at the random points x = (u_1, v_1) and y = (u_2, v_2).
(3) N pairs of random points are selected in the window and the binary assignment of step (2) is repeated, forming an N-dimensional binary code. This code is the description of the feature point, i.e., the feature descriptor. In this embodiment, N = 128 is preferably configured.
For the specific method of generating feature point descriptors with the BRIEF algorithm, refer to the description in Calonder M., Lepetit V., Strecha C., et al., "BRIEF: Binary Robust Independent Elementary Features".
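Steps (1)–(3) can be sketched compactly in numpy (a fixed random seed stands in for BRIEF's precomputed sampling pattern, and the Gaussian pre-smoothing is omitted; both are assumptions for illustration):

```python
import numpy as np

def brief_descriptor(img, kp, n_bits=128, patch=9, seed=0):
    """BRIEF-style binary descriptor for keypoint kp = (row, col)."""
    rng = np.random.default_rng(seed)        # same pattern for every keypoint
    half = patch // 2
    pairs = rng.integers(-half, half + 1, size=(n_bits, 4))  # (u1, v1, u2, v2)
    r, c = kp
    bits = [1 if img[r + u1, c + v1] < img[r + u2, c + v2] else 0
            for u1, v1, u2, v2 in pairs]     # the tau test: p(x) < p(y)
    return np.array(bits, dtype=np.uint8)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
d = brief_descriptor(img, kp=(16, 16))
print(d.shape)  # → (128,)
```

Because the sampling pattern is shared across keypoints, two descriptors are directly comparable bit-by-bit, which is what makes the Hamming distance below meaningful.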
S106: performing coarse matching of feature point pairs on the feature point descriptors with the Brute-Force Matcher algorithm;
After step S105, a 256-bit binary code is obtained for each feature point extracted from each image. The feature descriptors of the left and middle images L_J, M_J, and of the middle and right images M_J, R_J, are matched with the brute-force matching (Brute-Force Matcher) algorithm, yielding coarse-matched feature point pairs.
The brute-force matching algorithm computes the distance between one feature point descriptor and all other feature point descriptors, sorts the resulting distances, and takes the closest as the matching point. The result of brute-force matching of the left and middle images L_J, M_J is shown in fig. 3; as fig. 3 shows, there are many false matches, so a further mechanism is needed to optimize the matching result.
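A numpy sketch of brute-force nearest-neighbor matching by Hamming distance over such binary descriptors (the random descriptors below are illustrative data only):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two 0/1 bit vectors."""
    return int(np.count_nonzero(a != b))

def brute_force_match(desc1, desc2):
    """For each descriptor in desc1, return (index into desc2, distance)
    of its nearest neighbour by Hamming distance."""
    matches = []
    for d1 in desc1:
        dists = [hamming(d1, d2) for d2 in desc2]
        j = int(np.argmin(dists))
        matches.append((j, dists[j]))
    return matches

rng = np.random.default_rng(0)
desc2 = rng.integers(0, 2, size=(50, 128))     # "right" image descriptors
desc1 = desc2[[3, 7, 9]].copy()                # three true correspondences
desc1[0, :5] ^= 1                              # corrupt 5 bits of the first query
print(brute_force_match(desc1, desc2))  # → [(3, 5), (7, 0), (9, 0)]
```

This is the O(n·m) step the patent then prunes with the Hamming-distance threshold and RANSAC.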
S107: among the coarse-matched feature point pairs, calculating the Hamming distance between the two feature point descriptors, eliminating false matches and completing the optimization of the matched feature point pairs;
For the coarse-matched feature point pairs obtained in step S106, the Hamming distance between any two feature point descriptors is calculated. From all the calculated Hamming distances, the best matching value (minimum Hamming distance) and the worst matching value (maximum Hamming distance) are screened out, and coarse-matched feature point pairs whose Hamming distance is greater than 2.5 times the best matching value are preferably removed as false matching pairs.
Taking the Hamming distance between two feature point descriptors (the Hamming distance between two binary strings is the number of bit positions in which they differ) as the similarity criterion for feature point matching, the calculation can proceed as follows:
① set the initial worst matching value max_dist to 0 and the initial best matching value min_dist to 100;
② select corresponding feature points from the overlapping areas of the reference image M_J and the images to be spliced L_J, R_J, and calculate the Hamming distance between the two feature point descriptors;
③ if the Hamming distance between two feature point descriptors is smaller than the best matching value, take the current Hamming distance as the best matching value;
④ select further feature points from the reference image M_J and the images to be spliced L_J, R_J and calculate their Hamming distances, repeating step ③ until all feature points in the overlapping areas of M_J, L_J and R_J have been traversed and the best and worst matching values have been found;
⑤ take "Hamming distance not greater than the threshold T" as the criterion for screening correctly matched feature point pairs. In this embodiment the threshold T is preferably 2.5 times the best matching value: a pair whose distance is not greater than T is considered a correct match, while a pair whose distance is greater than T is considered a false match and filtered out, completing the optimization of the matched feature point pairs.
The result of the optimized matching of the left and middle images L_J, M_J is shown in fig. 4.
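The min/max scan and the 2.5× threshold of steps ① through ⑤ can be sketched as follows (the distances are made-up illustrative values):

```python
def filter_matches(matches, ratio=2.5):
    """Keep matches whose Hamming distance is <= ratio * best distance.
    `matches` is a list of (index, distance) pairs from coarse matching."""
    dists = [d for _, d in matches]
    min_dist, max_dist = min(dists), max(dists)  # best and worst matching values
    threshold = ratio * min_dist
    kept = [(j, d) for j, d in matches if d <= threshold]
    return kept, min_dist, max_dist

coarse = [(0, 4), (1, 9), (2, 11), (3, 60), (4, 7)]
kept, best, worst = filter_matches(coarse)
print(best, worst, kept)  # → 4 60 [(0, 4), (1, 9), (4, 7)]
```

A relative threshold adapts to the descriptor noise level, unlike a fixed distance cut-off; the initial min_dist of 100 in step ① simply seeds the scan.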
S108: performing fine matching on the optimized matched feature point pairs with the RANSAC algorithm and solving the homography matrix, obtaining the relative position relationship between the target images;
In this embodiment, for the matched feature point pairs obtained after the optimization of step S107, the RANSAC (RANdom SAmple Consensus) algorithm is preferably used to filter out remaining false matches and to solve an optimal homography matrix, so that the reference image M and the images to be spliced L, R are converted into the same coordinate system, yielding the relative position relationship among the three images.
Specifically, the findHomography() function encapsulated in OpenCV can be used: mismatched points are removed with the RANSAC method, the homography matrix between two images is computed from the matching points, and the reprojection error is used to judge whether each match is correct, so that an optimal 3 × 3 perspective-transformation homography matrix H is solved. The homography matrix H is adjusted with an adjustment matrix adjustMat; after the images to be spliced L and R undergo perspective transformation with the adjusted matrices, they are converted into the same coordinate system as the reference image M.
The specific process of using the RANSAC algorithm to perform the fine matching on the feature points may refer to the related description in the chinese patent application with the application number of 201410626230.9 and the name of "a fast image stitching method based on the improved SURF algorithm", and this embodiment is not described in detail herein.
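What findHomography() solves at its core is a direct linear transform (DLT) from point correspondences; a minimal numpy sketch of that inner solve, without the RANSAC sampling loop (a toy under stated assumptions, not OpenCV's implementation):

```python
import numpy as np

def solve_homography(src, dst):
    """DLT: solve 3x3 H (scaled so H[2,2] = 1) with dst ~ H @ src, >= 4 pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]

# Recover a known homography (here a pure translation by (5, 3)).
H_true = np.array([[1.0, 0, 5], [0, 1.0, 3], [0, 0, 1]])
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = [tuple((H_true @ np.array([x, y, 1.0]))[:2]) for x, y in src]
H = solve_homography(src, dst)
print(np.allclose(H, H_true, atol=1e-6))  # → True
```

RANSAC wraps this solve: it repeatedly fits H to random 4-point subsets and keeps the H with the most correspondences whose reprojection error falls under a threshold.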
S109: performing image fusion on the overlapping areas of the original reference image M and the original images to be spliced L and R with the fade-in/fade-out weighted average algorithm, fusing and splicing them into a wide-view-angle image;
The fade-in/fade-out method weights the gray values of the two images in the overlapping area with a linear weighting transition function to obtain the gray value of the fused image, realizing a smooth transition across the boundary of the overlapping area.
Specifically, the fade-in/fade-out weighted average is calculated as:

f(x, y) = f1(x, y), for (x, y) in the region covered only by f1;
f(x, y) = w1·f1(x, y) + w2·f2(x, y), for (x, y) in the overlapping area;
f(x, y) = f2(x, y), for (x, y) in the region covered only by f2;

where f1 denotes the original image to be spliced L or R; f2 denotes the original reference image M; f denotes the wide-view-angle image obtained after fusion; and w1 and w2 are the weights of the corresponding pixels, with w1 + w2 = 1, 0 < w1 < 1, 0 < w2 < 1. The weight w1 is determined by the size of the image overlapping area; as w1 decreases gradually from 1 to 0, w2 increases gradually from 0 to 1, so that the reference image M and the images to be spliced L and R transition smoothly and no splicing seam appears.
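A numpy sketch of this linear blend along a horizontal overlap (illustrative widths; constant grayscale rows stand in for real images):

```python
import numpy as np

def feather_blend(f1, f2, overlap):
    """Blend the right `overlap` columns of f1 with the left `overlap`
    columns of f2 using linear fade-in/fade-out weights (w1 + w2 = 1)."""
    w1 = np.linspace(1.0, 0.0, overlap)   # fades out across the overlap
    w2 = 1.0 - w1                         # fades in
    left = f1[:, :-overlap]
    mid = w1 * f1[:, -overlap:] + w2 * f2[:, :overlap]
    right = f2[:, overlap:]
    return np.hstack([left, mid, right])

f1 = np.full((2, 6), 100.0)   # stands in for the image to be spliced
f2 = np.full((2, 6), 200.0)   # stands in for the reference image
out = feather_blend(f1, f2, overlap=3)
print(out[0])  # → [100. 100. 100. 100. 150. 200. 200. 200. 200.]
```

The middle value of 150 shows the smooth gray-level transition that hides the seam; with real images the same weighting is applied per pixel (and per channel for color).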
Therefore, seamless splicing of the three images is completed, and the spliced wide-view-angle image is shown in fig. 5.
In this embodiment, when three images are spliced into a wide-view-angle image, the middle image serves as the common reference image and the left and right images as the images to be spliced; converting the images to be spliced into the same coordinate system as the reference image through the homography transformation matrix realizes seamless splicing of the three images quickly and efficiently.
The three spliced images are then treated as a single image and spliced with the remaining images to be spliced by repeating steps S101 to S109 until all images have been spliced.
A specific application example is listed below:
the hardware environment of the simulation experiment is an association workbench, the hardware configuration is Intel Core i3-4170 CPU, the main frequency is 3.70GHz, the memory is 3.42GB, and the operating system is Windows7/32 bits. The software environment is Visual Studio 2010.
In the simulation experiment, three images with overlapping areas are taken, each with a resolution of 480 × 318 pixels in jpg format, as shown in fig. 2. The middle image serves as the reference image, and the left and right images as the images to be spliced. Feature points are extracted from the overlapping areas of the three images with the SURF algorithm and described with the BRIEF algorithm; the BFMatcher() function (the Brute-Force Matcher algorithm) performs coarse matching of the feature points, the Hamming distance further eliminates mismatched points, and the RANSAC algorithm performs fine matching.
Comparing fig. 6 and fig. 7: with the whole image as the detection area, 57 optimal matching pairs are obtained after optimization, taking 151.5 ms; with the overlapping area as the detection area, 40 optimal matching pairs are obtained after optimization, taking 117.843 ms, and the matching speed is improved by 70%.
After the fusion splicing of the left, middle and right images by fade-in/fade-out weighted averaging is completed and the splicing results are resampled, the wide-view-angle image formed by fusing and splicing the reference image and the images to be spliced is shown in fig. 5.
The method provided by the invention can be implemented on a DM8168, on which a real-time video splicing system has been developed.
The invention spatially matches and aligns multiple pictures of the same scene with mutually overlapping portions and, after resampling and fusion, forms a new wide-view-angle image that contains the information of all the constituent images and is complete and of high definition. It can effectively reduce the hardware and labor cost of equipment for acquiring wide-view-angle images (such as wide-angle lenses and fisheye lenses), and resolves the contradiction between data redundancy and splicing speed in wide-view-angle image splicing.
The wide-view-angle fast splicing method for high-resolution images is applicable to fields such as large-scene video monitoring, video conferencing, traffic safety, virtual reality, super-resolution reconstruction, medical image analysis, remote sensing image processing and visual SLAM.
Of course, the above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A wide-view-angle fast splicing method for high-resolution images is characterized by comprising the following steps:
carrying out down-sampling processing on a plurality of original images to be spliced;
determining an overlapping area between the down-sampled images;
detecting characteristic points in the overlapping region by adopting an SURF algorithm;
aiming at the detected feature points, generating feature point descriptors by adopting a BRIEF algorithm;
aiming at the feature point descriptors, carrying out coarse matching of feature point pairs by adopting the Brute-Force Matcher algorithm;
calculating the Hamming distance between the two feature point descriptors, and eliminating error matching;
performing fine matching by using a RANSAC algorithm, solving a homography matrix to obtain a relative position relation between images;
and carrying out image fusion processing on the overlapped area of each original image based on a weighted average algorithm of gradual-in and gradual-out so as to fuse and splice the overlapped area into a wide view angle image.
2. The method according to claim 1, wherein the plurality of original images are three images of a same scene that are spatially continuous and have overlapping portions with each other, the middle image is defined as a reference image, the other two images are a left image to be stitched and a right image to be stitched, and the reference image and the left image to be stitched and the right image to be stitched are respectively subjected to the method for fast stitching with wide viewing angle, so as to fuse and stitch the three original images into a wide viewing angle image.
3. The method according to claim 1, wherein the down-sampling process is to reduce the resolution of the images to be stitched to 10% of the original image.
4. The method for fast stitching of high resolution images in wide view angle according to claim 2, wherein in the determining of the overlapping area between the down-sampled images, the method comprises:
calculating the overlapping degree between the down-sampled reference image and the down-sampled left image to be spliced and the down-sampled right image to be spliced by adopting the following formula:
Figure FDA0002390570190000011
Figure FDA0002390570190000021
in the formula, Overlap1The overlapping degree of the left image to be spliced after the down-sampling and the reference image after the down-sampling is obtained; overlap2The overlapping degree of the right image to be spliced after the down sampling and the reference image after the down sampling; lLMThe width of an overlapping area of the left image to be spliced after the down-sampling and the reference image after the down-sampling is obtained; lMRFor the down-sampled right image to be spliced and the down-sampled right imageA width of an overlapping region of the reference image; w is the width of the original image after down sampling; wherein the parameter lLM、lMRIs an empirical value;
determining the overlapping area A_L of the down-sampled left image to be stitched as: the region of width from W(1 − Overlap_1) to W;

determining the overlapping areas of the down-sampled reference image as: the region A_LM of width from 0 to W × Overlap_1, and the region A_MR of width from W(1 − Overlap_2) to W;

determining the overlapping area A_R of the down-sampled right image to be stitched as: the region of width from 0 to W × Overlap_2.
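The region bookkeeping of claim 4 can be sketched numerically. In this minimal NumPy illustration, the image width W and the empirical overlap widths l_LM and l_MR are invented example values, not values from the patent:

```python
import numpy as np

# Three down-sampled images of equal size H x W; l_LM and l_MR are the
# empirical overlap widths named in claim 4 (illustrative values only).
H, W = 100, 200
l_LM, l_MR = 50, 40

overlap1 = l_LM / W   # Overlap_1 = l_LM / W
overlap2 = l_MR / W   # Overlap_2 = l_MR / W

left = np.zeros((H, W))    # down-sampled left image to be stitched
mid = np.zeros((H, W))     # down-sampled reference image
right = np.zeros((H, W))   # down-sampled right image to be stitched

A_L = left[:, int(W * (1 - overlap1)):W]   # width W(1 - Overlap_1) .. W
A_LM = mid[:, 0:int(W * overlap1)]         # width 0 .. W * Overlap_1
A_MR = mid[:, int(W * (1 - overlap2)):W]   # width W(1 - Overlap_2) .. W
A_R = right[:, 0:int(W * overlap2)]        # width 0 .. W * Overlap_2

print(A_L.shape[1], A_LM.shape[1], A_MR.shape[1], A_R.shape[1])  # 50 50 40 40
```

Only these four strips need to be passed to feature detection, which is what makes the overlap estimate a speed-up for high-resolution inputs.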
5. The method according to claim 1, wherein calculating the Hamming distance between the feature point descriptors and eliminating mismatches comprises:
calculating the Hamming distance between any two feature point descriptors;
screening out, from all the calculated Hamming distances, the minimum Hamming distance as the optimal matching value and the maximum Hamming distance as the worst matching value;

and treating the coarsely matched feature point pairs whose Hamming distance is greater than the threshold T as mismatched pairs and removing them.
6. The method according to claim 5, wherein the threshold T is 2.5 times the optimal matching value.
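A minimal sketch of the mismatch-elimination step of claims 5 and 6, assuming binary (ORB-style) descriptors represented as bit arrays. The random descriptors and the one-to-one coarse pairing are illustrative stand-ins, not the output of a real feature matcher:

```python
import numpy as np

rng = np.random.default_rng(0)
# 20 coarsely matched pairs of 256-bit binary descriptors (illustrative).
desc_a = rng.integers(0, 2, size=(20, 256), dtype=np.uint8)
desc_b = rng.integers(0, 2, size=(20, 256), dtype=np.uint8)

def hamming(d1: np.ndarray, d2: np.ndarray) -> int:
    """Hamming distance: the number of differing bits."""
    return int(np.count_nonzero(d1 != d2))

# Distance of each coarse match (descriptor i in A paired with i in B).
dists = [hamming(a, b) for a, b in zip(desc_a, desc_b)]
best = min(dists)    # optimal matching value
worst = max(dists)   # worst matching value
T = 2.5 * best       # threshold from claim 6

# Pairs whose distance exceeds T are treated as mismatches and removed.
good = [i for i, d in enumerate(dists) if d <= T]
```

Because T is tied to the best observed match rather than fixed, the filter adapts to how distinctive the scene's features are; the pair attaining the minimum distance always survives.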
CN202010112722.1A 2020-02-24 2020-02-24 Wide-view-angle rapid splicing method for high-resolution images Withdrawn CN111445389A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010112722.1A CN111445389A (en) 2020-02-24 2020-02-24 Wide-view-angle rapid splicing method for high-resolution images
PCT/CN2020/122297 WO2021169334A1 (en) 2020-02-24 2020-10-21 Rapid wide-angle stitching method for high-resolution images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010112722.1A CN111445389A (en) 2020-02-24 2020-02-24 Wide-view-angle rapid splicing method for high-resolution images

Publications (1)

Publication Number Publication Date
CN111445389A true CN111445389A (en) 2020-07-24

Family

ID=71655656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010112722.1A Withdrawn CN111445389A (en) 2020-02-24 2020-02-24 Wide-view-angle rapid splicing method for high-resolution images

Country Status (2)

Country Link
CN (1) CN111445389A (en)
WO (1) WO2021169334A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365404A (en) * 2020-11-23 2021-02-12 成都唐源电气股份有限公司 Contact net panoramic image splicing method, system and equipment based on multiple cameras
CN112669278A (en) * 2020-12-25 2021-04-16 中铁大桥局集团有限公司 Beam bottom inspection and disease visualization method and system based on unmanned aerial vehicle
CN113066010A (en) * 2021-04-06 2021-07-02 无锡安科迪智能技术有限公司 Secondary adjustment method and device for panoramic stitching image, electronic equipment and storage medium
CN113205457A (en) * 2021-05-11 2021-08-03 华中科技大学 Microscopic image splicing method and system
CN113298853A (en) * 2021-06-28 2021-08-24 郑州轻工业大学 Step-by-step progressive two-stage medical image registration method
WO2021169334A1 (en) * 2020-02-24 2021-09-02 山东省科学院海洋仪器仪表研究所 Rapid wide-angle stitching method for high-resolution images

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677365B (en) * 2022-04-18 2024-04-05 北京林业大学 High-precision tree annual ring analysis method and system
CN116402693B (en) * 2023-06-08 2023-08-15 青岛瑞源工程集团有限公司 Municipal engineering image processing method and device based on remote sensing technology
CN117422617B (en) * 2023-10-12 2024-04-09 华能澜沧江水电股份有限公司 Method and system for realizing image stitching of video conference system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104167003A (en) * 2014-08-29 2014-11-26 福州大学 Method for fast registering remote-sensing image
CN106940876A (en) * 2017-02-21 2017-07-11 华东师范大学 A kind of quick unmanned plane merging algorithm for images based on SURF
CN107918927A (en) * 2017-11-30 2018-04-17 武汉理工大学 A kind of matching strategy fusion and the fast image splicing method of low error
CN108010045A (en) * 2017-12-08 2018-05-08 福州大学 Visual pattern characteristic point error hiding method of purification based on ORB

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443295A (en) * 2019-07-30 2019-11-12 上海理工大学 Improved images match and error hiding reject algorithm
CN111445389A (en) * 2020-02-24 2020-07-24 山东省科学院海洋仪器仪表研究所 Wide-view-angle rapid splicing method for high-resolution images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾海长 (Zeng Haichang): "Research on Feature-Based UAV Remote Sensing Image Stitching Technology" *


Also Published As

Publication number Publication date
WO2021169334A1 (en) 2021-09-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200724