CN107301620A - Method for panoramic imaging based on camera array - Google Patents
- Publication number
- CN107301620A CN107301620A CN201710407833.3A CN201710407833A CN107301620A CN 107301620 A CN107301620 A CN 107301620A CN 201710407833 A CN201710407833 A CN 201710407833A CN 107301620 A CN107301620 A CN 107301620A
- Authority
- CN
- China
- Prior art keywords
- image
- images
- spliced
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 24
- 238000003384 imaging method Methods 0.000 title claims abstract description 10
- 239000011159 matrix material Substances 0.000 claims abstract description 25
- 238000013519 translation Methods 0.000 claims abstract description 7
- 230000004927 fusion Effects 0.000 claims description 9
- 230000009466 transformation Effects 0.000 claims description 6
- 238000012937 correction Methods 0.000 claims description 5
- 239000013598 vector Substances 0.000 claims description 4
- 239000012141 concentrate Substances 0.000 claims description 3
- 230000001186 cumulative effect Effects 0.000 claims description 2
- 230000004044 response Effects 0.000 claims description 2
- 238000010276 construction Methods 0.000 claims 1
- 230000001131 transforming effect Effects 0.000 claims 1
- 230000000007 visual effect Effects 0.000 abstract description 5
- 238000002156 mixing Methods 0.000 abstract description 4
- 239000000284 extract Substances 0.000 abstract description 2
- 238000005516 engineering process Methods 0.000 description 7
- 230000000694 effects Effects 0.000 description 4
- 238000002474 experimental method Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 3
- 238000000605 extraction Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000000630 rising effect Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a panoramic imaging method based on a camera array, chiefly addressing the small splicing scenes and the "ghosting" artifacts of the prior art. Its scheme is: 1) capture multiple images with an array camera; read in two images and extract SIFT features from each; perform feature matching to obtain the matched points of the two images and screen them; compute the optimal transformation matrix; transform one image according to the optimal transformation matrix; fuse and splice the images using an improved optimal-seam-line algorithm combined with a weighted-average blending algorithm; 2) repeat step 1) to complete the upper and lower image mosaics, rotate them 90° counterclockwise and continue splicing them as images to be spliced, then rotate the result 90° clockwise to obtain the final spliced panoramic image. The invention largely eliminates the "ghosting" phenomenon, and the resulting panoramic image has a wide field of view and high resolution, coming closer to a true panorama; it can be used for splicing larger scene images in both the horizontal and vertical directions.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a panoramic imaging method based on a camera array, which can be used for splicing larger scene images in both the horizontal and vertical directions.
Background technology
With the development of science and technology, digital imaging technology has steadily advanced to a new level, and digital imaging devices are now widely used in daily life; taking photographs with digital cameras, mobile phones and similar devices has become an indispensable part of people's lives. At the same time, the limitations of single-camera imaging are increasingly apparent: in some special application scenarios, the constraints of the digital imaging device itself prevent users' demands from being well met. For example, when people want to obtain a wide-field, high-resolution image, they can often only resort to a wide-angle camera, whose high price deters many.
Image stitching technology arose to solve the above problems. It matches and aligns a series of small-angle images with overlapping borders according to suitable algorithms, fuses them, and finally splices them into one wide-angle image. The most direct application of image stitching is the panoramic mode of mobile phones, but its limitations are equally obvious: the phone can only be held horizontally or vertically, so the final image extends in only one direction; moreover, shooting such an image requires a very steady hand, otherwise the result is distorted, the desired effect cannot be obtained, and the user experience suffers greatly.
The patent "A panoramic photographing method and mobile terminal" owned by vivo Mobile Communication Co., Ltd. (application number 201610515352.x, filed 2016.06.30, grant number CN 1059779156A, granted 2016.09.28) proposes a panoramic photographing method and a mobile terminal. The patented technology uses first, second and third cameras; during panoramic shooting the three cameras are controlled to acquire three images, which are stitched to generate the target panoramic image, so that a panorama is obtained with a single shot without rotating the terminal horizontally. The shortcoming of this method is that a single shot can only capture images in one direction, so the final result cannot meet the demand for splicing some large scenes.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by proposing a panoramic imaging method based on a camera array, so as to obtain images in both the horizontal and vertical directions with a single shot and meet the demand for splicing larger scene images.
The basic idea of the invention is: acquire array images with a 2 × 3 array camera such that every two adjacent images overlap, and control the six cameras to capture images synchronously; extract features from the images and match them pairwise; finally fuse the images to complete the splicing. The implementation steps are as follows:
(1) Capture multiple images simultaneously with the array camera to obtain i images, m ≤ i;
(2) Read in two images and extract scale-invariant feature transform feature points, i.e. SIFT features, from each;
(3) Perform feature matching on the SIFT features obtained in step (2) to obtain the matched points of each image pair;
(4) Screen the matched points of each image pair obtained in step (3) and compute the optimal transformation matrix H;
(5) Transform one image according to the optimal transformation matrix obtained in step (4) and perform image fusion:
(5a) transform either one of the two input images according to the optimal transformation matrix obtained in step (4), so that the two images lie in the same coordinate system and share an overlapping region;
(5b) apply brightness correction to the two images placed in the same coordinate system in (5a), minimizing the brightness difference between them;
(5c) find an optimal seam line on the registered images;
(5d) apply weighted-average fusion to the rectangle containing the optimal seam line, obtaining the spliced panoramic image of the two images;
(6) Splice the array images:
(6a) taking the previously spliced image and an image to be spliced as the two input images, repeat steps (2)–(5) in a loop until the image to be spliced is the m-th image, m ≤ i, finally obtaining the horizontal spliced panorama of m images;
(6b) repeat (6a) k times, k ≤ i and k × m ≤ i, obtaining k horizontally spliced images, each spliced from m images;
(6c) taking the first two of the k horizontal images as input images, rotate them 90° counterclockwise and repeat steps (2)–(5) to obtain the vertical mosaic of the two images; for the remaining k − 2 horizontal images, take the previously spliced image and the next image to be spliced, rotated 90° counterclockwise, as the two inputs and repeat steps (2)–(5) in a loop until the image to be spliced is the k-th image; then rotate the resulting vertical mosaic 90° clockwise, finally obtaining the horizontal-and-vertical spliced panorama of the i images.
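The rotation trick of step (6c) — rotate 90° counterclockwise so that a horizontal splicer performs a vertical splice, then rotate the result back clockwise — can be checked on plain 2-D arrays. Below is a minimal sketch in pure Python; `splice_horizontal` is a stand-in that merely abuts the two images side by side, since the real overlap registration and blending are what steps (2)–(5) provide:

```python
def rot90_ccw(img):
    """Rotate a 2-D list image 90 degrees counterclockwise."""
    rows, cols = len(img), len(img[0])
    return [[img[r][cols - 1 - c] for r in range(rows)] for c in range(cols)]

def rot90_cw(img):
    """Rotate a 2-D list image 90 degrees clockwise."""
    rows, cols = len(img), len(img[0])
    return [[img[rows - 1 - r][c] for r in range(rows)] for c in range(cols)]

def splice_horizontal(left, right):
    """Stand-in splicer: abut images left-to-right (the patent blends an overlap)."""
    return [lr + rr for lr, rr in zip(left, right)]

def splice_vertical(top, bottom):
    """Rotate CCW, splice horizontally (top image goes left), rotate back CW."""
    return rot90_cw(splice_horizontal(rot90_ccw(top), rot90_ccw(bottom)))
```

Stacking the top image above the bottom image is recovered exactly, which is why the method only ever needs one (horizontal) splicing routine.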
Compared with the prior art, the invention has the following advantages:
First, the invention combines array images with the SIFT algorithm and improves the SIFT algorithm, which significantly reduces the splicing time when the depth of field is essentially consistent;
Second, the invention effectively combines array images with a fusion algorithm and improves the existing optimal-seam-line search algorithm: because the array camera captures the i images simultaneously, scene errors caused by changes over time are reduced and object displacement under dynamic scenes is minimal, so the splicing effect is better; the improved optimal-seam-line algorithm effectively avoids cutting through moving objects, and experimental results show that the spliced image has almost no visible seams and the frequency of the "ghosting" phenomenon is greatly reduced;
Third, the invention acquires images with an array camera, so a single shot suffices for spliced imaging, reducing the workload while obtaining a mosaic with a larger field of view and higher resolution.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 is the camera array used for image acquisition in the invention;
Fig. 3 shows the six images captured by the camera array in the invention;
Fig. 4 is the panoramic image after the six images of Fig. 3 are spliced and fused.
Embodiment
Specific embodiments of the invention are described in detail below with reference to the drawings.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1. Acquire images.
Images are acquired with the camera array shown in Fig. 2. The array arranges i cameras in a two-dimensional grid along the horizontal and vertical directions; by adjusting the position and focal length of each camera, the cameras always line up in rows and columns and the overall arrangement is rectangular, so that images meeting different requirements can be obtained.
Each camera in the array can capture multiple images continuously; images with different fields of view are obtained by changing the camera positions; meanwhile, each camera can be focused individually to obtain images with different depths of field.
After the camera positions and focal lengths are adjusted, the start and stop keys are pressed in quick succession, and each camera captures several images; every two adjacent cameras capture images with an overlapping region, giving i images with overlapping regions. This example uses, but is not limited to, six images.
Step 2. Read in two images and extract SIFT features from each.
Common feature point extraction algorithms include the Harris operator, the LOG operator, the SUSAN operator and the SIFT algorithm. The invention extracts feature points with the scale-invariant feature transform algorithm, i.e. SIFT features, as follows:
(2a) Build the Gaussian pyramid and the difference-of-Gaussian pyramid, and detect scale-space extrema;
(2a1) building the Gaussian pyramid involves two steps: down-sampling the image and Gaussian smoothing. From the original image size and the top-level image size, the number of pyramid levels n is computed as:

n = log2(min(M, N)) − t,  t ∈ [0, log2(min(M, N)))

where M and N are the length and width of the original image, and t is the base-2 logarithm of the minimum dimension of the top-level image.
In the invention, the focal length of each camera is adjusted so that the depths of field of the captured images are essentially consistent; the scales of the two images are then essentially consistent, so that the number of pyramid levels n when building the pyramids of the two images is a value greater than 1 and less than 4, which reduces the splicing time;
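The level-count formula above is easy to sketch directly. A minimal Python version, assuming the patent's convention that the fractional part of log2 is truncated:

```python
import math

def pyramid_levels(M, N, t):
    """Number of Gaussian pyramid levels n = log2(min(M, N)) - t.
    M, N: image dimensions; t: log2 of the minimum top-level dimension."""
    return int(math.log2(min(M, N))) - t
```

For example, a 512 × 384 image with t = 6 gives n = 8 − 6 = 2, inside the 1 < n < 4 range the invention targets.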
(2a2) take the original image as the first level of the Gaussian pyramid, then down-sample it repeatedly; each newly down-sampled image becomes a new pyramid level, up to the n-th level, yielding a series of images of decreasing size that form a tower-shaped model from bottom to top — the initial image pyramid;
(2a3) apply Gaussian blur with different parameters to the single image at each level of the initial image pyramid, so that each pyramid level contains several Gaussian-blurred images; the images at one level are collectively called an octave, giving the Gaussian pyramid;
(2a4) build the difference-of-Gaussian pyramid, i.e. the DOG pyramid: within each octave of the Gaussian pyramid from (2a3), subtract each pair of vertically adjacent images to obtain the difference-of-Gaussian pyramid;
(2a5) detect scale-space extrema:
take each pixel in each octave of the difference-of-Gaussian pyramid and compare it with all pixels of the 26-point neighborhood formed by its own level and the two levels above and below: if the value taken from the difference-of-Gaussian pyramid is a maximum or a minimum, that pixel value is a scale-space extremum of the image at the current scale, where the scale space is realized by the Gaussian pyramid and each image of each octave corresponds to a different scale;
(2b) take the scale-space extrema from (2a) as key points, localize them and determine their orientations:
(2b1) remove low-contrast points by interpolation and eliminate edge responses, completing the accurate localization of the key points;
(2b2) assign orientations to the feature points:
for each key point accurately localized in (2b1), collect the gradient magnitude and orientation distribution of the pixels in the 3σ neighborhood window of the Gaussian pyramid image where it lies; the gradient magnitude m(x, y) and orientation θ(x, y) are:

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))

where L is the scale-space value at the key point's scale; the gradient magnitudes m(x, y) are weighted by a Gaussian distribution with σ = 1.5σ_oct, and the radius of the 3σ neighborhood window is 3 × 1.5σ_oct;
(2b3) for each key-point neighborhood window, accumulate the gradients and orientations of the pixels in a histogram with one bin per 10 degrees of orientation, 36 bins in total; the direction a bin represents is the pixel gradient orientation, and its height is the gradient magnitude; the direction of the tallest bin in the histogram is taken as the principal orientation of the key point, completing orientation determination;
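The 36-bin orientation vote of step (2b3) can be sketched in a few lines. A minimal Python version; the `(magnitude, angle_deg)` input pairs stand in for the Gaussian-weighted gradient samples of the neighborhood window:

```python
def principal_orientation(gradients):
    """Accumulate (magnitude, angle_deg) gradient samples into a 36-bin
    histogram (10 degrees per bin) and return the direction of the tallest
    bin as the key point's principal orientation."""
    hist = [0.0] * 36
    for magnitude, angle_deg in gradients:
        bin_idx = int(angle_deg % 360) // 10  # which 10-degree bin this sample votes in
        hist[bin_idx] += magnitude            # bin height is accumulated gradient magnitude
    peak = max(range(36), key=lambda i: hist[i])
    return peak * 10  # representative direction of the tallest bin, in degrees
```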
(2c) for each key point whose location and orientation have been determined, partition the surrounding image region into blocks, compute the 8-direction gradient histogram in each 4 × 4 block centered on the key point, and accumulate each gradient direction, generating a unique 128-dimensional descriptor vector that describes the key point; this yields the SIFT features of the two images.
Step 3. Perform feature matching to obtain the matched points of each image pair.
Feature matching is performed with the k-d tree algorithm and the best-bin-first algorithm, i.e. the BBF algorithm, realizing feature point matching between the two images, as follows:
(3a) build a k-d tree from the feature points of the image to be spliced obtained in step (2);
(3b) perform feature matching with the BBF algorithm to realize feature point matching between the two images:
(3b1) for each feature point in the input image, find in the k-d tree its two nearest-neighbor feature points in the image to be spliced by Euclidean distance;
(3b2) compute the ratio of the Euclidean distance between the feature point and the first nearest neighbor over the Euclidean distance between the feature point and the second nearest neighbor, and compare this ratio with the set threshold 0.49:
if the ratio is less than the threshold, accept the feature point and the first nearest neighbor as a pair of matched points, realizing feature point matching of the two images; otherwise, do not accept the feature point and the first nearest neighbor as a matched pair.
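The ratio test of step (3b2) is independent of how the two nearest neighbors are found; a brute-force search stands in for the k-d tree + BBF lookup in this minimal Python sketch (the 0.49 threshold is the patent's value):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(query, candidates, threshold=0.49):
    """Find the two nearest candidate descriptors to `query` and accept the
    nearest one as a match only if d1/d2 < threshold (step 3b2)."""
    dists = sorted((euclidean(query, c), i) for i, c in enumerate(candidates))
    (d1, i1), (d2, _) = dists[0], dists[1]
    if d2 > 0 and d1 / d2 < threshold:
        return i1  # index of the accepted match
    return None    # ambiguous match rejected
```

Real SIFT descriptors are 128-dimensional; the test uses 2-D points only to keep the example small.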
Step 4. Screen the matched points of each image pair obtained in step 3 and compute the optimal transformation matrix H.
This step uses the RANSAC algorithm, as follows:
(4a) take the matched point pairs obtained in step 3 as the sample set, and randomly select one RANSAC sample, i.e. 4 matched point pairs, from it;
(4b) compute the current transformation matrix L from these 4 matched point pairs;
(4c) from the sample set, the current transformation matrix L and an error-metric function, obtain the consensus set C of samples consistent with L, and record the number a of elements in the consensus set;
(4d) maintain an optimal consensus set, initially with 0 elements, and compare the current consensus set size a with the optimal consensus set size: if a is greater than the optimal consensus set size, update the optimal consensus set to the current consensus set; otherwise, do not update it;
(4e) compute the current error probability p:

p = (1 − in_frac^s)^o

where in_frac is the fraction of the total sample count occupied by the current optimal consensus set, s is the minimum number of feature point pairs needed to compute the transformation matrix, s = 4, and o is the number of iterations;
(4f) compare the current error probability p with the allowed minimum error probability 0.01:
if p is greater than the allowed minimum error probability, return to step (4a) until p falls below it;
if p is less than the allowed minimum error probability, the transformation matrix L corresponding to the current optimal consensus set is the required optimal transformation matrix H, of size 3 × 3.
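The stopping rule of steps (4e)–(4f) determines how many RANSAC iterations are needed for a given inlier fraction. A minimal Python sketch of just that rule (the homography estimation itself is not shown):

```python
def ransac_failure_probability(in_frac, s, o):
    """Probability that all o random samples of s pairs contain an outlier:
    p = (1 - in_frac**s) ** o, as in step (4e)."""
    return (1.0 - in_frac ** s) ** o

def iterations_needed(in_frac, s=4, p_max=0.01):
    """Smallest iteration count o whose failure probability drops below p_max."""
    o = 0
    while ransac_failure_probability(in_frac, s, o) >= p_max:
        o += 1
    return o
```

With half the matches being inliers, 72 iterations suffice to push the failure probability under the patent's 0.01 threshold.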
Step 5. Transform the image according to the optimal transformation matrix from step 4 and fuse the images.
The traditional weighted-average blending algorithm is prone to the "ghosting" phenomenon, especially in dynamic scenes: when the array camera captures images containing moving objects, applying the traditional weighted-average blend directly gives poor results and fails to render the moving objects. The invention therefore combines an improved optimal-seam-line algorithm with the weighted fusion algorithm, significantly improving the splicing fusion with respect to "ghosting", as follows:
(5a) transform either one of the two input images according to the transformation matrix obtained in step (4), so that the two images lie in the same coordinate system;
(5b) apply brightness correction to the two images placed in the same coordinate system in (5a), minimizing their brightness difference, as follows:
(5b1) convert the image to be spliced and the input image to grayscale, and compute the pixel sums of each: the pixel value sum g of the non-overlapping part of the input image and the pixel value sum v of the non-overlapping part of the image to be spliced; then compute the pixel sum q of the central rectangle of the overlapping region, whose height is 1/2 of the overlapping region height and whose width is 1/1.5 of the overlapping region width; the pixel sum of the input image is then g + q, and that of the image to be spliced is v + q;
(5b2) compute the ratio b of the pixel sums of the image to be spliced and the input image, and compare b with 1:
if b is less than 1, multiply every pixel value of the input image by b and go to (5c);
if b is greater than 1, multiply every pixel value of the image to be spliced by the reciprocal of b and go to (5c);
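Step (5b2) can be sketched directly once the two pixel sums from (5b1) are in hand. A minimal Python version; how the sums g + q and v + q are computed is left to the caller, and the scaling direction follows the patent's text:

```python
def brightness_correct(input_sum, splice_sum, input_img, splice_img):
    """Scale one image so the two brightness sums agree (step 5b2).
    input_sum = g + q, splice_sum = v + q from step (5b1);
    images are 2-D lists of gray values."""
    b = splice_sum / input_sum
    if b < 1:
        # input image is brighter: dim it by b
        input_img = [[p * b for p in row] for row in input_img]
    elif b > 1:
        # image to be spliced is brighter: dim it by 1/b
        splice_img = [[p / b for p in row] for row in splice_img]
    return input_img, splice_img
```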
(5c) find an optimal seam line on the registered, brightness-corrected images:
(5c1) convert the input image and the image to be spliced to grayscale and subtract the corresponding pixels of the two images over the overlapping region in turn, obtaining the difference image of the overlapping region; compute the intensity value E(x, y) of every pixel of the difference image:

E(x, y) = |E_gray(x, y)| + E_geometry(x, y),

where E_gray is the difference of the gray values of the overlapping-region pixels and E_geometry is the difference of their structure values:

E_geometry = (∇x₁ − ∇x₂) × (∇y₁ − ∇y₂)

where ∇x₁ − ∇x₂ is the difference of the x-direction gradients of corresponding pixels of the input image and the image to be spliced over the overlapping region, and ∇y₁ − ∇y₂ is the difference of their y-direction gradients;
∇x₁ is the x-direction gradient at each point of the input image in the overlapping region, obtained by convolving the x-direction kernel S_x with each pixel of the input image over the overlapping region and summing;
∇x₂ is the x-direction gradient at each point of the image to be spliced in the overlapping region, obtained likewise from S_x;
∇y₁ is the y-direction gradient at each point of the input image in the overlapping region, obtained by convolving the y-direction kernel S_y with each pixel of the input image over the overlapping region and summing;
∇y₂ is the y-direction gradient at each point of the image to be spliced in the overlapping region, obtained likewise from S_y;
S_x and S_y are improved Sobel operator templates, respectively:
(5c2) using the idea of dynamic programming, take each pixel of the first row of the difference image as the starting point of a seam; extend downward by finding the point with the minimum intensity among the three adjacent points of the next row and taking it as the seam's direction of propagation, and so on to the last row; among all generated seams, the one with the minimum sum of E(x, y) is taken as the optimal seam line;
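Step (5c2) as written extends each seam greedily from every first-row start and then keeps the cheapest seam overall. A minimal Python sketch of exactly that procedure, taking the precomputed E(x, y) cost image as input:

```python
def optimal_seam(E):
    """Seam search of step (5c2): start a seam at every first-row pixel,
    at each row step to the cheapest of the three adjacent pixels below,
    and return the column path with the minimum total intensity E."""
    rows, cols = len(E), len(E[0])
    best_cost, best_path = float("inf"), None
    for start in range(cols):
        col, cost, path = start, E[0][start], [start]
        for r in range(1, rows):
            # candidate columns in the next row: down-left, down, down-right
            cands = [c for c in (col - 1, col, col + 1) if 0 <= c < cols]
            col = min(cands, key=lambda c: E[r][c])
            cost += E[r][col]
            path.append(col)
        if cost < best_cost:
            best_cost, best_path = cost, path
    return best_path, best_cost
```

A low-cost column in the difference image (where the two exposures agree) attracts the seam, so moving objects, which produce large E values, are avoided.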
(5d) apply weighted-average fusion to the rectangle containing the optimal seam line, obtaining the spliced panoramic image of the two images:
(5d1) after the minimum seam is found, take the rectangular region containing the seam and extending 10 pixels to its left and right, and apply a weighted average to the pixels within it, obtaining the fused image of the rectangular region;
(5d2) take the part left of the rectangular region from the input image and the part right of the rectangular region from the transformed image to be spliced, obtaining the final fused image; this completes the splicing of the two input images.
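Step (5d) can be sketched for the simplified case of a straight vertical seam at a fixed column (the patent's seam is a per-row path, and the patent does not specify the weighting function, so the linear ramp below is an assumption):

```python
def blend_strip(left_img, right_img, seam_col, radius=10):
    """Weighted-average fusion of step (5d): inside the strip extending
    `radius` pixels to each side of the seam, blend the two registered
    images with weights shifting linearly from the left image to the right
    image; outside the strip, copy from the respective side."""
    rows, cols = len(left_img), len(left_img[0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if c < seam_col - radius:
                row.append(left_img[r][c])          # pure left image
            elif c > seam_col + radius:
                row.append(right_img[r][c])         # pure right image
            else:
                w = (c - (seam_col - radius)) / (2 * radius)  # 0 -> 1 across strip
                row.append((1 - w) * left_img[r][c] + w * right_img[r][c])
        out.append(row)
    return out
```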
Step 6. Splice the array images.
(6a) Taking the previously spliced image and an image to be spliced as the two input images, repeat steps 2–5 in a loop until the image to be spliced is the third image, finally obtaining the horizontal spliced panorama of 3 images;
(6b) repeat (6a) twice, obtaining two horizontally spliced images, each spliced from 3 images;
(6c) check whether the length and width of the two horizontally spliced images are multiples of 4: if not, revise the length and width to the nearby multiple of 4; if so, keep the length and width of the two horizontally spliced images unchanged;
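The dimension adjustment of step (6c) is a one-liner; this Python sketch rounds to the nearby multiple of 4, with ties rounded up since the patent does not specify a tie-breaking rule:

```python
def to_multiple_of_4(x):
    """Revise a dimension to the nearby multiple of 4 (step 6c);
    ties round upward (the patent leaves this unspecified)."""
    return ((x + 2) // 4) * 4
```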
(6d) taking the two horizontally spliced images as input images, rotate them 90° counterclockwise and repeat steps 2–5 to obtain the vertical mosaic of the two images; then rotate the resulting vertical mosaic 90° clockwise, finally obtaining the horizontal-and-vertical spliced panorama of the 6 images.
It should be noted that the method of the invention is not limited to splicing six array images; it is broadly applicable. With enough cameras, or by moving the camera to frame different views, more images can be spliced; two, three or four images can also be spliced on the existing basis, meeting various demands and suiting various occasions. Moreover, since the array camera lets the six cameras shoot simultaneously, the accuracy of the splicing result is also guaranteed.
The effect of the invention can be further illustrated by experiments.
1. Experimental conditions
The experimental system includes one array camera, as shown in Fig. 2; the experiment was carried out under the VS2010 software environment.
2. Experimental content
Outdoor images were captured with the method of the invention; the scene includes a moving person. The images captured by the array camera are shown in Fig. 3, which contains six images with overlapping regions; splicing the six images of Fig. 3 with the method of the invention yields the panorama of Fig. 4.
As seen from Fig. 4, the invention splices images containing dynamic objects well, with no "ghosting" phenomenon; no obvious seams are observed in the whole spliced panorama; and compared with a single image, the spliced panorama has a wider field of view and more image detail, giving a high-quality splicing result.
Claims (9)
1. A panoramic imaging method based on a camera array, comprising:
(1) simultaneously capturing multiple images with an array camera to obtain i images, where m ≤ i;
(2) reading in two images and extracting Scale-Invariant Feature Transform (SIFT) feature points from each;
(3) performing feature matching on the SIFT feature points obtained in step (2) to obtain the matched points of each image pair;
(4) screening the matched points of each image pair obtained in step (3) and computing the optimal transformation matrix H;
(5) transforming the images with the optimal transformation matrix obtained in step (4) and fusing them:
(5a) transforming either one of the two input images with the optimal transformation matrix obtained in step (4), so that the two images lie in the same coordinate system and share an overlapping region;
(5b) applying brightness correction to the two images placed in the same coordinate system in (5a), minimizing their brightness difference;
(5c) finding an optimal stitching seam on the registered images;
(5d) applying weighted-average fusion to the rectangle containing the optimal seam to obtain the stitched panorama of the two images;
(6) stitching the image array:
(6a) taking the previously stitched result and the next image to be stitched as the two input images and repeating steps (2)~(5) in a loop, until the image to be stitched is the m-th image, m ≤ i, finally obtaining a horizontal panorama of m images;
(6b) repeating (6a) k times, where k ≤ i and k × m ≤ i, to obtain k horizontal mosaics, each stitched from m images;
(6c) taking the first two of the k horizontal mosaics as input images, rotating them 90° counterclockwise, and repeating steps (2)~(5) to obtain the vertical mosaic of the two; for each of the remaining k−2 horizontal mosaics, taking the previously stitched result and the next mosaic rotated 90° counterclockwise as the two input images and repeating steps (2)~(5) in a loop, until the image to be stitched is the k-th mosaic; then rotating the resulting vertical mosaic 90° clockwise, finally obtaining the cross-stitched panorama of the i images.
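The nested loops of steps (6a)~(6c) can be sketched as follows. This is an illustrative Python/NumPy skeleton of the claim-1 control flow only: `stitch_pair` is reduced to a plain concatenation as a stand-in for the claimed registration-and-fusion steps (2)~(5), and both function names are ours.

```python
import numpy as np

def stitch_pair(base, new):
    # Placeholder for steps (2)-(5): SIFT matching, homography,
    # warping and fusion.  Here we simply concatenate horizontally
    # to illustrate the loop structure of claim 1.
    return np.concatenate([base, new], axis=1)

def stitch_array(images, m, k):
    """Claim-1 loop structure: k horizontal strips of m images each
    (steps 6a-6b); the strips are then rotated 90 deg CCW, stitched as
    a 'vertical' sequence, and rotated back 90 deg CW (step 6c)."""
    assert len(images) >= k * m          # k x m <= i, per the claim
    strips = []
    for r in range(k):
        strip = images[r * m]
        for c in range(1, m):
            strip = stitch_pair(strip, images[r * m + c])
        strips.append(strip)
    pano = np.rot90(strips[0], 1)        # rotate first strip CCW
    for strip in strips[1:]:
        pano = stitch_pair(pano, np.rot90(strip, 1))
    return np.rot90(pano, -1)            # rotate the result back CW

imgs = [np.full((4, 4), v, np.uint8) for v in range(6)]
pano = stitch_array(imgs, m=3, k=2)
```

With six 4×4 dummy images arranged as two strips of three, the skeleton produces one 8×12 array, mirroring how k horizontal mosaics become a single cross-stitched panorama.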
2. The method according to claim 1, wherein the array camera of step (1) consists of i cameras arranged in a two-dimensional grid in the horizontal and vertical directions; by adjusting the position and focal length of each camera, the cameras always remain aligned in rows and columns and the overall arrangement remains rectangular, so that images meeting different requirements can be obtained.
3. The method according to claim 1, wherein reading in two images and extracting Scale-Invariant Feature Transform feature points from each in step (2) is carried out as follows:
(2a) constructing a Gaussian pyramid and a difference-of-Gaussian pyramid, and detecting scale-space extrema;
(2b) taking the scale-space extrema of (2a) as keypoints, then localizing each keypoint and determining its orientation;
(2c) for each keypoint with a determined location and orientation, partitioning the surrounding image region into blocks, computing an 8-direction gradient histogram in each cell of the 4 × 4 block centred on the keypoint, and accumulating each gradient direction to generate a unique 128-dimensional vector; this vector describes the keypoint, giving the SIFT features of the two images.
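A minimal NumPy illustration of the 4 × 4 × 8 = 128 descriptor layout of step (2c). The function name and the simplifications (no Gaussian weighting, no tri-linear interpolation, a single normalisation pass) are ours, not the claim's:

```python
import numpy as np

def sift_like_descriptor(patch):
    """Sketch of step (2c): a 16x16 patch around the keypoint is split
    into a 4x4 grid of 4x4 cells; each cell contributes an 8-bin
    gradient-orientation histogram, giving 4*4*8 = 128 dimensions."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # orientation in [0, 2pi)
    desc = []
    for by in range(4):
        for bx in range(4):
            sl = np.s_[by * 4:(by + 1) * 4, bx * 4:(bx + 1) * 4]
            hist, _ = np.histogram(ang[sl], bins=8,
                                   range=(0, 2 * np.pi),
                                   weights=mag[sl])
            desc.extend(hist)
    v = np.asarray(desc)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

patch = np.arange(256, dtype=float).reshape(16, 16)
d = sift_like_descriptor(patch)
```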
4. The method according to claim 1, wherein the feature-matching search of step (3) is carried out as follows:
(3a) building a k-d tree over the feature points of the image to be stitched obtained in step (2), using the k-d tree algorithm;
(3b) performing the matching search with the BBF (best-bin-first) algorithm to match the feature points of the two images:
(3b1) for each feature point of the input image, finding in the k-d tree its two nearest-neighbour feature points (by Euclidean distance) in the image to be stitched;
(3b2) taking the ratio of the Euclidean distance from the feature point to the first nearest neighbour to the Euclidean distance from the feature point to the second nearest neighbour, and comparing this ratio with the preset threshold 0.49:
if the ratio is smaller than the threshold, the feature point and its first nearest neighbour are accepted as a matched pair, achieving the feature-point matching of the two images; otherwise, the feature point and its first nearest neighbour are not accepted as a matched pair.
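The distance-ratio screening of (3b2) can be sketched as below; a brute-force scan stands in for the claimed k-d tree + BBF search, and the helper name is ours. (Note that 0.49 = 0.7², so the claimed threshold could equally reflect a squared-distance ratio; the sketch assumes plain Euclidean distances as the claim's wording states.)

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, thresh=0.49):
    """Step (3b) sketch: for each descriptor in desc_a, find its two
    nearest neighbours in desc_b by Euclidean distance and accept the
    match only if d1/d2 < thresh (0.49, per claim 4)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]          # two nearest neighbours
        if dists[j2] > 0 and dists[j1] / dists[j2] < thresh:
            matches.append((i, int(j1)))
    return matches

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [3.0, 0.0], [5.0, 5.1]])
m = ratio_test_matches(a, b)
```

Each point in `a` is unambiguously close to one point in `b`, so both pass the ratio test; an ambiguous point whose two nearest neighbours lie at similar distances would be rejected.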
5. The method according to claim 1, wherein screening the matched points of each image pair obtained in step (3) and computing the optimal transformation matrix H in step (4) is done with the RANSAC algorithm, as follows:
(4a) taking the matched point pairs obtained in step (3) as the sample set and randomly selecting one RANSAC sample from it, i.e. 4 matched point pairs;
(4b) computing the current transformation matrix L from these 4 matched point pairs;
(4c) determining, from the sample set, the current matrix L and an error-metric function, the consensus set C of L, and recording the number a of its elements;
(4d) maintaining an optimal consensus set, initially with 0 elements, and comparing the current consensus-set size a with the size of the optimal consensus set: if a is larger, updating the optimal consensus set to the current one; otherwise leaving the optimal consensus set unchanged;
(4e) computing the current error probability p:
p = (1 - in_frac^s)^o
where in_frac is the fraction of the sample set currently contained in the optimal consensus set, s is the minimum number of point pairs needed to compute a transformation matrix, here s = 4, and o is the number of iterations;
(4f) comparing the current error probability p with the allowed minimum error probability 0.01:
if p is greater than the allowed minimum, returning to step (4a), until p falls below the minimum error probability;
if p is smaller than the allowed minimum, the transformation matrix L of the current optimal consensus set is the required optimal transformation matrix H.
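The stopping rule of steps (4e)~(4f) can be illustrated numerically; the function names are ours:

```python
def ransac_failure_prob(in_frac, o, s=4):
    """Step (4e): probability that all o random samples of s = 4 point
    pairs contain at least one outlier, p = (1 - in_frac**s)**o."""
    return (1.0 - in_frac ** s) ** o

def iterations_needed(in_frac, p_max=0.01, s=4):
    """Smallest iteration count o with p < p_max, i.e. the stopping
    rule of step (4f) with the claimed threshold 0.01."""
    o = 1
    while ransac_failure_prob(in_frac, o, s) >= p_max:
        o += 1
    return o

# With half the matches being inliers, roughly 72 samples are needed
# before the failure probability drops below 0.01.
o = iterations_needed(0.5)
```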
6. The method according to claim 1, wherein the brightness correction in step (5b) of the two images transformed into the same coordinate system in (5a) is carried out as follows:
(5b1) converting the image to be stitched and the input image to greyscale and computing the pixel sum of each;
(5b2) computing the ratio b of the pixel sum of the image to be stitched to the pixel sum of the input image;
(5b3) if the ratio b computed in (5b2) is smaller than 1, multiplying every pixel value of the input image by b; if the ratio b is greater than 1, multiplying every pixel value of the image to be stitched by the reciprocal of b.
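A sketch of steps (5b1)~(5b3), assuming the images are already greyscale arrays; the function name and the float handling are ours (production code would clip back to uint8):

```python
import numpy as np

def brightness_match(img_in, img_new):
    """Steps (5b1)-(5b3): compare the grey-level sums of the two images
    and scale the brighter one down so their overall brightness agrees."""
    a = img_in.astype(float)              # input image
    b = img_new.astype(float)             # image to be stitched
    r = b.sum() / a.sum()                 # ratio b of step (5b2)
    if r < 1:                             # input image is brighter
        a *= r
    else:                                 # image to be stitched is brighter
        b *= 1.0 / r
    return a, b

x = np.full((2, 2), 100.0)
y = np.full((2, 2), 50.0)
xa, yb = brightness_match(x, y)           # both now sum to 200
```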
7. The method according to claim 1, wherein finding an optimal stitching seam on the registered images in step (5c) is carried out as follows:
(5c1) converting the input image and the image to be stitched to greyscale, subtracting the corresponding pixels of the two images over the overlapping region one by one to obtain the difference image of the overlap, and computing the intensity value E(x, y) of every pixel of the difference image:
E(x, y) = |E_gray(x, y)| + E_geometry(x, y),
where E_gray is the grey-value difference of the overlapping pixels and E_geometry the difference of their structural values:
E_geometry = (∇x1 - ∇x2) × (∇y1 - ∇y2)
where ∇x1 - ∇x2 is the x-direction gradient difference of corresponding overlap pixels of the input image and the image to be stitched, and ∇y1 - ∇y2 is the y-direction gradient difference of corresponding overlap pixels of the input image and the image to be stitched;
∇x1 is the x-direction gradient at each overlap point of the input image, obtained by convolving the x-direction kernel Sx with each pixel of the input image over the overlapping region;
∇x2 is the x-direction gradient at each overlap point of the image to be stitched, obtained by convolving Sx with each pixel of the image to be stitched over the overlapping region;
∇y1 is the y-direction gradient at each overlap point of the input image, obtained by convolving the y-direction kernel Sy with each pixel of the input image over the overlapping region;
∇y2 is the y-direction gradient at each overlap point of the image to be stitched, obtained by convolving Sy with each pixel of the image to be stitched over the overlapping region;
Sx and Sy are modified Sobel operator templates, respectively:
Sx = | -3  0  3 |        Sy = | -3  -1  -3 |
     | -1  0  1 |             |  0   0   0 |
     | -3  0  3 |             |  3   1   3 |
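With Sx and Sy defined, the energy of step (5c1) might be computed as below; the hand-rolled "same"-size correlation and all names are ours, not part of the claim (the product E_geometry is unchanged whether the kernels are applied as correlation or convolution, since flipping both kernels flips the sign of both factors):

```python
import numpy as np

SX = np.array([[-3, 0, 3],
               [-1, 0, 1],
               [-3, 0, 3]], float)   # modified Sobel, x direction
SY = SX.T                            # claim 7's Sy is SX transposed

def conv2_same(img, k):
    """Tiny 'same'-size 3x3 correlation with zero padding, standing in
    for a library convolution routine."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def energy_image(g1, g2):
    """E(x, y) = |E_gray| + E_geometry over the overlap region, with
    E_geometry = (grad_x1 - grad_x2) * (grad_y1 - grad_y2)."""
    e_gray = g1.astype(float) - g2.astype(float)
    e_geom = (conv2_same(g1, SX) - conv2_same(g2, SX)) * \
             (conv2_same(g1, SY) - conv2_same(g2, SY))
    return np.abs(e_gray) + e_geom

# Identical overlaps have zero energy everywhere, as expected.
E = energy_image(np.ones((5, 5)), np.ones((5, 5)))
```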
(5c2) applying dynamic programming: taking each pixel of the first row of the difference image as the starting point of a candidate seam and extending downward, at each step choosing among the three adjacent pixels of the next row the one with the minimum intensity value as the direction in which the seam grows, and so on until the last row is reached; among all generated seams, the one with the minimum sum of E(x, y) is taken as the optimal stitching seam.
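Step (5c2) can be realized as classic seam-finding dynamic programming. The sketch below keeps, for every pixel, the cheapest seam reaching it and backtracks from the minimal last-row cost; this is a full DP over the 3-connected neighbourhood rather than the per-row greedy descent the claim describes, and all names are ours:

```python
import numpy as np

def optimal_seam(E):
    """Return the column index of the minimal-energy seam in each row
    of the energy image E, top to bottom."""
    h, w = E.shape
    cost = E.astype(float).copy()        # accumulated seam cost
    back = np.zeros((h, w), int)         # backtracking pointers
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)   # 3 neighbours above
            j = lo + int(np.argmin(cost[y - 1, lo:hi]))
            back[y, x] = j
            cost[y, x] += cost[y - 1, j]
    x = int(np.argmin(cost[-1]))         # cheapest seam end
    seam = [x]
    for y in range(h - 1, 1 - 1, -1):
        if y == 0:
            break
        x = int(back[y, x])
        seam.append(x)
    return seam[::-1]

E = np.array([[9, 0, 9],
              [9, 0, 9],
              [9, 9, 0]], float)
seam = optimal_seam(E)                   # hugs the low-energy cells
```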
8. The method according to claim 3, wherein constructing the Gaussian pyramid and difference-of-Gaussian pyramid and detecting the scale-space extrema in step (2a) is carried out as follows:
(2a1) computing the number of pyramid levels n from the original image size and the top-level image size:
n = log2{min(M, N)} - t,  t ∈ [0, log2{min(M, N)})
where M and N are the length and width of the original image, and t is the base-2 logarithm of the minimum dimension of the pyramid's top-level image;
(2a2) taking the original image as the first level of the Gaussian pyramid and downsampling it level by level, each downsampled image becoming a new pyramid level, up to level n; the resulting series of progressively smaller images forms a tower-shaped model from bottom to top, the initial image pyramid;
(2a3) blurring the single image of each level of the initial pyramid with Gaussians of different parameters, so that every pyramid level contains several Gaussian-blurred images; the images of each level are collectively called one octave, giving the Gaussian pyramid;
(2a4) subtracting each pair of vertically adjacent images within every octave of the Gaussian pyramid obtained in (2a3), giving the difference-of-Gaussian pyramid;
(2a5) comparing each pixel of every octave of the difference-of-Gaussian pyramid with all pixels in its 26-point neighbourhood (its neighbours in the same image and in the two adjacent images above and below); if the value of the pixel taken from the difference-of-Gaussian pyramid is a maximum or a minimum, that pixel value is a scale-space extremum of the image at the current scale, the scale space being realized by the Gaussian pyramid, with each octave and each image corresponding to a different scale.
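A compact sketch of (2a1)~(2a4). A box filter stands in for Gaussian blur, and the octave layout is simplified: this sketch blurs within an octave and then downsamples (the usual SIFT ordering), whereas the claim builds the downsampled pyramid first and blurs each level afterwards; all names are ours:

```python
import numpy as np

def pyramid_levels(M, N, t=2):
    """Step (2a1): n = log2(min(M, N)) - t, where t is the log2 of the
    smallest allowed top-level dimension."""
    return int(np.log2(min(M, N))) - t

def blur(img):
    # 3x3 box filter as a cheap stand-in for Gaussian blur
    p = np.pad(img, 1, mode='edge')
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def dog_pyramid(img, n, s=3):
    """Sketch of (2a2)-(2a4): n octaves, each half the size of the last;
    s progressively blurred images per octave; adjacent blurs are
    subtracted to give s - 1 DoG images per octave."""
    octaves = []
    cur = img.astype(float)
    for _ in range(n):
        blurs = [cur]
        for _ in range(s - 1):
            blurs.append(blur(blurs[-1]))
        octaves.append([b - a for a, b in zip(blurs, blurs[1:])])
        cur = blurs[-1][::2, ::2]        # downsample for the next octave
    return octaves

img = np.random.default_rng(0).random((32, 32))
dogs = dog_pyramid(img, pyramid_levels(32, 32, t=3))
```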
9. The method according to claim 3, wherein the keypoint localization and orientation determination of step (2b) are carried out as follows:
(2b1) removing low-contrast points by interpolation and eliminating edge responses, completing the accurate localization of the keypoints;
(2b2) for each keypoint accurately localized in (2b1), collecting the gradient and orientation distribution of the pixels within a 3σ neighbourhood window of its Gaussian-pyramid image; the gradient magnitude and orientation are:
m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )
θ(x, y) = tan⁻¹( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )
where L is the scale-space value at the keypoint's scale; the gradient magnitude m(x, y) is weighted by a Gaussian distribution with σ = 1.5σ_oct, and by the 3σ sampling rule the neighbourhood window radius is 3 × 1.5σ_oct;
(2b3) accumulating the gradients and orientations of the pixels in each keypoint's neighbourhood window into a histogram with one bar per 10° of direction, 36 bars in total; the direction of a bar is the pixel gradient direction and its length the gradient magnitude; the direction of the longest bar in the histogram is taken as the principal direction of each keypoint, completing the orientation determination.
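Steps (2b2)~(2b3) in NumPy form; the window here is the whole array rather than a 3 × 1.5σ_oct radius, Gaussian weighting of the magnitudes is omitted, and the function name is ours:

```python
import numpy as np

def dominant_orientation(L):
    """Per-pixel gradient magnitude and orientation via the central
    differences of steps (2b2), accumulated into a 36-bin (10-degree)
    histogram; the longest bar gives the principal direction (2b3)."""
    dx = L[1:-1, 2:] - L[1:-1, :-2]      # L(x+1, y) - L(x-1, y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]      # L(x, y+1) - L(x, y-1)
    mag = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(ang, bins=36, range=(0, 360), weights=mag)
    return 10.0 * np.argmax(hist)        # left edge of the winning bin

# Brightness growing along +x gives a 0-degree principal direction.
xx = np.tile(np.arange(9.0), (9, 1))
theta = dominant_orientation(xx)
```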
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710407833.3A CN107301620B (en) | 2017-06-02 | 2017-06-02 | Method for panoramic imaging based on camera array |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107301620A true CN107301620A (en) | 2017-10-27 |
CN107301620B CN107301620B (en) | 2019-08-13 |
Family
ID=60134594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710407833.3A Active CN107301620B (en) | 2017-06-02 | 2017-06-02 | Method for panoramic imaging based on camera array |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107301620B (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107948547A (en) * | 2017-12-29 | 2018-04-20 | 北京奇艺世纪科技有限公司 | Processing method, device and the electronic equipment of panoramic video splicing |
CN108111753A (en) * | 2017-12-14 | 2018-06-01 | 中国电子科技集团公司电子科学研究院 | A kind of high-resolution real time panoramic monitoring device and monitoring method |
CN108876723A (en) * | 2018-06-25 | 2018-11-23 | 大连海事大学 | A kind of construction method of the color background of gray scale target image |
CN109005334A (en) * | 2018-06-15 | 2018-12-14 | 清华-伯克利深圳学院筹备办公室 | A kind of imaging method, device, terminal and storage medium |
CN109166178A (en) * | 2018-07-23 | 2019-01-08 | 中国科学院信息工程研究所 | A kind of significant drawing generating method of panoramic picture that visual characteristic is merged with behavioral trait and system |
CN109470698A (en) * | 2018-09-27 | 2019-03-15 | 钢研纳克检测技术股份有限公司 | Across scale field trash quick analytic instrument device and method based on microphotograph matrix |
CN109754437A (en) * | 2019-01-14 | 2019-05-14 | 北京理工大学 | A method of adjustment figure sample frequency |
CN109961398A (en) * | 2019-02-18 | 2019-07-02 | 鲁能新能源(集团)有限公司 | Fan blade image segmentation and grid optimization joining method |
CN110018153A (en) * | 2019-04-23 | 2019-07-16 | 钢研纳克检测技术股份有限公司 | The full-automatic scanning positioning of large scale sample universe ingredient and quantified system analysis |
CN110020995A (en) * | 2019-03-06 | 2019-07-16 | 沈阳理工大学 | For the image split-joint method of complicated image |
CN110136224A (en) * | 2018-02-09 | 2019-08-16 | 三星电子株式会社 | Image interfusion method and equipment |
CN110232673A (en) * | 2019-05-30 | 2019-09-13 | 电子科技大学 | A kind of quick steady image split-joint method based on medical micro-imaging |
CN110390640A (en) * | 2019-07-29 | 2019-10-29 | 齐鲁工业大学 | Graph cut image split-joint method, system, equipment and medium based on template |
CN110569927A (en) * | 2019-09-19 | 2019-12-13 | 浙江大搜车软件技术有限公司 | Method, terminal and computer equipment for scanning and extracting panoramic image of mobile terminal |
CN112365404A (en) * | 2020-11-23 | 2021-02-12 | 成都唐源电气股份有限公司 | Contact net panoramic image splicing method, system and equipment based on multiple cameras |
CN112529028A (en) * | 2019-09-19 | 2021-03-19 | 北京声迅电子股份有限公司 | Networking access method and device for security check machine image |
CN112541507A (en) * | 2020-12-17 | 2021-03-23 | 中国海洋大学 | Multi-scale convolutional neural network feature extraction method, system, medium and application |
CN113012030A (en) * | 2019-12-20 | 2021-06-22 | 北京金山云网络技术有限公司 | Image splicing method, device and equipment |
CN113079325A (en) * | 2021-03-18 | 2021-07-06 | 北京拙河科技有限公司 | Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions |
CN113689331A (en) * | 2021-07-20 | 2021-11-23 | 中国铁路设计集团有限公司 | Panoramic image splicing method under complex background |
CN113822800A (en) * | 2021-06-11 | 2021-12-21 | 无锡安科迪智能技术有限公司 | Panoramic image splicing and fusing method and device |
CN114339157A (en) * | 2021-12-30 | 2022-04-12 | 福州大学 | Multi-camera real-time splicing system and method with adjustable observation area |
CN114463170A (en) * | 2021-12-24 | 2022-05-10 | 河北大学 | Large scene image splicing method for AGV application |
CN114549301A (en) * | 2021-12-29 | 2022-05-27 | 浙江大华技术股份有限公司 | Image splicing method and device |
CN114723757A (en) * | 2022-06-09 | 2022-07-08 | 济南大学 | High-precision wafer defect detection method and system based on deep learning algorithm |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101278565A (en) * | 2005-08-08 | 2008-10-01 | 康涅狄格大学 | Depth and lateral size control of three-dimensional images in projection integral imaging |
US20120105574A1 (en) * | 2010-10-28 | 2012-05-03 | Henry Harlyn Baker | Panoramic stereoscopic camera |
CN105245841A (en) * | 2015-10-08 | 2016-01-13 | 北京工业大学 | CUDA (Compute Unified Device Architecture)-based panoramic video monitoring system |
Non-Patent Citations (1)
Title |
---|
Tian Jun et al., "Projection Models and Algorithms for Panoramas" [全景图中投影模型与算法], Computer Systems & Applications [计算机系统应用] *
Also Published As
Publication number | Publication date |
---|---|
CN107301620B (en) | 2019-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107301620B (en) | Method for panoramic imaging based on camera array | |
Qi et al. | Geonet: Geometric neural network for joint depth and surface normal estimation | |
US20230123664A1 (en) | Method for stitching images of capsule endoscope, electronic device and readable storage medium | |
CN110390640B (en) | Template-based Poisson fusion image splicing method, system, equipment and medium | |
CN110020985B (en) | Video stitching system and method of binocular robot | |
US9811946B1 (en) | High resolution (HR) panorama generation without ghosting artifacts using multiple HR images mapped to a low resolution 360-degree image | |
CN104732482B (en) | A kind of multi-resolution image joining method based on control point | |
CN104699842B (en) | Picture display method and device | |
CN103745449B (en) | Rapid and automatic mosaic technology of aerial video in search and tracking system | |
CN105245841A (en) | CUDA (Compute Unified Device Architecture)-based panoramic video monitoring system | |
CN109961399B (en) | Optimal suture line searching method based on image distance transformation | |
CN101394573B (en) | Panoramagram generation method and system based on characteristic matching | |
EP1903498B1 (en) | Creating a panoramic image by stitching a plurality of images | |
WO2023024697A1 (en) | Image stitching method and electronic device | |
US20080074489A1 (en) | Apparatus, method, and medium for generating panoramic image | |
CN102938145B (en) | Consistency regulating method and system of splicing panoramic picture | |
CN101689292A (en) | The BANANA codec | |
CN110232673A (en) | A kind of quick steady image split-joint method based on medical micro-imaging | |
CN106709878A (en) | Rapid image fusion method | |
CN111292413A (en) | Image model processing method and device, storage medium and electronic device | |
CN108848367A (en) | A kind of method, device and mobile terminal of image procossing | |
CN110009567A (en) | For fish-eye image split-joint method and device | |
CN116681636A (en) | Light infrared and visible light image fusion method based on convolutional neural network | |
CN110880159A (en) | Image splicing method and device, storage medium and electronic device | |
CN106780326A (en) | A kind of fusion method for improving panoramic picture definition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||