CN108510445A - Panoramic image stitching method - Google Patents
Panoramic image stitching method
- Publication number
- CN108510445A (application number CN201810296392.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion region
- weight
- seam center line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image; G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting; G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/00—Image enhancement or restoration; G06T5/80—Geometric correction
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20212—Image combination; G06T2207/20221—Image fusion; Image merging
Abstract
A panoramic image stitching method, comprising the steps of: obtaining the original image captured by each camera of a panoramic camera and applying image distortion correction to each original image; scaling the distortion-corrected images to obtain images to be fused; computing the optimal seam center line of each camera's image to be fused within the fusion regions, the fusion regions comprising a first fusion region and a second fusion region, the first fusion region being formed by the mutually overlapping areas at the top of the images to be fused and the second fusion region being formed by the mutually overlapping areas at the edges of two adjacent images to be fused; computing the weight tables of the first and second fusion regions from the optimal seam center lines; fusing all images to be fused according to the weight tables of the first and second fusion regions; and stretching the fused image. The panoramic image stitching method of the present invention achieves a smooth transition, so that the stitched image looks more natural.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a panoramic image stitching method.
Background technology
Panoramic image stitching is the technique of joining several overlapping images into one large, seamless, high-resolution image. It is one of the key technologies in image processing and forms the basis of many other image-processing applications, such as panoramic video, intelligent video surveillance, video compression and transmission, virtual reality, medical image analysis and super-resolution reconstruction; achieving high-speed, high-quality stitching is therefore vital for the whole system.
The two most important steps in panoramic image stitching are image registration and image fusion. Commonly used image fusion methods are direct fusion methods (e.g. the mean method, the weighted-mean method and the median-filtering method), which cause the generated panorama to show obvious stitching seams owing to detail differences in the overlap regions. To solve the problem of stitching seams, a commonly used approach is to search for an optimal seam center line by dynamic programming or graph-cut, i.e. to use the gray-level and color differences between pixels in the overlap region of two images to find the line in the overlap region along which the gray-level and color differences are minimal; this line is called the optimal seam center line.
In the prior art, panoramas generated by combining dynamic programming with linear weighted fusion are prone to unnatural stitching at the top of the panorama: the generated panorama does not transition smoothly at the top, and at the same time a large amount of system resources is occupied and processing is very slow.
Summary of the invention
In the prior art, panoramas generated by combining dynamic programming with linear weighted fusion are prone to unnatural stitching at the top of the panorama: the generated panorama does not transition smoothly at the top, and at the same time a large amount of system resources is occupied and processing is very slow. To solve this problem, the present invention provides a panoramic image stitching method, the specific scheme of which is as follows:
A panoramic image stitching method, specifically comprising the following steps:
Step S1: obtain the original image captured by each camera of the panoramic camera, and apply image distortion correction to each original image;
Step S2: scale the distortion-corrected images to obtain the images to be fused;
Step S3: compute the optimal seam center line of each camera's image to be fused within the fusion regions, the fusion regions comprising a first fusion region and a second fusion region, the first fusion region being formed by the mutually overlapping areas at the top of the images to be fused and the second fusion region being formed by the mutually overlapping areas at the edges of two adjacent images to be fused;
Step S4: compute the weight tables of the first and second fusion regions from the optimal seam center lines;
Step S5: fuse all images to be fused according to the weight tables of the first and second fusion regions;
Step S6: stretch the fused image.
In the above method, the image distortion correction applied to each original image in step S1 specifically comprises the following steps:
Step S101: compute the intrinsic parameters and distortion coefficients of each camera of the panoramic camera, as well as the spatial relationship parameters between adjacent cameras, using the checkerboard calibration method;
Step S102: compute the coordinate mapping table between the original image and the undistorted image from the intrinsic parameters and distortion coefficients of the cameras and the spatial relationship parameters between the cameras;
Step S103: map the original image captured by each camera of the panoramic camera onto the undistorted image according to the coordinate mapping table, i.e. take the pixels of the undistorted image as target pixels and use the coordinate mapping table to determine the correspondence between each target pixel and the source pixel on the original image.
In the above method, the calculation formula used in step S101 to determine, from the coordinate mapping table, the correspondence between a target pixel and the source pixel on the original image is as follows:
Dst(x, y) = Src(Lut_x(x, y), Lut_y(x, y))
where Dst(x, y) denotes the target pixel at coordinate (x, y); Lut_x(x, y) denotes the X coordinate in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table; Lut_y(x, y) denotes the Y coordinate in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table; and Src(Lut_x(x, y), Lut_y(x, y)) denotes the position in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table.
In the above method, step S2 scales the distortion-corrected image using the following scaling formula:
I_scale(y, x) = remap(I_src(y, x), map_x(y, x), map_y(y, x))
where the remap() function denotes remapping, i.e. the process of placing the pixel at a given position in one image at a specified position in another image; I_scale(y, x) denotes the scaled undistorted image; I_src(y, x) denotes the undistorted image to be mapped; map_x(y, x) denotes the remapping map of the undistorted image to be mapped in the x direction; map_y(y, x) denotes the remapping map of the undistorted image to be mapped in the y direction; scale denotes the zoom factor; W denotes the width of the undistorted image; and D denotes the height of the region of the undistorted image that needs to be scaled.
In the above method, the value range of the zoom factor scale is 1/2 to 1/16, and the value range of the scaled height D is 128 to 256.
In the above method, computing in step S3 the optimal seam center line of each camera's image to be fused within the fusion regions specifically comprises:
computing the optimal seam center lines of the first fusion region, whose positions within the mutually overlapping area are fixed; these optimal seam center lines comprise a left optimal seam sL and a right optimal seam sR;
computing the optimal seam center line of the second fusion region: for two adjacent images to be fused, the optimal seam center line is found by a dynamic iteration method based on the overall edge feature and the gray-difference feature.
In the above method, finding the optimal seam center line by the dynamic iteration method based on the overall edge feature and the gray-difference feature specifically comprises:
computing the sum I_f of the edge feature and the gray-difference feature within the second fusion region;
filtering I_f row by row with a filter template of size n, where 9 < n ≤ 16;
computing, by the dynamic iteration method, the minimum accumulated difference of I_f and its corresponding path; the path corresponding to the position of minimum error in the minimum accumulated difference is the position of the optimal seam center line.
In the above method, step S4, computing the weight tables of the first and second fusion regions from the optimal seam center lines, specifically comprises:
Let the size of an image to be fused be (H, W); the size of the first fusion region is then (D, W).
According to the optimal seam center lines found for the first fusion region, the weight table of the first fusion region is computed in the following steps:
Step 1: compute the stepping factor of each row of the first fusion region in the column direction, with the following calculation formula:
where bw_delta(y) denotes the stepping factor of row y in the column direction, and std_bw_delta denotes the standard stepping factor, whose value range is 64 to 128;
Step 2: compute the weight table weight from the stepping factor of each row in the column direction, with the following calculation formula:
Step 3: normalize the weight tables computed for the mutually overlapping area so that their values lie between 0 and 1 and the weights at corresponding positions of the mutually overlapping area sum to 1.
According to the optimal seam center line found for the second fusion region, the weight table of the second fusion region is computed as follows:
Centered on the optimal seam center line, a region of width bw/2 is taken symmetrically on each side of the optimal seam center, and the two are combined into one section of width bw. Since the values of the weight table of the second fusion region over this combined section change linearly and all lie between 0 and 1, with the weight-table value at the left of the combined section being 1 and the weight-table value at the right of the combined section being 0, the weight table of the second fusion region is computed as follows:
where bw denotes the width over which linear fusion is actually performed in the second fusion region, seli(y) denotes the position of the optimal seam center line of row y, w denotes the width of the second fusion region, and weight(y, x) denotes the weight table of the second fusion region.
In the above method, step S5, fusing all images to be fused according to the weight tables of the first and second fusion regions, specifically comprises:
The fusion method of the first fusion region uses the following calculation formula:
I_stitch(y, x) = weight1(y, x) × Img1(y, x) + weight2(y, x) × Img2(y, x) + weight3(y, x) × Img3(y, x)
where I_stitch(y, x) denotes the stitched image obtained after weighting the mutually overlapping area of the first fusion region, weight1(y, x) denotes the weight table corresponding to overlap region Img1(y, x), weight2(y, x) denotes the weight table corresponding to overlap region Img2(y, x), and weight3(y, x) denotes the weight table corresponding to overlap region Img3(y, x);
The fusion method of the second fusion region uses linear weighting to blend the pixel values of the two source pixels of the overlap region into a blended pixel value, with the following calculation formula:
I′_stitch(y, x) = weight(y, x) × I1(y, x) + (1 - weight(y, x)) × I2(y, x)
where I′_stitch(y, x) denotes the pixel value at coordinate (y, x) of the stitched image obtained after linearly weighting the second fusion region formed by I1 and I2, I1(y, x) denotes the pixel value at coordinate (y, x) in overlap region I1, I2(y, x) denotes the pixel value at coordinate (y, x) in overlap region I2, and weight(y, x) denotes the coefficient at (y, x) of the weight table corresponding to overlap regions I1 and I2. The fusion of the second fusion regions formed by I3 and I4 and by I5 and I0 is identical to that of I1 and I2.
In the above method, step S6, stretching the fused image, specifically comprises:
The stretching formula used is:
I′_end(y, x) = (1 - α) × I_end(y, floor(x × scale)) + α × I_end(y, floor(x × scale) + 1)
where mod() denotes the modulo operation, floor() denotes rounding down, I_end(y, x) denotes the small stitched image after fusion, and I′_end(y, x) denotes the stretched fused image, i.e. the stitched image obtained after panoramic stitching.
The panoramic image stitching method of the present invention comprises the steps of: step S1, obtaining the original image captured by each camera of the panoramic camera and applying image distortion correction to each original image; step S2, scaling the distortion-corrected images to obtain the images to be fused; step S3, computing the optimal seam center line of each camera's image to be fused within the fusion regions, the fusion regions comprising a first fusion region formed by the mutually overlapping areas at the top of the images to be fused and a second fusion region formed by the mutually overlapping areas at the edges of two adjacent images to be fused; step S4, computing the weight tables of the first and second fusion regions from the optimal seam center lines; step S5, fusing all images to be fused according to the weight tables of the first and second fusion regions; and step S6, stretching the fused image.
The image to be fused is divided into a first fusion region and a second fusion region. The first fusion region is fused fully, i.e. the number of overlap regions participating in the fusion equals the number of images to be fused, and its weight table uses a dynamic stepping factor, i.e. the stepping factor of each row of the mutually overlapping area in the column direction varies. In this way all of the information at the top of the images to be fused is exploited, a smooth transition is achieved in the top fusion region, the "cross" artifact at the top is eliminated, and the stitched image looks natural at the top. For the second fusion region, i.e. the overlap region of two adjacent images, the linear weighting method makes the left image of the overlapping part fade out and the right image of the overlapping part fade in, which likewise achieves a smooth transition and makes the stitched image look more natural.
Description of the drawings
Fig. 1 is a flow chart of an example of the panoramic image stitching method of the present invention;
Fig. 2 is a flow chart of the image distortion correction of the present invention;
Fig. 3 is a schematic diagram of the fusion regions of the images to be fused of the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the invention. In addition, descriptions of well-known structures and technologies are omitted from the following description so as not to unnecessarily obscure the concepts of the invention.
The two most important steps in panoramic image stitching are image registration and image fusion. Commonly used image fusion methods are direct fusion methods (e.g. the mean method, the weighted-mean method and the median-filtering method), which cause the generated panorama to show obvious stitching seams owing to detail differences in the overlap regions. To solve the problem of stitching seams, a commonly used approach is to search for an optimal seam center line by dynamic programming or graph-cut, i.e. to use the gray-level and color differences between pixels in the overlap region of two images to find the line in the overlap region along which the gray-level and color differences are minimal; this line is called the optimal seam center line.
In the prior art, panoramas generated by combining dynamic programming with linear weighted fusion are prone to unnatural stitching at the top of the panorama: the generated panorama does not transition smoothly at the top, and at the same time a large amount of system resources is occupied and processing is very slow.
With the panoramic image stitching method provided by this scheme, the image to be fused is divided into a first fusion region and a second fusion region that are fused and stitched separately, achieving a smooth transition in the fusion region at the top of the panorama, i.e. eliminating the "cross" artifact at the top; in addition, a smooth transition is achieved in the adjacent regions, so that the stitched image looks more natural both at the top and in the adjacent regions.
The panoramic image stitching method provided by the present invention performs panoramic stitching on the original images captured by each camera of the panoramic camera. As shown in Fig. 1, it specifically comprises the following steps:
Step S1: obtain the original image captured by each camera of the panoramic camera, and apply image distortion correction to each original image;
Step S2: scale the distortion-corrected images to obtain the images to be fused;
Step S3: compute the optimal seam center line of each camera's image to be fused within the fusion regions, the fusion regions comprising a first fusion region and a second fusion region, the first fusion region being formed by the mutually overlapping areas at the top of the images to be fused and the second fusion region being formed by the mutually overlapping areas at the edges of two adjacent images to be fused;
Step S4: compute the weight tables of the first and second fusion regions from the optimal seam center lines;
Step S5: fuse all images to be fused according to the weight tables of the first and second fusion regions;
Step S6: stretch the fused image.
In the above method, as shown in Fig. 2, the image distortion correction applied to each original image in step S1 specifically comprises the following steps:
Step S101: compute the intrinsic parameters and distortion coefficients of each camera of the panoramic camera, as well as the spatial relationship parameters between adjacent cameras, using the checkerboard calibration method;
Step S102: compute the coordinate mapping table between the original image and the undistorted image from the intrinsic parameters and distortion coefficients of the cameras and the spatial relationship parameters between the cameras;
Step S103: map the original image captured by each camera of the panoramic camera onto the undistorted image according to the coordinate mapping table, i.e. take the pixels of the undistorted image as target pixels and use the coordinate mapping table to determine the correspondence between each target pixel and the source pixel on the original image.
In a specific implementation, the calculation formula used in step S101 to determine, from the coordinate mapping table, the correspondence between a target pixel and the source pixel on the original image is as follows:
Dst(x, y) = Src(Lut_x(x, y), Lut_y(x, y))
where Dst(x, y) denotes the target pixel at coordinate (x, y); Lut_x(x, y) denotes the X coordinate in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table; Lut_y(x, y) denotes the Y coordinate in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table; and Src(Lut_x(x, y), Lut_y(x, y)) denotes the position in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table.
Preferably, since the coordinate values obtained by mapping a target pixel coordinate into the original image are not necessarily integers, the pixel value at such a non-integer coordinate is fractional; considering that image pixel values are represented by integers, the resulting non-integer coordinates must be interpolated to generate the integer pixel value at the non-integer coordinate. In this embodiment, the bicubic interpolation algorithm is used to generate the integer pixel values.
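To make steps S101-S103 concrete, the following is a minimal sketch in Python/OpenCV, assuming a standard pinhole chessboard calibration and cv2.initUndistortRectifyMap as a stand-in for the patent's coordinate mapping tables Lut_x and Lut_y (which, together with the inter-camera spatial relationship parameters, are not reproduced here); the function names are illustrative only.

```python
# Hypothetical sketch of steps S101-S103: calibrate each camera from chessboard
# views, build the coordinate mapping tables Lut_x / Lut_y, and remap the original
# frame with bicubic interpolation.  The patent's own tables are not reproduced.
import cv2
import numpy as np

def calibrate_camera(chessboard_images, pattern_size=(9, 6), square_size=1.0):
    """Estimate the intrinsic matrix and distortion coefficients of one camera."""
    obj_p = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    obj_p[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size
    obj_points, img_points = [], []
    for img in chessboard_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(obj_p)
            img_points.append(corners)
    h, w = chessboard_images[0].shape[:2]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, (w, h), None, None)
    return K, dist

def build_coordinate_luts(K, dist, size):
    """Coordinate mapping tables: Lut_x, Lut_y give the source position of each target pixel."""
    lut_x, lut_y = cv2.initUndistortRectifyMap(K, dist, None, K, size, cv2.CV_32FC1)
    return lut_x, lut_y

def undistort(frame, lut_x, lut_y):
    """Dst(x, y) = Src(Lut_x(x, y), Lut_y(x, y)), sampled with bicubic interpolation."""
    return cv2.remap(frame, lut_x, lut_y, interpolation=cv2.INTER_CUBIC)
```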
In a specific implementation, the undistorted image obtained by applying distortion correction to the original image is scaled to obtain the image to be fused, the undistorted image being the distortion-corrected image. To improve the efficiency of image stitching, the distortion-corrected image obtained in step S1 needs to be scaled, and step S2 scales the distortion-corrected image using the following scaling formula:
I_scale(y, x) = remap(I_src(y, x), map_x(y, x), map_y(y, x))
where the remap() function denotes remapping, i.e. the process of placing the pixel at a given position in one image at a specified position in another image; I_scale(y, x) denotes the scaled undistorted image; I_src(y, x) denotes the undistorted image to be mapped; map_x(y, x) denotes the remapping map of the undistorted image to be mapped in the x direction; map_y(y, x) denotes the remapping map of the undistorted image to be mapped in the y direction; scale denotes the zoom factor; W denotes the width of the undistorted image; and D denotes the height of the region of the undistorted image that needs to be scaled.
Preferably, the value range of the zoom factor scale is 1/2 to 1/16, and the value range of the scaled height D is 128 to 256.
Specifically, the above scaling formula is used to scale the top of the image to be fused, so that the image information at the top can be fully exploited in the fusion of the following steps, making the fused panorama look natural in the top fusion region and achieving a smooth transition.
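The patent does not spell out map_x and map_y for step S2, so the sketch below assumes the simplest interpretation: the top D rows of the undistorted image are shrunk horizontally by the zoom factor scale via a remap. It illustrates how the scaling formula is applied, not the patent's exact mapping.

```python
# Illustrative sketch of step S2: shrink the top D rows of the undistorted image
# horizontally by the zoom factor `scale` with a remap.  The maps below (pure
# horizontal scaling of the top strip) are an assumption, not the patent's maps.
import cv2
import numpy as np

def scale_top_strip(i_src, scale=1.0 / 4, d=192):
    h, w = i_src.shape[:2]
    out_w = int(round(w * scale))
    xs, ys = np.meshgrid(np.arange(out_w, dtype=np.float32),
                         np.arange(d, dtype=np.float32))
    map_x = xs / scale            # I_scale(y, x) samples I_src at (y, x / scale)
    map_y = ys                    # rows of the top strip are left untouched here
    return cv2.remap(i_src[:d], map_x, map_y, interpolation=cv2.INTER_LINEAR)
```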
In a specific implementation, computing in step S3 the optimal seam center line of each camera's image to be fused within the fusion regions specifically comprises:
computing the optimal seam center lines of the first fusion region, whose positions within the mutually overlapping area are fixed; these optimal seam center lines comprise a left optimal seam sL and a right optimal seam sR;
computing the optimal seam center line of the second fusion region: for two adjacent images to be fused, the optimal seam center line is found by a dynamic iteration method based on the overall edge feature and the gray-difference feature.
Finding the optimal seam center line by the dynamic iteration method based on the overall edge feature and the gray-difference feature specifically comprises:
computing the sum I_f of the edge feature and the gray-difference feature within the second fusion region;
filtering I_f row by row with a filter template of size n, where 9 < n ≤ 16;
computing, by the dynamic iteration method, the minimum accumulated difference of I_f and its corresponding path; the path corresponding to the position of minimum error in the minimum accumulated difference is the position of the optimal seam center line.
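The exact dynamic iteration is not reproduced in the text; the sketch below is a conventional dynamic-programming seam search over an assumed cost map I_f built from the gray-level difference and an edge (Sobel) difference of the two overlapping strips, with a row-wise box filter of width n standing in for the filter template.

```python
# Sketch of the seam search in step S3: build a cost map I_f from edge and
# gray-level differences of the two overlapping strips, smooth each row, then
# track the minimum accumulated difference from top to bottom and backtrack.
import cv2
import numpy as np

def seam_center_line(overlap_a, overlap_b, n=11):
    ga = cv2.cvtColor(overlap_a, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gb = cv2.cvtColor(overlap_b, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray_diff = np.abs(ga - gb)
    edge_diff = np.abs(cv2.Sobel(ga, cv2.CV_32F, 1, 0) - cv2.Sobel(gb, cv2.CV_32F, 1, 0))
    i_f = gray_diff + edge_diff                      # combined cost map I_f
    i_f = cv2.blur(i_f, (n, 1))                      # row-wise filter template of size n

    h, w = i_f.shape
    acc = i_f.copy()                                 # minimum accumulated difference
    back = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)    # step to one of 3 neighbours above
            k = int(np.argmin(acc[y - 1, lo:hi])) + lo
            back[y, x] = k
            acc[y, x] += acc[y - 1, k]

    seam = np.empty(h, dtype=np.int32)
    seam[-1] = int(np.argmin(acc[-1]))               # position of minimum error
    for y in range(h - 2, -1, -1):
        seam[y] = back[y + 1, seam[y + 1]]           # follow the recorded path upward
    return seam                                      # seam[y] = seli(y), one column per row
```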
As shown in Fig. 3, to give the skilled person a better understanding of the fusion regions, take three adjacent images to be fused: the mutually overlapping regions at their tops, Img1, Img2 and Img3, constitute the first fusion region; the mutually overlapping regions of two adjacent images to be fused, I1 and I2, I3 and I4, and I5 and I0, each constitute a second fusion region, so there are three second fusion regions in total.
In a specific implementation, step S4, computing the weight tables of the first and second fusion regions from the optimal seam center lines, specifically comprises:
Let the size of an image to be fused be (H, W); the size of the first fusion region is then (D, W).
According to the optimal seam center lines found for the first fusion region, the weight table of the first fusion region is computed in the following steps:
Step 1: compute the stepping factor of each row of the first fusion region in the column direction, with the following calculation formula:
where bw_delta(y) denotes the stepping factor of row y in the column direction, and std_bw_delta denotes the standard stepping factor, whose value range is 64 to 128;
Step 2: compute the weight table weight from the stepping factor of each row in the column direction, with the following calculation formula:
Step 3: normalize the weight tables computed for the mutually overlapping area so that their values lie between 0 and 1 and the weights at corresponding positions of the mutually overlapping area sum to 1.
According to the optimal seam center line found for the second fusion region, the weight table of the second fusion region is computed as follows:
Centered on the optimal seam center line, a region of width bw/2 is taken symmetrically on each side of the optimal seam center, and the two are combined into one section of width bw. Since the values of the weight table of the second fusion region over this combined section change linearly and all lie between 0 and 1, with the weight-table value at the left of the combined section being 1 and the weight-table value at the right of the combined section being 0, the weight table of the second fusion region is computed as follows:
where bw denotes the width over which linear fusion is actually performed in the second fusion region, seli(y) denotes the position of the optimal seam center line of row y, w denotes the width of the second fusion region, and weight(y, x) denotes the weight table of the second fusion region.
In a specific implementation, step S5, fusing all images to be fused according to the weight tables of the first and second fusion regions, specifically comprises:
The fusion method of the first fusion region uses the following calculation formula:
I_stitch(y, x) = weight1(y, x) × Img1(y, x) + weight2(y, x) × Img2(y, x) + weight3(y, x) × Img3(y, x)
where I_stitch(y, x) denotes the stitched image obtained after weighting the mutually overlapping area of the first fusion region, weight1(y, x) denotes the weight table corresponding to overlap region Img1(y, x), weight2(y, x) denotes the weight table corresponding to overlap region Img2(y, x), and weight3(y, x) denotes the weight table corresponding to overlap region Img3(y, x);
The fusion method of the second fusion region uses linear weighting to blend the pixel values of the two source pixels of the overlap region into a blended pixel value, with the following calculation formula:
I′_stitch(y, x) = weight(y, x) × I1(y, x) + (1 - weight(y, x)) × I2(y, x)
where I′_stitch(y, x) denotes the pixel value at coordinate (y, x) of the stitched image obtained after linearly weighting the second fusion region formed by I1 and I2, I1(y, x) denotes the pixel value at coordinate (y, x) in overlap region I1, I2(y, x) denotes the pixel value at coordinate (y, x) in overlap region I2, and weight(y, x) denotes the coefficient at (y, x) of the weight table corresponding to overlap regions I1 and I2. The fusion of the second fusion regions formed by I3 and I4 and by I5 and I0 is identical to that of I1 and I2.
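A minimal sketch of the two blending formulas of step S5, assuming three-channel float images and weight tables shaped like the overlap regions; the helper names are illustrative, not the patent's.

```python
# Sketch of step S5 for H x W x 3 images: the first (top) fusion region is a
# weighted sum of the three overlaps Img1..Img3 with normalized weight tables;
# each second (side) fusion region is a linear blend of its two overlapping strips.
import numpy as np

def fuse_first_region(imgs, weights):
    """I_stitch = sum_k weight_k * Img_k, with the weight tables normalized to sum to 1."""
    total = np.maximum(np.sum(weights, axis=0), 1e-6)
    out = np.zeros_like(imgs[0], dtype=np.float32)
    for img, w in zip(imgs, weights):
        out += (w / total)[..., None] * img.astype(np.float32)   # broadcast over channels
    return out

def fuse_second_region(i1, i2, weight):
    """I'_stitch = weight * I1 + (1 - weight) * I2."""
    w = weight[..., None]
    return w * i1.astype(np.float32) + (1.0 - w) * i2.astype(np.float32)
```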
In a specific implementation, since the top of the image to be fused has been scaled in step S2 above, the small stitched image must be stretched after fusion so as not to change the size of the stitched image. Step S6, stretching the fused image, specifically comprises:
The stretching formula used is:
I′_end(y, x) = (1 - α) × I_end(y, floor(x × scale)) + α × I_end(y, floor(x × scale) + 1)
where mod() denotes the modulo operation, floor() denotes rounding down, I_end(y, x) denotes the small stitched image after fusion, and I′_end(y, x) denotes the stretched fused image, i.e. the stitched image obtained after panoramic stitching.
The panoramic image stitching method of the present invention divides the image to be fused into a first fusion region and a second fusion region, and finds an optimal seam center line for each to perform image fusion.
The first fusion region is fused fully, i.e. the number of overlap regions participating in the fusion equals the number of images to be fused, and the weight table of the first fusion region uses a dynamic stepping factor, i.e. the stepping factor of each row of the mutually overlapping area in the column direction varies. This makes full use of all of the information at the top of the images to be fused, achieves a smooth transition in the top fusion region, eliminates the "cross" artifact at the top, and makes the stitched image look more natural at the top.
For the second fusion region, i.e. the overlap region of two adjacent images, the linear weighting method makes the left image of the overlapping part fade out and the right image of the overlapping part fade in, which likewise achieves a smooth transition and makes the stitched image look more natural.
It should be understood that the above specific embodiments of the present invention are only intended to exemplarily illustrate or explain the principles of the present invention and do not limit the present invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention shall be included in the protection scope of the present invention. In addition, the appended claims of the present invention are intended to cover all variations and modifications falling within the scope and boundaries of the appended claims, or equivalents of such scope and boundaries.
Claims (10)
1. A panoramic image stitching method, characterized by comprising the following steps:
Step S1: obtain the original image captured by each camera of the panoramic camera, and apply image distortion correction to each original image;
Step S2: scale the distortion-corrected images to obtain the images to be fused;
Step S3: compute the optimal seam center line of each camera's image to be fused within the fusion regions, the fusion regions comprising a first fusion region and a second fusion region, the first fusion region being formed by the mutually overlapping areas at the top of the images to be fused and the second fusion region being formed by the mutually overlapping areas at the edges of two adjacent images to be fused;
Step S4: compute the weight tables of the first and second fusion regions from the optimal seam center lines;
Step S5: fuse all images to be fused according to the weight tables of the first and second fusion regions;
Step S6: stretch the fused image.
2. The method according to claim 1, characterized in that the image distortion correction applied to each original image in step S1 specifically comprises the following steps:
Step S101: compute the intrinsic parameters and distortion coefficients of each camera of the panoramic camera, as well as the spatial relationship parameters between adjacent cameras, using the checkerboard calibration method;
Step S102: compute the coordinate mapping table between the original image and the undistorted image from the intrinsic parameters and distortion coefficients of the cameras and the spatial relationship parameters between the cameras;
Step S103: map the original image captured by each camera of the panoramic camera onto the undistorted image according to the coordinate mapping table, i.e. take the pixels of the undistorted image as target pixels and use the coordinate mapping table to determine the correspondence between each target pixel and the source pixel on the original image.
3. The method according to claim 2, characterized in that the calculation formula used in step S101 to determine, from the coordinate mapping table, the correspondence between a target pixel and the source pixel on the original image is as follows:
Dst(x, y) = Src(Lut_x(x, y), Lut_y(x, y))
where Dst(x, y) denotes the target pixel at coordinate (x, y); Lut_x(x, y) denotes the X coordinate in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table; Lut_y(x, y) denotes the Y coordinate in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table; and Src(Lut_x(x, y), Lut_y(x, y)) denotes the position in the original image to which the target pixel coordinate (x, y) is mapped by the coordinate mapping table.
4. The method according to claim 3, characterized in that step S2 scales the distortion-corrected image using the following scaling formula:
I_scale(y, x) = remap(I_src(y, x), map_x(y, x), map_y(y, x))
where the remap() function denotes remapping, i.e. the process of placing the pixel at a given position in one image at a specified position in another image; I_scale(y, x) denotes the scaled undistorted image; I_src(y, x) denotes the undistorted image to be mapped; map_x(y, x) denotes the remapping map of the undistorted image to be mapped in the x direction; map_y(y, x) denotes the remapping map of the undistorted image to be mapped in the y direction; scale denotes the zoom factor; W denotes the width of the undistorted image; and D denotes the height of the region of the undistorted image that needs to be scaled.
5. The method according to claim 4, characterized in that the value range of the zoom factor scale is 1/2 to 1/16, and the value range of the scaled height D is 128 to 256.
6. The method according to claim 5, characterized in that computing in step S3 the optimal seam center line of each camera's image to be fused within the fusion regions specifically comprises:
computing the optimal seam center lines of the first fusion region, whose positions within the mutually overlapping area are fixed, the optimal seam center lines comprising a left optimal seam sL and a right optimal seam sR;
computing the optimal seam center line of the second fusion region: for two adjacent images to be fused, the optimal seam center line is found by a dynamic iteration method based on the overall edge feature and the gray-difference feature.
7. The method according to claim 6, characterized in that finding the optimal seam center line by the dynamic iteration method based on the overall edge feature and the gray-difference feature specifically comprises:
computing the sum I_f of the edge feature and the gray-difference feature within the second fusion region;
filtering I_f row by row with a filter template of size n, where 9 < n ≤ 16;
computing, by the dynamic iteration method, the minimum accumulated difference of I_f and its corresponding path, the path corresponding to the position of minimum error in the minimum accumulated difference being the position of the optimal seam center line.
8. The method according to claim 7, characterized in that step S4, computing the weight tables of the first and second fusion regions from the optimal seam center lines, specifically comprises:
letting the size of an image to be fused be (H, W), the size of the first fusion region being (D, W);
according to the optimal seam center lines found for the first fusion region, computing the weight table of the first fusion region in the following steps:
Step 1: compute the stepping factor of each row of the first fusion region in the column direction, with the following calculation formula:
where bw_delta(y) denotes the stepping factor of row y in the column direction, and std_bw_delta denotes the standard stepping factor, whose value range is 64 to 128;
Step 2: compute the weight table weight from the stepping factor of each row in the column direction, with the following calculation formula:
Step 3: normalize the weight tables computed for the mutually overlapping area so that their values lie between 0 and 1 and the weights at corresponding positions of the mutually overlapping area sum to 1;
according to the optimal seam center line found for the second fusion region, computing the weight table of the second fusion region as follows:
centered on the optimal seam center line, a region of width bw/2 is taken symmetrically on each side of the optimal seam center, and the two are combined into one section of width bw; since the values of the weight table of the second fusion region over this combined section change linearly and all lie between 0 and 1, with the weight-table value at the left of the combined section being 1 and the weight-table value at the right of the combined section being 0, the weight table of the second fusion region is computed as follows:
where bw denotes the width over which linear fusion is actually performed in the second fusion region, seli(y) denotes the position of the optimal seam center line of row y, W denotes the width of the second fusion region, and weight(y, x) denotes the weight table of the second fusion region.
9. The method according to claim 8, characterized in that step S5, fusing all images to be fused according to the weight tables of the first and second fusion regions, specifically comprises:
the fusion method of the first fusion region uses the following calculation formula:
I_stitch(y, x) = weight1(y, x) × Img1(y, x) + weight2(y, x) × Img2(y, x) + weight3(y, x) × Img3(y, x)
where I_stitch(y, x) denotes the stitched image obtained after weighting the mutually overlapping area of the first fusion region, weight1(y, x) denotes the weight table corresponding to overlap region Img1(y, x), weight2(y, x) denotes the weight table corresponding to overlap region Img2(y, x), and weight3(y, x) denotes the weight table corresponding to overlap region Img3(y, x);
the fusion method of the second fusion region uses linear weighting to blend the pixel values of the two source pixels of the overlap region into a blended pixel value, with the following calculation formula:
I′_stitch(y, x) = weight(y, x) × I1(y, x) + (1 - weight(y, x)) × I2(y, x)
where I′_stitch(y, x) denotes the pixel value at coordinate (y, x) of the stitched image obtained after linearly weighting the second fusion region formed by I1 and I2, I1(y, x) denotes the pixel value at coordinate (y, x) in overlap region I1, I2(y, x) denotes the pixel value at coordinate (y, x) in overlap region I2, and weight(y, x) denotes the coefficient at (y, x) of the weight table corresponding to overlap regions I1 and I2; the fusion of the second fusion regions formed by I3 and I4 and by I5 and I0 is identical to that of I1 and I2.
10. The method according to claim 9, characterized in that step S6, stretching the fused image, specifically comprises:
the stretching formula used is:
I′_end(y, x) = (1 - α) × I_end(y, floor(x × scale)) + α × I_end(y, floor(x × scale) + 1)
where mod() denotes the modulo operation, floor() denotes rounding down, I_end(y, x) denotes the small stitched image after fusion, and I′_end(y, x) denotes the stretched fused image, i.e. the stitched image obtained after panoramic stitching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810296392.9A CN108510445A (en) | 2018-03-30 | 2018-03-30 | A kind of Panorama Mosaic method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108510445A true CN108510445A (en) | 2018-09-07 |
Family
ID=63380342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810296392.9A Pending CN108510445A (en) | 2018-03-30 | 2018-03-30 | A kind of Panorama Mosaic method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108510445A (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6411742B1 (en) * | 2000-05-16 | 2002-06-25 | Adobe Systems Incorporated | Merging images to form a panoramic image |
CN101938599A (en) * | 2009-06-30 | 2011-01-05 | 爱国者全景(北京)网络科技发展有限公司 | Method for generating interactive dynamic panoramic image |
CN102013110A (en) * | 2010-11-23 | 2011-04-13 | 李建成 | Three-dimensional panoramic image generation method and system |
US20140347439A1 (en) * | 2013-05-22 | 2014-11-27 | Nvidia Corporation | Mobile device and system for generating panoramic video |
US20150172620A1 (en) * | 2013-12-16 | 2015-06-18 | National Chiao Tung University | Optimal dynamic seam adjustment system and method for image stitching |
CN104182951A (en) * | 2014-08-15 | 2014-12-03 | 张建伟 | Multiband image stitching method based on panorama of dual cameras |
CN106791351A (en) * | 2015-11-24 | 2017-05-31 | 腾讯科技(深圳)有限公司 | Panoramic picture treating method and apparatus |
CN107203965A (en) * | 2016-03-18 | 2017-09-26 | 中国科学院宁波材料技术与工程研究所 | A kind of Panorama Mosaic method merged based on multichannel image |
CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
CN106657983A (en) * | 2016-11-16 | 2017-05-10 | 深圳六滴科技有限公司 | Parameter test method and device for panoramic camera |
CN106709878A (en) * | 2016-11-30 | 2017-05-24 | 长沙全度影像科技有限公司 | Rapid image fusion method |
CN106780326A (en) * | 2016-11-30 | 2017-05-31 | 长沙全度影像科技有限公司 | A kind of fusion method for improving panoramic picture definition |
CN107135376A (en) * | 2017-05-26 | 2017-09-05 | 北京天拓灵域网络科技有限公司 | The real-time splicing processing method of multichannel ultrahigh resolution panoramic video |
Non-Patent Citations (2)
Title |
---|
YAN GONG: "Research and Analysis of Key Technologies in Image Mosaic", 《INTERNATIONAL JOURNAL OF SIGNAL PROCESSIN》 * |
谷雨: "结合最佳缝合线和多分辨率融合的图像拼接", 《中国图象图形学报》 * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109544455B (en) * | 2018-11-22 | 2023-05-02 | 重庆市勘测院 | Seamless fusion method for ultralong high-definition live-action long rolls |
CN109544455A (en) * | 2018-11-22 | 2019-03-29 | 重庆市勘测院 | A kind of overlength high-definition live-action long paper seamless integration method |
CN111242998B (en) * | 2018-11-28 | 2023-04-25 | 株式会社理光 | Image fusion method and device |
CN111242998A (en) * | 2018-11-28 | 2020-06-05 | 株式会社理光 | Image fusion method and device |
CN109600584A (en) * | 2018-12-11 | 2019-04-09 | 中联重科股份有限公司 | Method and device for observing tower crane, tower crane and machine readable storage medium |
CN109658334A (en) * | 2018-12-18 | 2019-04-19 | 北京易道博识科技有限公司 | A kind of ancient books image split-joint method and device |
CN109671045A (en) * | 2018-12-28 | 2019-04-23 | 广东美电贝尔科技集团股份有限公司 | A kind of more image interfusion methods |
CN110176040A (en) * | 2019-04-30 | 2019-08-27 | 惠州华阳通用电子有限公司 | A kind of panoramic looking-around system automatic calibration method |
CN110738608A (en) * | 2019-05-27 | 2020-01-31 | 首都师范大学 | plane image correction method and system |
CN110738608B (en) * | 2019-05-27 | 2022-02-25 | 首都师范大学 | Plane image correction method and system |
CN110572621A (en) * | 2019-09-26 | 2019-12-13 | 湖州南太湖智能游艇研究院 | Method for splicing panoramic video in real time |
CN111311359A (en) * | 2020-01-21 | 2020-06-19 | 杭州微洱网络科技有限公司 | Jigsaw method for realizing human shape display effect based on e-commerce image |
CN111292413A (en) * | 2020-02-24 | 2020-06-16 | 浙江大华技术股份有限公司 | Image model processing method and device, storage medium and electronic device |
CN111626933A (en) * | 2020-05-14 | 2020-09-04 | 湖南国科智瞳科技有限公司 | Accurate and rapid microscopic image splicing method and system |
CN111626933B (en) * | 2020-05-14 | 2023-10-31 | 湖南国科智瞳科技有限公司 | Accurate and rapid microscopic image stitching method and system |
CN112224132B (en) * | 2020-10-28 | 2022-04-19 | 武汉极目智能技术有限公司 | Vehicle panoramic all-around obstacle early warning method |
CN112224132A (en) * | 2020-10-28 | 2021-01-15 | 武汉极目智能技术有限公司 | Vehicle panoramic all-around obstacle early warning method |
CN112308985B (en) * | 2020-11-03 | 2024-02-02 | 豪威科技(武汉)有限公司 | Vehicle-mounted image stitching method, system and device |
CN112308985A (en) * | 2020-11-03 | 2021-02-02 | 豪威科技(武汉)有限公司 | Vehicle-mounted image splicing method, system and device |
CN113012047A (en) * | 2021-03-26 | 2021-06-22 | 广州市赋安电子科技有限公司 | Dynamic camera coordinate mapping establishing method and device and readable storage medium |
CN113724157A (en) * | 2021-08-11 | 2021-11-30 | 浙江大华技术股份有限公司 | Image blocking method, image processing method, electronic device, and storage medium |
CN113781373A (en) * | 2021-08-26 | 2021-12-10 | 云从科技集团股份有限公司 | Image fusion method, device and computer storage medium |
CN113781373B (en) * | 2021-08-26 | 2024-08-23 | 云从科技集团股份有限公司 | Image fusion method, device and computer storage medium |
CN114119410B (en) * | 2021-11-19 | 2022-04-22 | 航天宏康智能科技(北京)有限公司 | Method and device for correcting cells in distorted tabular image |
CN114119410A (en) * | 2021-11-19 | 2022-03-01 | 航天宏康智能科技(北京)有限公司 | Method and device for correcting cells in distorted tabular image |
CN115798400B (en) * | 2023-01-09 | 2023-04-18 | 永林电子股份有限公司 | LED display control method and device based on image processing and LED display system |
CN115798400A (en) * | 2023-01-09 | 2023-03-14 | 永林电子股份有限公司 | LED display control method and device based on image processing and LED display system |
CN116188275A (en) * | 2023-04-28 | 2023-05-30 | 杭州未名信科科技有限公司 | Single-tower crane panoramic image stitching method and system |
CN116188275B (en) * | 2023-04-28 | 2023-10-20 | 杭州未名信科科技有限公司 | Single-tower crane panoramic image stitching method and system |
CN116993591A (en) * | 2023-09-26 | 2023-11-03 | 中汽智联技术有限公司 | Image stitching fusion method for panoramic automobile, electronic equipment and medium |
CN116993591B (en) * | 2023-09-26 | 2024-01-02 | 中汽智联技术有限公司 | Image stitching fusion method for panoramic automobile, electronic equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108510445A (en) | A kind of Panorama Mosaic method | |
CN103763479B (en) | The splicing apparatus and its method of real time high-speed high definition panorama video | |
CN101276465B (en) | Method for automatically split-jointing wide-angle image | |
US7024053B2 (en) | Method of image processing and electronic camera | |
CN106791623A (en) | A kind of panoramic video joining method and device | |
TWI238006B (en) | Method for creating brightness filter and virtual space creation system | |
CN101931772B (en) | Panoramic video fusion method, system and video processing device | |
US8068693B2 (en) | Method for constructing a composite image | |
CN104618648B (en) | A kind of panoramic video splicing system and joining method | |
EP2765769A1 (en) | Image processing method and image processing device | |
CN105354796B (en) | Image processing method and system for auxiliary of driving a vehicle | |
CN201523430U (en) | Panoramic video monitoring system | |
JP2014212519A (en) | Stereoscopic panoramas | |
JP7093015B2 (en) | Panorama video compositing device, panoramic video compositing method, and panoramic video compositing program | |
CN107403408A (en) | A kind of double fish eye images spliced panoramic image seam fusion methods | |
CN108447021A (en) | The video scaling method optimized based on piecemeal and frame by frame | |
CN107203965A (en) | A kind of Panorama Mosaic method merged based on multichannel image | |
JP2019033474A (en) | Multi-sensor video camera, and method and processing pipeline for the same | |
JP2006515128A (en) | Stereo panoramic image capturing device | |
Liu et al. | Shape-optimizing and illumination-smoothing image stitching | |
Liu et al. | Head-size equalization for better visual perception of video conferencing | |
CN105078492A (en) | Correction method and device for sawtooth artifacts in digital breast mammary tomographic reconstruction | |
JP4008333B2 (en) | Multi-image projection method using a plurality of projectors, projector apparatus for using the method, program, and recording medium | |
CN107426561A (en) | The virtual reality live broadcasting method and device of a kind of 3D360 degree | |
CN110140148A (en) | In the method and apparatus that abutment joint carries out multiband mixing from the image that multiple cameras obtain |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180907 |