CN111292232B - Lens array image stitching method, device and storage medium - Google Patents
- Publication number: CN111292232B (application CN201811488704.2A)
- Authority
- CN
- China
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T3/18
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiments of the invention disclose a lens array image stitching method, a lens array image stitching device and a storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring an original image through the lens array and performing energy compensation on the sub-images formed by the sub-lenses; performing an image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity area; filtering out abrupt-change areas in the original image according to the maximum similarity area; scaling each sub-image; and stitching and fusing the scaled sub-images. By obtaining the similar areas of the sub-images, filtering and scaling the sub-images according to those areas and only then stitching them, distortion is avoided when the sub-images are stitched into a whole image, and the imaging quality of the lens array is improved.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a lens array image stitching method, a lens array image stitching device, and a storage medium.
Background
A lens array is an optical device comprising a plurality of sub-lenses arranged in an array; it is widely used in square cameras, compound-eye cameras and large-field-of-view microscopic cameras. Within the lens array, the sub-lenses are arranged in an ordered, equally spaced square/rectangular grid. Each sub-lens images the objects within its own field of view, and the image it forms is called a sub-image. The image map (the original image obtained by imaging through the lens array) therefore contains a plurality of sub-images in a single exposure, and the arrangement of the sub-images in the image map corresponds to the arrangement of the sub-lenses in the lens array. The task of lens array imaging is to process this image map comprising a plurality of sub-images, the final output being the complete image formed by stitching the sub-images together.
When image stitching is performed after lens array imaging, a circular area of suitable size, called an image circle, is usually selected from each sub-image for stitching; adjacent image circles produce a partial overlap area.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a lens array image stitching method, a lens array image stitching device and a storage medium, which solve the technical problem in the prior art that distortion occurring during sub-image stitching degrades the final imaging effect.
In a first aspect, the present invention provides a lens array image stitching method, where the lens array includes a plurality of sub-lenses arranged in an array, and the method includes:
acquiring an original image obtained through the lens array, and performing energy compensation on the sub-image formed by the sub-lenses;
performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity area;
filtering out abrupt-change areas in the original image according to the maximum similarity area; scaling each sub-image;
and carrying out splicing fusion on the sub-images after the scaling treatment.
Further, the energy compensation of the sub-image formed by the sub-lens includes:
determining the size of the sub-image pixels, and the number of lines and columns of sub-lenses;
and performing energy compensation on the vignetted area of the sub-image according to the gray values and RGB values of the surrounding image.
Further, the performing image similarity calculation on the sub-images formed by the adjacent sub-lenses, and obtaining the maximum similarity area includes:
obtaining a maximum similar pixel point matrix of the sub-image along the row direction and a maximum similar pixel point matrix of the sub-image along the column direction;
obtaining the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction;
and determining the maximum similar area according to the maximum similar pixel number.
Further, the acquiring the maximum similar pixel point matrix of the sub-image along the row direction and the maximum similar pixel point matrix of the sub-image along the column direction includes:
selecting any sub-lens, extracting the 2 × 2 block of local sub-lens images around it, and setting the maximum calculated row number and the maximum calculated column number;
respectively acquiring a row direction combination matrix and a column direction combination matrix of adjacent sub-images;
acquiring the maximum correlation coefficient of the row direction combined image according to the first appointed data of the row direction combined matrix; acquiring the maximum correlation coefficient of the column direction combined image according to the second designated data of the column direction combined matrix;
determining a position value corresponding to the maximum correlation coefficient of the combined image in the row direction, and storing the position value in a matrix of the pixel points with the maximum similarity in the row direction of the sub-image; and determining a position value corresponding to the maximum correlation coefficient of the combined image in the column direction, and storing the position value in a matrix of the pixel points similar to the maximum column direction of the sub-image.
Further, the acquiring the maximum similar pixel point matrix of the sub-image along the row direction and the maximum similar pixel point matrix of the sub-image along the column direction includes:
selecting any sub-lens, extracting the 2 × 2 block of local sub-lens images around it, and setting the maximum calculated row number and a deflectable pixel value; setting the size of the local sub-lens image;
acquiring the maximum correlation coefficient of the line direction combined image according to third appointed data of the adjacent sub-image line direction combined matrix;
and determining a position value corresponding to the maximum correlation coefficient of the combined image in the row direction, and storing the position value into a sub-image row direction maximum similar pixel point matrix and a sub-image row direction offset matrix.
Further, the performing image similarity calculation on the sub-images formed by the adjacent sub-lenses, and obtaining the maximum similarity area includes:
selecting adjacent image circles, wherein the image circles are circular areas;
determining a common area intersected by the circles;
respectively determining a first image circle radius and a second image circle radius according to the public area, and respectively extracting a first image circle matrix and a second image circle matrix;
calculating correlation coefficients corresponding to different moving pixel points according to the first image circle matrix and the second image circle matrix;
and taking the number of the offset pixel points corresponding to the maximum correlation coefficient as the maximum similarity area.
Further, the filtering out of abrupt-change areas in the original image and the scaling of each sub-image include:
median filtering is respectively carried out on the row-direction maximum similar pixel point matrix and the column-direction maximum similar pixel point matrix to obtain a row-direction filtering matrix and a column-direction filtering matrix;
determining the expansion coefficients of the sub-images in the up-down, left-right directions according to the row direction filtering matrix and the column direction filtering matrix;
and scaling the sub-image according to the four-direction expansion coefficients to obtain a scaled sub-image.
Further, the splicing and fusing of the sub-images includes:
clipping the scaled sub-image according to the original focusing sub-image size;
dividing the cut sub-image into a fixed area and a splicing area, and dividing the splicing area into two image fusion areas and four image fusion areas;
and respectively carrying out splicing fusion on the two image fusion areas and the four image fusion areas.
In another aspect, the present invention provides a lens array image stitching apparatus, including:
the compensation module is used for acquiring an original image obtained through the lens array and carrying out energy compensation on the sub-image formed by the sub-lenses;
the computing module is used for carrying out image similarity computation on the sub-images formed by the adjacent sub-lenses to obtain a maximum similarity area;
the scaling module is used for filtering out abrupt change areas in the original image through filtering according to the maximum similar area; scaling each sub-image;
and the splicing module is used for splicing and fusing the scaled sub-images.
In another aspect, the present invention further provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps of any of the above-described lens array image stitching methods.
With the lens array image stitching method, device and storage medium described above, the similar areas of the sub-images are obtained, the sub-images are filtered and scaled according to those similar areas and then stitched, so that distortion is avoided when the sub-images are stitched into a whole image and the imaging quality of the lens array is improved. In addition, since each sub-image formed by the lens array carries depth information, the stitched whole image also carries depth information, so that a full-depth stereoscopic image can be formed.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 shows a flowchart of a lens array image stitching method according to an embodiment of the present invention;
FIGS. 2a-2b show pictures before and after image energy compensation according to an embodiment of the present invention;
FIG. 2c shows another picture after sub-image compensation in FIG. 2a;
FIG. 3 shows a 2 × 2 block of local sub-lens images (1)(2)(3)(4) according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for obtaining a matrix of pixels with maximum similarity between sub-images along a row direction and a matrix of pixels with maximum similarity between sub-images along a column direction according to an embodiment of the present invention;
FIG. 5 is a flowchart of another method for obtaining a matrix of maximum similar pixels of a sub-image along a row direction and a matrix of maximum similar pixels of a sub-image along a column direction according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of acquiring similar regions according to an embodiment of the present invention;
FIG. 7 shows the number of similar pixel points up_pixel, right_pixel, down_pixel, and left_pixel in four directions per sub-image in an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating image scaling according to an example embodiment of the present invention;
FIG. 9 is a schematic view showing the partitioning of an image stitching region according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of image partitioning according to an example embodiment of the present invention;
FIG. 11 is a schematic diagram showing image interpolation region division according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of random pixel selection according to an embodiment of the invention;
FIG. 13 is a schematic diagram of interpolation regions in an embodiment of the invention;
fig. 14 shows a schematic view of a lens array image stitching device according to another embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention are described in detail below. To make the objects, technical solutions and advantages of the present invention more apparent, the invention is described in further detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely intended to illustrate the invention, not to limit it. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the invention by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises it.
Example 1
The invention provides a lens array image stitching method, wherein the lens array comprises a plurality of sub-lenses which are arranged in an array, as shown in fig. 1, the method comprises the following steps:
s101, acquiring an original image obtained through the lens array, and performing energy compensation on a sub-image formed by the sub-lenses;
in this step, a sub-lens image circle is selected according to the actual system design, and the Pixel size Pixel0_row is equal to Pixel0_col, wherein Pixel 0_row=pixel 0_col, and the number of sub-lenses seg_row and the number of sub-lenses seg_col are determined.
Energy compensation is then performed on each sub-image according to the gray values and RGB values of the image surrounding its vignetted area. The pictures before and after energy compensation are shown in FIG. 2a and FIG. 2b.
The sub-images may be compensated using square areas in fig. 2b or round areas in fig. 2 c.
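As a concrete illustration of this compensation step, the sketch below applies a simple radial gain to one sub-image. The quadratic falloff model and the `strength` parameter are assumptions made for illustration only, not the patent's actual compensation scheme (which works from the gray/RGB values around the vignetted area):

```python
import numpy as np

def compensate_vignetting(sub_img, strength=0.5):
    """Illustrative radial energy compensation for one sub-image.

    Assumption: vignetting falls off quadratically with the normalized
    distance from the sub-image centre, so the rim is brightened by a
    radially increasing gain.  This is a stand-in for the patent's scheme.
    """
    h, w = sub_img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # normalized radius, 0..1
    gain = 1.0 + strength * r ** 2                     # brighten the dim rim
    if sub_img.ndim == 3:                              # RGB: apply per channel
        gain = gain[..., None]
    return np.clip(sub_img * gain, 0, 255)
```

The center pixel is left unchanged (gain 1) while the corners receive the full `1 + strength` gain.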
S102, performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity area;
specifically, the method comprises the following steps:
s1021, acquiring a maximum similar pixel point matrix of the sub-image along the row direction and a maximum similar pixel point matrix of the sub-image along the column direction;
in one embodiment, the step includes:
S1, selecting any sub-lens, extracting the 2 × 2 block of local sub-lens images around it, and setting the maximum calculated row number and the maximum calculated column number;
s2, respectively acquiring a row direction combination matrix and a column direction combination matrix of adjacent sub-images;
s3, acquiring the maximum correlation coefficient of the row direction combined image according to the first appointed data of the row direction combined matrix; acquiring the maximum correlation coefficient of the column direction combined image according to the second designated data of the column direction combined matrix;
s4, determining a position value corresponding to the maximum correlation coefficient of the combined image in the row direction, and storing the position value in a matrix of the maximum similar pixel points in the row direction of the sub-image; and determining a position value corresponding to the maximum correlation coefficient of the combined image in the column direction, and storing the position value in a matrix of the pixel points similar to the maximum column direction of the sub-image.
In the following, the procedure is illustrated by selecting a sub-lens A (seg_ii, seg_jj), extracting the 2 × 2 block of local sub-lens images (1)(2)(3)(4) shown in FIG. 3, and setting the maximum calculated row number ii_max and the maximum calculated column number jj_max; the specific procedure is shown in FIG. 4.
S10, select a sub-lens A (seg_ii, seg_jj), extract the 2 × 2 block of local sub-lens images (1)(2)(3)(4) shown in FIG. 3, and set the maximum calculated row number ii_max and the maximum calculated column number jj_max.
The following takes the row-direction combined matrix as an example; the column direction is handled in the same way.
S11, the row-direction combination of images (1) and (2) is Com_img_up, of size Pixel0_row × 2Pixel0_col; the row-direction combination of images (3) and (4) is Com_img_dow, of size Pixel0_row × 2Pixel0_col.
S12, with ii = 1, extract the last ii rows of Com_img_up as matrix part_img_up, of size ii × 2Pixel0_col; extract the first ii rows of Com_img_dow as matrix part_img_dow, of size ii × 2Pixel0_col.
S13, calculate the correlation coefficient part_r of part_img_up and part_img_dow, and record it in the local coefficient matrix: part_corr_row(ii) = part_r.
S14, judge whether ii is equal to ii_max.
S15, when ii ≠ ii_max, set ii = ii + 1 and repeat steps S12 to S14.
S16, when ii = ii_max, find the position corresponding to the maximum correlation coefficient part_r_max in the matrix part_corr_row:
ii_get = find(part_corr_row == max(part_corr_row)); if the maximum corresponds to several positions, the middle one is taken. The result is stored in the sub-image row-direction maximum similar pixel point matrix: all_corr_row(seg_ii, seg_jj) = ii_get.
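Steps S10-S16 can be sketched in NumPy as follows, assuming the two row-direction combined images are already available as arrays; each candidate overlap depth ii correlates the last ii rows of the upper combined image with the first ii rows of the lower one, and ties at the maximum are broken by taking the middle position as in S16:

```python
import numpy as np

def corrcoef2(a, b):
    """Correlation coefficient r(X, Y) of two equal-sized matrices, flattened."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def max_similar_rows(com_img_up, com_img_dow, ii_max):
    """S12-S16: for ii = 1..ii_max, correlate the last ii rows of Com_img_up
    with the first ii rows of Com_img_dow; return the ii with the largest
    correlation (middle position on ties)."""
    part_corr_row = np.empty(ii_max)
    for ii in range(1, ii_max + 1):
        part_img_up = com_img_up[-ii:, :]    # last ii rows of the upper image
        part_img_dow = com_img_dow[:ii, :]   # first ii rows of the lower image
        part_corr_row[ii - 1] = corrcoef2(part_img_up, part_img_dow)
    hits = np.flatnonzero(part_corr_row == part_corr_row.max())
    return int(hits[len(hits) // 2]) + 1     # 1-based row count (ii_get)
```

With two sub-images sharing a 3-row overlap, the returned ii_get is 3.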
In another embodiment, the step comprises:
s21, selecting any sub-lens, extracting a local sub-lens image with the row number and the column number of 2 x 2, and setting the maximum calculated row number and a deflectable pixel point value; setting the size of a local sub-lens image;
s22, acquiring the maximum correlation coefficient of the line direction combined image according to third appointed data of the adjacent sub-image line direction combined matrix;
s23, determining a position value corresponding to the maximum correlation coefficient of the combined image in the row direction, and storing the position value into a sub-image row direction maximum similar pixel point matrix and a sub-image row direction offset matrix.
The execution of the above steps is illustrated by selecting a sub-lens (seg_ii, seg_jj), numbered (1), extracting the 2 × 2 block of local sub-lens images (1)(2)(3)(4), and setting the maximum calculated row number ii_max and the deflectable pixel value d_pixel0. See FIG. 5.
S31, select any sub-lens B, extract the 2 × 2 block of local sub-lens images (1)(2)(3)(4), and set the maximum calculated row number ii_max and the deflectable pixel value d_pixel0; the local sub-lens image size is Pixel0_row × Pixel0_col.
S32, extract from images (1) and (2): Mat_img1(end-ii+1:end, d_pixel0+1:end-d_pixel0) and Mat_img2(end-ii+1:end, d_pixel0+1:end-d_pixel0);
the combined matrix is recorded as part_img_up, of size ii × 2(Pixel0_col - 2d_pixel0);
initially ii = 1 and delta_pixel = -d_pixel0.
S33, extract from images (3) and (4):
Mat_img3(1:ii, (d_pixel0+1:end-d_pixel0)+delta_pixel),
Mat_img4(1:ii, (d_pixel0+1:end-d_pixel0)+delta_pixel);
the combined matrix is recorded as part_img_dow, of size ii × 2(Pixel0_col - 2d_pixel0).
S34, calculate the correlation coefficient part_r of part_img_up and part_img_dow, and record it in the local coefficient matrix: part_corr_row(ii, d_pixel0 + delta_pixel + 1) = part_r.
S35, judge whether delta_pixel is equal to d_pixel0.
S36, when delta_pixel ≠ d_pixel0, keep ii unchanged, set delta_pixel = delta_pixel + 1, and return to step S33.
When delta_pixel = d_pixel0, judge whether ii is equal to ii_max.
S37, when ii ≠ ii_max, set ii = ii + 1 and delta_pixel = -d_pixel0, and return to step S32.
When ii = ii_max, find the position corresponding to the maximum correlation coefficient part_r_max in the matrix part_corr_row:
(ii_get, col_get) = find(part_corr_row == max(part_corr_row)); save ii_get to the sub-image row-direction maximum similar pixel point matrix, all_corr_row(seg_ii, seg_jj) = ii_get, and col_get to the sub-image row-direction offset matrix, delta_corr_row(seg_ii, seg_jj) = col_get.
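A sketch of the offset-search variant in steps S31-S37; to keep the example short, the (1)(2)/(3)(4) image pairs are collapsed to a single upper/lower matrix pair (an assumption of this sketch). The overlap depth ii and the lateral offset delta_pixel in [-d_pixel0, d_pixel0] are searched jointly:

```python
import numpy as np

def max_similar_with_offset(mat_up, mat_dow, ii_max, d_pixel0):
    """S32-S37 (sketch): jointly search the overlap depth ii and a lateral
    offset delta in [-d_pixel0, d_pixel0].  The central columns of the upper
    matrix (d_pixel0 trimmed from each side) are correlated with the
    same-width window of the lower matrix shifted by delta."""
    ncols = mat_up.shape[1]
    best = (-2.0, 0, 0)                                  # (corr, ii, delta)
    for ii in range(1, ii_max + 1):
        up = mat_up[-ii:, d_pixel0:ncols - d_pixel0]     # last ii rows, trimmed
        for delta in range(-d_pixel0, d_pixel0 + 1):
            dow = mat_dow[:ii, d_pixel0 + delta:ncols - d_pixel0 + delta]
            r = np.corrcoef(up.ravel(), dow.ravel())[0, 1]
            if r > best[0]:
                best = (r, ii, delta)
    return best[1], best[2]                              # (ii_get, column offset)
```

For a lower matrix whose first two rows equal the upper matrix's last two rows shifted right by one column, the search recovers overlap 2 and offset 1.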
s1022, obtaining the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction;
firstly, calculating correlation coefficients of the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction, wherein the correlation coefficients can be specifically calculated by the following formula:
where r (X, Y) represents the correlation coefficient of matrix X and matrix Y, cov (X, Y) represents the covariance between matrix X and matrix Y, var (X) and Var (Y) represent the variances of X and Y, respectively.
And then, determining the maximum similar pixel point according to the correlation coefficient of the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction of each sub-image.
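The correlation coefficient described above can be computed directly from the covariance and variances of the flattened matrices; the normalization factors (n versus n-1) cancel in the ratio, so the result agrees with NumPy's built-in `np.corrcoef`, as this small sketch checks:

```python
import numpy as np

def corr(X, Y):
    """r(X, Y) = Cov(X, Y) / sqrt(Var(X) * Var(Y)) on flattened matrices."""
    x = np.asarray(X, float).ravel()
    y = np.asarray(Y, float).ravel()
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return cov / np.sqrt(x.var() * y.var())          # population variances
```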
S1023, determining the maximum similar area according to the maximum similar pixel point number.
In another embodiment, the step comprises:
s41, selecting adjacent image circles, wherein the image circles are circular areas.
S42, determining a common area intersected by the image circle;
s43, respectively determining a first image circle radius and a second image circle radius according to the public area, and respectively extracting a first image circle matrix and a second image circle matrix.
S44, calculating correlation coefficients corresponding to different moving pixel points according to the first image circle matrix and the second image circle matrix.
S45, taking the number of the offset pixel points corresponding to the maximum correlation coefficient as the maximum similarity area.
Specifically, a similar area is determined according to the optical design principle: two image circle images are chosen to intersect to different extents along their common direction, as shown in FIG. 6, where the effective area of image circle 1 is a disc of radius R1 with extraction matrix Mat1, and the effective area of image circle 2 is a disc of radius R2 with extraction matrix Mat2. The correlation coefficients corresponding to different shifts are calculated in turn, and finally the shift (in pixels) corresponding to the maximum correlation coefficient is taken as the maximum similarity area.
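An illustrative sketch of steps S41-S45 for two horizontally adjacent image circles: for each candidate shift, the right strip of circle 1 is correlated with the left strip of circle 2 over the pixels where both circular masks overlap. The strip geometry and the minimum-overlap guard are assumptions of this sketch, not details fixed by the patent:

```python
import numpy as np

def circle_mask(h, w, radius):
    """Boolean disc of the given radius centred in an h x w frame."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

def best_circle_overlap(img1, img2, radius, shifts):
    """S41-S45 (sketch): correlate the right strip of image circle 1 with the
    left strip of image circle 2 for each candidate shift and return the
    shift (in pixels) with the largest correlation coefficient."""
    h, w = img1.shape
    m = circle_mask(h, w, radius)
    best_r, best_s = -2.0, None
    for s in range(1, shifts + 1):
        common = m[:, w - s:] & m[:, :s]     # inside both circular strips
        a = img1[:, w - s:][common]
        b = img2[:, :s][common]
        if a.size < 8 or a.std() == 0 or b.std() == 0:
            continue                          # too little overlap to correlate
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_r, best_s = r, s
    return best_s
```

With a true 4-pixel overlap between the two circles, the search returns 4.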
A particular advantage of this embodiment is that, compared with a square sub-image, the circular sub-image element carries more image information, which benefits both the image similarity calculation and the image fusion.
S103, filtering out abrupt-change areas in the original image according to the maximum similarity area, and scaling each sub-image;
specifically, the method comprises the following steps:
s1031, respectively carrying out median filtering on the row-direction maximum similar pixel point matrix and the column-direction maximum similar pixel point matrix to obtain a row-direction filtering matrix and a column-direction filtering matrix;
s1032, determining the expansion coefficients of the sub-image in the up-down, left-right directions according to the row direction filtering matrix and the column direction filtering matrix;
specifically, it is known that the number of Pixel points fix_pixel (2 x (focus_pixel+fixed_pixel) +1 or 2 x (focus_pixel+fixed_pixel) of the repeated Pixel focus_pixel and the single-side unrepeated information in the row-column direction of the individual sub-lenses of the sub-lens image line column size Pixel0_row×pixel0_col, the sub-image size is odd or even, the similar number of Pixel points up_pixel, right_pixel, down_pixel, and left_pixel in four directions of each sub-image as shown in fig. 7 can be obtained from the row direction filter matrix all_filter_row and the column direction filter matrix all_filter_col, and the four-direction expansion coefficients up_par, right_par, down_par, and left_par are calculated from the focus plane data, wherein:
up_par=(focus_pixel+fixed_pixel-up_pixel)/fixed_pixel;
right_par=(focus_pixel+fixed_pixel-right_pixel)/fixed_pixel;
down_par=(focus_pixel+fixed_pixel-down_pixel)/fixed_pixel;
left_par=(focus_pixel+fixed_pixel-left_pixel)/fixed_pixel;
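A sketch of S1031-S1032: a 3 × 3 median filter (the neighborhood size is an assumption here, implemented directly in NumPy) smooths outliers out of the max-similar-pixel matrices, and the four expansion coefficients then follow the formulas above exactly:

```python
import numpy as np

def median_filter3(mat):
    """3x3 median filter with edge replication, applied to the max-similar
    pixel matrices to remove abrupt outliers (S1031)."""
    p = np.pad(mat, 1, mode="edge")
    h, w = mat.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def expansion_coeffs(up_pixel, right_pixel, down_pixel, left_pixel,
                     focus_pixel, fixed_pixel):
    """Four-direction expansion coefficients, following the formulas above."""
    def par(p):
        return (focus_pixel + fixed_pixel - p) / fixed_pixel
    return par(up_pixel), par(right_pixel), par(down_pixel), par(left_pixel)
```

A single outlier in an otherwise constant matrix is replaced by the neighborhood value, and equal similar-pixel counts yield equal coefficients in all four directions.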
and S1033, scaling the sub-image according to the telescopic coefficients in the four directions to obtain a scaled sub-image.
Illustratively, as shown in FIG. 8, consider a point A on the scaled sub-image; the line from the sub-image center O to A is OA, and A is restored at the point A' of the original sub-image. The angle between OA and the horizontal line OR is θ, and the original scaling factor along OA can be obtained from right_par and up_par. From the position of A', the 4 adjacent original sub-image pixels P1, P2, P3 and P4 and the corresponding pixel-center distances d1, d2, d3 and d4 are obtained. Weights are calculated from these distances, a shorter distance giving a larger weight, and the weights W1, W2, W3 and W4 of pixels P1, P2, P3 and P4 are obtained in turn.
Finally, the pixel value of point A is calculated from these weights as A_RGB = P1_RGB·W1 + P2_RGB·W2 + P3_RGB·W3 + P4_RGB·W4. The source position of each point of the scaled image is computed in turn and its value calculated, yielding the final scaled sub-image seg_scale_img0.
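The inverse mapping of FIG. 8 can be sketched as below for a grayscale sub-image. As assumptions of this sketch, a single per-axis scale factor stands in for the four directional coefficients, and a small epsilon avoids division by zero when A' lands exactly on a source pixel; the inverse-distance weighting of the four neighbors P1-P4 follows the description above:

```python
import numpy as np

def scale_subimage(img, row_par, col_par):
    """Inverse-mapping scale (sketch): each output pixel A is mapped back
    toward the sub-image centre O to a source point A', then set to an
    inverse-distance-weighted blend of the four surrounding source pixels."""
    h, w = img.shape
    out = np.empty((h, w))
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    for r in range(h):
        for c in range(w):
            sy = cy + (r - cy) / row_par           # source row of A'
            sx = cx + (c - cx) / col_par           # source column of A'
            y0 = int(np.clip(np.floor(sy), 0, h - 2))
            x0 = int(np.clip(np.floor(sx), 0, w - 2))
            ys = [y0, y0, y0 + 1, y0 + 1]          # the four neighbors P1..P4
            xs = [x0, x0 + 1, x0, x0 + 1]
            d = np.array([np.hypot(sy - y, sx - x) for y, x in zip(ys, xs)])
            wgt = 1.0 / (d + 1e-9)                 # closer pixels weigh more
            wgt /= wgt.sum()
            out[r, c] = sum(g * img[y, x] for g, y, x in zip(wgt, ys, xs))
    return out
```

A constant image stays constant under any scale factor, and unit factors reproduce the input.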
S104, splicing and fusing the sub-images after the scaling treatment.
Specifically, the method comprises the following steps:
s1041, cutting the scaled sub-image according to the original focusing sub-image size;
specifically, the scaled sub-image seg_scale_img0 is cut according to the original focusing sub-image size, the center is reserved, the edges are cut, and finally the corresponding seg_scale_img can be obtained.
S1042, dividing the cut sub-image into a fixed area and a splicing area, and dividing the splicing area into two image fusion areas and four image fusion areas;
specifically, the cut single-frame subgraph is divided into a central fixed area and an edge splicing area. The fixed area is the similar part in the adjacent sub-image, which is marked as fixed_pixel, and the splicing area is the similar part in the adjacent sub-image, which is marked as focus_pixel. Further, as shown in fig. 9, the stitching region is divided into two image fusion regions (i.e., up-down-left-right fusion) and four image fusion regions (four corner fusion),
S1043, respectively performing splicing fusion on the two-image fusion areas and the four-image fusion areas.
Specifically, the two-image fusion areas and the four-image fusion areas can be spliced and fused using a Haar wavelet algorithm; the present invention does not limit the specific image fusion algorithm.
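Since the fusion algorithm is left open, the sketch below shows one plausible Haar-wavelet fusion of two overlapping patches: average the low-frequency (approximation) band and keep the larger-magnitude detail coefficients. It assumes grayscale patches with even height and width; the function names are illustrative:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar decomposition of a patch with even height/width."""
    a = (x[0::2] + x[1::2]) / 2          # row-pair averages
    d = (x[0::2] - x[1::2]) / 2          # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation band
    hl = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    lh = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return ll, hl, lh, hh

def ihaar2(ll, hl, lh, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def haar_fuse(img1, img2):
    """Fuse two overlapping patches: average the approximation band,
    keep the larger-magnitude detail coefficients."""
    c1, c2 = haar2(img1.astype(float)), haar2(img2.astype(float))
    fused = [(c1[0] + c2[0]) / 2]
    for b1, b2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return ihaar2(*fused)
```

In practice a library such as PyWavelets could replace the hand-rolled transform; multi-level decompositions and other coefficient-selection rules are equally valid under the patent's wording.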
According to the lens array image stitching method described above, the similar areas of the sub-images are obtained, and the sub-images are filtered and scaled according to these similar areas and then spliced, so that distortion is avoided when the sub-images are spliced into a whole image and the imaging quality of the lens array is improved.
Example two
The present invention provides another lens array image stitching method, which differs from the first embodiment in that step S103 is replaced by the following step S103'.
And S103', carrying out interpolation processing on each sub-image according to the maximum similarity area.
Specifically, the method comprises the following steps:
S51, obtaining the maximum relevant pixel number of the sub-image at the focal plane;
S52, sequentially calculating the interpolation coefficients in all directions;
and S53, obtaining the interpolated sub-image according to the interpolation coefficients in all directions.
In one specific embodiment, the maximum similar pixel number and the interpolation coefficients can be obtained by block-wise region extraction. Specifically, a single sub-image is partitioned into four areas, as shown in fig. 10, and the a-priori repeated blocks are obtained from the theoretical design of the system, namely the areas on the two sides of adjacent image circles, such as area (4) of image circle 1 and area (1) of image circle 2, or area (1) of image circle 1 and area (4) of image circle 2. The maximum similar pixel points and interpolation coefficients in the 8 directions of the sub-image can then be obtained in the same way as the similarity values of a single sub-image are taken, as shown in fig. 11. This approach better distinguishes image circles in which near and distant scene content meet, and improves the accuracy of the system algorithm.
In another embodiment, the maximum similar pixel number and the interpolation coefficients can be obtained by random interval sampling. Specifically, for the obtained adjacent sub-images, the direction in which similarity exists is determined from prior knowledge; a pixel point matrix with the required numbers of rows and columns is randomly selected in one sub-image, and a pixel point matrix with the same positional relationship is selected at the symmetric position of the other image. As shown in fig. 12, the black area is the random selection area (several points are randomly selected within a 13×3 matrix), and the correlation coefficient of the two candidate matrices is calculated. Moreover, the correlation coefficients of several groups of random samples can be calculated within one row-column matrix, improving the accuracy of the image correlation under that matrix. Finally, the pixel points with the best similarity are selected to design the interpolation coefficients in each direction, as shown in fig. 13.
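A sketch of the random-sampling correlation step is given below (the 13×3 matrix size follows fig. 12; the function name, seed, and sample count are illustrative):

```python
import numpy as np

def sampled_correlation(patch1, patch2, n_samples, seed=0):
    """Estimate the correlation of two candidate similar regions from the
    same randomly chosen pixel positions in both patches."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(patch1.size, size=n_samples, replace=False)
    s1 = patch1.ravel()[idx].astype(float)
    s2 = patch2.ravel()[idx].astype(float)
    return np.corrcoef(s1, s2)[0, 1]

# Example: a 13x3 candidate matrix compared with a brightness-scaled copy
p1 = np.arange(39).reshape(13, 3)
r = sampled_correlation(p1, 2 * p1 + 5, 10)
```

Averaging the result over several random draws, as the embodiment suggests, reduces the variance of the estimate.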
Based on the above-described method embodiments, the following device embodiments are presented.
Example III
The third embodiment of the present invention provides a lens array image stitching device based on the first and second embodiments. As shown in fig. 14, the lens array image stitching device 8 includes:
a compensation module 81, configured to acquire an original image obtained by the lens array, and perform energy compensation on a sub-image formed by the sub-lens;
the calculating module 82 is configured to perform image similarity calculation on sub-images formed by adjacent sub-lenses, so as to obtain a maximum similarity area;
a scaling module 83, configured to filter out abrupt change regions in the original image by filtering according to the maximum similarity region, and to scale each sub-image;
and the stitching module 84 is configured to stitch and fuse the sub-images.
In addition, the lens array image stitching method of the embodiments of the present invention described in connection with the first and second embodiments may be implemented by a lens array image stitching device.
The lens array stitching device may include a processor and a memory storing computer program instructions.
In particular, the processor may comprise a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
The memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a universal serial bus (Universal Serial Bus, USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a non-volatile solid-state memory. In a particular embodiment, the memory includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor reads and executes the computer program instructions stored in the memory to implement any of the lens array image stitching methods of the above embodiments.
In one example, the lens array image stitching device may also include a communication interface and a bus. The processor, the memory and the communication interface are connected through a bus and complete communication with each other.
The communication interface is mainly used for realizing communication among the modules, the devices, the units and/or the equipment in the embodiment of the invention.
The bus includes hardware, software, or both, and couples the components of the lens array image stitching device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of the above. The bus may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
Example IV
In combination with the lens array image stitching method in the above embodiments, an embodiment of the present invention further provides a computer readable storage medium. The computer readable storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement the lens array image stitching method of any one of the first and second embodiments described above.
According to the lens array image stitching method, device and storage medium, the similar areas of the sub-images are obtained, and the sub-images are filtered and scaled according to these similar areas and then spliced, so that distortion is avoided when the sub-images are spliced into a whole image and the imaging quality of the lens array is improved.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.
Claims (8)
1. A lens array image stitching method, wherein the lens array includes a plurality of sub-lenses arranged in an array, the method comprising:
acquiring an original image obtained through the lens array, and performing energy compensation on the sub-image formed by the sub-lenses;
performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity area;
filtering abrupt change areas in the original image through filtering according to the maximum similar area; scaling each sub-image;
splicing and fusing the scaled sub-images;
the image similarity calculation is performed on the sub-images formed by the adjacent sub-lenses, and the obtaining of the maximum similarity area includes:
obtaining a maximum similar pixel point matrix of the sub-image along the row direction and a maximum similar pixel point matrix of the sub-image along the column direction;
obtaining the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction;
determining the maximum similar area according to the maximum similar pixel number;
filtering abrupt change areas in the original image through filtering according to the maximum similar area; scaling each sub-image includes:
median filtering is respectively carried out on the row-direction maximum similar pixel point matrix and the column-direction maximum similar pixel point matrix to obtain a row-direction filtering matrix and a column-direction filtering matrix;
determining the expansion coefficients of the sub-images in the up-down, left-right directions according to the row direction filtering matrix and the column direction filtering matrix;
and scaling the sub-image according to the four-direction expansion coefficients to obtain a scaled sub-image.
2. The lens array image stitching method according to claim 1, wherein the acquiring the maximum similar pixel point matrix of the sub-image along the row direction and the maximum similar pixel point matrix of the sub-image along the column direction includes:
selecting any sub-lens, extracting a local sub-lens image with the row number and the column number of 2 x 2, and setting the maximum calculated row number and the maximum calculated column number;
respectively acquiring a row direction combination matrix and a column direction combination matrix of adjacent sub-images;
acquiring the maximum correlation coefficient of the row direction combined image according to the first appointed data of the row direction combined matrix; acquiring the maximum correlation coefficient of the column direction combined image according to the second designated data of the column direction combined matrix;
determining a position value corresponding to the maximum correlation coefficient of the combined image in the row direction, and storing the position value in a matrix of the pixel points with the maximum similarity in the row direction of the sub-image; and determining a position value corresponding to the maximum correlation coefficient of the combined image in the column direction, and storing the position value in a matrix of the pixel points similar to the maximum column direction of the sub-image.
3. The lens array image stitching method according to claim 1, wherein the acquiring the maximum similar pixel point matrix of the sub-image along the row direction and the maximum similar pixel point matrix of the sub-image along the column direction includes:
selecting any sub-lens, extracting a local sub-lens image with the row number and the column number of 2 x 2, and setting the maximum calculated row number and a deflectable pixel point value; setting the size of a local sub-lens image;
acquiring the maximum correlation coefficient of the line direction combined image according to third appointed data of the adjacent sub-image line direction combined matrix;
and determining a position value corresponding to the maximum correlation coefficient of the combined image in the row direction, and storing the position value into a sub-image row direction maximum similar pixel point matrix and a sub-image row direction offset matrix.
4. A lens array image stitching method, wherein the lens array includes a plurality of sub-lenses arranged in an array, the method comprising:
acquiring an original image obtained through the lens array, and performing energy compensation on the sub-image formed by the sub-lenses;
performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity area;
filtering abrupt change areas in the original image through filtering according to the maximum similar area; scaling each sub-image;
splicing and fusing the scaled sub-images;
the image similarity calculation is performed on the sub-images formed by the adjacent sub-lenses, and the obtaining of the maximum similarity area includes:
selecting adjacent image circles, wherein the image circles are circular areas;
determining a common area intersected by the circles;
respectively determining a first image circle radius and a second image circle radius according to the public area, and respectively extracting a first image circle matrix and a second image circle matrix;
calculating correlation coefficients corresponding to different moving pixel points according to the first image circle matrix and the second image circle matrix;
and taking the number of the offset pixel points corresponding to the maximum correlation coefficient as the maximum similarity area.
5. The lens array image stitching method according to any one of claims 1-4, wherein the energy compensating the sub-images imaged by the sub-lenses comprises:
determining the size of the sub-image pixels, and the number of lines and columns of sub-lenses;
and performing energy compensation on the vignetting area of the sub-image according to the gray values and RGB values of the image surrounding the sub-image.
6. The lens array image stitching method according to claim 5, wherein the stitching fusion of the sub-images comprises:
clipping the scaled sub-image according to the original focusing sub-image size;
dividing the cut sub-image into a fixed area and a splicing area, and dividing the splicing area into two image fusion areas and four image fusion areas;
and respectively carrying out splicing fusion on the two image fusion areas and the four image fusion areas.
7. A lens array image stitching device, wherein the lens array includes a plurality of sub-lenses arranged in an array, the device comprising:
the compensation module is used for acquiring an original image obtained through the lens array and carrying out energy compensation on the sub-image formed by the sub-lenses;
the computing module is used for carrying out image similarity computation on the sub-images formed by the adjacent sub-lenses to obtain a maximum similarity area;
the scaling module is used for filtering out abrupt change areas in the original image through filtering according to the maximum similar area; scaling each sub-image;
the splicing module is used for splicing and fusing the scaled sub-images;
wherein, the method for obtaining the maximum similarity region comprises the following steps:
obtaining a maximum similar pixel point matrix of the sub-image along the row direction and a maximum similar pixel point matrix of the sub-image along the column direction; obtaining the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction; determining the maximum similar area according to the maximum similar pixel number; filtering out abrupt change areas in the original image by filtering according to the maximum similar area; scaling each sub-image includes: median filtering is respectively carried out on the row-direction maximum similar pixel point matrix and the column-direction maximum similar pixel point matrix to obtain a row-direction filtering matrix and a column-direction filtering matrix; determining the expansion coefficients of the sub-images in the up-down, left-right directions according to the row direction filtering matrix and the column direction filtering matrix; and scaling the sub-image according to the four-direction expansion coefficients to obtain a scaled sub-image.
8. A computer readable storage medium storing one or more programs executable by one or more processors to perform the steps of the lens array image stitching method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811488704.2A CN111292232B (en) | 2018-12-06 | 2018-12-06 | Lens array image stitching method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111292232A CN111292232A (en) | 2020-06-16 |
CN111292232B true CN111292232B (en) | 2023-08-15 |
Family
ID=71024848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811488704.2A Active CN111292232B (en) | 2018-12-06 | 2018-12-06 | Lens array image stitching method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111292232B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04195811A (en) * | 1990-11-28 | 1992-07-15 | Hitachi Ltd | Magnetic image sensor |
CN1203490A (en) * | 1997-06-12 | 1998-12-30 | 惠普公司 | Image processing method and device |
JP2008276301A (en) * | 2007-04-25 | 2008-11-13 | Fujifilm Corp | Image processing apparatus, method and program |
JP2009224982A (en) * | 2008-03-14 | 2009-10-01 | Sony Corp | Image processing apparatus, image processing program, and display apparatus |
CN105812625A (en) * | 2014-12-30 | 2016-07-27 | 深圳超多维光电子有限公司 | Micro lens array imaging device and imaging method |
CN105842888A (en) * | 2016-05-31 | 2016-08-10 | 成都微晶景泰科技有限公司 | Quick focusing optical element and imaging device |
CN108062007A (en) * | 2016-11-07 | 2018-05-22 | 俞庆平 | A kind of method for improving photoetching energy uniformity and improving splicing |
CN108471513A (en) * | 2018-03-28 | 2018-08-31 | 国网辽宁省电力有限公司信息通信分公司 | Video fusion method, apparatus and server |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0216641D0 (en) * | 2002-07-18 | 2002-08-28 | Univ Nottingham | Image analysis method, apparatus and software |
US7593597B2 (en) * | 2003-08-06 | 2009-09-22 | Eastman Kodak Company | Alignment of lens array images using autocorrelation |
US7251078B2 (en) * | 2004-01-21 | 2007-07-31 | Searete, Llc | Image correction using a microlens array as a unit |
US9052759B2 (en) * | 2007-04-11 | 2015-06-09 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Dynamically reconfigurable pixel array for optical navigation |
US9554107B2 (en) * | 2014-02-07 | 2017-01-24 | Sony Corporation | Method and apparatus for reducing color fringing in composite images |
JP6214457B2 (en) * | 2014-04-18 | 2017-10-18 | キヤノン株式会社 | Image processing method, image processing apparatus, imaging apparatus, image processing program, and storage medium |
- 2018-12-06 CN CN201811488704.2A patent/CN111292232B/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||