CN111292232A - Lens array image splicing method and device and storage medium

Info

Publication number
CN111292232A
CN111292232A (application CN201811488704.2A; granted as CN111292232B)
Authority
CN
China
Prior art keywords
image
sub
maximum
matrix
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811488704.2A
Other languages
Chinese (zh)
Other versions
CN111292232B (en)
Inventor
王金
叶茂
王起飞
曾俊
Current Assignee
Chengdu Microlcl Technology Co ltd
Original Assignee
Chengdu Microlcl Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Microlcl Technology Co ltd filed Critical Chengdu Microlcl Technology Co ltd
Priority to CN201811488704.2A
Publication of CN111292232A
Application granted
Publication of CN111292232B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention disclose a lens array image splicing method and device and a storage medium, in the technical field of image processing. The method comprises the following steps: acquiring the original image produced by the lens array and performing energy compensation on the sub-images formed by the sub-lenses; computing image similarity between sub-images formed by adjacent sub-lenses to obtain the maximum similarity region; filtering out abrupt-change regions in the original image according to the maximum similarity region; scaling each sub-image; and splicing and fusing the scaled sub-images. By obtaining the similar regions of the sub-images, and splicing the sub-images only after filtering and scaling them according to those similar regions, the invention avoids distortion when the sub-images are spliced into the whole image and improves the imaging quality of the lens array.

Description

Lens array image splicing method and device and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a lens array image splicing method, a lens array image splicing device and a storage medium.
Background
A lens array is an optical device consisting of multiple sub-lenses arranged in an array; it is widely used in square cameras, compound-eye cameras and large-field-of-view microscope cameras. The sub-lenses are arranged in the lens array in an ordered, equally spaced square/rectangular grid. Each sub-lens images the objects within its own field of view, and the image it produces is called a sub-image. Hence the image map obtained in one exposure through the lens array (the original image) contains multiple sub-images, whose arrangement corresponds to the arrangement of the sub-lenses in the array. The task of lens array imaging is to process this image map containing multiple sub-images, and the final output is a complete image formed by stitching the sub-images together.
When images are spliced after lens array imaging, a circular area of suitable size, also called an image circle, is usually selected from each sub-image for the splicing. Adjacent image circles partially overlap, and how the images in the overlapping areas are processed so that the image transitions naturally between adjacent image circles without distortion is a key step affecting the imaging quality of the lens array.
Disclosure of Invention
In view of this, embodiments of the present invention provide a lens array image stitching method, a lens array image stitching device, and a storage medium, which solve the technical problem in the prior art that distortion during sub-image stitching degrades the final imaging effect.
In a first aspect, the present invention provides a lens array image stitching method, where the lens array includes a plurality of sub-lenses arranged in an array, the method includes:
acquiring an original image obtained by the lens array, and performing energy compensation on the sub-image formed by the sub-lens;
performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity region;
filtering out abrupt-change regions in the original image by filtering according to the maximum similarity region; scaling each sub-image;
and splicing and fusing the sub-images after the zooming processing.
Further, the energy compensation of the sub-image formed by the sub-lenses comprises:
determining the pixel size of the sub-image, and the row number and the column number of the sub-lenses;
and performing energy compensation on the vignetting area of the sub-image according to the image gray values and RGB values around the sub-image.
Further, the performing image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain the maximum similarity region includes:
acquiring a maximum similar pixel point matrix of the subimage along the row direction and a maximum similar pixel point matrix of the subimage along the column direction;
acquiring the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction;
and determining the maximum similar area according to the number of the maximum similar pixels.
Further, the acquiring the maximum similar pixel point matrix of the subimage along the row direction and the maximum similar pixel point matrix of the subimage along the column direction includes:
selecting any sub-lens, extracting a local sub-lens image with the row number and the column number of 2 x 2, and setting the maximum calculation row number and the maximum calculation column number;
respectively acquiring a row direction combination matrix and a column direction combination matrix of adjacent sub-images;
acquiring the maximum correlation coefficient of the row direction combined image according to the first specified data of the row direction combined matrix; acquiring the maximum correlation coefficient of the column direction combined image according to second specified data of the column direction combined matrix;
determining a position value corresponding to the maximum correlation coefficient of the row direction combined image, and storing the position value into a maximum similar pixel point matrix of the sub-image in the row direction; and determining a position value corresponding to the maximum correlation coefficient of the column direction combined image, and storing the position value into the maximum similar pixel point matrix of the sub-image in the column direction.
Further, the acquiring the maximum similar pixel point matrix of the subimage along the row direction and the maximum similar pixel point matrix of the subimage along the column direction includes:
selecting any sub-lens, extracting a local sub-lens image with the row and column number of 2 x 2, and setting the maximum calculation row number and the value of a deflectable pixel point; setting the image size of a local sub-lens;
acquiring the maximum correlation coefficient of the row direction combined image according to the third specified data of the row direction combined matrix of the adjacent sub-images;
and determining a position value corresponding to the maximum correlation coefficient of the row direction combined image, and storing the position value into the maximum similar pixel point matrix of the sub-image row direction and the sub-image row direction offset matrix.
Further, the performing image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain the maximum similarity region includes:
selecting adjacent image circles, wherein the image circles are circular areas;
determining a common area where the image circles intersect;
respectively determining the radius of a first image circle and the radius of a second image circle according to the public area, and respectively extracting a first image circle matrix and a second image circle matrix;
calculating correlation coefficients corresponding to different moving pixel points according to the first image circle matrix and the second image circle matrix;
and taking the number of the deviated pixel points corresponding to the maximum correlation coefficient as the maximum similar area.
Further, the filtering out of abrupt-change regions in the original image and the scaling of each sub-image comprise:
performing median filtering on the row direction maximum similar pixel point matrix and the column direction maximum similar pixel point matrix respectively to obtain a row direction filtering matrix and a column direction filtering matrix;
determining expansion coefficients of the sub-image in the upper, lower, left and right directions according to the row direction filter matrix and the column direction filter matrix;
and zooming the sub-image according to the expansion coefficients in the four directions to obtain a zoomed sub-image.
Further, the splicing and fusing the sub-images comprises:
cutting the zoom subimage according to the size of the original focusing subimage;
dividing the cut sub-images into a fixed area and a splicing area, and dividing the splicing area into two image fusion areas and four image fusion areas;
and respectively splicing and fusing the two image fusion areas and the four image fusion areas.
In another aspect, the present invention further provides a lens array image stitching apparatus, including:
the compensation module is used for acquiring an original image obtained by the lens array and performing energy compensation on the sub-image formed by the sub-lenses;
the calculating module is used for carrying out image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain a maximum similarity region;
the scaling module is used for filtering out abrupt-change areas in the original image according to the maximum similarity region and for scaling each sub-image;
and the splicing module is used for splicing and fusing the sub-images subjected to the zooming processing.
In another aspect, the present invention further provides a computer readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of any of the lens array image stitching methods described above.
According to the lens array image splicing method and device and the storage medium, the similar areas of the sub-images are obtained, and the sub-images are spliced after filtering and scaling based on these similar areas, so that no distortion occurs when the sub-images are spliced into the whole image and the imaging quality of the lens array is improved. In addition, because the sub-images formed by the lens array carry depth information, the spliced whole image also carries that depth information, so a full-depth-of-field stereoscopic image can be formed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a lens array image stitching method provided by an embodiment of the invention;
FIGS. 2 a-2 b show pictures before and after image energy compensation according to an embodiment of the present invention;
FIG. 2c shows another picture of FIG. 2a after sub-image compensation;
fig. 3 shows a local sub-lens image ①②③④ with a row/column count of 2 x 2 according to an embodiment of the invention;
fig. 4 is a flowchart illustrating a method for obtaining a maximum similar pixel point matrix of a sub-image along a row direction and a maximum similar pixel point matrix of the sub-image along a column direction according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating another method for obtaining a maximum similar pixel point matrix of a sub-image along a row direction and a maximum similar pixel point matrix of the sub-image along a column direction according to the embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating acquisition of similar regions according to an embodiment of the present invention;
FIG. 7 illustrates the numbers of similar pixel points up_pixel, right_pixel, down_pixel and left_pixel in the four directions of each sub-image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating image scaling according to an example embodiment of the invention;
FIG. 9 is a schematic diagram illustrating the partitioning of an image stitching region according to an exemplary embodiment of the present invention;
FIG. 10 is a diagram illustrating image partitioning in accordance with an exemplary embodiment of the present invention;
FIG. 11 is a diagram illustrating image interpolation region division according to an embodiment of the present invention;
FIG. 12 is a diagram illustrating random pixel selection according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of an interpolation region in an embodiment of the invention;
FIG. 14 is a schematic diagram of a lens array image stitching device according to another embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Example one
The invention provides a lens array image splicing method, wherein the lens array comprises a plurality of sub lenses arranged in an array, as shown in figure 1, the method comprises the following steps:
s101, acquiring an original image obtained through the lens array, and performing energy compensation on sub-images formed by the sub-lenses;
in this step, a Pixel size Pixel0_ row × Pixel0_ col of a sub-mirror circle is selected according to an actual system design, where Pixel0_ row is Pixel0_ col, and the row number seg _ row and the column number seg _ col of the sub-lens are determined.
Energy compensation is then applied to the vignetting area of each sub-image according to the surrounding image gray values and RGB values. Pictures before and after energy compensation are shown in fig. 2a and 2b.
The sub-image may be compensated using the square area in fig. 2b, or using the circular area in fig. 2c.
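As an illustration of this step, the sketch below applies a simple radial gain to one sub-image. The quadratic falloff model and the `strength` parameter are assumptions made for illustration, since the patent only states that compensation is driven by the gray/RGB values around the sub-image:

```python
import numpy as np

def compensate_vignetting(sub_img, strength=0.5):
    """Flat-field style gain correction for one sub-image.

    Pixels far from the sub-image centre are brightened to undo
    lens-edge energy falloff. The radial falloff model here is an
    assumption; the patent does not fix a particular model.
    """
    h, w = sub_img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at centre, 1 at corner
    gain = 1.0 + strength * r ** 2                      # brighten toward the edge
    out = sub_img.astype(np.float64)
    if out.ndim == 3:                                   # RGB sub-image
        gain = gain[..., None]
    return np.clip(out * gain, 0, 255).astype(sub_img.dtype)
```

Either the square region of fig. 2b or the circular region of fig. 2c could be passed to such a routine; only the extracted pixel window changes.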
S102, performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity region;
specifically, the method comprises the following steps:
s1021, acquiring a maximum similar pixel point matrix of the subimages along the row direction and a maximum similar pixel point matrix of the subimages along the column direction;
in one embodiment, the method comprises the steps of:
s1, selecting any sub-lens, extracting a local sub-lens image with the row and column number of 2 x 2, and setting the maximum calculation row number and the maximum calculation column number;
s2, respectively acquiring a row direction combination matrix and a column direction combination matrix of adjacent sub-images;
s3, acquiring the maximum correlation coefficient of the row direction combined image according to the first specified data of the row direction combined matrix; acquiring the maximum correlation coefficient of the column direction combined image according to second specified data of the column direction combined matrix;
s4, determining a position value corresponding to the maximum correlation coefficient of the row direction combined image, and storing the position value in the maximum similar pixel point matrix of the sub-image row direction; and determining a position value corresponding to the maximum correlation coefficient of the column direction combined image, and storing the position value into the maximum similar pixel point matrix of the sub-image in the column direction.
In the following, the specific process is described for a selected sub-lens A(seg_ii, seg_jj). As shown in fig. 3, a local 2 x 2 sub-lens image ①②③④ is extracted, and the maximum calculation row count ii_max and column count jj_max are set; the flow is shown in fig. 4.
S10, select a sub-lens A(seg_ii, seg_jj), extract the local 2 x 2 sub-lens image ①②③④ shown in fig. 3, and set the maximum calculation row count ii_max and column count jj_max.
The following takes the row-direction combined matrix as an example; the column-direction combined matrix is handled analogously.
S11, the combined row-direction image of ①② is Com_img_up, of size Pixel0_row × 2Pixel0_col; the combined row-direction image of ③④ is Com_img_dow, also of size Pixel0_row × 2Pixel0_col;
S12, set ii = 1; extract the last ii rows of Com_img_up as matrix part_img_up, of size ii × 2Pixel0_col; extract the first ii rows of Com_img_dow as matrix part_img_dow, of size ii × 2Pixel0_col;
S13, calculate the correlation coefficient part_r between part_img_up and part_img_dow, and store it in the local coefficient matrix: part_corr_row(ii) = part_r;
S14, judge whether ii equals ii_max;
S15, if ii ≠ ii_max, set ii = ii + 1 and repeat steps S12 to S14.
S16, when ii = ii_max, find the position corresponding to the largest correlation coefficient part_r_max in the matrix part_corr_row:
ii_get = find(part_corr_row == max(part_corr_row)); if the maximum occurs at several positions, take the middle position, and store it in the row-direction maximum similar pixel matrix of the sub-image: All_corr_row(seg_ii, seg_jj) = ii_get.
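Steps S12 to S16 can be sketched in NumPy as follows (a minimal version; the tie-breaking by the middle position follows the text, while the function names are otherwise illustrative):

```python
import numpy as np

def corrcoef2d(a, b):
    """Pearson correlation of two equal-size matrices, flattened."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def max_similar_rows(com_img_up, com_img_dow, ii_max):
    """Return ii_get: the overlap depth with the highest correlation.

    part_corr_row[ii-1] holds the correlation between the last ii rows
    of the upper combined image and the first ii rows of the lower one;
    ties are resolved by taking the middle position, as in the patent.
    """
    part_corr_row = np.array([
        corrcoef2d(com_img_up[-ii:, :], com_img_dow[:ii, :])
        for ii in range(1, ii_max + 1)
    ])
    best = np.flatnonzero(part_corr_row == part_corr_row.max())
    return int(best[len(best) // 2]) + 1   # 1-based row count
```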
In another embodiment, the method comprises the steps of:
s21, selecting any sub-lens, extracting a partial sub-lens image with the row and column number of 2 x 2, and setting the maximum calculation row number and the value of a deflectable pixel point; setting the image size of a local sub-lens;
s22, acquiring the maximum correlation coefficient of the row direction combined image according to the third specified data of the row direction combined matrix of the adjacent sub-images;
and S23, determining a position value corresponding to the maximum correlation coefficient of the row direction combined image, and storing the position value into the maximum similar pixel point matrix of the sub-image row direction and the sub-image row direction offset matrix.
The following describes these steps for a selected sub-lens B(seg_ii, seg_jj), numbered ①: a local 2 x 2 sub-lens image ①②③④ is extracted, and the maximum calculation row count ii_max and the allowed pixel offset d_pixel0 are set.
S31, select the sub-lens B, extract the local 2 x 2 sub-lens image ①②③④, set the maximum calculation row count ii_max and the allowed pixel offset d_pixel0, and let the local sub-lens image size be Pixel0_row × Pixel0_col;
S32, extract from images ①②: Mat_img1(end-ii+1:end, d_pixel0+1:end-d_pixel0) and Mat_img2(end-ii+1:end, d_pixel0+1:end-d_pixel0);
the combined matrix is part_img_up, of size ii × 2(Pixel0_col - 2·d_pixel0);
initially ii = 1 and delta_pixel = -d_pixel0.
S33, extract from images ③④:
Mat_img3(1:ii, (d_pixel0+1:end-d_pixel0)+delta_pixel),
Mat_img4(1:ii, (d_pixel0+1:end-d_pixel0)+delta_pixel);
the combined matrix is part_img_dow, of size ii × 2(Pixel0_col - 2·d_pixel0).
S34, calculate the correlation coefficient part_r of part_img_up and part_img_dow and store it in the local coefficient matrix: part_corr_row(ii, d_pixel0 + delta_pixel + 1) = part_r;
S35, judge whether delta_pixel equals d_pixel0;
S36, if delta_pixel ≠ d_pixel0, set delta_pixel = delta_pixel + 1 and execute step S33.
When delta_pixel = d_pixel0, judge whether ii equals ii_max;
S37, if ii ≠ ii_max, set ii = ii + 1 and delta_pixel = -d_pixel0, and execute step S32.
When ii = ii_max, find the position corresponding to the maximum correlation coefficient part_r_max in the matrix part_corr_row:
(ii_get, col_get) = find(part_corr_row == max(part_corr_row)); save ii_get to the row-direction maximum similar pixel matrix, All_corr_row(seg_ii, seg_jj) = ii_get, and col_get to the row-direction offset matrix, Delta_corr_row(seg_ii, seg_jj) = col_get;
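A hedged sketch of the S31-S37 search, assuming the offset delta_pixel runs from -d_pixel0 to d_pixel0 and the four neighbouring sub-images are passed in separately (the text uses MATLAB-style slicing; this version re-expresses it with NumPy slices and returns only the best position rather than the full coefficient matrix):

```python
import numpy as np

def max_similar_rows_with_shift(img1, img2, img3, img4, ii_max, d_pixel0):
    """Joint search over overlap depth ii and horizontal shift delta.

    img1..img4 are the four 2x2 neighbouring sub-images (1-2 on top,
    3-4 below); returns (ii_get, delta_get) for the best correlation.
    """
    def corr(a, b):
        a = a.ravel().astype(np.float64) - a.mean()
        b = b.ravel().astype(np.float64) - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return 0.0 if d == 0 else float((a * b).sum() / d)

    n = img1.shape[1]
    best = (-2.0, 1, 0)                     # (corr, ii, delta)
    for ii in range(1, ii_max + 1):
        up = np.hstack([img1[-ii:, d_pixel0:n - d_pixel0],
                        img2[-ii:, d_pixel0:n - d_pixel0]])
        for delta in range(-d_pixel0, d_pixel0 + 1):
            lo, hi = d_pixel0 + delta, n - d_pixel0 + delta
            dow = np.hstack([img3[:ii, lo:hi], img4[:ii, lo:hi]])
            r = corr(up, dow)
            if r > best[0]:
                best = (r, ii, delta)
    return best[1], best[2]
```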
s1022, acquiring the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction;
First, the correlation coefficient between the row-direction maximum similar pixel matrix and the column-direction maximum similar pixel matrix is calculated:

r(X, Y) = Cov(X, Y) / sqrt(Var(X) · Var(Y))

where r(X, Y) is the correlation coefficient of matrix X with matrix Y, Cov(X, Y) is the covariance between X and Y, and Var(X) and Var(Y) are the variances of X and Y, respectively.
And then, determining the maximum similar pixel point according to the correlation coefficient of the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction of each sub-image.
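The formula above is the ordinary Pearson coefficient applied to flattened matrices; a minimal NumPy rendering (the function name is illustrative):

```python
import numpy as np

def matrix_corr(X, Y):
    """Pearson r(X, Y) between two equal-size matrices, per the formula above."""
    x = np.asarray(X, dtype=np.float64).ravel()
    y = np.asarray(Y, dtype=np.float64).ravel()
    cov = np.cov(x, y)[0, 1]                       # Cov(X, Y)
    return float(cov / np.sqrt(np.var(x, ddof=1) * np.var(y, ddof=1)))
```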
And S1023, determining the maximum similar area according to the maximum similar pixel point number.
In another embodiment, the method comprises the steps of:
and S41, selecting adjacent image circles, wherein the image circles are circular areas.
S42, determining a common area intersected by the image circles;
and S43, respectively determining the radius of the first image circle and the radius of the second image circle according to the public area, and respectively extracting the first image circle matrix and the second image circle matrix.
And S44, calculating correlation coefficients corresponding to different moving pixel points according to the first image circle matrix and the second image circle matrix.
And S45, taking the number of the deviated pixel points corresponding to the maximum correlation coefficient as the maximum similarity area.
Specifically, a similar area is determined according to the optical design, and intersecting areas of different sizes are selected from the two image-circle images along their common direction, as shown in fig. 6: the effective area of image circle 1 is a disc of radius R1 with extraction matrix Mat1, and the effective area of image circle 2 is a disc of radius R2 with extraction matrix Mat2. The correlation coefficients corresponding to the different pixel shifts are computed in turn, and the shifted pixel count with the maximum correlation coefficient is taken as the maximum similarity area.
The specific advantage of this embodiment is that, compared with a square sub-image, the circular sub-image carries more image information, which benefits image similarity calculation and image fusion.
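A sketch of S41-S45 under the assumption that both image circles are centred in their sub-images and the shift is purely horizontal; the masking and scoring details beyond the text are illustrative:

```python
import numpy as np

def circle_overlap_similarity(img_a, img_b, radius, max_shift):
    """Slide image circle 2 toward image circle 1 and score each shift
    by the correlation of the overlapping circular area; return the
    pixel shift with the best score (the maximum similarity area)."""
    h, w = img_a.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    best_r, best_s = -2.0, 0
    for s in range(1, max_shift + 1):
        a = img_a[:, s:]              # right part of circle 1
        b = img_b[:, :w - s]          # left part of circle 2
        m = mask[:, s:]               # overlap restricted to the circle
        va = a[m].astype(np.float64) - a[m].mean()
        vb = b[m].astype(np.float64) - b[m].mean()
        d = np.sqrt((va * va).sum() * (vb * vb).sum())
        r = 0.0 if d == 0 else float((va * vb).sum() / d)
        if r > best_r:
            best_r, best_s = r, s
    return best_s
```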
S103, filtering out abrupt-change regions in the original image by filtering according to the maximum similarity region; scaling each sub-image;
specifically, the method comprises the following steps:
s1031, performing median filtering on the row direction maximum similar pixel point matrix and the column direction maximum similar pixel point matrix respectively to obtain a row direction filter matrix and a column direction filter matrix;
s1032, determining expansion coefficients of the sub-image in the upper, lower, left and right directions according to the row direction filter matrix and the column direction filter matrix;
Specifically, at the focal plane each sub-lens has a known number focus_pixel of repeated pixels and fixed_pixel pixels of one-sided non-repeated information in the row/column direction, so the sub-lens image size Pixel0_row × Pixel0_col equals 2(focus_pixel + fixed_pixel) + 1 (odd sub-image size) or 2(focus_pixel + fixed_pixel) (even). The similar pixel counts up_pixel, right_pixel, down_pixel and left_pixel in the four directions shown in fig. 7 are obtained from the row-direction filter matrix All_filter_row and the column-direction filter matrix All_filter_col, and the expansion coefficients of each sub-image in the four directions are computed from the focal-plane data:
up_par=(focus_pixel+fixed_pixel-up_pixel)/fixed_pixel;
right_par=(focus_pixel+fixed_pixel-right_pixel)/fixed_pixel;
down_par=(focus_pixel+fixed_pixel-down_pixel)/fixed_pixel;
left_par=(focus_pixel+fixed_pixel-left_pixel)/fixed_pixel;
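The four formulas translate directly into code (identifier names are taken from the text; the numeric values used in testing are arbitrary):

```python
def expansion_coefficients(focus_pixel, fixed_pixel,
                           up_pixel, right_pixel, down_pixel, left_pixel):
    """Expansion (scaling) coefficients of a sub-image in the four
    directions, exactly per the formulas above. A measured similar
    pixel count below focus_pixel yields a coefficient above 1."""
    k = focus_pixel + fixed_pixel
    up_par = (k - up_pixel) / fixed_pixel
    right_par = (k - right_pixel) / fixed_pixel
    down_par = (k - down_pixel) / fixed_pixel
    left_par = (k - left_pixel) / fixed_pixel
    return up_par, right_par, down_par, left_par
```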
s1033, zooming the sub-image according to the expansion coefficients of the four directions to obtain a zoomed sub-image.
Illustratively, as shown in fig. 8, consider a point A on the scaled sub-image; the segment connecting A to the sub-image center O is OA, and A maps back to a point A' on the original sub-image. The angle between OA and the horizontal line OR is θ, and the scaling factor along OA is obtained from right_par and up_par. From the position of A', the 4 adjacent original image points P1, P2, P3 and P4 and their center distances d1, d2, d3 and d4 are obtained. Weights are computed from these distances (the shorter the distance, the higher the weight), giving in turn the weights W1, W2, W3 and W4 of the image points P1, P2, P3 and P4.
Finally, the pixel value of the corresponding point A is computed from these weights: A_RGB = P1_RGB·W1 + P2_RGB·W2 + P3_RGB·W3 + P4_RGB·W4. The original sub-image position is computed for every point of the scaled image in turn, and each point value is computed to obtain the final scaled sub-image Seg_scale_img0.
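A simplified sketch of S1033: it keeps the inverse mapping and the inverse-distance weighting of the 4 neighbours described above, but uses a single horizontal and a single vertical factor instead of interpolating the factor by the angle θ; that simplification, and the clamping at the border, are assumptions of this sketch:

```python
import numpy as np

def rescale_subimage(img, x_par, y_par):
    """Inverse-mapping resample: each target pixel A maps back to
    A' = (row / y_par, col / x_par) on the original sub-image and takes
    the inverse-distance-weighted mix of the 4 neighbouring pixels."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for r in range(h):
        for c in range(w):
            sr = min(r / y_par, h - 1.000001)   # clamp inside the image
            sc = min(c / x_par, w - 1.000001)
            r0, c0 = int(sr), int(sc)
            neigh = [(r0, c0), (r0, c0 + 1), (r0 + 1, c0), (r0 + 1, c0 + 1)]
            d = [np.hypot(sr - rr, sc - cc) for rr, cc in neigh]
            if min(d) < 1e-12:                  # A' falls exactly on a pixel
                out[r, c] = img[neigh[int(np.argmin(d))]]
                continue
            wgt = np.array([1.0 / dd for dd in d])  # closer -> heavier
            wgt /= wgt.sum()
            out[r, c] = sum(wg * img[p] for wg, p in zip(wgt, neigh))
    return out
```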
And S104, splicing and fusing the sub-images after the zooming processing.
Specifically, the method comprises the following steps:
s1041, cutting the zoomed subimage according to the size of the original focusing subimage;
Specifically, the scaled sub-image Seg_scale_img0 is cropped to the size of the original focused sub-image: the center is kept and the edges are cut away, giving the corresponding Seg_scale_img.
S1042, dividing the cut sub-images into a fixed area and a splicing area, and dividing the splicing area into two image fusion areas and four image fusion areas;
specifically, the single sub-image after being cut is divided into a center fixing area and an edge splicing area. Wherein, the fixed area refers to the similar part which does not appear in the adjacent sub-image and is marked as fixed _ pixel, and the splicing area refers to the similar part in the adjacent sub-image and is marked as focus _ pixel. Further, as shown in fig. 9, the splicing region is divided into two image fusion regions (i.e., top-bottom-left-right fusion) and four image fusion regions (four corners fusion),
S1043, stitching and fusing the two-image fusion regions and the four-image fusion regions respectively.
Specifically, the two-image fusion regions and the four-image fusion regions can be stitched and fused using a Haar wavelet algorithm; the present invention does not limit the specific image fusion algorithm.
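As one possible instantiation of the wavelet-based fusion (the patent leaves the exact algorithm open), a single-level Haar fusion might average the approximation coefficients and keep the larger-magnitude detail coefficients; a 1-D sketch, with the fusion rule an assumption:

```python
def haar_forward(signal):
    """Single-level 1-D Haar transform: (approximation, detail)."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_forward: each (a, d) pair restores (a+d, a-d)."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(sig1, sig2):
    """Fuse two equal-length overlapping rows: average the approximation
    coefficients, keep the larger-magnitude detail coefficient."""
    a1, d1 = haar_forward(sig1)
    a2, d2 = haar_forward(sig2)
    a = [(x + y) / 2 for x, y in zip(a1, a2)]
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return haar_inverse(a, d)

fused = fuse([10, 10, 20, 20], [12, 12, 18, 18])
```

For real images the same idea is applied in two dimensions (rows then columns) and over more decomposition levels.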
According to the lens array image stitching method provided by the invention, the similar regions of the sub-images are obtained, and the sub-images are stitched after being filtered and zoomed according to those similar regions, so that the sub-images are not distorted when stitched into a whole image and the imaging quality of the lens array is improved.
Example two
The present invention provides another lens array image stitching method, which differs from the first embodiment in that step S103 is replaced by the following step S103'.
S103', performing interpolation processing on each sub-image according to the maximum similarity region.
Specifically, the method comprises the following steps:
S51, acquiring the maximum number of correlated pixel points of the sub-image at the focal plane;
S52, sequentially calculating the interpolation coefficients in each direction;
S53, acquiring the interpolated sub-image according to the interpolation coefficients of each direction.
In a specific embodiment, the maximum number of similar pixel points and the interpolation coefficients can be obtained by block-region extraction. Specifically, a single sub-image can be divided into four regions as shown in fig. 10, and the a priori repeated blocks are obtained from the theoretical design of the system, namely the facing regions of adjacent image circles, such as region ④ of image circle 1 and region ① of image circle 2, or region ① of image circle 1 and region ④ of image circle 2.
In another embodiment, the maximum number of similar pixel points and the interpolation coefficients can be obtained by interval sampling. Specifically, the directions in which sub-images are similar are determined from the obtained adjacent sub-images and prior knowledge; a pixel matrix is randomly sampled in one sub-image according to the required number of rows and columns, and a pixel matrix with the same positional relation is sampled at the symmetric position of the other image circle. As shown in fig. 12, the black area is the randomly sampled area (several points sampled at random within a 13 × 3 matrix), and the correlation coefficient of the two matrices under this sampling is calculated. Several groups of random samples can also be drawn within one row-column matrix and their correlation coefficients computed, improving the accuracy of the image correlation estimate for that matrix. Finally, the pixel points with the best similarity are selected to design the interpolation coefficients in each direction, as shown in fig. 13.
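The similarity of two randomly sampled pixel sets can be scored with the Pearson correlation coefficient; the sketch below assumes that reading, since the patent does not fix the exact correlation formula:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def sampled_correlation(patch1, patch2, k, rng):
    """Randomly sample k matching positions from two equally sized
    patches and return the correlation of the sampled values."""
    h, w = len(patch1), len(patch1[0])
    positions = rng.sample([(r, c) for r in range(h) for c in range(w)], k)
    xs = [patch1[r][c] for r, c in positions]
    ys = [patch2[r][c] for r, c in positions]
    return pearson(xs, ys)

# Two 13 x 3 patches whose values are linearly related: correlation -> 1.
rng = random.Random(0)
p1 = [[r + c for c in range(3)] for r in range(13)]
p2 = [[2 * (r + c) for c in range(3)] for r in range(13)]
rho = sampled_correlation(p1, p2, 10, rng)
```

Averaging the coefficient over several independent sample groups, as the text suggests, reduces the variance of the estimate.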
Based on the above-described method embodiments, the following apparatus embodiments are proposed.
EXAMPLE III
The third embodiment of the present invention provides a lens array image stitching apparatus based on the first and second embodiments. As shown in fig. 14, the lens array image stitching apparatus 8 includes:
a compensation module 81, configured to acquire an original image obtained through the lens array, and perform energy compensation on the sub-image formed by the sub-lenses;
the calculating module 82 is configured to perform image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain a maximum similarity region;
a scaling module 83, configured to filter out abrupt-change regions in the original image by filtering according to the maximum similarity region, and to zoom each sub-image;
and the splicing module 84 is used for splicing and fusing the sub-images.
In addition, the lens array image stitching method according to the embodiments of the present invention, described in conjunction with the first and second embodiments, may be implemented by a lens array image stitching device.
The lens array stitching device may include a processor and a memory storing computer program instructions.
In particular, the processor may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
The memory may include mass storage for data or instructions. By way of example, and not limitation, the memory may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is non-volatile solid-state memory. In a particular embodiment, the memory includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor reads and executes the computer program instructions stored in the memory to implement any one of the lens array image stitching methods in the above embodiments.
In one example, the lens array image stitching device may further include a communication interface and a bus. The processor, the memory and the communication interface are connected through a bus and complete mutual communication.
The communication interface is mainly used for realizing communication among modules, devices, units and/or equipment in the embodiment of the invention.
The bus comprises hardware, software, or both coupling the components of the lens array image stitching device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. A bus may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
Example four
In combination with the lens array image stitching methods in the foregoing embodiments, an embodiment of the present invention provides a computer-readable storage medium. Computer program instructions are stored on the computer-readable storage medium; when executed by a processor, the computer program instructions implement the lens array image stitching method of either the first or the second embodiment.
According to the lens array image stitching method, device and storage medium of the present invention, the similar regions of the sub-images are obtained, and the sub-images are stitched after being filtered and zoomed according to those similar regions, so that distortion is avoided when the sub-images are stitched into a whole image and the imaging quality of the lens array is improved.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (10)

1. A lens array image stitching method, wherein the lens array comprises a plurality of sub-lenses arranged in an array, the method comprising:
acquiring an original image obtained by the lens array, and performing energy compensation on the sub-image formed by the sub-lens;
performing image similarity calculation on sub-images formed by adjacent sub-lenses to obtain a maximum similarity region;
filtering out abrupt-change regions in the original image by filtering according to the maximum similarity region, and zooming each sub-image;
and splicing and fusing the sub-images after the zooming processing.
2. The lens array image stitching method according to claim 1, wherein the energy compensation of the sub-images formed by the sub-lenses comprises:
determining the pixel size of the sub-image, and the row number and the column number of the sub-lenses;
and performing energy compensation on the vignetting area of the sub-image according to the gray values and RGB values of the image surrounding the sub-image.
3. The lens array image stitching method according to claim 1, wherein the performing image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain the maximum similarity region comprises:
acquiring a maximum similar pixel point matrix of the subimage along the row direction and a maximum similar pixel point matrix of the subimage along the column direction;
acquiring the maximum similar pixel point number of the sub-image according to the maximum similar pixel point matrix in the row direction and the maximum similar pixel point matrix in the column direction;
and determining the maximum similar area according to the number of the maximum similar pixels.
4. The lens array image stitching method of claim 3, wherein the obtaining of the maximum similar pixel point matrix of the sub-images in the row direction and the maximum similar pixel point matrix of the sub-images in the column direction comprises:
selecting any sub-lens, extracting a local sub-lens image with the row number and the column number of 2 x 2, and setting the maximum calculation row number and the maximum calculation column number;
respectively acquiring a row direction combination matrix and a column direction combination matrix of adjacent sub-images;
acquiring the maximum correlation coefficient of the row direction combined image according to the first specified data of the row direction combined matrix; acquiring the maximum correlation coefficient of the column direction combined image according to second specified data of the column direction combined matrix;
determining a position value corresponding to the maximum correlation coefficient of the row direction combined image, and storing the position value into a maximum similar pixel point matrix of the sub-image in the row direction; and determining a position value corresponding to the maximum correlation coefficient of the column direction combined image, and storing the position value into the maximum similar pixel point matrix of the sub-image in the column direction.
5. The lens array image stitching method of claim 3, wherein the obtaining of the maximum similar pixel point matrix of the sub-images in the row direction and the maximum similar pixel point matrix of the sub-images in the column direction comprises:
selecting any sub-lens, extracting a local sub-lens image with the row and column number of 2 x 2, and setting the maximum calculation row number and the value of a deflectable pixel point; setting the image size of a local sub-lens;
acquiring the maximum correlation coefficient of the row direction combined image according to the third specified data of the row direction combined matrix of the adjacent sub-images;
and determining a position value corresponding to the maximum correlation coefficient of the row direction combined image, and storing the position value into the maximum similar pixel point matrix of the sub-image row direction and the sub-image row direction offset matrix.
6. The lens array image stitching method according to claim 1, wherein the performing image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain the maximum similarity region comprises:
selecting adjacent image circles, wherein the image circles are circular areas;
determining a common area where the image circles intersect;
respectively determining the radius of a first image circle and the radius of a second image circle according to the public area, and respectively extracting a first image circle matrix and a second image circle matrix;
calculating correlation coefficients corresponding to different moving pixel points according to the first image circle matrix and the second image circle matrix;
and taking the number of the deviated pixel points corresponding to the maximum correlation coefficient as the maximum similar area.
7. The lens array image stitching method according to claim 1, wherein the filtering out of abrupt-change regions in the original image by filtering and the zooming of each sub-image comprise:
performing median filtering on the row direction maximum similar pixel point matrix and the column direction maximum similar pixel point matrix respectively to obtain a row direction filtering matrix and a column direction filtering matrix;
determining expansion coefficients of the sub-image in the upper, lower, left and right directions according to the row direction filter matrix and the column direction filter matrix;
and zooming the sub-image according to the expansion coefficients in the four directions to obtain a zoomed sub-image.
8. The lens array image stitching method according to claim 7, wherein the stitching and fusing the sub-images comprises:
cutting the zoom subimage according to the size of the original focusing subimage;
dividing the cut sub-images into a fixed area and a splicing area, and dividing the splicing area into two image fusion areas and four image fusion areas;
and respectively splicing and fusing the two image fusion areas and the four image fusion areas.
9. A lens array image stitching device, comprising:
the compensation module is used for acquiring an original image obtained by the lens array and performing energy compensation on the sub-image formed by the sub-lenses;
the calculating module is used for carrying out image similarity calculation on the sub-images formed by the adjacent sub-lenses to obtain a maximum similarity region;
the scaling module is used for filtering out abrupt-change regions in the original image by filtering according to the maximum similarity region, and for zooming each sub-image;
and the splicing module is used for splicing and fusing the sub-images subjected to the zooming processing.
10. A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the lens array image stitching method according to any one of claims 1 to 8.
CN201811488704.2A 2018-12-06 2018-12-06 Lens array image stitching method, device and storage medium Active CN111292232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811488704.2A CN111292232B (en) 2018-12-06 2018-12-06 Lens array image stitching method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111292232A true CN111292232A (en) 2020-06-16
CN111292232B CN111292232B (en) 2023-08-15

Family

ID=71024848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811488704.2A Active CN111292232B (en) 2018-12-06 2018-12-06 Lens array image stitching method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111292232B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04195811A (en) * 1990-11-28 1992-07-15 Hitachi Ltd Magnetic image sensor
CN1203490A (en) * 1997-06-12 1998-12-30 惠普公司 Image processing method and device
US20050057664A1 (en) * 2003-08-06 2005-03-17 Eastman Kodak Company Alignment of lens array images using autocorrelation
US20050157394A1 (en) * 2004-01-21 2005-07-21 Hillis W. D. Image correction using a microlens array as a unit
US20060098861A1 (en) * 2002-07-18 2006-05-11 See Chung W Image analysis method, apparatus and software
US20080252602A1 (en) * 2007-04-11 2008-10-16 Ramakrishna Kakarala Dynamically reconfigurable pixel array for optical navigation
JP2008276301A (en) * 2007-04-25 2008-11-13 Fujifilm Corp Image processing apparatus, method and program
JP2009224982A (en) * 2008-03-14 2009-10-01 Sony Corp Image processing apparatus, image processing program, and display apparatus
US20150304633A1 (en) * 2014-04-18 2015-10-22 Canon Kabushiki Kaisha Image processing method, image processing apparatus, image pickup apparatus, and non-transitory computer-readable storage medium
CN105812625A (en) * 2014-12-30 2016-07-27 深圳超多维光电子有限公司 Micro lens array imaging device and imaging method
CN105842888A (en) * 2016-05-31 2016-08-10 成都微晶景泰科技有限公司 Quick focusing optical element and imaging device
US20160249028A1 (en) * 2014-02-07 2016-08-25 Sony Corporation Method and Apparatus for Reducing Color Fringing in Composite Images
CN108062007A (en) * 2016-11-07 2018-05-22 俞庆平 A kind of method for improving photoetching energy uniformity and improving splicing
CN108471513A (en) * 2018-03-28 2018-08-31 国网辽宁省电力有限公司信息通信分公司 Video fusion method, apparatus and server


Also Published As

Publication number Publication date
CN111292232B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
US8803984B2 (en) Image processing device and method for producing a restored image using a candidate point spread function
US20070279618A1 (en) Imaging Apparatus And Image Improving Method
CN111353948B (en) Image noise reduction method, device and equipment
EP1841207B1 (en) Imaging device, imaging method, and imaging device design method
EP2336816B1 (en) Imaging device
EP2533198B1 (en) Imaging device and method, and image processing method for imaging device
CN104079818B (en) Camera device, image processing system, camera system and image processing method
US9581787B2 (en) Method of using a light-field camera to generate a three-dimensional image, and light field camera implementing the method
CN103369233A (en) System and method for performing depth estimation by utilizing adaptive kernel
CN104867125A (en) Image obtaining method and image obtaining device
CN106204554A (en) Depth of view information acquisition methods based on multiple focussing image, system and camera terminal
CN114022662A (en) Image recognition method, device, equipment and medium
CN104883520A (en) Image processing apparatus and method for controlling image processing apparatus
KR20070085795A (en) Image correction device and image correction method
CN104754316A (en) 3D imaging method and device and imaging system
CN111292232A (en) Lens array image splicing method and device and storage medium
CN111292233A (en) Lens array image splicing method and device and storage medium
CN112150355A (en) Image processing method and related equipment
CN110874814A (en) Image processing method, image processing device and terminal equipment
CN115631171A (en) Picture definition evaluation method, system and storage medium
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
JP2018133064A (en) Image processing apparatus, imaging apparatus, image processing method, and image processing program
CN113141447B (en) Full-field-depth image acquisition method, full-field-depth image synthesis device, full-field-depth image equipment and storage medium
CN110896469B (en) Resolution testing method for three-shot photography and application thereof
TW201913561A (en) Image calibration system and image calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant