CN114399429A - Image splicing method and device, electronic equipment and storage medium

Image splicing method and device, electronic equipment and storage medium

Info

Publication number
CN114399429A
Authority
CN
China
Prior art keywords: sub-images, image, region, adjacent sub-images
Prior art date
Legal status
Pending
Application number
CN202111649158.8A
Other languages
Chinese (zh)
Inventor
严雪飞
于长志
张海平
Current Assignee
Wuxi Ankedi Intelligent Technology Co., Ltd.
Original Assignee
Wuxi Ankedi Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Wuxi Ankedi Intelligent Technology Co., Ltd.
Priority to CN202111649158.8A
Publication of CN114399429A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image stitching method and apparatus, an electronic device, and a storage medium. The image stitching method comprises the following steps: initially stitching a plurality of sub-images by using an initial stitching model to obtain an initially stitched image; connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images; connecting the at least one region into a single connected region; selecting a sub-image from the connected region as a root sub-image and generating at least one path starting from the root sub-image; and, following the path order and starting from the root sub-image, sequentially adjusting each subsequent sub-image in each path according to the adjustment parameters between it and the previous sub-image, so as to obtain a globally adjusted stitched image.

Description

Image splicing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image splicing method and device, electronic equipment and a storage medium.
Background
With the development of image acquisition technology, user requirements for acquired images have changed accordingly. Many application scenarios are no longer limited to a single image from one acquisition lens but instead require a wide-range panoramic image, for example landscape photography or the monitoring of a large field of view (typically a long bridge, a road, an airport, or a high-rise building). Because of limitations such as the lens viewing angle and viewing distance, it is generally difficult to capture a whole large field of view in a single image at a time, so a panoramic image of a specified field of view is usually obtained by video panoramic stitching or by stitching a plurality of sub-images acquired by a camera array.
Existing panoramic stitching technology is mainly realized through feature-point matching, which places certain requirements on the feature points of the pictures: it can produce a high-quality stitched image in scenes with rich, distinct feature points, but in scenes with monotonous or missing feature points, such as bridges and roads, the stitching effect is not ideal because effective feature points are difficult to extract for matching and alignment. Although the prior art includes some methods for adjusting a sub-image, adjusting one sub-image well easily causes another sub-image to deviate even more.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image stitching method, an apparatus, an electronic device, and a storage medium, so as to solve the problem in prior-art image stitching methods that adjusting one sub-image well easily leaves another sub-image with a larger deviation.
According to a first aspect, an embodiment of the present invention provides an image stitching method, including: initially stitching a plurality of sub-images by using an initial stitching model to obtain an initially stitched image; connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images, wherein each region comprises at least two sub-images; connecting the at least one region into a single connected region; selecting one sub-image in the connected region as a root sub-image, and generating at least one path starting from the root sub-image by using a breadth-first, connection-confidence-first search, wherein the at least one path covers all the sub-images; and, following the path order and starting from the root sub-image, sequentially adjusting each subsequent sub-image according to the adjustment parameters between it and the previous sub-image in each path, so as to obtain a globally adjusted stitched image.
Optionally, the initially stitching the plurality of sub-images by using the initial stitching model includes: extracting characteristic points of each subimage by using the initial splicing model; identifying mutually matched feature points in different sub-images; and aligning the coordinates of the matched characteristic points according to a uniform panorama coordinate system.
Optionally, between the step of initially stitching the plurality of sub-images by using the initial stitching model and the step of obtaining the confidence of the connection between each pair of adjacent sub-images in the initially stitched image, the method further includes: acquiring an adjustment parameter between each pair of adjacent sub-images; and performing primary adjustment on the initial spliced image according to the adjustment parameters.
Optionally, the acquiring the adjustment parameter between each pair of adjacent sub-images includes: when a pair of adjacent sub-images have enough mutually matched feature points, calculating a homography matrix between the adjacent sub-images according to the mutually matched feature points; and acquiring the adjustment parameters between the adjacent sub-images according to the homography matrix.
Optionally, after obtaining the adjustment parameters between the adjacent sub-images according to the homography matrix, the method further includes: and optimizing the adjustment parameters by utilizing a fusion reduction degree matching search mode which does not depend on the feature points.
Optionally, the acquiring the adjustment parameter between each pair of adjacent sub-images includes: when the feature points which are matched with each other are not enough between a pair of adjacent sub-images, the adjustment parameters between the adjacent sub-images are obtained by utilizing a fusion reduction degree matching search mode which does not depend on the feature points.
Optionally, the fused reduction degree matching search method includes the following steps: an overlap acquisition step of acquiring, according to the coordinate ranges of the adjacent sub-images in a unified panorama coordinate system, the parts of the adjacent sub-images that fall in their mutual overlap area, wherein the part of one of the adjacent sub-images that appears in the overlap area is a first overlapping part and the part of the other adjacent sub-image that appears in the overlap area is a second overlapping part; a fusion step of fusing the first and second overlapping parts to obtain a fused part; a gradient difference acquisition step of performing gradient extraction on the first overlapping part, the second overlapping part, and the fused part to obtain a gradient difference, where the gradient of the first overlapping part is a first gradient, the gradient of the second overlapping part is a second gradient, the gradient of the fused part is a third gradient, and the gradient difference equals the absolute value of the difference between the third gradient and the first gradient plus the absolute value of the difference between the third gradient and the second gradient; a searching step of fixing the coordinates of a preset position of one of the adjacent sub-images in the panorama coordinate system, searching the coordinates of the corresponding position of the other sub-image in the panorama coordinate system within a preset range, executing the overlap acquisition step, the fusion step, and the gradient difference acquisition step for each search, and recording the movement distance and the gradient difference of each search; and taking the movement distance corresponding to the minimum gradient difference found by the search as the adjustment parameter between the adjacent sub-images.
Optionally, the method further comprises: and acquiring the connection confidence coefficient between the adjacent sub-images according to the mutually matched feature points between the adjacent sub-images.
Optionally, the method further comprises: when there are not enough mutually matched feature points between a pair of adjacent sub-images, setting the connection confidence between the adjacent sub-images to a small value greater than 0.
Optionally, the connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images includes: connecting adjacent sub-images with the connection confidence coefficient larger than 0 together to form at least one initial region, taking the rest sub-images as an isolated point set, wherein the connection confidence coefficient between the sub-images in the isolated point set is 0, and the connection confidence coefficient between the sub-images in the isolated point set and each sub-image in the initial region is 0; traversing each isolated point sub-image in the isolated point set, when a sub-image belonging to a certain initial region exists in the adjacent sub-images of the traversed isolated point sub-images, moving the isolated point sub-images from the isolated point set into the corresponding initial region, and setting the connection confidence coefficient between the isolated point sub-images and the connected sub-images to be a value which is larger than 0 and is very small until the isolated point set is empty so as to form at least one region.
Optionally, the communicating the at least one region into one communicated region includes: when the number of the regions is 1, the regions are connected regions; and when the number of the areas is more than 1, selecting one area as a connected area, traversing each sub-image of each unconnected area, if the sub-image belonging to the connected area exists in the adjacent sub-images of the traversed unconnected area, setting the unconnected area comprising the sub-image as the connected area, and setting the connection confidence coefficient between the sub-image of the unconnected area and the sub-image of the connected area as a small value which is more than zero until all the areas are connected areas.
Optionally, after obtaining the globally adjusted stitched image, the method further includes: when the manual adjustment parameters between the adjacent sub-images are acquired, adjusting the overall adjustment spliced image according to the manual adjustment parameters, setting the connection confidence coefficient between each pair of the adjacent sub-images after manual adjustment to be a value larger than the connection confidence coefficients of all automatic adjustments, and returning to the step of connecting the sub-images into at least one region according to the connection confidence coefficient between each pair of the adjacent sub-images so as to acquire the overall adjustment spliced image again.
Optionally, after obtaining the globally adjusted stitched image, the method further includes: when the picture content of the sub-images changes, returning to the step of connecting the sub-images into at least one region according to the connection confidence coefficient between each pair of adjacent sub-images so as to obtain a global adjustment splicing image again; or when the picture content of the sub-images changes, returning to the step of acquiring the adjustment parameters between each pair of adjacent sub-images to obtain the global adjustment splicing image again.
According to a second aspect, an embodiment of the present invention provides an image stitching apparatus, including: the initial splicing unit is used for initially splicing the sub-images by using the initial splicing model to obtain an initial spliced image; the region unit is used for connecting the sub-images into at least one region according to the connection confidence coefficient between each pair of adjacent sub-images, and the region at least comprises two sub-images; a communicating unit for communicating the at least one region into one communicated region; the path unit is used for selecting one sub-image in the communicated area as a root sub-image, and generating at least one path from the root sub-image by using a breadth-first-connection confidence priority search mode, wherein the at least one path covers all the sub-images; and the global adjusting unit is used for adjusting the next sub-image according to the adjusting parameters between the next sub-image and the previous sub-image in each path in sequence from the root sub-image according to the path sequence so as to obtain a global adjusting splicing image.
Optionally, the image stitching device further includes: and the primary adjustment unit is used for acquiring adjustment parameters between each pair of adjacent sub-images after the initial splicing unit obtains the initial spliced image and before the area unit connects the sub-images into at least one area, and performing primary adjustment on the initial spliced image according to the adjustment parameters.
Optionally, the area unit is specifically configured to: connecting adjacent sub-images with the connection confidence coefficient larger than 0 together to form at least one initial region, taking the rest sub-images as an isolated point set, wherein the connection confidence coefficient between the sub-images in the isolated point set is 0, and the connection confidence coefficient between the sub-images in the isolated point set and each sub-image in the initial region is 0; traversing each isolated point sub-image in the isolated point set, when a sub-image belonging to a certain initial region exists in the adjacent sub-images of the traversed isolated point sub-images, moving the isolated point sub-images from the isolated point set into the corresponding initial region, and setting the connection confidence coefficient between the isolated point sub-images and the connected sub-images to be a value which is larger than 0 and is very small until the isolated point set is empty so as to form at least one region.
Optionally, the communication unit is specifically configured to: when the number of the regions is 1, the regions are connected regions; and when the number of the areas is more than 1, selecting one area as a connected area, traversing each sub-image of each unconnected area, if the sub-image belonging to the connected area exists in the adjacent sub-images of the traversed unconnected area, setting the unconnected area comprising the sub-image as the connected area, and setting the connection confidence coefficient between the sub-image of the unconnected area and the sub-image of the connected area as a small value which is more than zero until all the areas are connected areas.
Optionally, the image stitching device further includes: and the manual adjusting unit is used for adjusting the overall adjustment spliced image according to the manual adjusting parameters when the manual adjusting parameters between the adjacent sub-images are acquired, setting the connection confidence coefficient between each pair of adjacent sub-images after manual adjustment to be a value larger than the connection confidence coefficients of all automatic adjustments, and enabling the region unit, the communicating unit, the path unit and the overall adjusting unit to execute corresponding steps again.
Optionally, the image stitching device further includes: the secondary adjusting unit is used for enabling the area unit, the communication unit, the path unit and the global adjusting unit to execute corresponding steps again when the picture content of the sub-image is changed; or the secondary adjusting unit is used for enabling the primary adjusting unit, the area unit, the communication unit, the path unit and the global adjusting unit to execute corresponding steps again when the picture content of the sub-picture is changed.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor performing the method of any of the above first aspects by executing the computer instructions.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores computer instructions for causing the computer to execute the method of any one of the above first aspects.
According to the image stitching method, apparatus, electronic device, and storage medium of the embodiments of the invention, the plurality of sub-images are connected into at least one region, the at least one region is connected into a single connected region, at least one path starting from a root sub-image of the connected region is generated, and the sub-images on each path are adjusted in path order. The overall adjustment is thus computed from the connection confidence between each pair of adjacent sub-images without accumulating errors, which avoids the situation in which one sub-image is adjusted well while another sub-image ends up with a larger deviation.
Further, according to the image stitching method, the image stitching device, the electronic device and the storage medium provided by the embodiment of the invention, the initial stitched image is subjected to initial adjustment by acquiring the adjustment parameters between each pair of adjacent sub-images, so that the efficiency of subsequent global adjustment can be improved. Moreover, in an optional implementation mode, a fused reduction degree matching search mode is adopted for primary adjustment, and for the condition that enough mutually matched feature points exist between adjacent sub-images, the fused reduction degree matching search mode is adopted, so that the alignment precision of the primary adjustment based on the matched feature points can be improved; and for the condition that the adjacent sub-images have insufficient mutually matched feature points, longitudinal and transverse translation parameters between the adjacent pair of sub-images can be acquired, so that relatively accurate primary adjustment is realized.
Further, according to the image stitching method, apparatus, electronic device, and storage medium provided by the embodiments of the invention, global adjustment can be performed again according to the arrangement and connection confidences of the sub-images after manual adjustment, and the manually adjusted adjacent sub-image pairs, which have a high connection confidence, are preferentially placed in the same path. The effect of each manually adjusted pair of adjacent sub-images is therefore preferentially integrated into the global adjustment, achieving efficient human-machine cooperation, improving the stitching result, and allowing optimization to be prioritized according to the operator's intent.
Further, according to the image stitching method, apparatus, electronic device, and storage medium of the embodiments of the invention, when the picture content of the sub-images changes, the connection confidence between some adjacent sub-image pairs may increase; the primary adjustment and global adjustment steps may then be performed again to improve the stitching accuracy, or only the global adjustment step may be performed, which can also improve the stitching accuracy.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
FIG. 1 shows a flow diagram of an image stitching method according to an embodiment of the invention;
fig. 2 shows a flowchart of the detailed steps of step S11 in fig. 1;
FIG. 3 illustrates an example of stitched images of an application scene in accordance with an embodiment of the present invention;
fig. 4 shows a flowchart of the detailed steps of step S12 in fig. 1;
fig. 5 shows an example of a stitched image performing step S12;
fig. 6 shows a flowchart of the detailed steps of step S13 in fig. 1;
fig. 7 shows an example of a stitched image performing step S13;
fig. 8 shows an example of a stitched image performing step S14;
FIG. 9 shows a flow diagram of an image stitching method according to another embodiment of the present invention;
fig. 10 is a flowchart showing a specific step of step S22 in fig. 9;
fig. 11 is a flowchart showing the detailed steps of the fused reduction degree matching search method in step S22 in fig. 9;
FIG. 12 shows a schematic diagram of an image stitching device according to an embodiment of the invention;
FIG. 13 shows a schematic view of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows an image stitching method according to an embodiment of the present invention, which may be used, for example, for stitching a plurality of sub-images continuously acquired by rotating a single camera or a plurality of sub-images acquired together by an angularly dispersed camera array to obtain a panoramic stitched image, and the method may include the following steps:
s11, carrying out initial splicing on the multiple sub-images by using an initial splicing model to obtain an initial splicing image.
In the image splicing method of the embodiment of the invention, a plurality of sub-images of the same scene are spliced according to the existing splicing mode to obtain an initial spliced image which is initially aligned. Here, the initial stitching model is automated image processing software, and may be, for example, an artificial intelligence model based on machine learning, such as an OpenCV-based software model, and the panoramic image stitching may be performed according to feature point matching between the sub-images.
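As an illustration only, one readily available choice for such an initial stitching model is OpenCV's built-in feature-point based stitcher. The sketch below is a minimal example under that assumption; the function name is illustrative and not part of the claimed method.

```python
import cv2

def initial_stitch(sub_images):
    """Obtain an initial stitched image (step S11) with OpenCV's off-the-shelf,
    feature-point based panorama stitcher; only one possible initial model."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(sub_images)
    return pano if status == cv2.Stitcher_OK else None
```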
In an alternative implementation manner of the embodiment of the present invention, as shown in fig. 2, the step S11 may include:
s11a, extracting feature points of each sub-image by using an initial splicing model.
In this embodiment, for example, a Scale-invariant feature transform (SIFT) algorithm may be used to extract feature points of each sub-image.
S11b, identifying mutually matched feature points in different sub-images.
In this embodiment, for example, a KD tree algorithm may be employed to match feature points between adjacent sub-images.
And S11c, aligning the coordinates of the feature points matched with each other according to a uniform panorama coordinate system.
In the present embodiment, the operation of step S11c may be performed in an existing coordinate conversion alignment manner.
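A minimal sketch of steps S11a and S11b is given below, assuming OpenCV with SIFT features and a FLANN KD-tree matcher as suggested above. The ratio-test threshold and the function name are illustrative assumptions, and the matched coordinates would still have to be aligned into the unified panorama coordinate system as in step S11c.

```python
import cv2
import numpy as np

def match_adjacent_features(img_a, img_b, ratio=0.75):
    """Steps S11a/S11b: extract SIFT feature points in each sub-image and match them
    across adjacent sub-images with a KD-tree (FLANN) matcher plus a ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))  # KD-tree index
    pairs = flann.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in pairs:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b
```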
And S12, connecting the sub-images into at least one region according to the connection confidence coefficient between each pair of adjacent sub-images.
Here, the neighboring sub-images refer to images immediately adjacent to each other up, down, left, and right, for example, in the example of fig. 3, sub-images 0 and 1, sub-images 0 and 6, sub-images 1 and 7, and the like are all neighboring sub-images. When the initial splicing model performs initial splicing on a plurality of sub-images, the connection confidence coefficient between each pair of adjacent sub-images can be obtained according to the matched feature points between each pair of adjacent sub-images. Here, each region includes at least two sub-images.
In an alternative implementation manner of the embodiment of the present invention, as shown in fig. 4, the step S12 may include:
s12a, connecting adjacent sub-images with the connection confidence degree larger than 0 to form at least one initial region, and taking the rest sub-images as an isolated point set.
Specifically, each sub-image may be regarded as a node, for example, a region that can be connected together is identified based on a breadth-first search, the connection confidence between adjacent sub-images in the region is greater than 0, and the number of nodes in the region is greater than 1. In the example of fig. 3, sub-images 0,1,2,3,4,5,10,11,16,17 have mutually matched feature points, so that the confidence of connection between adjacent sub-images in the sub-images is greater than 0, and the sub-images can be connected into an initial region, which is referred to as region a; likewise, the sub-images 6,12 may be connected as an initial area, here denoted as area B. And the remaining sub-images are all river surfaces, no feature points are provided, the confidence of connection between the sub-images is 0, the confidence of connection between the sub-images and each sub-image in the area A and the area B is also 0, and the remaining sub-images are put into the isolated point set. The connected images are shown in fig. 5, and for clarity, the image content of each sub-image is removed in fig. 5, and only the connection state of the initial area is shown.
S12b, traversing each solitary point sub-image in the solitary point set, when a sub-image belonging to a certain initial region exists in the adjacent sub-images of the traversed solitary point sub-images, moving the solitary point sub-images from the solitary point set into the corresponding initial region, and setting the connection confidence coefficient between the solitary point sub-images and the connected sub-images to be a value which is larger than 0 and very small.
Following the above example, the set of initial regions A is {0,1,2,3,4,5,10,11,16,17}, the set of initial regions B is {6,12}, and the set of orphans is {7,8,9,13,14,15 }. Traversing each isolated point sub-image in the isolated point set, for example, the sub-image 7, wherein the adjacent sub-images of the sub-image 7 refer to the sub-images that are immediately adjacent to each other on the upper, lower, left and right sides of the sub-image 7, if the system detects that the adjacent sub-image 1 of the sub-image 7 belongs to the initial area a, moving the sub-image 7 from the isolated point set into the initial area a, and setting the connection confidence between the sub-image 7 and the sub-image 1 to a value that is greater than 0 and very small, so that the initial area a set becomes {0,1,2,3,4,5,7,10,11,16,17}, the initial area B set is {6,12}, and the isolated point set is {8,9,13,14,15 }.
S12c, detecting whether the isolated point set is empty, returning to the step S12b when the isolated point set is not empty, and determining the attribution of the next sub-image in the isolated point set; when the set of the orphans is empty, the operation of step S12 is completed, all the sub-images are connected into at least one region, and the process proceeds to step S13.
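For illustration, the sketch below shows one possible realization of steps S12a to S12c. The `confidence` and `neighbors` structures and the EPS placeholder for the "greater than 0 and very small" value are assumptions introduced only for this example.

```python
from collections import deque

EPS = 1e-6  # placeholder for the "greater than 0 and very small" confidence (assumption)

def build_regions(sub_ids, confidence, neighbors):
    """Step S12: group sub-images into regions using the connection confidences.
    confidence[(i, j)] -> connection confidence between adjacent sub-images i and j
    neighbors[i]       -> sub-images immediately above/below/left/right of i"""
    conf = lambda a, b: confidence.get((a, b), confidence.get((b, a), 0))
    # S12a: breadth-first search over edges whose connection confidence is > 0
    unvisited, components = set(sub_ids), []
    while unvisited:
        seed = unvisited.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            cur = queue.popleft()
            for nb in neighbors[cur]:
                if nb in unvisited and conf(cur, nb) > 0:
                    unvisited.discard(nb)
                    comp.add(nb)
                    queue.append(nb)
        components.append(comp)
    isolated = {next(iter(c)) for c in components if len(c) == 1}
    regions = [c for c in components if len(c) > 1]
    # S12b/S12c: attach isolated sub-images to a region containing one of their neighbors
    while isolated:
        progressed = False
        for iso in sorted(isolated):
            for region in regions:
                linked = [nb for nb in neighbors[iso] if nb in region]
                if linked:
                    region.add(iso)
                    isolated.discard(iso)
                    confidence[(iso, linked[0])] = EPS
                    progressed = True
                    break
        if not progressed:  # no isolated sub-image touches any region yet; avoid looping forever
            break
    return regions, confidence
```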
S13, connecting the at least one region into a single connected region.
This step is divided into two cases. In the first case, where there is only one region, that region is itself the connected region and no additional operation is needed.
For the second case, i.e. the case that the number of the regions is greater than 1, as shown in fig. 6, in an alternative embodiment, the step S13 may include:
s13a, selecting an area as a connected area.
In this embodiment, the system may randomly select one area as a connected area, or select one area as a connected area and the other areas as unconnected areas according to a user instruction. Taking fig. 7 as an example, assuming that the sub-images are connected into two areas, namely area a and area B, via step S12, the system may select area a as a connected area, and correspondingly, area B as an unconnected area.
And S13b, traversing each sub-image of each unconnected area, if the sub-images belonging to the connected areas exist in the adjacent sub-images of the traversed unconnected areas, setting the unconnected areas including the sub-images as the connected areas, and setting the connection confidence coefficient between the sub-images of the unconnected areas and the sub-images of the connected areas to be a small value which is larger than zero.
Still referring to fig. 7, if the adjacent sub-images 0 and 7 of the sub-image 6 in the unconnected area B belong to the connected area a, the unconnected area B may be set as the connected area, the confidence of the connection between the sub-image 6 and the adjacent sub-images 0 and 7 is set to a small value greater than zero, and similarly, the adjacent sub-image 13 of the sub-image 12 in the unconnected area B belongs to the connected area, and the confidence of the connection between the sub-image 12 and the adjacent sub-image 13 is set to a small value greater than zero.
S13c, detecting whether an unconnected area exists, and returning to the step S13b to execute connected operation on the next unconnected area when the unconnected area still exists; when there is no unconnected area, that is, the operation of step S13 is completed, all the areas become connected areas, and the process proceeds to step S14.
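A short sketch of one way to realize steps S13a to S13c is shown below. It reuses the `confidence` and `neighbors` structures assumed in the previous sketch and is illustrative only; the choice of the first region as the initial connected region is an assumption (the method allows any choice).

```python
def connect_regions(regions, confidence, neighbors, eps=1e-6):
    """Step S13: merge all regions into a single connected region."""
    connected = set(regions[0])
    pending = [set(r) for r in regions[1:]]
    while pending:
        merged_any = False
        for region in list(pending):
            # look for a sub-image in this region whose spatial neighbor is already connected
            link = next(((s, nb) for s in region for nb in neighbors[s] if nb in connected), None)
            if link is not None:
                confidence[link] = eps   # small positive confidence across the new seam
                connected |= region
                pending.remove(region)
                merged_any = True
        if not merged_any:
            break  # remaining regions have no neighbor in the connected region
    return connected, confidence
```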
S14, selecting one sub-image in the connected region as the root sub-image, and generating at least one path starting from the root sub-image by using a breadth-first, connection-confidence-first search.
In this embodiment, the system may randomly select one sub-image as the root sub-image, or select one sub-image as the root sub-image according to a user instruction. Still following the above example, as shown in fig. 8, the sub-image 0 is selected as the root sub-image, and 4 paths starting from sub-image 0 are generated by the breadth-first, connection-confidence-first search, where the 4 paths cover all the sub-images: [0, 6, 7, 8, 9], [0, 6, 12, 13], [0, 1, 2, 3, 4, 10, 16, 15, 14], and [0, 1, 2, 3, 4, 5, 11, 17].
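The patent does not spell out the search in pseudocode; the sketch below gives one possible interpretation of the breadth-first, connection-confidence-first search used to produce paths like the four example paths above. Here the breadth-first expansion visits unvisited neighbors in descending order of connection confidence, and the root-to-leaf paths of the resulting spanning tree are used as the paths; the helper names and the confidence lookup are assumptions.

```python
def build_paths(root, neighbors, confidence):
    """Step S14: grow a breadth-first spanning tree from the root sub-image, expanding
    neighbors with higher connection confidence first, then return its root-to-leaf paths."""
    conf = lambda a, b: confidence.get((a, b), confidence.get((b, a), 0))
    parent, frontier = {root: None}, [root]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in sorted((n for n in neighbors[node] if n not in parent),
                             key=lambda n: conf(node, n), reverse=True):
                parent[nb] = node
                nxt.append(nb)
        frontier = nxt
    children = {n: [] for n in parent}
    for n, p in parent.items():
        if p is not None:
            children[p].append(n)
    paths, stack = [], [[root]]
    while stack:
        path = stack.pop()
        if children[path[-1]]:
            stack.extend(path + [c] for c in children[path[-1]])
        else:
            paths.append(path)
    return paths  # each path starts at the root; together they cover all reachable sub-images
```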
And S15, following the path order and starting from the root sub-image, sequentially adjusting each subsequent sub-image according to the adjustment parameters between it and the previous sub-image in each path, to obtain the globally adjusted stitched image.
Still taking fig. 8 as an example, and taking the path [0, 6, 12, 13] as an example, the sub-images in the path are adjusted in sequence from root sub-image 0 according to the path order: sub-image 6 is adjusted according to the adjustment parameter between sub-images 0 and 6, sub-image 12 according to the adjustment parameter between sub-images 6 and 12, and sub-image 13 according to the adjustment parameter between sub-images 12 and 13, which completes the adjustment of the sub-images on this path. The sub-images of the other paths are then adjusted in the same way, completing the adjustment of the whole stitched image.
In this embodiment, the alignment condition between the sub-images can be determined by alignment detection, so as to determine the adjustment parameter. Preferably, the alignment detection may be performed based on at least one of the Hough transform, edge detection, semantic segmentation, and manual labeling. The Hough transform is a feature detection algorithm that detects objects of a specific shape through a voting mechanism; straight-line detection with the Hough transform can be used to determine alignment references such as lane lines, haulage cables, support columns, or building edges. The Hough transform also supports the detection of other known shapes, such as circles and ellipses. Several variants, such as the standard Hough transform (SHT) and the progressive probabilistic Hough transform (PPHT), are typically supported in OpenCV. In other alternative embodiments, the alignment of the detected edges or segments may be identified automatically by image processing techniques such as edge detection and semantic segmentation.
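As a concrete illustration of the line-detection option, the sketch below combines Canny edge detection with OpenCV's probabilistic Hough transform; the thresholds and the function name are illustrative assumptions, and the detected segments would then be compared across the seam to judge alignment.

```python
import cv2
import numpy as np

def detect_alignment_lines(gray):
    """Detect straight structures (lane lines, cables, building edges) that can be
    compared across a seam, using edge detection plus the probabilistic Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2) per line
```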
After the alignment detection, the form of the sub-image to be adjusted can be further determined according to the actual error and/or deformation condition in the image. Generally, for displaced misalignment, it is generally necessary to move and/or rotate the sub-images; for the inconsistent proportion, the sub-picture is generally required to be scaled; for other image distortion (typically common distortion conditions such as trapezoid, barrel, pincushion, etc.), perspective change processing is generally adopted for correction (deformation opposite to distortion); these adjustment processes will generally be referred to collectively as a bending process. In the embodiment of the present invention, in addition to determining the corresponding processing mode of each sub-image, the corresponding adjustment parameter needs to be further calculated and determined. For example, when the lane lines are not aligned left and right, a movement parameter of at least one sub-image in the panoramic image needs to be further determined (moving left and right in reverse according to the degree of misalignment), and the specific movement parameter may be automatically calculated after the lane lines are detected by using general methods such as hough transform, or may be identified and provided according to experience (for example, according to historical data, according to an artificial intelligence model after training, or according to artificial experience). In addition to left-right movement, other movements, rotations, zooms, or more general perspective changes may be obtained in a similar manner; further, if the misalignment frequently occurs and can be presumed to be caused by camera/camera setting (for example, caused by deviation of the sub-camera setting position and/or angle in the panoramic equipment), the panoramic image can be adjusted while feeding back equipment setting problems and equipment adjustment parameters, and the problematic camera position and/or angle can be adjusted, so as to strive to correct the misalignment situation from the source.
Wherein the adjustment to each sub-image is calculated in a uniform panorama coordinate system. Preferably, a reverse warping operation (including but not limited to shifting, rotating, scaling or perspective changing) is performed according to the degree of misalignment, a homography matrix required for warping adjustment corresponding to each sub-image, coordinates of the upper left corner and the lower right corner of the warped sub-image in the panorama coordinate system, and a mask array having the same size as the warped sub-image are calculated. The homography matrix is used for marking the relative position (relevance) of the sub-image pixels, and bending operations such as corresponding movement, rotation, scaling or perspective change of the sub-image are realized through matrix transformation; the coordinates of the upper left corner and the lower right corner are used for calibrating the absolute position of the sub-image in the panoramic image. The values in the mask array determine which pixels of the warped image are to be tiled into the panorama, while discarding those pixels that are not tiled. In some cases, the size of the sub-image after the warping operation may be transformed to a size that is not consistent with the space size given in the panorama, and thus a further cutting operation may be required for the sub-image. When the cutting operation is needed, the bent sub-image and the corresponding mask array are cut according to the cutting parameters, and the coordinates of the upper left corner and the lower right corner of the cut sub-image are modified according to the cutting parameters, so that the position of the content of the cut sub-image on the panorama is ensured to be unchanged.
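A minimal sketch of the warping bookkeeping described above is given below, assuming OpenCV. The translation matrix T that shifts the warped patch into its own buffer, and the function name, are illustrative assumptions, and the optional cutting (cropping) step is omitted.

```python
import cv2
import numpy as np

def warp_into_panorama(sub_img, H):
    """Warp one sub-image with its homography, compute its top-left / bottom-right corner
    coordinates in the panorama coordinate system, and build the mask used for tiling."""
    h, w = sub_img.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    warped_corners = cv2.perspectiveTransform(corners, H)
    x0, y0 = np.floor(warped_corners.min(axis=0).ravel()).astype(int)
    x1, y1 = np.ceil(warped_corners.max(axis=0).ravel()).astype(int)
    # shift so the warped patch starts at (0, 0) in its own buffer
    T = np.array([[1, 0, -x0], [0, 1, -y0], [0, 0, 1]], dtype=np.float64)
    size = (int(x1 - x0), int(y1 - y0))
    warped = cv2.warpPerspective(sub_img, T @ H, size)
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), T @ H, size)
    return warped, mask, (int(x0), int(y0)), (int(x1), int(y1))
```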
In the image stitching method of the embodiment of the present invention, the plurality of sub-images are connected into at least one region, the at least one region is connected into a single connected region, at least one path starting from the root sub-image of the connected region is generated, and the sub-images on each path are adjusted in path order. The adjustment between every two adjacent sub-images is thus computed from the connection confidence between them without accumulating errors, avoiding the situation in which one sub-image is adjusted well while another sub-image ends up with a larger deviation.
Fig. 9 illustrates an image stitching method according to another embodiment of the present invention, which may include the steps of:
and S21, initially splicing the plurality of sub-images by using the initial splicing model to obtain an initial spliced image, wherein the specific content can refer to the corresponding description of the step S11.
And S22, acquiring the adjusting parameters between each pair of adjacent sub-images.
As an alternative implementation manner of the embodiment of the present invention, as shown in fig. 10, the step S22 may include:
s22a, determining whether there are enough mutually matched feature points between a pair of adjacent sub-images, and when there are enough mutually matched feature points, performing steps S22b to S22 d; when there are not enough mutually matching feature points, step S22e is executed.
S22b, calculating a homography matrix between adjacent sub-images according to the mutually matched feature points.
And S22c, acquiring an adjustment parameter between adjacent sub-images according to the homography matrix.
The specific contents of the above steps S22b and S22c can be understood with reference to the corresponding description in step S15, and are not described herein again.
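As an illustration of steps S22b and S22c, the sketch below estimates the homography with OpenCV's RANSAC-based solver. Reading the translation component of the homography as the adjustment parameter is an assumption made for this example; the patent only states that the adjustment parameters are derived from the homography.

```python
import cv2
import numpy as np

def homography_adjustment(pts_prev, pts_next, min_matches=4):
    """Steps S22b/S22c: estimate the homography mapping the next sub-image onto the previous
    one from matched feature points, and read a translation-style adjustment from it."""
    if len(pts_prev) < min_matches or len(pts_next) < min_matches:
        return None, None
    H, inlier_mask = cv2.findHomography(pts_next, pts_prev, cv2.RANSAC, 3.0)
    if H is None:
        return None, None
    dx, dy = float(H[0, 2]), float(H[1, 2])  # one simple adjustment parameter: the translation part
    return H, (dx, dy)
```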
S22d, optimizing the adjusting parameters by using a fusion reduction degree matching search mode which does not need to depend on the feature points.
Step S22d is optional: even when there are enough matched feature points between adjacent sub-images, details in the overlapping regions may still be misaligned, and step S22d can further improve the adjustment accuracy.
S22e, acquiring adjustment parameters between adjacent sub-images by using a fusion reduction degree matching search mode which does not depend on feature points.
When there are not enough mutually matched feature points between a pair of adjacent sub-images, it is difficult to realize high-precision adjustment using the matched feature points, and therefore, adjustment can be realized only using a fusion reduction degree matching search mode that does not depend on feature points.
Further, since there are not enough mutually matched feature points between the pair of adjacent sub-images, the connection confidence between the pair of adjacent sub-images needs to be set to a small value greater than 0.
As an alternative implementation manner of the embodiment of the present invention, as shown in fig. 11, the fused reduction degree matching search manner in the above steps S22d and S22e may include the following steps:
and S221, acquiring a coincidence part, namely acquiring the parts of the adjacent sub-images, which respectively appear in the mutual coincidence area, according to the coordinate ranges of the adjacent sub-images in a unified panorama coordinate system, wherein the part of the adjacent sub-images, which appears in the mutual coincidence area, is a first coincidence part, and the part of the adjacent sub-images, which appears in the mutual coincidence area, is a second coincidence part.
For example, for adjacent sub-images A and B, the portions of A and B that fall inside their mutual overlap area are obtained from the coordinate ranges of A and B in the unified panorama coordinate system: the portion of sub-image A in the overlap area is denoted A_p, i.e., the first overlapping portion, and the portion of sub-image B in the overlap area is denoted B_p, i.e., the second overlapping portion.
S222, a fusion step, namely fusing the first overlapping portion A_p and the second overlapping portion B_p to obtain a fused portion F_AB.
In this embodiment, the fusion operation may be performed by using a laplacian pyramid fusion method or the like.
And S223, a gradient difference acquisition step, namely performing gradient extraction on the first overlapping portion A_p, the second overlapping portion B_p, and the fused portion F_AB to obtain the gradient difference.
Here, the gradient of the first overlapping portion A_p is a first gradient A_p_grad, the gradient of the second overlapping portion B_p is a second gradient B_p_grad, and the gradient of the fused portion F_AB is a third gradient F_AB_grad; the gradient difference d_diff equals the absolute value of the difference between the third gradient F_AB_grad and the first gradient A_p_grad plus the absolute value of the difference between the third gradient F_AB_grad and the second gradient B_p_grad, as follows:
d_diff=abs(F_AB_grad-A_p_grad)+abs(F_AB_grad-B_p_grad)
After the gradient difference is obtained, whether the adjacent sub-images A and B are fully aligned can be judged from it: if A and B are fully aligned, their portions in the overlap area are identical and the gradient difference d_diff is 0; if they are not fully aligned, d_diff > 0, and the worse the alignment, the larger d_diff becomes.
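The following sketch computes d_diff for one pair of overlapping patches and their fusion. The patent does not fix a specific gradient operator, so a Sobel gradient magnitude and a sum over the overlap are assumed here purely for illustration.

```python
import cv2
import numpy as np

def gradient_difference(a_p, b_p, f_ab):
    """Step S223: d_diff = |grad(F_AB) - grad(A_p)| + |grad(F_AB) - grad(B_p)|,
    accumulated over the overlap area."""
    def grad(img):
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        g = g.astype(np.float32)
        return cv2.magnitude(cv2.Sobel(g, cv2.CV_32F, 1, 0), cv2.Sobel(g, cv2.CV_32F, 0, 1))
    f_grad, a_grad, b_grad = grad(f_ab), grad(a_p), grad(b_p)
    return float(np.sum(np.abs(f_grad - a_grad)) + np.sum(np.abs(f_grad - b_grad)))
```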
S224, a searching step, namely fixing the coordinates of the preset position of one of the adjacent sub-images in the panoramic image coordinate system, and searching the coordinates of the corresponding position of the other sub-image in the panoramic image coordinate system within a preset range.
For example, the coordinates of the upper-left corner of sub-image A in the panorama coordinate system may be fixed while a range search is performed on the coordinates of the upper-left corner of sub-image B in the panorama coordinate system. The preset position can be chosen arbitrarily by a person skilled in the art and is not limited to the upper-left corner of the sub-image; other positions are possible. Assuming the coordinates of the upper-left corner of sub-image B are (x, y), the position of sub-image B is varied around these coordinates within the range x-Sx to x+Sx and y-Sy to y+Sy. Since the position of sub-image B in the panorama coordinate system moves as a whole, the overlap area between sub-images B and A changes accordingly, and each search position yields another pair of A_p and B_p; that is, steps S221, S222, and S223 are executed once for each search position. For each search position, the movement distance [dx, dy] and the gradient difference d_diff are recorded, where dx is the offset from the initial coordinate x along the x-axis within the search range and dy is the offset from the initial coordinate y along the y-axis within the search range.
And S225, taking the movement distance [dx, dy] corresponding to the minimum gradient difference d_diff found by the search as the adjustment parameter between the adjacent sub-images.
In the embodiment of the present invention, when the above-mentioned fusion reduction degree matching search method is used in step S22d, since the adjustment has been performed by using enough matching feature points before step S22d, step S22d is only to perform further optimization on the adjustment parameters, so that the above-mentioned search step can be performed in a small range to improve the image stitching efficiency. When the above-described fused reduction degree matching search method is used in step S22e, since the initial stitching is performed only using the initial stitching model before, the search step needs to be performed in a wide range. By adopting the above fusion reduction degree matching search mode, even if there are not enough matching feature points, the longitudinal and transverse translation parameters between a pair of adjacent sub-images can be obtained, or after the initial adjustment based on the matching feature points, the accuracy of the adjustment alignment is further improved.
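Building on the gradient_difference sketch above, the search of steps S224 and S225 can be illustrated as a plain grid search. Here overlap_fn and fuse_fn are hypothetical helpers standing in for steps S221 and S222, and the exhaustive double loop is only a sketch; the actual search range and strategy (small range for S22d, wide range for S22e) are implementation choices.

```python
def search_translation(sub_a, sub_b, overlap_fn, fuse_fn, sx, sy):
    """Steps S224/S225: fix sub-image A, slide sub-image B within +/-sx, +/-sy around its
    initial panorama position, and keep the shift with the smallest gradient difference."""
    best_shift, best_diff = (0, 0), float("inf")
    for dy in range(-sy, sy + 1):
        for dx in range(-sx, sx + 1):
            overlap = overlap_fn(sub_a, sub_b, dx, dy)    # S221: A_p, B_p for this shift
            if overlap is None:
                continue                                   # no usable overlap at this offset
            a_p, b_p = overlap
            f_ab = fuse_fn(a_p, b_p)                       # S222: e.g. Laplacian-pyramid fusion
            d_diff = gradient_difference(a_p, b_p, f_ab)   # S223
            if d_diff < best_diff:
                best_diff, best_shift = d_diff, (dx, dy)
    return best_shift  # adjustment parameter [dx, dy] between the adjacent sub-images (S225)
```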
And S23, performing primary adjustment on the initial spliced image according to the adjustment parameters.
The difference from the embodiment shown in fig. 1 to 8 is that, in this embodiment, the adjustment parameter is obtained in step S22, and the initial stitched image is initially adjusted, so that the efficiency of subsequent global adjustment can be improved.
And S24, connecting the sub-images into at least one area according to the connection confidence coefficient between each pair of adjacent sub-images, wherein the specific content can refer to the corresponding description of the step S12.
And S25, connecting at least one area into a connected area, wherein the specific content can refer to the corresponding description of the step S13.
And S26, selecting a sub-image in the connected region as a root sub-image, and generating at least one path from the root sub-image by using a breadth-first-connection confidence-first searching mode, wherein the specific content can refer to the corresponding description in the step S14.
And S27, according to the path sequence, sequentially adjusting the next sub-image from the root sub-image according to the adjustment parameter between the next sub-image and the previous sub-image in each path to obtain a global adjustment splicing image, wherein the specific content can refer to the corresponding description in the step S15.
In this embodiment, the initial stitched image is initially adjusted by obtaining the adjustment parameter between each pair of adjacent sub-images, so that the efficiency of subsequent global adjustment can be improved. Moreover, in an optional implementation manner of this embodiment, a fused reduction matching search manner is adopted for performing primary adjustment, and for a case where there are sufficient mutually matched feature points between adjacent sub-images, the fused reduction matching search manner is adopted, so that the alignment accuracy of the primary adjustment based on the matched feature points can be improved; and for the condition that the adjacent sub-images have insufficient mutually matched feature points, longitudinal and transverse translation parameters between the adjacent pair of sub-images can be acquired, so that relatively accurate primary adjustment is realized.
It should be noted that, in the embodiment of the present invention, the primary adjustment and the global adjustment each include the warping (bending) operation and/or the cutting operation described in step S15 above. After these adjustments are completed, all the sub-images are laid out according to the unified panorama coordinate system and the pixel deviations of the whole image (especially at the sub-image edges) are corrected, so that the sub-images are fused together with the smallest difference to construct the final panoramic stitched image.
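For illustration, the sketch below tiles the warped patches into the panorama canvas using the masks and corner coordinates produced by the warp_into_panorama sketch above. It assumes the canvas is large enough and that all coordinates are non-negative, which a real implementation would have to guarantee by offsetting into the panorama coordinate system.

```python
import numpy as np

def compose_panorama(patches, pano_h, pano_w):
    """Tile warped sub-images into the panorama; `patches` holds (warped, mask, (x0, y0))."""
    pano = np.zeros((pano_h, pano_w, 3), np.uint8)
    for warped, mask, (x0, y0) in patches:
        h, w = mask.shape[:2]
        roi = pano[y0:y0 + h, x0:x0 + w]
        keep = mask > 0               # only pixels marked by the mask are tiled into the panorama
        roi[keep] = warped[keep]
    return pano
```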
In an optional implementation manner of the embodiment of the present invention, the image stitching method according to the embodiment may further include:
and S28, judging whether the manual adjustment parameters between the adjacent sub-images are acquired, executing the step S29 when the manual adjustment parameters are acquired, and indicating that the operator is satisfied with the image splicing result when the manual adjustment parameters are not acquired, thereby finishing the flow of the image splicing method.
After the step S27, if the operator is not satisfied with the image stitching result, the globally adjusted stitched image may be manually adjusted, and the operator may manually adjust any one or more pairs of adjacent sub-images.
And S29, adjusting the globally adjusted spliced image according to the manual adjustment parameters, setting the connection confidence coefficient between each pair of adjacent sub-images after manual adjustment to be a value larger than the connection confidence coefficients of all automatic adjustments, and returning to the step S24.
When the manual adjustment parameters are acquired, the system adjusts the globally adjusted stitched image according to the manual adjustment parameters and sets the connection confidence between each pair of manually adjusted adjacent sub-images to a value larger than all the automatically determined connection confidences; that is, the operator's manual adjustment is preferentially integrated into the image stitching method of the embodiment of the invention. In this embodiment, the manual adjustment is a local adjustment of individual sub-images in the globally adjusted stitched image, and after such a local adjustment it is easy for one sub-image to be adjusted well while another sub-image deviates more. Therefore, in this embodiment, the system performs global adjustment again according to the arrangement and connection confidences of the sub-images after manual adjustment, and preferentially places the manually adjusted adjacent sub-image pairs, which have a high connection confidence, into the same path, so that the effect of each manually adjusted pair of adjacent sub-images is preferentially integrated into the global adjustment. This achieves efficient human-machine cooperation, improves the stitching result, and allows optimization to be prioritized according to the operator's intent.
In an optional implementation manner of the embodiment of the present invention, the image stitching method according to the embodiment may further include:
when the screen content of the sub-image is changed, the process returns to the above step S22 or step S24 to obtain the global adjustment stitched image again.
Returning to the example of fig. 3, since the river surface provides no feature points, the isolated point set {7,8,9,13,14,15} is generated; the connection confidence between the sub-images in the isolated point set is 0, and the connection confidence between these sub-images and the other sub-images is also 0. However, when a ship moves on the river surface, for example into the overlap region between two sub-images of the isolated point set, or into the overlap region between a sub-image of the isolated point set and another sub-image, matching feature points become available between the two sub-images and the connection confidence between them increases. The system can then return to step S22 to perform the primary adjustment and global adjustment steps again, so as to improve the accuracy of image stitching; alternatively, the system can return to step S24 to perform only the global adjustment step, which can also improve the accuracy of image stitching.
Correspondingly, as shown in fig. 12, an embodiment of the present invention further provides an image stitching apparatus, where the image stitching apparatus may include:
the initial stitching unit 31 is configured to perform initial stitching on the multiple sub-images by using an initial stitching model to obtain an initial stitched image, and the specific content may refer to the description of step S11.
The region unit 33 is configured to connect the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images, where the region includes at least two sub-images, and the specific content may refer to the description of step S12.
The communicating unit 34 is configured to communicate the at least one area as a communicated area, and the specific content may refer to the description of step S13.
The path unit 35 is configured to select one sub-image in the connected region as a root sub-image, and generate at least one path from the root sub-image by using a breadth-first-connection confidence-first search method, where the at least one path covers all the sub-images, and the specific content may refer to the description in step S14.
The global adjusting unit 36 is configured to adjust, according to the path sequence, the subsequent sub-image according to the adjustment parameter between the subsequent sub-image and the previous sub-image in each path, starting from the root sub-image, so as to obtain a global adjusted stitched image, where specific content may refer to the description in step S15.
In the image stitching apparatus of the embodiment of the present invention, the plurality of sub-images are connected into at least one region, the at least one region is connected into a single connected region, at least one path starting from a root sub-image of the connected region is generated, and the sub-images on each path are adjusted in path order. The overall adjustment between adjacent sub-images is thus computed from the connection confidence between each pair of adjacent sub-images without accumulating errors, avoiding the situation in which one sub-image is adjusted well while another sub-image ends up with a larger deviation.
As an optional implementation manner of the embodiment of the present invention, the image stitching apparatus may further include:
the primary adjusting unit 32 is configured to obtain an adjustment parameter between each pair of adjacent sub-images after the initial stitching unit 31 obtains the initial stitched image and before the region unit 33 connects the sub-images into at least one region, and to perform primary adjustment on the initial stitched image according to the adjustment parameters, which may refer to the descriptions of steps S22 and S23.
In this embodiment, the primary adjusting unit 32 obtains an adjustment parameter between each pair of adjacent sub-images and performs primary adjustment on the initial stitched image accordingly, which improves the efficiency of the subsequent global adjustment. Moreover, in an optional implementation of this embodiment, the primary adjusting unit 32 performs the primary adjustment by using a fused reduction degree matching search. When adjacent sub-images share enough mutually matched feature points, this search improves the alignment accuracy of the primary adjustment that is based on the matched feature points; when adjacent sub-images do not share enough mutually matched feature points, it can still obtain the vertical and horizontal translation parameters between the pair of adjacent sub-images, so that a relatively accurate primary adjustment is achieved.
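The fused reduction degree matching search is described step by step in the later claims. For illustration only, a minimal Python/NumPy sketch of the idea is given below: the two overlapping parts are fused, the gradient of the fused part is compared with the gradients of the two original parts, and the translation with the smallest gradient difference is kept as the adjustment parameter. The averaging fusion, the gradient operator, and the helper extract_overlap are assumptions of this sketch, not requirements of the embodiment.

    import numpy as np

    def gradient(img):
        """Simple gradient magnitude used to measure how well the fused overlap
        preserves the structure of each original overlap."""
        gy, gx = np.gradient(img.astype(np.float32))
        return np.abs(gx) + np.abs(gy)

    def fusion_gradient_difference(overlap_a, overlap_b):
        """Gradient difference between the fused overlap and the two originals:
        sum of |grad(fused) - grad(a)| + |grad(fused) - grad(b)| over the overlap."""
        fused = 0.5 * (overlap_a.astype(np.float32) + overlap_b.astype(np.float32))
        g_fused, g_a, g_b = gradient(fused), gradient(overlap_a), gradient(overlap_b)
        return float(np.sum(np.abs(g_fused - g_a) + np.abs(g_fused - g_b)))

    def search_translation(img_a, img_b, extract_overlap, search_range=8):
        """Fix img_a, slide img_b by (dx, dy) within a preset range, and return the
        shift whose fused overlap yields the smallest gradient difference.
        extract_overlap(img_a, img_b, dx, dy) is a placeholder that crops the parts
        of both sub-images falling in their common region for that shift."""
        best, best_score = (0, 0), float("inf")
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                overlap_a, overlap_b = extract_overlap(img_a, img_b, dx, dy)
                if overlap_a.size == 0:
                    continue
                score = fusion_gradient_difference(overlap_a, overlap_b)
                if score < best_score:
                    best_score, best = score, (dx, dy)
        return best  # used as the horizontal/vertical adjustment parameter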
As an optional implementation manner of the embodiment of the present invention, the region unit 33 is specifically configured to:
connecting adjacent sub-images whose connection confidence is greater than 0 to form at least one initial region, and taking the remaining sub-images as an isolated point set, wherein the connection confidence between the sub-images in the isolated point set is 0 and the connection confidence between the sub-images in the isolated point set and each sub-image in the initial regions is also 0;
traversing each isolated-point sub-image in the isolated point set; when one of the adjacent sub-images of a traversed isolated-point sub-image belongs to an initial region, moving the traversed isolated-point sub-image from the isolated point set into that initial region and setting the connection confidence between the isolated-point sub-image and the sub-image to which it is connected to a small value greater than 0, until the isolated point set is empty, so as to form the at least one region.
Further details may refer to the description of step S12.
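For illustration, the region-forming rule above can be sketched in Python as follows; the neighbors adjacency map and the confidence dictionary are hypothetical data structures, and eps stands in for the "small value greater than 0".

    def form_regions(all_images, neighbors, confidence, eps=1e-6):
        """Group sub-images into regions: adjacent sub-images with confidence > 0 are
        connected first; remaining isolated sub-images are then attached to a
        neighboring region with a tiny positive confidence."""
        parent = {img: img for img in all_images}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        # 1. connect adjacent sub-images whose connection confidence is above 0
        for img in all_images:
            for n in neighbors[img]:
                if confidence.get((img, n), 0.0) > 0:
                    union(img, n)

        # 2. sub-images still alone form the isolated point set
        in_region = {img for img in all_images
                     if any(find(img) == find(other) for other in all_images if other != img)}
        isolated = [img for img in all_images if img not in in_region]

        # 3. repeatedly attach isolated sub-images that border an existing region
        while isolated:
            progress = False
            for img in list(isolated):
                linked = next((n for n in neighbors[img] if n in in_region), None)
                if linked is not None:
                    confidence[(img, linked)] = confidence[(linked, img)] = eps
                    union(img, linked)
                    in_region.add(img)
                    isolated.remove(img)
                    progress = True
            if not progress:
                break  # no remaining isolated sub-image borders a region

        # group the regioned sub-images by their union-find root
        groups = {}
        for img in in_region:
            groups.setdefault(find(img), []).append(img)
        return list(groups.values())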
As an optional implementation manner of the embodiment of the present invention, the communicating unit 34 is specifically configured to:
when the number of regions is 1, that region is the connected region;
and when the number of regions is greater than 1, selecting one region as the connected region and traversing each sub-image of every unconnected region; when one of the adjacent sub-images of a traversed sub-image belongs to the connected region, setting the unconnected region comprising the traversed sub-image as part of the connected region and setting the connection confidence between the sub-image of the unconnected region and the sub-image of the connected region to a small value greater than zero, until all the regions belong to the connected region.
Further details may refer to the description of step S13.
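Under the same hypothetical data structures, merging the regions into one connected region can be sketched as follows; eps again denotes the small positive confidence recorded on the linking edge, an assumption of this sketch.

    def connect_regions(regions, neighbors, confidence, eps=1e-6):
        """Merge all regions into a single connected region: pick one region as the
        connected region, then repeatedly absorb any other region that has a
        sub-image adjacent to it."""
        if not regions:
            return set()
        connected = set(regions[0])
        remaining = [set(r) for r in regions[1:]]
        while remaining:
            merged_any = False
            for region in list(remaining):
                link = next(((img, n) for img in region
                             for n in neighbors[img] if n in connected), None)
                if link is not None:
                    img, n = link
                    confidence[(img, n)] = confidence[(n, img)] = eps
                    connected |= region
                    remaining.remove(region)
                    merged_any = True
            if not merged_any:
                break  # some regions share no adjacency with the connected region
        return connected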
As an optional implementation manner of the embodiment of the present invention, the image stitching apparatus may further include:
the manual adjustment unit 37 is configured to, when a manual adjustment parameter between adjacent sub-images is acquired, adjust the globally adjusted stitched image according to the manual adjustment parameter, set the connection confidence between each pair of manually adjusted adjacent sub-images to a value greater than all connection confidences obtained by automatic adjustment, and cause the region unit 33, the communicating unit 34, the path unit 35, and the global adjusting unit 36 to perform the corresponding steps again.
In this embodiment, the image stitching apparatus performs global adjustment again according to the arrangement and connection confidences obtained after manual adjustment, and preferentially places manually adjusted pairs of adjacent sub-images, whose connection confidence is now the highest, on the same path. The effect of the manual adjustment between two adjacent sub-images is therefore integrated into the global adjustment first, which achieves efficient human-machine cooperation, improves the image stitching result, and lets the optimization follow the operator's intent.
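As a small illustration, the confidence boost applied after manual adjustment can be sketched as follows; the value auto_max + 1.0 is an arbitrary choice of "a value greater than all automatically obtained connection confidences".

    def apply_manual_adjustment(manual_pairs, confidence):
        """After a user manually aligns some adjacent sub-image pairs, raise their
        connection confidence above every automatically derived value, so the next
        round of path generation keeps those pairs on the same path."""
        auto_max = max(confidence.values(), default=0.0)
        boosted = auto_max + 1.0   # any value strictly above all automatic confidences
        for (i, j) in manual_pairs:
            confidence[(i, j)] = confidence[(j, i)] = boosted
        return confidence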
As an optional implementation manner of the embodiment of the present invention, the image stitching apparatus may further include:
a secondary adjustment unit, configured to, when the picture content of the sub-images changes, cause the region unit 33, the communicating unit 34, the path unit 35, and the global adjusting unit 36 to perform the corresponding steps again; or
the secondary adjustment unit is configured to, when the picture content of the sub-images changes, cause the primary adjusting unit 32, the region unit 33, the communicating unit 34, the path unit 35, and the global adjusting unit 36 to perform the corresponding steps again.
In this embodiment, when the picture content of the sub-images changes, the connection confidence between some pairs of adjacent sub-images may increase, and the image stitching apparatus may perform the primary-adjustment and global-adjustment steps again to improve the accuracy of image stitching; alternatively, it may perform only the global-adjustment step, which also improves the accuracy of image stitching.
Further details of the image stitching device may be understood by referring to the corresponding related descriptions and effects in the embodiments shown in fig. 1 to fig. 11, which are not repeated herein.
As shown in fig. 13, an embodiment of the present invention further provides an electronic device, which may include a processor 41 and a memory 42. The processor 41 and the memory 42 may be communicatively connected to each other through a bus or in another manner; in fig. 13, a bus connection is taken as an example. The memory 42 stores computer instructions 43, and the processor 41 executes the computer instructions 43 to perform the method described in the foregoing method embodiments.
The processor 41 may be a central processing unit (CPU). The processor 41 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 42, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions and modules corresponding to the image stitching method in the embodiments of the present disclosure. By running the non-transitory software programs, instructions, and modules stored in the memory 42, the processor 41 executes various functional applications and performs data processing, thereby implementing the image stitching method of the above method embodiments.
The memory 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 41, and the like. Further, the memory 42 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 42 may optionally include memory located remotely from processor 41, which may be connected to processor 41 via a network (e.g., via communication interface 44). Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Of course, those skilled in the art can understand that the electronic device in fig. 13 is a general description of a device with certain data processing capability; in the embodiment of the present invention, a specific implementation of the electronic device in fig. 13 is an image processing device, and the image processing device is preferably a panorama acquisition system.
An embodiment of the present invention further provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are used to enable the computer to execute the steps in the above method embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding descriptions in the foregoing method and/or apparatus embodiments, and are not described herein again.
While the subject matter described herein is provided in the general context of execution in conjunction with the execution of an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may also be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like, as well as distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. Such computer-readable storage media include physical volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. The computer-readable storage medium specifically includes, but is not limited to, a USB flash drive, a removable hard drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), an erasable programmable Read-Only Memory (EPROM), an electrically erasable programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, a CD-ROM, a Digital Versatile Disk (DVD), an HD-DVD, a Blue-Ray or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (21)

1. An image stitching method, comprising:
initially stitching a plurality of sub-images by using an initial stitching model to obtain an initial stitched image;
connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images, wherein each region comprises at least two sub-images;
connecting the at least one region into one connected region;
selecting one sub-image in the connected region as a root sub-image, and generating at least one path from the root sub-image by using a breadth-first, connection-confidence-first search, wherein the at least one path covers all the sub-images;
and, starting from the root sub-image and following the path order, sequentially adjusting each next sub-image in every path according to the adjustment parameter between it and the previous sub-image, so as to obtain a globally adjusted stitched image.
2. The method of claim 1, wherein initially stitching the plurality of sub-images using the initial stitching model comprises:
extracting feature points of each sub-image by using the initial stitching model;
identifying mutually matched feature points in different sub-images;
and aligning the coordinates of the mutually matched feature points according to a unified panorama coordinate system.
3. The method according to claim 1, further comprising, between the step of initially stitching the plurality of sub-images by using the initial stitching model and the step of obtaining the connection confidence between each pair of adjacent sub-images in the initially stitched image:
acquiring an adjustment parameter between each pair of adjacent sub-images;
and performing primary adjustment on the initial spliced image according to the adjustment parameters.
4. The method of claim 3, wherein obtaining the adjustment parameter between each pair of adjacent sub-images comprises:
when a pair of adjacent sub-images have enough mutually matched feature points, calculating a homography matrix between the adjacent sub-images according to the mutually matched feature points;
and acquiring the adjustment parameters between the adjacent sub-images according to the homography matrix.
5. The method of claim 4, further comprising, after obtaining the adjustment parameters between the adjacent sub-images according to the homography matrix:
and optimizing the adjustment parameters by using a fused reduction degree matching search that does not depend on feature points.
6. The method of claim 3, wherein obtaining the adjustment parameter between each pair of adjacent sub-images comprises:
when a pair of adjacent sub-images do not have enough mutually matched feature points, obtaining the adjustment parameters between the adjacent sub-images by using a fused reduction degree matching search that does not depend on feature points.
7. The method according to claim 5 or 6, wherein the fused reduction degree matching search comprises the following steps:
an overlapping-part acquiring step of acquiring, according to the coordinate ranges of the adjacent sub-images in a unified panorama coordinate system, the parts of the adjacent sub-images that respectively fall in the overlapping area of the adjacent sub-images, wherein the part of one of the adjacent sub-images falling in the overlapping area is a first overlapping part and the part of the other of the adjacent sub-images falling in the overlapping area is a second overlapping part;
a fusion step of fusing the first and second overlapping parts to obtain a fused part;
a gradient difference acquiring step of performing gradient extraction on the first overlapping part, the second overlapping part, and the fused part to obtain a gradient difference, wherein the gradient of the first overlapping part is a first gradient, the gradient of the second overlapping part is a second gradient, the gradient of the fused part is a third gradient, and the gradient difference is equal to the absolute value of the difference between the third gradient and the first gradient plus the absolute value of the difference between the third gradient and the second gradient;
a searching step of fixing the coordinates of a preset position of one of the adjacent sub-images in the panorama coordinate system, searching, within a preset range, the coordinates of the corresponding position of the other sub-image in the panorama coordinate system, performing the overlapping-part acquiring step, the fusion step and the gradient difference acquiring step for each search, and recording the moving distance and the gradient difference of each search;
and taking the moving distance corresponding to the searched minimum gradient difference as an adjustment parameter between adjacent sub-images.
8. The method of claim 4, further comprising:
and acquiring the connection confidence between the adjacent sub-images according to the mutually matched feature points between the adjacent sub-images.
9. The method of claim 6, further comprising:
setting the connection confidence between the adjacent sub-images to a small value greater than 0.
10. The method according to any one of claims 1-9, wherein said connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images comprises:
connecting adjacent sub-images whose connection confidence is greater than 0 to form at least one initial region, and taking the remaining sub-images as an isolated point set, wherein the connection confidence between the sub-images in the isolated point set is 0 and the connection confidence between the sub-images in the isolated point set and each sub-image in the initial regions is also 0;
traversing each isolated-point sub-image in the isolated point set; when one of the adjacent sub-images of a traversed isolated-point sub-image belongs to an initial region, moving the traversed isolated-point sub-image from the isolated point set into that initial region and setting the connection confidence between the isolated-point sub-image and the sub-image to which it is connected to a small value greater than 0, until the isolated point set is empty, so as to form the at least one region.
11. The method according to any one of claims 1-9, wherein said connecting the at least one region into one connected region comprises:
when the number of regions is 1, that region is the connected region;
and when the number of regions is greater than 1, selecting one region as the connected region and traversing each sub-image of every unconnected region; when one of the adjacent sub-images of a traversed sub-image belongs to the connected region, setting the unconnected region comprising the traversed sub-image as part of the connected region and setting the connection confidence between the sub-image of the unconnected region and the sub-image of the connected region to a small value greater than zero, until all the regions belong to the connected region.
12. The method according to any one of claims 1-11, further comprising, after obtaining the globally adjusted stitched image:
when a manual adjustment parameter between adjacent sub-images is acquired, adjusting the globally adjusted stitched image according to the manual adjustment parameter, setting the connection confidence between each pair of manually adjusted adjacent sub-images to a value greater than all connection confidences obtained by automatic adjustment, and returning to the step of connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images, so as to obtain the globally adjusted stitched image again.
13. The method according to any one of claims 1-11, further comprising, after obtaining the globally adjusted stitched image:
when the picture content of the sub-images changes, returning to the step of connecting the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images, so as to obtain the globally adjusted stitched image again; or
and when the picture content of the sub-images changes, returning to the step of acquiring the adjustment parameters between each pair of adjacent sub-images, so as to obtain the globally adjusted stitched image again.
14. An image stitching device, comprising:
an initial stitching unit, configured to initially stitch a plurality of sub-images by using an initial stitching model to obtain an initial stitched image;
a region unit, configured to connect the sub-images into at least one region according to the connection confidence between each pair of adjacent sub-images, wherein each region comprises at least two sub-images;
a communicating unit, configured to connect the at least one region into one connected region;
a path unit, configured to select one sub-image in the connected region as a root sub-image and generate at least one path from the root sub-image by using a breadth-first, connection-confidence-first search, wherein the at least one path covers all the sub-images;
and a global adjusting unit, configured to, starting from the root sub-image and following the path order, sequentially adjust each next sub-image in every path according to the adjustment parameter between it and the previous sub-image, so as to obtain a globally adjusted stitched image.
15. The image stitching device according to claim 14, further comprising:
a primary adjusting unit, configured to acquire adjustment parameters between each pair of adjacent sub-images after the initial stitching unit obtains the initial stitched image and before the region unit connects the sub-images into at least one region, and to perform primary adjustment on the initial stitched image according to the adjustment parameters.
16. The image stitching device according to claim 14, wherein the region unit is specifically configured to:
connecting adjacent sub-images whose connection confidence is greater than 0 to form at least one initial region, and taking the remaining sub-images as an isolated point set, wherein the connection confidence between the sub-images in the isolated point set is 0 and the connection confidence between the sub-images in the isolated point set and each sub-image in the initial regions is also 0;
traversing each isolated-point sub-image in the isolated point set; when one of the adjacent sub-images of a traversed isolated-point sub-image belongs to an initial region, moving the traversed isolated-point sub-image from the isolated point set into that initial region and setting the connection confidence between the isolated-point sub-image and the sub-image to which it is connected to a small value greater than 0, until the isolated point set is empty, so as to form the at least one region.
17. The image stitching device according to claim 14, wherein the communicating unit is specifically configured to:
when the number of regions is 1, that region is the connected region;
and when the number of regions is greater than 1, selecting one region as the connected region and traversing each sub-image of every unconnected region; when one of the adjacent sub-images of a traversed sub-image belongs to the connected region, setting the unconnected region comprising the traversed sub-image as part of the connected region and setting the connection confidence between the sub-image of the unconnected region and the sub-image of the connected region to a small value greater than zero, until all the regions belong to the connected region.
18. The image stitching device according to any one of claims 14 to 17, further comprising:
a manual adjustment unit, configured to, when a manual adjustment parameter between adjacent sub-images is acquired, adjust the globally adjusted stitched image according to the manual adjustment parameter, set the connection confidence between each pair of manually adjusted adjacent sub-images to a value greater than all connection confidences obtained by automatic adjustment, and cause the region unit, the communicating unit, the path unit, and the global adjusting unit to perform the corresponding steps again.
19. The image stitching device according to any one of claims 14 to 17, further comprising:
a secondary adjusting unit, configured to cause the region unit, the communicating unit, the path unit, and the global adjusting unit to perform the corresponding steps again when the picture content of the sub-images changes; or
the secondary adjusting unit is configured to cause the primary adjusting unit, the region unit, the communicating unit, the path unit, and the global adjusting unit to perform the corresponding steps again when the picture content of the sub-images changes.
20. An electronic device, comprising: a memory and a processor communicatively coupled to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the method of any of claims 1-13.
21. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-13.
CN202111649158.8A 2021-12-30 2021-12-30 Image splicing method and device, electronic equipment and storage medium Pending CN114399429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111649158.8A CN114399429A (en) 2021-12-30 2021-12-30 Image splicing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111649158.8A CN114399429A (en) 2021-12-30 2021-12-30 Image splicing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114399429A true CN114399429A (en) 2022-04-26

Family

ID=81228589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111649158.8A Pending CN114399429A (en) 2021-12-30 2021-12-30 Image splicing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114399429A (en)

Similar Documents

Publication Publication Date Title
US11115565B2 (en) User feedback for real-time checking and improving quality of scanned image
US20210144353A1 (en) User feedback for real-time checking and improving quality of scanned image
US20200162629A1 (en) Method and apparatus for scanning and printing a 3d object
EP3092790B1 (en) Adaptive camera control for reducing motion blur during real-time image capture
KR101956151B1 (en) A foreground image generation method and apparatus used in a user terminal
JP5363752B2 (en) Road marking map generation method
CN101859433A (en) Image mosaic device and method
CN111383204A (en) Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN107346536B (en) Image fusion method and device
CN113066173A (en) Three-dimensional model construction method and device and electronic equipment
CN114399429A (en) Image splicing method and device, electronic equipment and storage medium
CN116342745A (en) Editing method and device for lane line data, electronic equipment and storage medium
CN113344782A (en) Image splicing method and device, storage medium and electronic device
CN113066010A (en) Secondary adjustment method and device for panoramic stitching image, electronic equipment and storage medium
CN113810626A (en) Video fusion method, device and equipment based on three-dimensional map and storage medium
KR20100009452A (en) Method for image processing
CN117152400B (en) Method and system for fusing multiple paths of continuous videos and three-dimensional twin scenes on traffic road
CN112801873B (en) Panoramic image splicing modeling method and device
US10652472B2 (en) Enhanced automatic perspective and horizon correction
CN115937667A (en) Target position determination method and device, electronic equipment and storage medium
CN117541465A (en) Feature point-based ground library positioning method, system, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination