CN116109484A - Image splicing method, device and equipment for retaining foreground information and storage medium - Google Patents


Info

Publication number
CN116109484A
CN116109484A (application CN202310099812.5A)
Authority
CN
China
Prior art keywords
image
foreground
matching
feature
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202310099812.5A
Other languages
Chinese (zh)
Inventor
陈曦
苗鑫朋
何楚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202310099812.5A priority Critical patent/CN116109484A/en
Publication of CN116109484A publication Critical patent/CN116109484A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image stitching method, device, equipment and storage medium that retain foreground information. The method acquires a reference image and a target image to be stitched, extracts image features from each, and generates a reference feature point set and a target feature point set; it matches feature points between the two sets to obtain matching points, and divides the matching points into background feature matching point pairs and foreground feature matching point pairs; it then obtains the overlapping region corresponding to the background and foreground feature matching point pairs, divides the overlapping region along a preset seam line, and stitches the reference image and the target image according to the divided overlapping region to obtain the stitched image. The method can retain foreground information and improve the speed and efficiency of image stitching, so that the user obtains a natural-looking, high-resolution image free of ghosting and seams, gains a wider field of view, and enjoys a better experience.

Description

Image splicing method, device and equipment for retaining foreground information and storage medium
Technical Field
The present invention relates to the field of computer vision geometry technologies, and in particular, to an image stitching method, apparatus, device and storage medium for foreground information retention.
Background
In the ideal case for an image stitching algorithm, all cameras share a common optical center. The images captured by multiple cameras then look like images captured by a single camera rotated in place, and the algorithm can merge them into one image with a simple translation; the stitched result is visually indistinguishable from an image taken by a single camera. In practice, however, multiple cameras can never be exactly concentric, since that would require their mounting positions to be nearly identical. In real deployments the cameras are often far apart: an aircraft may carry one camera at each wing tip, and vehicle-mounted cameras are placed at the four corners of the car body. Many other applications likewise require cameras at different positions, which introduces the core difficulty of image stitching, the parallax problem. An image stitching algorithm must therefore address the seams, ghosting and information loss that unavoidable parallax causes in the stitched image.
Because truly concentric multi-camera setups do not exist in reality, multiple images cannot be stitched with translation alone. The stitching algorithm instead computes the mapping between the different image planes from the cameras' pose information, maps all images onto a common plane, and so obtains the stitched image. To recover the pose relation between cameras, the algorithm needs the differences between the images of the same object as seen by different cameras, i.e., several cameras must photograph the same object from different positions, so the captured images necessarily share imagery of the same objects. From the images of these common objects the algorithm computes the camera pose relations and thereby the mappings between the imaging planes. Mapping multiple images onto the same plane therefore necessarily produces an overlapping region. If all cameras photographed a single plane, there would be no parallax and the overlap would need no special treatment; but in practice the captured scene is almost never a simple plane, so the stitching algorithm must post-process the overlap to reduce or eliminate the ghosting that parallax causes there.
An image stitching algorithm has roughly two stages: image registration and image fusion. Registration can be improved through a series of measures such as raising the quality of the image feature points and the accuracy of the matching algorithm. Image fusion, by contrast, still lacks an effective method that yields visually good results, and the parallax-induced ghosting mentioned above is currently the bottleneck of image stitching. Although no algorithm completely solves the parallax problem and achieves high-quality fusion, several fusion algorithms try to mitigate its influence on the stitching result. Mainstream image fusion follows two directions: blending methods and optimal-seam methods. Blending methods include feathering fusion, multi-band fusion and the like, but they all suffer to some degree from ghosting and image distortion. Optimal-seam methods, in the other direction, tune various parameters so that pixel values transition as smoothly as possible across the overlap, giving the overlap a better visual appearance; however, while the optimal-seam algorithm avoids ghosting and largely alleviates the seam problem, it also causes unavoidable loss of image information.
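The feathering fusion mentioned above can be illustrated with a minimal pure-Python sketch: pixel weights fall linearly across the overlap, which smooths the seam but averages misaligned content into ghosting. The function name and one-row simplification are illustrative, not from the patent.

```python
def feather_blend(ref_row, tgt_row):
    """Linearly blend two overlapping scanlines: the reference image's
    weight falls from 1 to 0 across the overlap while the target
    image's weight rises symmetrically (simple feathering fusion)."""
    n = len(ref_row)
    out = []
    for i, (r, t) in enumerate(zip(ref_row, tgt_row)):
        w = 1.0 - i / (n - 1) if n > 1 else 0.5
        out.append(w * r + (1.0 - w) * t)
    return out
```

For example, blending a bright row against a dark one, `feather_blend([100, 100, 100], [0, 0, 0])`, yields `[100.0, 50.0, 0.0]` — the mid-overlap average is exactly where ghosting appears when the two images disagree because of parallax.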
The information loss caused by the optimal-seam algorithm is not always intolerable: in most non-real-time stitching scenarios, losing some insignificant information does not affect the final result, and when producing a panorama the loss of some image content can even improve the panoramic effect. In some real-time stitching applications, however, the consequences of information loss are severe. Vehicle-mounted camera stitching, for example, has strict real-time requirements and can hardly tolerate any loss of image information, yet the optimal seam can cause serious loss: by the very nature and principle of the optimal-seam algorithm, a person standing directly in front of the car body will, with high probability, disappear from the final stitched image. For vehicle-camera stitching this is completely unacceptable. Blending methods, for their part, introduce serious ghosting. In such scenarios, neither of the two mainstream image fusion approaches remains applicable.
Disclosure of Invention
The invention mainly aims to provide an image stitching method, device, equipment and storage medium that retain foreground information, so as to solve the technical problems of the prior art that image fusion methods easily lose image information and produce a poor stitching result.
In a first aspect, the present invention provides an image stitching method for retaining foreground information, including the following steps:
acquiring a reference image and a target image to be stitched, and extracting image features from the reference image and the target image respectively to generate a reference feature point set and a target feature point set;
performing feature point matching between the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points to obtain background feature matching point pairs and foreground feature matching point pairs;
and obtaining the overlapping region corresponding to the background and foreground feature matching point pairs, dividing the overlapping region along a preset seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain a stitched image.
Optionally, acquiring the reference image and the target image to be stitched, extracting their image features, and generating the reference and target feature point sets includes:
capturing, with two cameras at different positions at the same moment, two original images to be stitched, and taking the two original images as the reference image and the target image respectively;
extracting image features from the reference image and the target image according to a preset image feature extraction algorithm, and generating the reference feature point set and the target feature point set respectively, where the reference feature point set contains the coordinates and corresponding descriptors of all reference features, and the target feature point set contains the coordinates and corresponding descriptors of all target features.
Optionally, matching the feature points of the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points into background and foreground feature matching point pairs, includes:
acquiring the feature point descriptors of the reference and target feature point sets; using these descriptors, matching the two sets with a K-nearest-neighbor algorithm; treating feature points whose descriptor difference exceeds a preset threshold as erroneous extractions; and removing these erroneous points to obtain the filtered matching points;
and dividing the matching points with the RANSAC algorithm to obtain background feature matching point pairs and foreground feature matching point pairs.
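The matching step above can be sketched in a few lines of pure Python: each reference descriptor is paired with its nearest target descriptor, and pairs whose descriptor distance exceeds the preset threshold are discarded as erroneous extractions. This is a minimal stand-in (brute-force nearest neighbor, Euclidean distance, illustrative names), not the patent's implementation.

```python
def match_features(ref_descs, tgt_descs, max_dist=0.5):
    """For each reference descriptor, find the nearest target descriptor
    and keep the pair only if the descriptor distance is at most
    max_dist (the 'preset difference threshold' filter)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d in enumerate(ref_descs):
        best = min(range(len(tgt_descs)), key=lambda j: dist(d, tgt_descs[j]))
        if dist(d, tgt_descs[best]) <= max_dist:
            matches.append((i, best))
    return matches
```

For instance, `match_features([[0, 0], [5, 5]], [[0.1, 0], [9, 9]])` keeps only `[(0, 0)]`: the second reference descriptor's nearest neighbor is too far away and is filtered out.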
Optionally, dividing the matching points with the RANSAC algorithm to obtain background and foreground feature matching point pairs includes:
using a preset homography matrix as the fitting model, extracting with the RANSAC algorithm the matching point pairs that lie on the background, and extracting the matching point pairs that lie on the foreground;
and eliminating the erroneous matches from the background and foreground matching point pairs to obtain the background feature matching point pairs and foreground feature matching point pairs.
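The idea behind this split is that the dominant background plane fits one model, while foreground objects at different depths do not. A minimal RANSAC sketch, using a 2-D translation as a deliberately simplified stand-in for the patent's homography model (names and tolerances are illustrative):

```python
import random

def split_matches(ref_pts, tgt_pts, tol=2.0, iters=100, seed=0):
    """Tiny RANSAC: fit a 2-D translation (stand-in for a homography)
    to the matched points; consensus-set inliers are treated as
    background matches, the remaining outliers as foreground."""
    rng = random.Random(seed)
    n = len(ref_pts)
    best_inliers = []
    for _ in range(iters):
        k = rng.randrange(n)  # minimal sample: one correspondence
        dx = tgt_pts[k][0] - ref_pts[k][0]
        dy = tgt_pts[k][1] - ref_pts[k][1]
        inliers = [i for i in range(n)
                   if abs(tgt_pts[i][0] - ref_pts[i][0] - dx) <= tol
                   and abs(tgt_pts[i][1] - ref_pts[i][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    background = set(best_inliers)
    foreground = [i for i in range(n) if i not in background]
    return sorted(background), foreground
```

With three points moving consistently by (10, 0) and one point moving differently, the consistent trio is returned as background and the outlier as foreground; a real system would fit a 3x3 homography with four-point samples instead.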
Optionally, obtaining the overlapping region corresponding to the background and foreground feature matching point pairs, dividing the overlapping region along a preset seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain a stitched image, includes:
determining the mapping from the target image plane to the reference image plane according to the background feature matching point pairs, and obtaining the homography matrix corresponding to this mapping for all matching points;
mapping all pixels of the target image onto the reference image plane according to the homography matrix to obtain the overlapping region after projective transformation;
dividing the overlapping region along the preset seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain the stitched image.
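The projective mapping of target pixels onto the reference plane can be sketched directly from the homography definition: a pixel (x, y) is lifted to homogeneous coordinates, multiplied by the 3x3 matrix H, and normalized by the third coordinate. Function names are illustrative; a real pipeline would warp whole images and intersect the warped quadrilateral with the reference frame.

```python
def apply_homography(H, x, y):
    """Map a target-image pixel (x, y) to the reference plane with a
    3x3 homography H (row-major nested lists), normalising by w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

def warped_corners(H, width, height):
    """Project the four corners of a width x height target image; the
    overlapping region is where this quadrilateral meets the
    reference frame."""
    corners = [(0, 0), (width - 1, 0), (0, height - 1), (width - 1, height - 1)]
    return [apply_homography(H, x, y) for x, y in corners]
```

For a pure translation `H = [[1, 0, 5], [0, 1, 0], [0, 0, 1]]`, `apply_homography(H, 2, 3)` returns `(7.0, 3.0)`: the target image lands 5 pixels to the right on the reference plane, and the strip shared by both frames is the overlapping region.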
Optionally, dividing the overlapping region along the preset seam line and stitching the reference image and the target image according to the divided overlapping region to obtain the stitched image includes:
performing superpixel division on the overlapping region to obtain a superpixel division mask;
acquiring the coordinates of the foreground feature matching points of the reference image and the target image, and extracting, according to the superpixel division mask, the superpixels that contain those coordinates to obtain a foreground superpixel set;
and acquiring a foreground-retention penalty term and the preset seam line corresponding to the foreground superpixel set, dividing the overlapping region into two parts according to the penalty term and the seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain the stitched image.
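The bookkeeping in this step — a label mask plus a lookup of which labels contain foreground match coordinates — can be sketched with a toy grid segmentation. This is an assumption-laden simplification: the patent does not specify the superpixel algorithm, and a real system would use an over-segmentation such as SLIC rather than fixed grid cells.

```python
def grid_superpixels(width, height, cell):
    """Toy superpixel mask: label each pixel with the index of the
    cell x cell grid block it falls in (a stand-in for a real
    over-segmentation; the downstream bookkeeping is the same)."""
    cols = (width + cell - 1) // cell
    return [[(y // cell) * cols + (x // cell) for x in range(width)]
            for y in range(height)]

def foreground_superpixels(mask, fg_points):
    """Collect the labels of every superpixel that contains at least
    one foreground matching-point coordinate (points are (x, y))."""
    return sorted({mask[y][x] for x, y in fg_points})
```

On a 4x4 image with 2x2 cells there are four superpixels labelled 0-3; foreground matches at (0, 0) and (3, 3) select superpixels 0 and 3 as the foreground superpixel set.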
Optionally, acquiring the foreground-retention penalty term and the preset seam line corresponding to the foreground superpixel set, dividing the overlapping region into two parts accordingly, and stitching the reference image and the target image according to the divided overlapping region to obtain the stitched image, includes:
acquiring the reference-image foreground superpixel set and the target-image foreground superpixel set corresponding to the foreground superpixel set;
setting a preset penalty term between each superpixel of the reference-image foreground superpixel set and each superpixel of the target-image foreground superpixel set, and summing all preset penalty terms to obtain the foreground-retention penalty term;
taking the superpixels of the overlapping region as the basic units to be segmented by the Graph-Cut algorithm, and computing the superpixel difference between each target-image superpixel and reference-image superpixel corresponding to a basic unit;
using the superpixel differences as the energy function of the Graph-Cut algorithm, computing the optimal seam of the overlapping region with the Graph-Cut algorithm according to this energy function, and taking the optimal seam as the preset seam line;
dividing the overlapping region into two parts according to the foreground-retention penalty term and the preset seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain the stitched image.
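The effect of a foreground-retention penalty on seam placement can be demonstrated with a simplification: instead of Graph-Cut over superpixels, a top-to-bottom dynamic-programming seam over a per-cell difference map, where cells in the foreground set carry a large extra cost so the seam routes around them. This is a sketch of the energy-penalty idea only, not the patent's Graph-Cut formulation.

```python
def find_seam(diff, fg_cells, fg_penalty=1e6):
    """Top-to-bottom minimum-cost seam over a rows x cols difference
    map. Cells in fg_cells (a set of (row, col)) carry a
    foreground-retention penalty so the seam avoids cutting through
    foreground objects. Returns the seam column for each row."""
    rows, cols = len(diff), len(diff[0])
    cost = [[diff[r][c] + (fg_penalty if (r, c) in fg_cells else 0.0)
             for c in range(cols)] for r in range(rows)]
    # Accumulate minimal path cost from the top row downwards.
    for r in range(1, rows):
        for c in range(cols):
            cost[r][c] += min(cost[r - 1][max(c - 1, 0):min(c + 2, cols)])
    # Backtrack: pick the cheapest column in each row within the window.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam.append(min(range(lo, hi), key=lambda cc: cost[r][cc]))
    return seam[::-1]
```

On `diff = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]` the unpenalized seam runs down column 0; marking cell (1, 0) as foreground pushes the seam to column 2, which is exactly the behavior the foreground-retention penalty term is meant to enforce.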
In a second aspect, to achieve the above object, the present invention further provides an image stitching device for retaining foreground information, the device including:
a feature extraction module, configured to acquire a reference image and a target image to be stitched, extract image features from each, and generate a reference feature point set and a target feature point set;
a matching module, configured to match the feature points of the reference and target feature point sets to obtain matching points, and divide the matching points into background and foreground feature matching point pairs;
and a stitching module, configured to obtain the overlapping region corresponding to the background and foreground feature matching point pairs, divide the overlapping region along a preset seam line, and stitch the reference image and the target image according to the divided overlapping region to obtain the stitched image.
In a third aspect, to achieve the above object, the present invention further provides image stitching equipment for retaining foreground information, the equipment including: a memory, a processor, and an image stitching program for retaining foreground information that is stored in the memory and executable on the processor, wherein the program is configured to implement the steps of the image stitching method for retaining foreground information described above.
In a fourth aspect, to achieve the above object, the present invention further provides a storage medium having stored thereon an image stitching program for retaining foreground information which, when executed by a processor, implements the steps of the image stitching method for retaining foreground information described above.
According to the image stitching method for retaining foreground information, the reference image and the target image to be stitched are acquired, their image features are extracted, and the reference and target feature point sets are generated; the two sets are matched to obtain matching points, which are divided into background and foreground feature matching point pairs; the corresponding overlapping region is obtained and divided along the preset seam line, and the reference and target images are stitched according to the divided overlap to obtain the stitched image. The method retains foreground information and thereby avoids the information loss typical of image fusion methods, improves the stitching quality and accuracy, shortens the time consumed by foreground-preserving stitching, and raises the speed and efficiency of image stitching. The seams, ghosting and information loss of conventional stitching are avoided, so the user obtains a natural-looking, high-resolution image without ghosting or seams, gains a wider field of view, and enjoys a better experience.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of an image stitching method for foreground information preservation according to the present invention;
FIG. 3 is a flowchart of a second embodiment of an image stitching method for foreground information preservation according to the present invention;
FIG. 4 is a flowchart of a third embodiment of an image stitching method for foreground information preservation according to the present invention;
FIG. 5 is a flowchart of a fourth embodiment of an image stitching method for foreground information preservation according to the present invention;
FIG. 6 is a flowchart of a fifth embodiment of an image stitching method for foreground information preservation according to the present invention;
FIG. 7 is a flowchart of a sixth embodiment of an image stitching method for foreground information preservation according to the present invention;
fig. 8 is a functional block diagram of a first embodiment of an image stitching device with foreground information preservation according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The solution of the embodiments of the invention mainly comprises: acquiring the reference image and the target image to be stitched, extracting their image features, and generating the reference and target feature point sets; matching the two sets to obtain matching points, and dividing them into background and foreground feature matching point pairs; obtaining the corresponding overlapping region, dividing it along a preset seam line, and stitching the reference and target images according to the divided overlap to obtain the stitched image. This retains foreground information, avoids the information loss typical of image fusion methods, improves stitching quality and accuracy, shortens the time consumed, raises the speed and efficiency of stitching, and avoids the seams, ghosting and information loss of conventional stitching, so that the user obtains a natural-looking, high-resolution image without ghosting or seams and a wider field of view. This improves the user experience and solves the prior-art problems that image fusion methods easily lose image information and produce a poor stitching result.
Referring to fig. 1, fig. 1 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the apparatus may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005, where the communication bus 1002 enables communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., Wi-Fi). The memory 1005 may be high-speed RAM or non-volatile memory, such as disk storage, and may optionally be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the apparatus structure shown in fig. 1 is not limiting of the apparatus and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as one type of storage medium, may include an image stitching program for retaining foreground information, a network communication module, a user interface module, and an operating system.
The apparatus of the present invention calls an image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and performs the following operations:
acquiring a reference image and a target image to be stitched, and extracting image features from the reference image and the target image respectively to generate a reference feature point set and a target feature point set;
performing feature point matching between the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points to obtain background feature matching point pairs and foreground feature matching point pairs;
and obtaining the overlapping region corresponding to the background and foreground feature matching point pairs, dividing the overlapping region along a preset seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain a stitched image.
The apparatus of the present invention calls the image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and also performs the following operations:
capturing, with two cameras at different positions at the same moment, two original images to be stitched, and taking the two original images as the reference image and the target image respectively;
extracting image features from the reference image and the target image according to a preset image feature extraction algorithm, and generating the reference feature point set and the target feature point set respectively, where the reference feature point set contains the coordinates and corresponding descriptors of all reference features, and the target feature point set contains the coordinates and corresponding descriptors of all target features.
The apparatus of the present invention calls the image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and also performs the following operations:
acquiring the feature point descriptors of the reference and target feature point sets; using these descriptors, matching the two sets with a K-nearest-neighbor algorithm; treating feature points whose descriptor difference exceeds a preset threshold as erroneous extractions; and removing these erroneous points to obtain the filtered matching points;
and dividing the matching points with the RANSAC algorithm to obtain background feature matching point pairs and foreground feature matching point pairs.
The apparatus of the present invention calls the image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and also performs the following operations:
using a preset homography matrix as the fitting model, extracting with the RANSAC algorithm the matching point pairs that lie on the background, and extracting the matching point pairs that lie on the foreground;
and eliminating the erroneous matches from the background and foreground matching point pairs to obtain the background feature matching point pairs and foreground feature matching point pairs.
The apparatus of the present invention calls the image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and also performs the following operations:
determining the mapping from the target image plane to the reference image plane according to the background feature matching point pairs, and obtaining the homography matrix corresponding to this mapping for all matching points;
mapping all pixels of the target image onto the reference image plane according to the homography matrix to obtain the overlapping region after projective transformation;
dividing the overlapping region along the preset seam line, and stitching the reference image and the target image according to the divided overlapping region to obtain the stitched image.
The apparatus of the present invention calls the image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and also performs the following operations:
performing superpixel division on the overlapped area to obtain a superpixel division mask;
acquiring matching point coordinates of a foreground feature matching point pair of the reference image and the target image, and extracting superpixels containing the matching point coordinates according to the superpixel division mask to obtain a foreground superpixel set;
and acquiring a foreground reservation penalty item and a preset suture line corresponding to the foreground super-pixel set, dividing the overlapping area into two parts according to the foreground reservation penalty item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
The apparatus of the present invention calls the image stitching program for foreground information retention stored in the memory 1005 through the processor 1001, and also performs the following operations:
acquiring a reference image foreground super-pixel set and a target image foreground super-pixel set corresponding to the foreground super-pixel set;
setting a preset penalty term between each superpixel in the reference picture foreground superpixel set and each superpixel in the target picture foreground superpixel set, and summing up and calculating all preset penalty terms to obtain a foreground reservation penalty term;
taking the superpixels in the overlapping area as basic units to be segmented in a Graph-Cut algorithm, and calculating superpixel differences between target Graph superpixels and reference Graph superpixels corresponding to the basic units to be segmented;
the super-pixel difference is used as an energy function of the Graph-Cut algorithm,
calculating an optimal suture line of the overlapped area by utilizing the Graph-Cut algorithm according to the energy function, and taking the optimal suture line as a preset suture line;
dividing the overlapping area into two parts according to the foreground reservation punishment item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
According to this embodiment, the reference image and the target image to be spliced are acquired, the image features in each are extracted, and a reference feature point set and a target feature point set are generated; feature point matching is performed on the two sets to obtain matching points, which are then divided into background feature matching point pairs and foreground feature matching point pairs; the overlapping region corresponding to these matching point pairs is obtained, divided according to a preset suture line, and the reference image and the target image are spliced according to the divided overlapping region to obtain the spliced image. Foreground information is thus preserved and the loss of image information common in image fusion methods is avoided; splicing quality and accuracy are improved, the time consumed by foreground-preserving splicing is shortened, and splicing speed and efficiency are raised. Problems of seams, ghosting, and information loss in the spliced image are avoided, so the user obtains a natural-looking, high-resolution image without ghosting or seams and a wider field of view, improving the user experience.
Based on the hardware structure, the embodiment of the image splicing method for retaining the foreground information is provided.
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of an image stitching method for foreground information preservation according to the present invention.
In a first embodiment, the image stitching method for foreground information preservation includes the following steps:
step S10, acquiring a reference image and a target image to be spliced, and respectively extracting image features in the reference image and the target image to generate a reference feature point set and a target feature point set.
After the reference image and the target image to be spliced are obtained, image features in the reference image and the target image can be extracted respectively, so that a set of reference feature points corresponding to the reference image is generated, and a set of target feature points corresponding to the target image is generated.
And step S20, carrying out feature point matching on the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points to obtain a background feature matching point pair and a foreground feature matching point pair.
It can be understood that the reference feature point set and the target feature point set are subjected to feature point matching, so that matching feature points of the reference feature point and the target feature point can be obtained, and then the matching points can be divided to generate a background feature matching point pair and a foreground feature matching point pair.
Step S30, obtaining an overlapping area corresponding to the background feature matching point pair and the foreground feature matching point pair, dividing the overlapping area according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
It should be understood that by performing overlap region division on the background feature matching point pair and the foreground feature matching point pair, a divided overlap region may be obtained, and the reference image and the target image may be spliced to obtain a corresponding spliced image.
According to this embodiment, the reference image and the target image to be spliced are acquired, the image features in each are extracted, and a reference feature point set and a target feature point set are generated; feature point matching is performed on the two sets to obtain matching points, which are then divided into background feature matching point pairs and foreground feature matching point pairs; the overlapping region corresponding to these matching point pairs is obtained, divided according to a preset suture line, and the reference image and the target image are spliced according to the divided overlapping region to obtain the spliced image. Foreground information is thus preserved and the loss of image information common in image fusion methods is avoided; splicing quality and accuracy are improved, the time consumed by foreground-preserving splicing is shortened, and splicing speed and efficiency are raised. Problems of seams, ghosting, and information loss in the spliced image are avoided, so the user obtains a natural-looking, high-resolution image without ghosting or seams and a wider field of view, improving the user experience.
Further, fig. 3 is a schematic flow chart of a second embodiment of the image stitching method for foreground information preservation according to the present invention, as shown in fig. 3, and the second embodiment of the image stitching method for foreground information preservation according to the present invention is proposed based on the first embodiment, in this embodiment, the step S10 specifically includes the following steps:
Step S11: shooting simultaneously with two cameras at different positions to obtain two original images to be spliced, and taking the two original images as a reference image and a target image respectively.
It should be noted that the two original images to be spliced, obtained by simultaneous shooting with two cameras at different positions, are named the left image and the right image respectively; in subsequent operations the left image serves as the reference image and the right image as the target image.
Step S12, respectively extracting image features in the reference image and the target image according to a preset image feature extraction algorithm, and respectively generating a reference feature point set and a target feature point set; the reference feature point set comprises coordinates and corresponding descriptors of all reference features, and the target feature point set comprises coordinates and corresponding descriptors of all target features.
It can be understood that the image features in the reference map and the target map can be extracted respectively by a preset image feature extraction algorithm, and a reference feature point set and a target feature point set are generated accordingly, wherein the reference feature point set contains coordinates and corresponding descriptors of all reference features, and the target feature point set contains coordinates and corresponding descriptors of all target features.
In a specific implementation, a certain image feature point extraction algorithm, such as Scale-invariant feature transform (Scale-Invariant Feature Transform, SIFT), acceleration robust feature (Speeded Up Robust Features, SURF), orientation FAST and rotation BRIEF (Oriented FAST and Rotated BRIEF, ORB), may be used to extract feature point sets in the two images respectively; the feature point set comprises the coordinates of all the extracted feature points and descriptors corresponding to each feature point; the descriptor is a one-dimensional vector, and records the information of each characteristic point for subsequent matching.
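As a concrete illustration (not part of the patent text), a feature point set of this kind, pairing pixel coordinates with one descriptor vector per point, might be represented as follows; the 128-dimensional descriptor length is an assumption matching SIFT-style features.

```python
import numpy as np

def make_feature_set(keypoints, descriptors):
    """Bundle feature coordinates with their descriptors.

    keypoints:   list of (x, y) pixel coordinates
    descriptors: one 1-D vector per keypoint (e.g. 128-D for SIFT)
    """
    coords = np.asarray(keypoints, dtype=np.float64)  # shape (n, 2)
    desc = np.asarray(descriptors, dtype=np.float32)  # shape (n, d)
    assert coords.shape[0] == desc.shape[0]
    return {"coords": coords, "desc": desc}

# Hypothetical example: 3 feature points with 128-D descriptors
rng = np.random.default_rng(0)
feature_set = make_feature_set(rng.uniform(0, 1080, size=(3, 2)),
                               rng.standard_normal((3, 128)))
```

In practice the extraction itself would be delegated to a library implementation of SIFT, SURF, or ORB; the structure above only shows what the resulting point set carries into the matching stage.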
According to the embodiment, through the scheme, two original images to be spliced are obtained through simultaneous shooting of the two cameras at different positions, and the two original images are respectively used as a reference image and a target image; respectively extracting image features in the reference image and the target image according to a preset image feature extraction algorithm, and respectively generating a reference feature point set and a target feature point set; the reference feature point set comprises coordinates and corresponding descriptors of all reference features, and the target feature point set comprises coordinates and corresponding descriptors of all target features; the reference feature point set and the target feature point set can be obtained rapidly, the condition that image information is lost easily in the image fusion method is avoided, and the speed and the efficiency of image splicing are improved.
Further, fig. 4 is a schematic flow chart of a third embodiment of the image stitching method for foreground information reservation according to the present invention, and as shown in fig. 4, the third embodiment of the image stitching method for foreground information reservation according to the present invention is proposed based on the first embodiment, and in this embodiment, the step S20 specifically includes the following steps:
Step S21: acquiring feature point descriptors in the reference feature point set and the target feature point set, performing feature point matching between the two sets with a K-nearest-neighbor algorithm based on the feature point descriptors, taking feature points whose descriptor difference is larger than a preset difference threshold as erroneous extraction points, and removing and filtering the erroneous extraction points to obtain filtered matching points.
It should be noted that the descriptors of the feature points in the two images can be compared using a K-nearest-neighbor algorithm, which matches the feature points of the left and right images one by one according to the difference between their descriptors, pairing feature points whose descriptors differ relatively little; feature points whose descriptors differ too much are discarded as erroneous extraction points, and the matching relationship of the retained feature points is finally obtained.
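A minimal sketch of this matching step (an illustration, not the patent's implementation), assuming NumPy descriptor arrays, K = 2 nearest neighbors per point, a distance threshold for discarding erroneous extractions, and a Lowe-style ratio test to reject ambiguous matches:

```python
import numpy as np

def knn_match(desc_a, desc_b, max_dist=0.7, ratio=0.8):
    """K-nearest-neighbor matching (K = 2) with a distance threshold.

    Feature points whose best descriptor distance exceeds `max_dist`,
    or whose best match is not clearly better than the second best
    (ratio test), are discarded as erroneous extractions.
    Returns a list of (index_in_a, index_in_b) matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # L2 distance to every b-descriptor
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] <= max_dist and dists[best] <= ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

For example, with two well-separated descriptor clusters and one outlier, only the two unambiguous pairs survive the filtering.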
Step S22: dividing the matching points by using a RANSAC algorithm to obtain background feature matching point pairs and foreground feature matching point pairs.
It will be appreciated that the matching points may be partitioned using the Random Sample Consensus (RANSAC) algorithm, which robustly fits a model to a sample set containing outlier data, thereby obtaining background feature matching point pairs and foreground feature matching point pairs.
According to this embodiment, the feature point descriptors in the reference feature point set and the target feature point set are acquired, feature point matching is performed between the two sets with a K-nearest-neighbor algorithm based on the descriptors, feature points whose descriptor difference exceeds a preset difference threshold are taken as erroneous extraction points and removed to obtain filtered matching points; the RANSAC algorithm is then applied to divide the matching points into background feature matching point pairs and foreground feature matching point pairs. Foreground information can thus be preserved, the loss of image information common in image fusion methods is avoided, and splicing quality and accuracy are improved.
Further, fig. 5 is a flowchart of a fourth embodiment of the image stitching method for foreground information reservation according to the present invention, as shown in fig. 5, and the fourth embodiment of the image stitching method for foreground information reservation according to the present invention is proposed based on the third embodiment, in this embodiment, the step S22 specifically includes the following steps:
Step S221, using a preset homography matrix as a fitting model, extracting a background matching point pair of the matching point pair on a background part by using a RANSAC algorithm, and extracting a foreground matching point pair of the matching point pair on a foreground part.
It should be noted that a homography matrix set in advance may be used as a fitting model, and a RANSAC algorithm is applied to extract a background matching point pair of the matching point pair to the background portion and a foreground matching point pair of the matching point pair to the foreground portion.
And step S222, eliminating the error matching points in the background matching point pair and the foreground matching point pair to obtain a background characteristic matching point pair and a foreground characteristic matching point pair.
It should be appreciated that, after the error matching points in the background matching point pair and the foreground matching point pair are removed, a background feature matching point pair and a foreground feature matching point pair may be obtained.
In a specific implementation, in order to ensure that the foreground information of the image is not lost in an optimal suture algorithm, the foreground part of the image is required to be distinguished from the background part; therefore, the obtained matching point pairs of the characteristic points of the two images can be divided to roughly represent the foreground part and the background part of the images; the method comprises the following specific steps: firstly, using a homography matrix as a fitting model, and extracting most of matching point pairs which are positioned in a background part by using a RANSAC algorithm; because the feature points of the background part account for the vast majority of all feature points, the RANSAC algorithm regards the feature points as interior points, and excludes the feature points of the other part positioned in the foreground and some wrong matching points as exterior points; then re-using the RANSAC algorithm for those matched point pairs that are excluded as outliers; at this time, the matching point pairs of the foreground part occupy most and become inner points to be reserved, and the rest of the mismatching point pairs are discarded, so that the matching precision is further improved; the RANSAC algorithm respectively obtains a background feature matching point pair and a foreground feature matching point pair, and realizes foreground and background division of the matching point pair.
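The two-stage partition described above can be sketched as follows. For brevity this illustration fits a simple translation model rather than a full homography (an assumption made purely to keep the example short); the structure is the same: fit once and keep the inliers as background pairs, then re-run RANSAC on the outliers so that the foreground pairs become inliers and the remaining mismatches are discarded.

```python
import numpy as np

def ransac_translation(src, dst, thresh=1.0):
    """Deterministic mini-RANSAC for a translation model: try each
    correspondence as the hypothesis, keep the one with most inliers."""
    best_mask = None
    for i in range(len(src)):
        t = dst[i] - src[i]                          # hypothesised translation
        err = np.linalg.norm((src + t) - dst, axis=1)
        mask = err < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def partition_matches(src, dst):
    """Stage 1: inliers of the dominant model are background pairs.
    Stage 2: re-run RANSAC on the outliers; its inliers are foreground
    pairs, and whatever remains is discarded as mismatches."""
    bg = ransac_translation(src, dst)
    fg = np.zeros(len(src), dtype=bool)
    rest = ~bg
    if rest.any():
        fg_local = ransac_translation(src[rest], dst[rest])
        fg[np.where(rest)[0][fg_local]] = True
    return bg, fg
```

With a dominant background motion, a smaller coherent foreground motion, and one gross mismatch, the two stages separate all three groups.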
According to the scheme, the RANSAC algorithm is applied to extract the background matching point pairs of the matching point pairs in the background part by using the preset homography matrix as a fitting model, and the foreground matching point pairs of the matching point pairs in the foreground part are extracted; and eliminating the error matching points in the background matching point pair and the foreground matching point pair to obtain a background feature matching point pair and a foreground feature matching point pair, so that foreground information can be reserved, the condition that image information is lost easily in an image fusion method is avoided, the image splicing effect is improved, the image splicing precision is improved, and the image splicing speed and efficiency are improved.
Further, fig. 6 is a flowchart of a fifth embodiment of the image stitching method for foreground information reservation according to the present invention, as shown in fig. 6, and the fifth embodiment of the image stitching method for foreground information reservation according to the present invention is proposed based on the first embodiment, in which the step S30 specifically includes the following steps:
Step S31: determining the mapping relation from the target image plane to the reference image plane according to the background feature matching point pairs, and obtaining the homography matrix corresponding to the mapping relation, fitted from all the matching points.
It should be noted that, through the background feature matching points, the mapping relationship from the target graph plane to the reference graph plane can be determined, and the homography matrix of all the matching points corresponding to the mapping relationship can be obtained.
In a specific implementation, the mapping relation from the right image (target image) plane to the left image (reference image) plane is calculated from the obtained set of background matching point pairs and represented by a 3 × 3 homography matrix. In theory only 4 matching point pairs are needed to compute the homography matrix, but to improve its accuracy all matching point pairs are used to fit it; specifically, the homography matrix that best conforms to the background plane is estimated by the least-squares method.
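This least-squares fit can be sketched as follows, solving for the eight parameters h1..h8 of the 3 × 3 homography with h9 fixed to 1 (the standard DLT-style formulation; this code is an illustration, not the patent's implementation):

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography: each pair (x, y) -> (u, v) contributes two
    linear equations in h1..h8 (with h9 = 1); the stacked system is solved
    over all matching point pairs at once."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map one point through the homography (homogeneous division)."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```

Fitting over more than the minimal 4 pairs averages out noise in the correspondences, which is exactly why the text uses all background pairs.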
Step S32: mapping all pixels in the target image to the reference image plane according to the homography matrix to obtain the overlapping region after projective transformation.
It should be appreciated that all pixels in the target map may be mapped to the reference map plane by the homography matrix to obtain the overlap region after projective transformation.
In a specific implementation, all pixels in the target graph are mapped to a pixel plane of the reference graph one by one according to a mapping relation represented by a homography matrix, projection transformation is carried out on the target graph after mapping, and a part of overlapping area exists between the deformed target graph and the reference graph, so that the follow-up accurate extraction of the foreground content of the image is realized, and super-pixel division is carried out on the overlapping area image.
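One way to illustrate the overlap computation (a bounding-box approximation of the true overlap polygon, and an assumption rather than the patent's exact procedure) is to warp the target image's four corners through the homography and intersect the result with the reference image extent:

```python
import numpy as np

def overlap_bbox(H, target_wh, ref_wh):
    """Warp the four corners of the target image into the reference plane
    and intersect the warped bounding box with the reference image bounds.
    Returns (x0, y0, x1, y1) of the overlap region, or None if empty."""
    w, h = target_wh
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], float)
    warped = (H @ corners.T).T
    warped = warped[:, :2] / warped[:, 2:3]   # homogeneous division
    x0 = max(warped[:, 0].min(), 0.0)
    y0 = max(warped[:, 1].min(), 0.0)
    x1 = min(warped[:, 0].max(), float(ref_wh[0]))
    y1 = min(warped[:, 1].max(), float(ref_wh[1]))
    return None if (x1 <= x0 or y1 <= y0) else (x0, y0, x1, y1)
```

For a target shifted 500 pixels to the right of the reference, the overlap is the right-hand strip of the reference image.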
Step S33: dividing the overlapping region according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping region to obtain a spliced image.
It can be understood that the overlapping area is divided by a preset suture line, so as to obtain a divided overlapping area, and then the reference image and the target image can be spliced according to the divided overlapping area, so as to obtain a spliced image after splicing.
According to this embodiment, the mapping relation from the target image plane to the reference image plane is determined from the background feature matching point pairs, and the corresponding homography matrix is obtained from all matching points; all pixels in the target image are mapped to the reference image plane according to the homography matrix to obtain the overlapping region after projective transformation; and the overlapping region is divided according to a preset suture line, with the reference image and the target image spliced according to the divided overlapping region to obtain the spliced image. Foreground information is thus preserved and the loss of image information common in image fusion methods is avoided; splicing quality and accuracy are improved, the time consumed by foreground-preserving splicing is shortened, and splicing speed and efficiency are raised. Problems of seams, ghosting, and information loss are avoided, so the user obtains a natural-looking, high-resolution image without ghosting or seams and a wider field of view, improving the user experience.
Further, fig. 7 is a flowchart of a sixth embodiment of the image stitching method for foreground information reservation according to the present invention, as shown in fig. 7, and the sixth embodiment of the image stitching method for foreground information reservation according to the present invention is proposed based on the fifth embodiment, in this embodiment, the step S33 specifically includes the following steps:
Step S331: performing superpixel division on the overlapping region to obtain a superpixel division mask.
It should be noted that, after the superpixel division is performed on the overlapping area, a mask corresponding to the superpixel division may be obtained.
In specific implementation, after a homography matrix is obtained, mapping all pixels in a target graph to a pixel plane of a reference graph one by one according to a mapping relation represented by the homography matrix; after mapping, the target graph is subjected to projective transformation; a part of overlapping area exists between the deformed target graph and the reference graph; in order to realize the subsequent further accurate extraction of the foreground content of the image, super-pixel division is carried out on the overlapping area image; and finally outputting the super-pixel division mask of the overlapped area.
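In practice the superpixel mask would come from a segmentation algorithm such as SLIC (for example `skimage.segmentation.slic`); the mask format itself, a label matrix the size of the overlap region, can be illustrated with a simple regular-grid labeling standing in for a real superpixel algorithm:

```python
import numpy as np

def grid_superpixel_mask(height, width, rows, cols):
    """Label matrix of shape (height, width); each pixel holds the index
    of the grid cell (stand-in superpixel) it falls in, numbered row-major
    from 0 to rows*cols - 1."""
    ys = np.minimum(np.arange(height) * rows // height, rows - 1)
    xs = np.minimum(np.arange(width) * cols // width, cols - 1)
    return ys[:, None] * cols + xs[None, :]
```

With a 50 × 50 grid this yields the 2500 labels used in the worked example later in the text; a real SLIC mask has the same shape and label range but irregular, content-adaptive cell boundaries.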
Step S332, obtaining the matching point coordinates of the foreground feature matching point pair of the reference map and the target map, and extracting the superpixels containing the matching point coordinates according to the superpixel division mask, so as to obtain a foreground superpixel set.
It can be understood that after the matching point coordinates of the foreground feature matching point pair of the reference map and the target map are obtained, the superpixel containing the matching point coordinates can be extracted according to the superpixel division mask, so as to obtain the foreground superpixel.
In a specific implementation, foreground feature matching point pairs are extracted, and according to the coordinates of the matching point pairs, superpixels containing the coordinates can be extracted to be used as foreground superpixel sets, and one superpixel set is respectively arranged in a reference image and a target image to represent different imaging of the same foreground object in two cameras.
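Given the superpixel label mask and the foreground matching-point coordinates, the foreground superpixel set is simply the set of labels found under those coordinates (a sketch; coordinates are assumed to be integer (x, y) pixel positions inside the overlap region):

```python
import numpy as np

def foreground_superpixels(mask, points):
    """Collect the labels of all superpixels that contain at least one
    foreground matching point. `mask` is the superpixel label matrix;
    `points` is an iterable of (x, y) coordinates, indexed as mask[y, x]."""
    return {int(mask[y, x]) for x, y in points}
```

Applied once to the reference-image coordinates and once to the target-image coordinates, this yields the two foreground superpixel sets M and N used below.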
Step S333, acquiring a foreground reservation penalty term and a preset suture line corresponding to the foreground super-pixel set, dividing the overlapping area into two parts according to the foreground reservation penalty term and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
It should be understood that after the foreground preserving penalty term and the preset suture line corresponding to the foreground super-pixel set are obtained, the overlapping area may be divided into two parts according to the foreground preserving penalty term and the preset suture line, and the reference image and the target image may be subjected to a stitching operation according to the divided overlapping area, so as to obtain a stitched image after the stitching is completed.
Further, the step S333 specifically includes the following steps:
acquiring a reference image foreground super-pixel set and a target image foreground super-pixel set corresponding to the foreground super-pixel set;
setting a preset penalty term between each superpixel in the reference picture foreground superpixel set and each superpixel in the target picture foreground superpixel set, and summing up and calculating all preset penalty terms to obtain a foreground reservation penalty term;
taking the superpixels in the overlapping area as basic units to be segmented in a Graph-Cut algorithm, and calculating superpixel differences between target Graph superpixels and reference Graph superpixels corresponding to the basic units to be segmented;
the super-pixel difference is used as an energy function of the Graph-Cut algorithm,
calculating an optimal suture line of the overlapped area by utilizing the Graph-Cut algorithm according to the energy function, and taking the optimal suture line as a preset suture line;
dividing the overlapping area into two parts according to the foreground reservation punishment item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
It can be understood that, to improve the Graph-Cut algorithm used in the optimal suture method, the loss of foreground content must be considered when the Graph-Cut algorithm segments the image, so that the foreground content is guaranteed to appear in the final splicing result. A foreground preservation penalty term is therefore added to the Graph-Cut algorithm so that, when searching for the optimal suture line, it avoids the superpixels representing foreground objects in the image. Specifically: the two superpixel sets representing the foreground lie in the left image (reference image) and the right image (target image) respectively; let the left-image foreground superpixel set be M and the right-image foreground superpixel set be N. A large penalty term is set between each superpixel in M and each superpixel in N, and all penalty terms are summed to serve as the foreground preservation penalty term of the Graph-Cut algorithm. Adding this term prevents the suture line from passing between the two foreground superpixel sets M and N, so that one of M and N is preserved.
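The penalty construction can be sketched as follows: every pair formed by one superpixel from M and one from N contributes the same large constant, so any cut that separates the two foreground sets pays the full sum (the constant's value here is an assumption; the patent only specifies that it is "large"):

```python
def foreground_penalty(M, N, penalty=1e6):
    """Sum of a large preset penalty over every (m, n) pair with m in the
    reference-image foreground superpixel set M and n in the target-image
    foreground superpixel set N. Added to the Graph-Cut energy, this
    discourages seams passing between the two foreground sets."""
    return len(M) * len(N) * penalty
```

Because the per-pair penalty is constant, the sum reduces to |M| · |N| · penalty; a real solver would attach these penalties to the corresponding graph edges rather than a single scalar.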
It should be appreciated that a superpixel-based Graph-Cut algorithm is applied to solve for the optimal suture line. The Graph-Cut algorithm originally operates at the pixel level; to suit the foreground superpixel division required by this method, it is modified to operate at the superpixel level. Specifically: the superpixels of the whole overlapping region serve as the basic units to be segmented by the Graph-Cut algorithm, and the difference between the left-image and right-image pixels corresponding to each unit is computed, taking the color, gradient, and edge differences of all pixels of the left and right images lying in the same unit as the total superpixel-level difference. The foreground preservation penalty term and the color, gradient, and edge differences of the superpixels are then summed as the energy function of the Graph-Cut algorithm. Finally, in the image overlapping region, the superpixel-based Graph-Cut algorithm yields the optimal suture line, which divides the overlapping region into two parts such that pixel values transition gently on both sides of the line, giving the splicing result a good visual effect. Because the foreground preservation penalty term has been added to the basic Graph-Cut algorithm, the loss of foreground information is taken into account when solving for the optimal suture line, ensuring that it avoids the main foreground content and that the foreground is preserved in the final splicing result.
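The superpixel-level difference term can be sketched as follows: for every superpixel, the color and gradient differences between the reference image and the warped target image over that superpixel's pixels are summed (the weights and the use of `np.gradient` as a stand-in for the patent's gradient/edge operators are assumptions; a real implementation would feed these unit costs, plus the foreground penalty, into a max-flow/Graph-Cut solver):

```python
import numpy as np

def superpixel_energy(ref, tgt, mask, w_color=1.0, w_grad=1.0):
    """Per-superpixel difference between the reference image and the
    warped target image over the overlap region.

    ref, tgt: 2-D grayscale arrays of the overlap region.
    mask:     superpixel label matrix of the same shape.
    Returns {label: energy}."""
    color_diff = np.abs(ref - tgt)
    gy_r, gx_r = np.gradient(ref)
    gy_t, gx_t = np.gradient(tgt)
    grad_diff = np.abs(gx_r - gx_t) + np.abs(gy_r - gy_t)
    total = w_color * color_diff + w_grad * grad_diff
    return {int(label): float(total[mask == label].sum())
            for label in np.unique(mask)}
```

A superpixel where the two images agree contributes little energy, so the suture line is drawn toward such regions; superpixels covering moving foreground (large difference) or carrying the foreground penalty are avoided.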
In a specific implementation, after the optimal suture is obtained, the overlapping area is divided into two parts according to the optimal suture; in the final image stitching result diagram, the pixel value at the left side of the stitching line is in the left diagram, and the pixel value at the right side of the stitching line is in the right diagram; finally, obtaining a large-visual-angle spliced image with good visual effect and no ghost or seam; according to the embodiment, the optimal suture algorithm in image stitching is improved, and the foreground super-pixel extraction and Graph-Cut image segmentation algorithm are used for adding a foreground preservation strategy and the like, so that the optimal suture algorithm has the foreground information preservation capability, and meanwhile, the advantage of no double image of the optimal suture algorithm in image fusion is not affected.
In a specific implementation, the process of the present invention may be specifically described by taking the stitching of two images captured in a scene with large parallax as an example:
of the two input images, the left image (reference image) is defined as image a, the right image (target image) is defined as image B, and both image resolutions are 1080×720.
Step 1: feature point extraction. First, all SIFT feature points in images A and B are detected; 1000 SIFT feature points are extracted from each of image A and image B. The SIFT feature point set in image A is denoted S_A, and the SIFT feature point set in image B is denoted S_B.
Step 2: feature point matching. The feature point sets S_A and S_B are matched using the K-nearest-neighbor algorithm (with K = 2), and a threshold is set to filter out the worst matches; 900 matching point pairs are obtained, and the set of matching point pairs is denoted M. The remaining 100 feature points in images A and B that are unmatched or poorly matched are regarded as erroneously extracted feature points and discarded.
Step 3, matching and dividing; firstly, taking a homography matrix as a fitting model, and extracting partial matching in a matching point pair set M by using a RANSAC algorithm to obtain 550 matching point pairs; the matching point pair set accounts for most of all matching point pairs, and represents the matching relation between the characteristic points in the image background area, and is recorded as M back
Then the same method is applied again to the remaining matches, M − M_back, extracting 250 matching point pairs. This set represents the matching relation between feature points of the secondary, foreground region of the images and is denoted M_front. The 100 matching point pairs that remain, M − M_back − M_front, are regarded as mismatches or non-critical matches and discarded.
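The two-pass division of step 3 can be illustrated with a toy RANSAC. To keep the sketch dependency-light it fits a pure 2-D translation instead of the full homography model used in the embodiment; the point counts mirror the 55/25/10 split of the example (scaled down from 550/250/100).

```python
import numpy as np

def ransac_translation(src, dst, thresh=2.0, iters=200, seed=0):
    """Toy RANSAC: fit a 2-D translation (a stand-in for the homography
    fitting model) and return a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))          # minimal sample: one pair
        t = dst[k] - src[k]                 # candidate translation
        err = np.linalg.norm(dst - (src + t), axis=1)
        mask = err < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# 55 background pairs move by (10, 0); 25 foreground pairs by (30, 5);
# 10 pairs are random outliers (mismatches).
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(90, 2))
dst = src.copy()
dst[:55] += (10, 0)
dst[55:80] += (30, 5)
dst[80:] = rng.uniform(0, 100, size=(10, 2))

back = ransac_translation(src, dst)         # first pass: dominant motion
rest = ~back
front = np.zeros(90, dtype=bool)
front[rest] = ransac_translation(src[rest], dst[rest])  # second pass
```

The first pass captures the dominant (background) motion; re-running RANSAC on the leftovers isolates the foreground motion, and whatever survives neither pass is discarded as mismatched.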
Step 4, superpixel division of the overlapping area. First, the homography matrix H_back is computed from the matching point pair set M_back; it represents the projective transformation from the background plane of image B to the background plane of image A. Concretely, from the coordinate correspondences of all matching pairs in M_back, the eight parameters h1, h2, h3, h4, h5, h6, h7, h8 of H_back are solved by least squares. Then, with the same method, H_front is computed from M_front.
Next, since the background occupies the major portion of the image, image B is projectively transformed with H_back to align it to image A. The warped image B and image A now lie on the same two-dimensional plane (that of image A) and share an overlapping area. This area is denoted R, its portion in image A is denoted RA, and its portion in the warped image B is denoted RB.
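How the overlapping area R follows from H_back can be sketched by projecting the corners of image B and intersecting them with image A's extent. The matrix below is a hypothetical pure shift, not a value from the embodiment; a real H_back would also include rotation and perspective terms.

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to (N, 2) pixel coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

W, Hh = 1080, 720                       # resolution from the example
# Hypothetical H_back: a horizontal shift of image B by 600 px.
H_back = np.array([[1.0, 0.0, 600.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
corners_b = np.array([[0, 0], [W, 0], [W, Hh], [0, Hh]], float)
warped = apply_h(H_back, corners_b)

# Overlap R = intersection of warped B's bounding box with A's extent.
x0 = max(0, warped[:, 0].min()); x1 = min(W, warped[:, 0].max())
y0 = max(0, warped[:, 1].min()); y1 = min(Hh, warped[:, 1].max())
overlap = (x0, y0, x1, y1)
```

With this shift, the overlap is the right 480-pixel band of image A, i.e. the region RA; the matching band of the warped image B is RB.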
Finally, the portion RA of image A inside the overlapping area is taken as input and divided into superpixels with the SLIC algorithm (the superpixel count is set to 50 × 50), producing 2500 superpixels over the pixels of the overlapping area RA. The superpixel division result is stored as a two-dimensional matrix of the same size as the overlapping area. Each matrix element holds the label of the superpixel that the corresponding pixel coordinate belongs to: the superpixels are numbered 0 to 2499, and every pixel in the overlapping area is mapped to the number of the superpixel containing it.
Step 5, foreground superpixel extraction. The match division of step 3 already makes a preliminary distinction between the foreground and background of the image, but only in the form of feature point sets; converting these point sets into superpixel sets represents the foreground object more accurately. Because of parallax, the foreground object is imaged at different relative positions in images A and B. After the projective transformation with H_back, the background planes align well, but the misaligned foreground object appears as a ghost. To support the subsequent foreground-preservation strategy, both ghost copies of the foreground object must be distinguished in superpixel form.
First, from the foreground matching point pair set M_front obtained in step 3, the subset of feature points belonging to image A is extracted; this set represents the foreground object in image A and is denoted P_A^front. Likewise, the corresponding feature point set in image B represents the foreground object in image B and is denoted P_B^front. Because the earlier projective transformation of image B changed its pixel coordinates, P_B^front must also be transformed with H_back, finally giving the projected foreground feature point set P_B^front'.
Step 4 stored the superpixel division as a two-dimensional label matrix. Looking up the coordinates of each point in P_A^front in this matrix gives its superpixel, so the superpixel set corresponding to P_A^front is obtained and denoted SP_A^front. In the same way, from the point set P_B^front', the foreground superpixel set of image B within the overlapping area is obtained and denoted SP_B^front. This completes the foreground superpixel extraction of images A and B, producing SP_A^front and SP_B^front respectively.
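The lookup from foreground feature points to foreground superpixels reduces to indexing the label matrix with the point coordinates. The toy label map and point coordinates below are illustrative; in the embodiment the matrix is the step-4 SLIC result and the points come from M_front.

```python
import numpy as np

def foreground_superpixels(labels, points):
    """Map foreground feature points to the set of superpixel labels
    they fall in.  `labels` is the H x W label matrix from step 4;
    `points` is an (N, 2) array of (x, y) pixel coordinates."""
    xs = points[:, 0].astype(int)
    ys = points[:, 1].astype(int)
    return set(labels[ys, xs].tolist())

labels = np.repeat(np.arange(4), 25).reshape(10, 10)  # toy 10x10 label map
pts_a = np.array([[1.2, 0.7], [8.9, 9.3]])            # two foreground points
sp_front_a = foreground_superpixels(labels, pts_a)
```

Applied once to the image-A points and once to the projected image-B points, this produces the two sets written above as SP_A^front and SP_B^front.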
Step 6, setting the foreground-preservation penalty term. First, to ensure that the foreground object information is accurately preserved by the optimal-suture algorithm, the Graph-Cut algorithm on which it mainly depends is modified to cut the image on superpixels rather than on individual pixels. Concretely, every superpixel in the overlapping area R is added to the node set as a node to be cut in the Graph-Cut algorithm. A superpixel-based energy function, denoted Cost, is then computed to guide the solution of the optimal suture. Cost represents the price of cutting through different superpixels: the larger the Cost, the less likely two superpixels are to be cut apart. Cost consists of three parts: the colour difference, gradient difference and edge difference of the superpixels. The colour difference D_color is the mean difference between the pixel values of all pixels of a superpixel in RA and the corresponding pixel values in RB. Similarly, after computing edge maps of the overlapping area with the Canny operator, the edge difference of each superpixel between images A and B is obtained and denoted D_edge, and the gradient difference D_grad is obtained by the same kind of computation. The energy function is then Cost = D_color + D_grad + D_edge. This completes the superpixel-based adaptation of the optimal-suture algorithm.
To implement the foreground-preservation strategy, a constraint term, called the foreground-preservation penalty term, is added to the energy function. Concretely, step 5 produced the foreground superpixel sets of images A and B, SP_A^front and SP_B^front. A large foreground-preservation penalty term, far greater than D_color, D_grad and D_edge, is placed between every superpixel in SP_A^front and every superpixel in SP_B^front. This guarantees that the optimal suture cannot pass between SP_A^front and SP_B^front, so that at least one image of the foreground object is preserved intact by the cut; see fig. 1.
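A minimal sketch of the superpixel cost with the foreground-preservation penalty follows. For brevity only the colour term of Cost is computed (the gradient and edge terms are analogous), and the penalty is attached to any foreground superpixel rather than placed pairwise between the two sets as in the embodiment; these simplifications are assumptions of this sketch.

```python
import numpy as np

PENALTY = 1e6   # foreground-preservation penalty, >> any data cost

def superpixel_cost(label, ra, rb, labels, fg_a, fg_b):
    """Cost of cutting through one superpixel: mean |RA - RB| colour
    difference (gradient/edge terms omitted), plus the foreground
    penalty if the superpixel belongs to either foreground set."""
    mask = labels == label
    d_color = np.abs(ra[mask].astype(float) - rb[mask].astype(float)).mean()
    if label in fg_a or label in fg_b:
        return d_color + PENALTY
    return d_color

labels = np.zeros((4, 4), int); labels[:, 2:] = 1     # two toy superpixels
ra = np.full((4, 4), 100, np.uint8)                   # overlap part of A
rb = np.full((4, 4), 110, np.uint8)                   # overlap part of B
c_bg = superpixel_cost(0, ra, rb, labels, fg_a={1}, fg_b=set())
c_fg = superpixel_cost(1, ra, rb, labels, fg_a={1}, fg_b=set())
```

Because the penalised cost dwarfs every data term, a minimum-cut solver will route the seam around the foreground superpixels, which is exactly the preservation behaviour described above.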
Step 7, solving the optimal suture with the superpixel-based Graph-Cut algorithm. Graph-Cut is applied to solve for the optimal suture, finally yielding a seam that cuts the image along superpixel edges within the overlapping area.
Step 8, image synthesis. After the optimal suture is obtained, the final stitching result is generated. With the seam as the boundary, the overlapping area is divided in two: pixel values to the left of the seam are taken from RA in image A, and pixel values to the right are taken from RB in image B. Finally, the remaining uncut parts of images A and B are attached to form the stitched result image.
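The seam-based composition of step 8 can be sketched as copying RA to the left of the seam and RB elsewhere; the per-row seam columns below are arbitrary illustrative values (a real seam comes from the Graph-Cut of step 7).

```python
import numpy as np

def compose_along_seam(ra, rb, seam_cols):
    """Compose the overlap: for each row, pixels left of the seam come
    from RA (image A), pixels at/right of it come from RB (image B)."""
    h, w = ra.shape[:2]
    out = rb.copy()
    for y in range(h):
        out[y, :seam_cols[y]] = ra[y, :seam_cols[y]]
    return out

ra = np.full((3, 6), 1, np.uint8)       # overlap part of image A
rb = np.full((3, 6), 2, np.uint8)       # overlap part of warped image B
seam = np.array([2, 3, 4])              # seam column per row
mosaic = compose_along_seam(ra, rb, seam)
```

Because each output pixel comes from exactly one source image, no blending ghosts can appear along the seam; the remaining uncut parts of A and B are then attached around this composed overlap.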
Through the above scheme, this embodiment performs superpixel division on the overlapping area to obtain a superpixel division mask; obtains the matching point coordinates of the foreground feature matching point pairs of the reference image and the target image, and extracts the superpixels containing those coordinates according to the mask, obtaining a foreground superpixel set; and obtains the foreground-preservation penalty term and the preset suture corresponding to the foreground superpixel set, divides the overlapping area into two parts accordingly, and splices the reference image and the target image according to the divided overlapping area to obtain a spliced image. Foreground information is thereby preserved, avoiding the information loss that image-fusion methods are prone to; the image splicing effect and precision are improved; the time consumed by foreground-preserving image splicing is shortened, raising splicing speed and efficiency; and seams, ghosting and information loss in the spliced image are avoided, so that the user obtains a natural-looking high-resolution image without ghosting or seams, gains a wider field of view, and has a better experience.
Correspondingly, the invention further provides an image splicing device for reserving the foreground information.
Referring to fig. 8, fig. 8 is a functional block diagram of a first embodiment of an image stitching device for foreground information preservation according to the present invention.
In a first embodiment of the image stitching device for retaining foreground information, the image stitching device for retaining foreground information includes:
the feature extraction module 10 is configured to obtain a reference image and a target image to be spliced, extract image features in the reference image and the target image, and generate a reference feature point set and a target feature point set.
And the matching module 20 is configured to match the reference feature point set with the target feature point set to obtain matching points, and divide the matching points to obtain a background feature matching point pair and a foreground feature matching point pair.
And the stitching module 30 is configured to obtain an overlapping region corresponding to the background feature matching point pair and the foreground feature matching point pair, divide the overlapping region according to a preset suture line, and stitch the reference image and the target image according to the divided overlapping region to obtain a stitched image.
The feature extraction module 10 is further configured to obtain two original images to be spliced, captured simultaneously by two cameras located at different positions, and to use them respectively as a reference image and a target image; and to extract image features from the reference image and the target image according to a preset image feature extraction algorithm, generating a reference feature point set and a target feature point set; the reference feature point set comprises the coordinates and corresponding descriptors of all reference features, and the target feature point set comprises the coordinates and corresponding descriptors of all target features.
The matching module 20 is further configured to obtain feature point descriptors in the reference feature point set and the target feature point set, perform feature point matching on the reference feature point set and the target feature point set by using a K-nearest neighbor algorithm with the feature point descriptors, and reject and filter the error extraction points by using feature points with descriptor differences greater than a preset difference threshold as error extraction points, so as to obtain filtered matching points; and dividing the matching points by using a RANSAC algorithm to obtain a background feature matching point pair and a foreground feature matching point pair.
The matching module 20 is further configured to use a preset homography matrix as a fitting model, apply a RANSAC algorithm to extract a background matching point pair of the matching point pair on the background portion, and extract a foreground matching point pair of the matching point pair on the foreground portion; and eliminating the error matching points in the background matching point pair and the foreground matching point pair to obtain a background characteristic matching point pair and a foreground characteristic matching point pair.
The stitching module 30 is further configured to determine a mapping relationship from the target graph plane to the reference graph plane according to the background feature matching point pair, and obtain homography matrices corresponding to all matching points of the mapping relationship; mapping all pixels in the target graph to the reference graph plane according to the homography matrix to obtain an overlapping region after projection transformation; dividing the overlapping area according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
The stitching module 30 is further configured to perform superpixel division on the overlapping area to obtain a superpixel division mask; acquiring matching point coordinates of a foreground feature matching point pair of the reference image and the target image, and extracting superpixels containing the matching point coordinates according to the superpixel division mask to obtain a foreground superpixel set; and acquiring a foreground reservation penalty item and a preset suture line corresponding to the foreground super-pixel set, dividing the overlapping area into two parts according to the foreground reservation penalty item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
The stitching module 30 is further configured to obtain a reference image foreground super-pixel set and a target image foreground super-pixel set corresponding to the foreground super-pixel set; setting a preset penalty term between each superpixel in the reference picture foreground superpixel set and each superpixel in the target picture foreground superpixel set, and summing up and calculating all preset penalty terms to obtain a foreground reservation penalty term; taking the superpixels in the overlapping area as basic units to be segmented in a Graph-Cut algorithm, and calculating superpixel differences between target Graph superpixels and reference Graph superpixels corresponding to the basic units to be segmented; taking the super-pixel difference as an energy function of the Graph-Cut algorithm, calculating an optimal suture line of the overlapped area by utilizing the Graph-Cut algorithm according to the energy function, and taking the optimal suture line as a preset suture line; dividing the overlapping area into two parts according to the foreground reservation punishment item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
The steps of implementing each functional module of the image stitching device for foreground information retention may refer to each embodiment of the image stitching method for foreground information retention of the present invention, which is not described herein.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores an image splicing program with retained foreground information, and the image splicing program with retained foreground information realizes the following operations when being executed by a processor:
acquiring a reference image and a target image to be spliced, and respectively extracting image features in the reference image and the target image to generate a reference feature point set and a target feature point set;
performing feature point matching on the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points to obtain a background feature matching point pair and a foreground feature matching point pair;
and obtaining an overlapping region corresponding to the background feature matching point pair and the foreground feature matching point pair, dividing the overlapping region according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping region to obtain a spliced image.
Further, the image stitching program for retaining the foreground information further realizes the following operations when executed by the processor:
Two original images to be spliced are obtained by two cameras located at different positions capturing simultaneously, and the two original images are used respectively as a reference image and a target image;
respectively extracting image features in the reference image and the target image according to a preset image feature extraction algorithm, and respectively generating a reference feature point set and a target feature point set; the reference feature point set comprises coordinates and corresponding descriptors of all reference features, and the target feature point set comprises coordinates and corresponding descriptors of all target features.
Further, the image stitching program for retaining the foreground information further realizes the following operations when executed by the processor:
acquiring feature point descriptors in the reference feature point set and the target feature point set, performing feature point matching on the reference feature point set and the target feature point set by using a K-nearest neighbor algorithm by using the feature point descriptors, taking feature points with descriptor differences larger than a preset difference threshold value as error extraction points, removing and filtering the error extraction points, and obtaining filtered matching points;
and dividing the matching points by using a RANSAC algorithm to obtain a background feature matching point pair and a foreground feature matching point pair.
Further, the image stitching program for retaining the foreground information further realizes the following operations when executed by the processor:
using a preset homography matrix as a fitting model, extracting a background matching point pair of the matching point pair on a background part by using a RANSAC algorithm, and extracting a foreground matching point pair of the matching point pair on a foreground part;
and eliminating the error matching points in the background matching point pair and the foreground matching point pair to obtain a background characteristic matching point pair and a foreground characteristic matching point pair.
Further, the image stitching program for retaining the foreground information further realizes the following operations when executed by the processor:
determining the mapping relation from the target image plane to the reference image plane according to the background feature matching point pairs, and obtaining homography matrixes of all matching points corresponding to the mapping relation;
mapping all pixels in the target graph to the reference graph plane according to the homography matrix to obtain an overlapping region after projection transformation;
dividing the overlapping area according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
Further, the image stitching program for retaining the foreground information further realizes the following operations when executed by the processor:
Performing superpixel division on the overlapped area to obtain a superpixel division mask;
acquiring matching point coordinates of a foreground feature matching point pair of the reference image and the target image, and extracting superpixels containing the matching point coordinates according to the superpixel division mask to obtain a foreground superpixel set;
and acquiring a foreground reservation penalty item and a preset suture line corresponding to the foreground super-pixel set, dividing the overlapping area into two parts according to the foreground reservation penalty item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
Further, the image stitching program for retaining the foreground information further realizes the following operations when executed by the processor:
acquiring a reference image foreground super-pixel set and a target image foreground super-pixel set corresponding to the foreground super-pixel set;
setting a preset penalty term between each superpixel in the reference picture foreground superpixel set and each superpixel in the target picture foreground superpixel set, and summing up and calculating all preset penalty terms to obtain a foreground reservation penalty term;
taking the superpixels in the overlapping area as basic units to be segmented in a Graph-Cut algorithm, and calculating superpixel differences between target Graph superpixels and reference Graph superpixels corresponding to the basic units to be segmented;
The super-pixel difference is used as an energy function of the Graph-Cut algorithm,
calculating an optimal suture line of the overlapped area by utilizing the Graph-Cut algorithm according to the energy function, and taking the optimal suture line as a preset suture line;
dividing the overlapping area into two parts according to the foreground reservation punishment item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
Through the above scheme, this embodiment obtains the reference image and the target image to be spliced, extracts the image features of each, and generates the reference feature point set and the target feature point set; matches the two feature point sets to obtain matching points and divides them into background feature matching point pairs and foreground feature matching point pairs; and obtains the overlapping area corresponding to those pairs, divides it according to the preset suture, and splices the reference image and the target image according to the divided overlapping area to obtain a spliced image. Foreground information is thereby preserved, avoiding the information loss that image-fusion methods are prone to; the image splicing effect and precision are improved; the time consumed by foreground-preserving image splicing is shortened, raising splicing speed and efficiency; and seams, ghosting and information loss in the spliced image are avoided, so that the user obtains a natural-looking high-resolution image without ghosting or seams, gains a wider field of view, and has a better experience.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. The image splicing method for the foreground information reservation is characterized by comprising the following steps of:
Acquiring a reference image and a target image to be spliced, and respectively extracting image features in the reference image and the target image to generate a reference feature point set and a target feature point set;
performing feature point matching on the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points to obtain a background feature matching point pair and a foreground feature matching point pair;
and obtaining an overlapping region corresponding to the background feature matching point pair and the foreground feature matching point pair, dividing the overlapping region according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping region to obtain a spliced image.
2. The image stitching method for retaining foreground information according to claim 1, wherein the obtaining a reference image and a target image to be stitched, extracting image features in the reference image and the target image, respectively, and generating a reference feature point set and a target feature point set, includes:
two original images to be spliced are obtained by two cameras located at different positions capturing simultaneously, and the two original images are used respectively as a reference image and a target image;
respectively extracting image features in the reference image and the target image according to a preset image feature extraction algorithm, and respectively generating a reference feature point set and a target feature point set; the reference feature point set comprises coordinates and corresponding descriptors of all reference features, and the target feature point set comprises coordinates and corresponding descriptors of all target features.
3. The image stitching method of claim 1, wherein the matching the reference feature point set with the target feature point set to obtain a matching point, and dividing the matching point to obtain a background feature matching point pair and a foreground feature matching point pair, includes:
acquiring feature point descriptors in the reference feature point set and the target feature point set, performing feature point matching on the reference feature point set and the target feature point set by using a K-nearest neighbor algorithm by using the feature point descriptors, taking feature points with descriptor differences larger than a preset difference threshold value as error extraction points, removing and filtering the error extraction points, and obtaining filtered matching points;
and dividing the matching points by using a RANSAC algorithm to obtain a background feature matching point pair and a foreground feature matching point pair.
4. The image stitching method for foreground information retention according to claim 3, wherein said applying a RANSAC algorithm to divide the matching points to obtain a background feature matching point pair and a foreground feature matching point pair includes:
using a preset homography matrix as a fitting model, extracting a background matching point pair of the matching point pair on a background part by using a RANSAC algorithm, and extracting a foreground matching point pair of the matching point pair on a foreground part;
And eliminating the error matching points in the background matching point pair and the foreground matching point pair to obtain a background characteristic matching point pair and a foreground characteristic matching point pair.
5. The image stitching method for retaining foreground information according to claim 1, wherein the obtaining an overlapping region corresponding to the background feature matching point pair and the foreground feature matching point pair, dividing the overlapping region according to a preset suture line, and stitching the reference image and the target image according to the divided overlapping region, and obtaining a stitched image includes:
determining the mapping relation from the target image plane to the reference image plane according to the background feature matching point pairs, and obtaining homography matrixes of all matching points corresponding to the mapping relation;
mapping all pixels in the target graph to the reference graph plane according to the homography matrix to obtain an overlapping region after projection transformation;
dividing the overlapping area according to a preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
6. The image stitching method for retaining foreground information according to claim 5, wherein the dividing the overlapping area according to a preset stitching line, and stitching the reference image and the target image according to the divided overlapping area, to obtain a stitched image, includes:
Performing superpixel division on the overlapped area to obtain a superpixel division mask;
acquiring matching point coordinates of a foreground feature matching point pair of the reference image and the target image, and extracting superpixels containing the matching point coordinates according to the superpixel division mask to obtain a foreground superpixel set;
and acquiring a foreground reservation penalty item and a preset suture line corresponding to the foreground super-pixel set, dividing the overlapping area into two parts according to the foreground reservation penalty item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
7. The method for stitching images for retaining foreground information according to claim 6, wherein said obtaining a foreground retaining penalty term and a preset stitching line corresponding to the foreground super-pixel set divides the overlapping area into two according to the foreground retaining penalty term and the preset stitching line, and stitching the reference image and the target image according to the divided overlapping area, and obtaining a stitched image includes:
acquiring a reference image foreground super-pixel set and a target image foreground super-pixel set corresponding to the foreground super-pixel set;
Setting a preset penalty term between each superpixel in the reference picture foreground superpixel set and each superpixel in the target picture foreground superpixel set, and summing up and calculating all preset penalty terms to obtain a foreground reservation penalty term;
taking the superpixels in the overlapping area as basic units to be segmented in a Graph-Cut algorithm, and calculating superpixel differences between target Graph superpixels and reference Graph superpixels corresponding to the basic units to be segmented;
the super-pixel difference is used as an energy function of the Graph-Cut algorithm,
calculating an optimal suture line of the overlapped area by utilizing the Graph-Cut algorithm according to the energy function, and taking the optimal suture line as a preset suture line;
dividing the overlapping area into two parts according to the foreground reservation punishment item and the preset suture line, and splicing the reference image and the target image according to the divided overlapping area to obtain a spliced image.
8. An image stitching device for retaining foreground information, wherein the image stitching device for retaining foreground information comprises:
the feature extraction module is used for acquiring a reference image and a target image to be stitched, extracting image features from the reference image and the target image respectively, and generating a reference feature point set and a target feature point set;
the matching module is used for matching the feature points of the reference feature point set and the target feature point set to obtain matching points, and dividing the matching points to obtain background feature matching point pairs and foreground feature matching point pairs;
and the stitching module is used for obtaining the overlapping areas corresponding to the background feature matching point pairs and the foreground feature matching point pairs, dividing the overlapping areas according to a preset suture line, and stitching the reference image and the target image according to the divided overlapping areas to obtain a stitched image.
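A skeletal rendering of the three modules in claim 8, with trivial stand-ins (exact-value matching, a fixed intensity threshold of 128 for foreground, lists of intensities as "images") in place of real feature extraction and matching such as SIFT plus RANSAC:

```python
class FeatureExtractionModule:
    """Extracts feature points from the reference and target images."""
    def extract(self, image):
        # stand-in: every (index, intensity) pair is a "feature point"
        return [(i, v) for i, v in enumerate(image)]

class MatchingModule:
    """Matches feature points, then divides the matches into background and
    foreground feature matching point pairs."""
    def match(self, ref_points, tgt_points):
        matches = [(r, t) for r in ref_points for t in tgt_points if r[1] == t[1]]
        background = [m for m in matches if m[0][1] < 128]   # toy threshold
        foreground = [m for m in matches if m[0][1] >= 128]
        return background, foreground

class StitchingModule:
    """Splits the overlap at the preset suture line and stitches the images."""
    def stitch(self, ref_image, tgt_image, seam):
        return ref_image[:seam] + tgt_image[seam:]

ref_image = [10, 20, 200, 40]
tgt_image = [10, 20, 200, 99]
extractor = FeatureExtractionModule()
background, foreground = MatchingModule().match(
    extractor.extract(ref_image), extractor.extract(tgt_image))
stitched = StitchingModule().stitch(ref_image, tgt_image, seam=2)
```

The point of the sketch is the module boundaries, not the stand-in logic: each module consumes exactly what the previous one produces, mirroring the feature extraction, matching, and stitching steps of the claimed device.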
9. An image stitching device for retaining foreground information, wherein the image stitching device comprises: a memory, a processor, and an image stitching program for retaining foreground information stored in the memory and executable on the processor, the program being configured to implement the steps of the image stitching method for retaining foreground information according to any one of claims 1 to 7.
10. A storage medium, wherein an image stitching program for retaining foreground information is stored on the storage medium, and when executed by a processor, the program implements the steps of the image stitching method for retaining foreground information according to any one of claims 1 to 7.
CN202310099812.5A 2023-02-03 2023-02-03 Image splicing method, device and equipment for retaining foreground information and storage medium Withdrawn CN116109484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310099812.5A CN116109484A (en) 2023-02-03 2023-02-03 Image splicing method, device and equipment for retaining foreground information and storage medium

Publications (1)

Publication Number Publication Date
CN116109484A true CN116109484A (en) 2023-05-12

Family

ID=86255796

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117336620A (en) * 2023-11-24 2024-01-02 北京智汇云舟科技有限公司 Adaptive video stitching method and system based on deep learning
CN117336620B (en) * 2023-11-24 2024-02-09 北京智汇云舟科技有限公司 Adaptive video stitching method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230512