CN113344835A - Image splicing method and device, computer readable storage medium and electronic equipment - Google Patents

Image splicing method and device, computer readable storage medium and electronic equipment Download PDF

Info

Publication number
CN113344835A
Authority
CN
China
Prior art keywords
image
determining
pose
feature points
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110654609.0A
Other languages
Chinese (zh)
Inventor
饶童
胡洋
周杰
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seashell Housing Beijing Technology Co Ltd
Original Assignee
Beijing Fangjianghu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Fangjianghu Technology Co Ltd
Priority to CN202110654609.0A
Publication of CN113344835A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present disclosure disclose an image stitching method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: obtaining the feature point distribution of a first image acquired by a camera device in a first pose; determining a pose change amount of the camera device based on the feature point distribution of the first image and set redundancy information; and determining a second pose based on the pose change amount, and controlling the camera device to acquire, in the second pose, a second image to be stitched with the first image. In this embodiment, the pose adjustment required of the camera device is determined by quickly determining the feature point distribution, which speeds up acquisition of the second image; sufficient redundancy is guaranteed between consecutively captured images without generating excessive redundancy, which improves both the efficiency and the quality of image stitching.

Description

Image splicing method and device, computer readable storage medium and electronic equipment
Technical Field
The present disclosure relates to image stitching technologies, and in particular, to an image stitching method and apparatus, a computer-readable storage medium, and an electronic device.
Background
In purely image-based panoramic stitching, multiple pictures must be captured and stitched together to generate a panoramic image. The overlapping regions between images carry important redundancy that is used to solve for the geometric relationship between the images. However, excessive redundancy adds no extra information and only increases the data volume and the shooting time. How to reasonably configure the number of pictures to capture is therefore an important problem.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. Embodiments of the disclosure provide an image stitching method and apparatus, a computer-readable storage medium, and an electronic device.
According to an aspect of the embodiments of the present disclosure, there is provided an image stitching method, including:
obtaining a feature point distribution of a first image acquired by a camera device in a first pose;
determining a pose change amount of the camera device based on the feature point distribution of the first image and set redundancy information;
and determining a second pose based on the pose change amount, and controlling the camera device to acquire, in the second pose, a second image to be stitched with the first image.
Optionally, obtaining the feature point distribution of the first image acquired by the camera device in the first pose includes:
acquiring the first image with the camera device in the first pose;
performing feature point extraction on the first image to obtain a plurality of first feature points;
and determining the feature point distribution based on the distribution of the plurality of first feature points in the first image.
Optionally, the method further comprises:
determining, according to the set redundancy information, whether the second image meets a set condition for stitching with the first image;
in response to the second image meeting the set condition, stitching the first image and the second image to obtain a stitched third image;
and in response to the second image not meeting the set condition, iteratively performing the following until the set condition is met: updating the pose change amount, re-acquiring an updated second image, and determining whether the updated second image meets the set condition for stitching with the first image.
Optionally, updating the pose change amount to re-acquire an updated second image includes:
reducing the pose change amount to obtain an updated pose change amount, and determining a third pose based on the updated pose change amount;
and controlling the camera device to acquire the updated second image in the third pose.
Optionally, determining, according to the set redundancy information, whether the second image meets the set condition for stitching with the first image includes:
determining the number of matched feature point pairs between the second image and the first image;
determining that the second image meets the set condition in response to the number of feature point pairs being greater than or equal to the set redundancy information;
and determining that the second image does not meet the set condition in response to the number of feature point pairs being smaller than the set redundancy information.
Optionally, determining the number of matched feature point pairs between the second image and the first image includes:
extracting feature points from the second image to obtain a plurality of second feature points corresponding to the second image;
determining correspondences between the plurality of first feature points and the plurality of second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points;
and determining the number of feature point pairs based on the first feature points and the second feature points for which a correspondence exists.
Optionally, stitching the first image and the second image to obtain the stitched third image includes:
determining a plurality of feature point pairs shared by the first image and the second image;
determining a mapping relationship between the first image and the second image based on the plurality of feature point pairs;
and mapping the first image and the second image to a panoramic coordinate system based on the mapping relationship, and stitching the first image and the second image in the panoramic coordinate system to obtain the third image.
Optionally, determining the pose change amount of the camera device based on the feature point distribution of the first image and the set redundancy information includes:
determining an overlapping portion of the first image based on the set redundancy information and the feature point distribution of the first image;
and determining the pose change amount of the camera device based on the proportion of the first image occupied by the overlapping portion.
Optionally, determining the overlapping portion of the first image based on the set redundancy information and the feature point distribution of the first image includes:
determining the number of overlapping feature points required for image stitching based on the set redundancy information;
and determining the overlapping portion of the first image according to the number of overlapping feature points and the feature point distribution of the first image.
According to another aspect of the embodiments of the present disclosure, there is provided an image stitching device, including:
a feature distribution module, configured to obtain a feature point distribution of a first image acquired by a camera device in a first pose;
a pose change module, configured to determine a pose change amount of the camera device based on the feature point distribution of the first image and set redundancy information;
and an image stitching module, configured to determine a second pose based on the pose change amount and control the camera device to acquire, in the second pose, a second image to be stitched with the first image.
Optionally, the feature distribution module is specifically configured to: acquire the first image with the camera device in the first pose; extract feature points from the first image to obtain a plurality of first feature points; and determine the feature point distribution based on the distribution of the plurality of first feature points in the first image.
Optionally, the apparatus further comprises:
a stitching identification module, configured to: determine, according to the set redundancy information, whether the second image meets a set condition for stitching with the first image; in response to the second image meeting the set condition, stitch the first image and the second image to obtain a stitched third image; and in response to the second image not meeting the set condition, iteratively perform the following until the set condition is met: update the pose change amount, re-acquire an updated second image, and determine whether the updated second image meets the set condition for stitching with the first image.
Optionally, when updating the pose change amount to re-acquire an updated second image, the stitching identification module is configured to reduce the pose change amount to obtain an updated pose change amount, determine a third pose based on the updated pose change amount, and control the camera device to acquire the updated second image in the third pose.
Optionally, when determining, according to the set redundancy information, whether the second image meets the set condition for stitching with the first image, the stitching identification module is configured to determine the number of matched feature point pairs between the second image and the first image; determine that the second image meets the set condition in response to the number of feature point pairs being greater than or equal to the set redundancy information; and determine that the second image does not meet the set condition in response to the number of feature point pairs being smaller than the set redundancy information.
Optionally, when determining the number of matched feature point pairs between the second image and the first image, the stitching identification module is configured to extract feature points from the second image to obtain a plurality of second feature points corresponding to the second image; determine correspondences between the plurality of first feature points and the plurality of second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and determine the number of feature point pairs based on the first feature points and the second feature points for which a correspondence exists.
Optionally, when stitching the first image and the second image to obtain the stitched third image, the stitching identification module is configured to determine a plurality of feature point pairs shared by the first image and the second image; determine a mapping relationship between the first image and the second image based on the plurality of feature point pairs; and map the first image and the second image to a panoramic coordinate system based on the mapping relationship and stitch them in the panoramic coordinate system to obtain the third image.
Optionally, the pose change module is specifically configured to determine an overlapping portion of the first image based on the set redundancy information and the feature point distribution of the first image, and determine the pose change amount of the camera device based on the proportion of the first image occupied by the overlapping portion.
Optionally, when determining the overlapping portion of the first image based on the set redundancy information and the feature point distribution of the first image, the pose change module is configured to determine the number of overlapping feature points required for image stitching based on the set redundancy information, and determine the overlapping portion of the first image according to the number of overlapping feature points and the feature point distribution of the first image.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the image stitching method according to any one of the embodiments.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image stitching method according to any one of the embodiments.
With the image stitching method and apparatus, computer-readable storage medium, and electronic device provided by the embodiments of the present disclosure, the feature point distribution of a first image acquired by a camera device in a first pose is obtained; a pose change amount of the camera device is determined based on the feature point distribution of the first image and set redundancy information; and a second pose is determined based on the pose change amount, and the camera device is controlled to acquire, in the second pose, a second image to be stitched with the first image. In this embodiment, the pose adjustment required of the camera device is determined by quickly determining the feature point distribution, which speeds up acquisition of the second image; sufficient redundancy is guaranteed between consecutively captured images without generating excessive redundancy, which improves both the efficiency and the quality of image stitching.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic flowchart of an image stitching method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a schematic flow chart of step 102 in the embodiment shown in FIG. 1 of the present disclosure.
Fig. 3 is a flowchart illustrating an image stitching method according to another exemplary embodiment of the present disclosure.
Fig. 4 is a schematic flow chart of step 104 in the embodiment shown in fig. 1 of the present disclosure.
Fig. 5 is a schematic structural diagram of an image stitching device according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and do not imply any particular technical meaning or any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure describes only an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Exemplary method
Fig. 1 is a schematic flowchart of an image stitching method according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, as shown in fig. 1, and includes the following steps:
and 102, obtaining the characteristic point distribution of a first image acquired by the camera in a first position.
The characteristic point distribution is the density distribution of the characteristic points in the first image, more characteristic points are arranged in a smaller image area, the characteristic points in the area are more densely distributed, fewer characteristic points are arranged in a larger image area, and the characteristic points in the area are more sparsely distributed; when the distribution of the feature points is known, the ratio of the set number of feature points to the first image can be determined.
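As a minimal illustration of what the feature point distribution can mean in practice, the sketch below (not part of the patent) detects keypoints with OpenCV's ORB and counts them per grid cell; the grid size, the function name, and the choice of ORB are assumptions made only for this example.

```python
# Hypothetical sketch: approximate the feature point density distribution of an
# image by counting ORB keypoints in a coarse grid of cells.
import cv2
import numpy as np

def feature_point_density(image_bgr, grid=(4, 8), n_features=2000):
    """Return the detected keypoints and a rows-by-cols count map of their density."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints = orb.detect(gray, None)
    rows, cols = grid
    h, w = gray.shape
    density = np.zeros(grid, dtype=np.int32)
    for kp in keypoints:
        x, y = kp.pt
        r = min(int(y / h * rows), rows - 1)
        c = min(int(x / w * cols), cols - 1)
        density[r, c] += 1
    return keypoints, density
```

A dense cell indicates richly textured content, while a sparse cell indicates that few features are available there for matching.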
Step 104: determine the pose change amount of the camera device based on the feature point distribution of the first image and the set redundancy information.
In an embodiment, the set redundancy information can be configured for the specific scene. The pose of the camera device is adjusted based on the set redundancy information and the feature point distribution so that an image captured at the adjusted pose can meet the condition for stitching with the first image. The pose change amount may include a pitch angle and/or a yaw angle.
Step 106: determine a second pose based on the pose change amount, and control the camera device to acquire, in the second pose, a second image to be stitched with the first image.
In this embodiment, the second pose can be obtained from the pose change amount, the first pose, and the shooting direction. The camera device is controlled to rotate by the pose change amount along the shooting direction so that it reaches the second pose. A second image collected in the second pose has a much higher probability of being stitchable with the first image, which speeds up image stitching.
With the image stitching method provided by this embodiment of the present disclosure, the feature point distribution of the first image acquired by the camera device in the first pose is obtained; the pose change amount of the camera device is determined based on the feature point distribution of the first image and the set redundancy information; and the second pose is determined based on the pose change amount, and the camera device is controlled to acquire, in the second pose, the second image to be stitched with the first image. The pose adjustment required of the camera device is determined by quickly determining the feature point distribution, which speeds up acquisition of the second image; sufficient redundancy is guaranteed between consecutively captured images without generating excessive redundancy, improving both the efficiency and the quality of image stitching.
As shown in fig. 2, based on the embodiment shown in fig. 1, step 102 may include the following steps:
step 1021, acquiring a first image based on the camera device in the first pose.
Optionally, the camera device may be placed on a pan/tilt head, and the pan/tilt head is controlled to enable the camera device to acquire the first image in the first attitude state.
Step 1022: perform feature point extraction on the first image to obtain a plurality of first feature points.
Optionally, the feature points may be salient points in the image, such as corner points, and the feature points in the first image may be extracted with the ORB algorithm to obtain the plurality of first feature points.
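As a concrete, purely illustrative example of this step, the following sketch extracts ORB keypoints and descriptors with OpenCV; the parameter values are arbitrary, and the patent only names ORB as one possible extractor.

```python
# Illustrative ORB extraction for the first image; parameters are example values.
import cv2

def extract_first_feature_points(image_path, n_features=1500):
    """Extract ORB keypoints ("first feature points") and their binary descriptors."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=n_features, scaleFactor=1.2, nlevels=8)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```

The descriptors computed here can be reused later when feature point pairs between two images are counted.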
Step 1023: determine the feature point distribution based on the distribution of the plurality of first feature points in the first image.
After all feature points contained in the first image are determined, the image area occupied by a set number of feature points can be computed; for example, half of the feature points may lie within the left third of the first image. When the first image needs to be stitched with another image, the area proportion that the overlapping portion should occupy in the first image can be determined from this feature distribution, and the camera device is rotated according to that proportion. This raises the probability that the next captured image can be stitched with the first image and therefore improves stitching efficiency.
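One way to turn the feature point distribution into an overlap estimate, sketched below under assumed conventions (the patent gives no formula), is to sort the keypoint x-coordinates and measure how wide a strip at one side of the image must be to contain the required number of points; the helper name and the choice of side are illustrative only.

```python
# Hedged sketch: fraction of the image width needed to cover a target number of
# feature points, measured from one side of the image.
import numpy as np

def overlap_width_fraction(keypoints, image_width, target_points, side="right"):
    """Return the width fraction of the strip that contains `target_points` keypoints."""
    xs = np.sort(np.array([kp.pt[0] for kp in keypoints]))
    if len(xs) < target_points:
        return 1.0  # too few features: treat the whole image as the overlap
    if side == "right":
        cutoff = xs[len(xs) - target_points]      # strip [cutoff, W] holds the points
        return (image_width - cutoff) / image_width
    cutoff = xs[target_points - 1]                # strip [0, cutoff] holds the points
    return cutoff / image_width
```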
Fig. 3 is a flowchart illustrating an image stitching method according to another exemplary embodiment of the present disclosure. As shown in fig. 3, the method comprises the following steps:
step 302, obtaining the distribution of characteristic points of a first image acquired by the camera device in a first pose.
And step 304, determining the pose variation of the image pickup apparatus based on the feature point distribution of the first image and the set redundant information.
And step 306, determining a second pose based on the pose variation, and controlling the camera device to acquire a second image spliced with the first image at the second pose.
Step 308, determining whether the second image meets the setting condition for splicing with the first image according to the set redundant information; if yes, go to step 310; otherwise, step 312 is performed.
And step 310, splicing the first image and the second image to obtain a spliced third image.
And step 312, updating the pose variation to obtain the updated second image again, and returning to execute step 308.
Optionally, the pose change amount is reduced to obtain an updated pose change amount, a third pose is determined based on the updated pose change amount, and the camera device is controlled to acquire the updated second image in the third pose.
Optionally, when the camera device captures a panoramic image, all image information in the scene needs to be collected through two rotation modes, pitch and yaw, and the panoramic image is then obtained by stitching all the collected images. The image redundancy required for panoramic stitching can be set according to the current scene, and for the same scene the set redundancy information is fixed. In other words, certain image features help establish the mapping relationship between images, so the richer the texture in a scene (i.e., the denser the feature point distribution), the more feature points can be obtained and the smaller the required overlapping area.
During incremental shooting, after the first image is captured, its feature information is first extracted and the feature point distribution in the first image is determined. On that basis, the pan-tilt head is controlled to select the pose change amount (including the rotation angle in pitch and/or yaw) along the set direction. The richer (denser) the feature information in the first image, the smaller the overlapping area required for the next shot and the larger the pitch and/or yaw rotation angle applied by the pan-tilt head; conversely, the sparser the features, the smaller the rotation angle.

Optionally, step 308 may include: determining the number of matched feature point pairs between the second image and the first image; determining that the second image meets the set condition in response to the number of feature point pairs being greater than or equal to the set redundancy information; and determining that the second image does not meet the set condition in response to the number of feature point pairs being smaller than the set redundancy information.
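The control flow just described can be summarized by a short, heavily simplified sketch; matched_pairs_with_first, rotate_by, and capture are placeholder callables standing in for the feature matching step and the pan-tilt/camera control, and none of them correspond to a real API named in the patent.

```python
# Hedged sketch of steps 306-312: rotate, capture, check the redundancy
# condition, and retry with a smaller rotation until it is met.
def capture_second_image(matched_pairs_with_first, rotate_by, capture,
                         delta_yaw_deg, min_pairs, shrink=0.8, max_tries=5):
    """Return a second image with at least `min_pairs` matched feature point pairs."""
    for _ in range(max_tries):
        rotate_by(delta_yaw_deg)                      # move to the candidate second pose
        second = capture()                            # acquire a candidate second image
        if matched_pairs_with_first(second) >= min_pairs:   # set condition met
            return second, delta_yaw_deg
        rotate_by(-delta_yaw_deg)                     # undo, then try a smaller step
        delta_yaw_deg *= shrink                       # reduce the pose change amount
    raise RuntimeError("could not reach the required overlap")
```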
In this embodiment, when the obtained second image does not meet the set condition for stitching with the first image, the number of feature points in the overlapping area between the second image and the first image is smaller than the set redundancy information. Stitching the first image and the second image in this case would degrade the stitching quality and could, for example, leave gaps in the stitched panoramic image.
Optionally, determining the number of matched feature point pairs between the second image and the first image includes:
extracting feature points from the second image to obtain a plurality of second feature points corresponding to the second image;
determining correspondences between the plurality of first feature points and the plurality of second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points;
and determining the number of feature point pairs based on the first feature points and the second feature points for which a correspondence exists.
A feature point can be uniquely represented by its position (coordinates) and its description (descriptor). This embodiment uses the feature descriptors to determine which feature points correspond to the same real-world point; a feature descriptor is a representation of a feature point computed from the pixels around it. Combining the descriptor with the position uniquely identifies a feature point, and matching descriptors between the first image and the second image yields multiple feature point pairs, which improves the accuracy of the determined pairs and hence the accuracy of image stitching.
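A minimal descriptor-matching sketch follows; it assumes binary ORB descriptors (hence Hamming distance with cross-checking) and an arbitrary distance cutoff, neither of which is prescribed by the patent.

```python
# Illustrative counting of matched feature point pairs between two images.
import cv2

def count_matched_pairs(desc_first, desc_second, max_distance=64):
    """Count descriptor pairs that mutually match and are reasonably close."""
    if desc_first is None or desc_second is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_first, desc_second)
    return sum(1 for m in matches if m.distance < max_distance)
```

The resulting count is what gets compared against the set redundancy information in step 308.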
In some alternative embodiments, step 310 in the embodiment shown in fig. 3 may include:
determining a plurality of feature point pairs shared by the first image and the second image;
determining a mapping relationship between the first image and the second image based on the plurality of feature point pairs;
and mapping the first image and the second image to a panoramic coordinate system based on the mapping relationship, and stitching them in the panoramic coordinate system to obtain the third image.
Optionally, in this embodiment, the feature point pairs are used to determine whether a valid mapping relationship can be found between the first image and the second image. The two images are then mapped to the panoramic coordinate system through this mapping relationship and stitched there to obtain the third image. The second image is then treated as the new first image, the camera continues to rotate in the set direction to obtain further images, and the panoramic image is obtained by stitching all of them.
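A compact stitching sketch is given below under common assumptions: OpenCV, a RANSAC homography as the "mapping relationship", and the first image's frame standing in for the panoramic coordinate system. The patent fixes none of these choices, and a real implementation would add seam blending.

```python
# Hedged sketch of step 310: estimate a homography from matched pairs and warp
# the second image into the first image's frame.
import cv2
import numpy as np

def stitch_pair(img1, img2, kps1, kps2, matches, min_pairs=20):
    """Warp img2 onto img1 with a RANSAC homography and overlay the result."""
    if len(matches) < min_pairs:
        return None  # not enough redundancy to solve the mapping reliably
    pts1 = np.float32([kps1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kps2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    canvas[:h1, :w1] = img1   # naive overlay; seam blending would go here
    return canvas
```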
As shown in fig. 4, based on the embodiment shown in fig. 1, step 104 may include the following steps:
step 1041, determining an overlapping portion of the first image based on the set redundant information and the feature point distribution of the first image.
Optionally, determining the number of overlapped feature points required by image splicing based on the set redundant information; and determining the overlapped part of the first image according to the number of the overlapped characteristic points and the characteristic point distribution of the first image.
Alternatively, the set redundant information may be a set number of overlapping feature points (that is, when two images are stitched, feature points included in portions existing in both images, the overlapping feature points respectively have corresponding feature points in the first image and the second image, but correspond to the same real point in an actual scene), and when the distribution of the feature points is known, the area occupied by the set number of overlapping feature points in the first image, that is, the overlapping portion, may be directly determined.
Step 1042: determine the pose change amount of the camera device based on the proportion of the first image occupied by the overlapping portion.
After the overlapping portion is determined, its area proportion in the first image is known. The rotation angle of the camera device can then be determined by combining this proportion with data such as the camera intrinsics, so that the next image contains both the overlapping portion and new scene content. Because the second image contains the overlapping portion, it can be stitched with the first image, which contributes to the global image; because the same content is not captured repeatedly, the efficiency of panoramic stitching is further improved.
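To make the relationship between the overlap proportion and the rotation angle concrete, the sketch below assumes an ideal pinhole camera with a known focal length in pixels and a pure yaw rotation; the patent only states that the overlap proportion is combined with the camera intrinsics, without giving a formula.

```python
# Hedged sketch: largest yaw step (in radians) that keeps the right-hand
# `overlap_fraction` of the current image visible at the left edge of the next one.
import math

def yaw_step_from_overlap(image_width_px, focal_px, overlap_fraction):
    """Yaw rotation preserving the requested horizontal overlap (pinhole model)."""
    cx = image_width_px / 2.0
    half_fov = math.atan2(cx, focal_px)                       # half the horizontal FOV
    x_boundary = image_width_px * (1.0 - overlap_fraction)    # overlap strip starts here
    alpha = math.atan2(x_boundary - cx, focal_px)             # viewing angle of that column
    return alpha + half_fov
```

With overlap_fraction = 1.0 the step is zero, and with overlap_fraction = 0.0 it equals the full horizontal field of view, matching the intuition that richer feature distributions allow larger rotations.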
Any of the image stitching methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capability, including but not limited to a terminal device, a server, and the like. Alternatively, any image stitching method provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor may execute any image stitching method mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. Details are not repeated below.
Exemplary devices
Fig. 5 is a schematic structural diagram of an image stitching device according to an exemplary embodiment of the present disclosure. As shown in fig. 5, the apparatus provided in this embodiment includes:
and the feature distribution module 51 is configured to obtain a feature point distribution of a first image acquired by the image capturing apparatus in a first pose.
A pose change module 52 for determining a change amount of the pose of the image pickup apparatus based on the feature point distribution of the first image and the setting redundancy information.
And the image stitching module 53 is configured to determine a second pose based on the pose variation amount, and control the image capturing apparatus to capture a second image stitched with the first image in the second pose.
With the image stitching apparatus provided by the above embodiment of the present disclosure, the feature point distribution of the first image acquired by the camera device in the first pose is obtained; the pose change amount of the camera device is determined based on the feature point distribution of the first image and the set redundancy information; and the second pose is determined based on the pose change amount, and the camera device is controlled to acquire, in the second pose, the second image to be stitched with the first image. The pose adjustment required of the camera device is determined by quickly determining the feature point distribution, which speeds up acquisition of the second image; sufficient redundancy is guaranteed between consecutively captured images without generating excessive redundancy, improving both the efficiency and the quality of image stitching.
Optionally, the feature distribution module is specifically configured to: acquire the first image with the camera device in the first pose; extract feature points from the first image to obtain a plurality of first feature points; and determine the feature point distribution based on the distribution of the plurality of first feature points in the first image.
Optionally, the apparatus further comprises:
a stitching identification module, configured to: determine, according to the set redundancy information, whether the second image meets a set condition for stitching with the first image; in response to the second image meeting the set condition, stitch the first image and the second image to obtain a stitched third image; and in response to the second image not meeting the set condition, iteratively perform the following until the set condition is met: update the pose change amount, re-acquire an updated second image, and determine whether the updated second image meets the set condition for stitching with the first image.
Optionally, when updating the pose change amount to re-acquire an updated second image, the stitching identification module is configured to reduce the pose change amount to obtain an updated pose change amount, determine a third pose based on the updated pose change amount, and control the camera device to acquire the updated second image in the third pose.
Optionally, when determining, according to the set redundancy information, whether the second image meets the set condition for stitching with the first image, the stitching identification module is configured to determine the number of matched feature point pairs between the second image and the first image; determine that the second image meets the set condition in response to the number of feature point pairs being greater than or equal to the set redundancy information; and determine that the second image does not meet the set condition in response to the number of feature point pairs being smaller than the set redundancy information.
Optionally, when determining the number of matched feature point pairs between the second image and the first image, the stitching identification module is configured to extract feature points from the second image to obtain a plurality of second feature points corresponding to the second image; determine correspondences between the plurality of first feature points and the plurality of second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and determine the number of feature point pairs based on the first feature points and the second feature points for which a correspondence exists.
Optionally, when stitching the first image and the second image to obtain the stitched third image, the stitching identification module is configured to determine a plurality of feature point pairs shared by the first image and the second image; determine a mapping relationship between the first image and the second image based on the plurality of feature point pairs; and map the first image and the second image to a panoramic coordinate system based on the mapping relationship and stitch them in the panoramic coordinate system to obtain the third image.
Optionally, the pose change module is specifically configured to determine an overlapping portion of the first image based on the set redundancy information and the feature point distribution of the first image, and determine the pose change amount of the camera device based on the proportion of the first image occupied by the overlapping portion.
Optionally, when determining the overlapping portion of the first image based on the set redundancy information and the feature point distribution of the first image, the pose change module is configured to determine the number of overlapping feature points required for image stitching based on the set redundancy information, and determine the overlapping portion of the first image according to the number of overlapping feature points and the feature point distribution of the first image.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 6. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device separate from them that may communicate with the first device and the second device to receive the collected input signals therefrom.
FIG. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 6, the electronic device 60 includes one or more processors 61 and a memory 62.
The processor 61 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 60 to perform desired functions.
Memory 62 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 61 to implement the image stitching methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 60 may further include: an input device 63 and an output device 64, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 63 may be a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input means 63 may be a communication network connector for receiving the acquired input signals from the first device 100 and the second device 200.
The input device 63 may also include, for example, a keyboard, a mouse, and the like.
The output device 64 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 64 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 60 relevant to the present disclosure are shown in fig. 6, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 60 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the image stitching method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the image stitching method according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended terms that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An image stitching method, comprising:
obtaining a feature point distribution of a first image acquired by a camera device in a first pose;
determining a pose change amount of the camera device based on the feature point distribution of the first image and set redundancy information;
and determining a second pose based on the pose change amount, and controlling the camera device to acquire, in the second pose, a second image to be stitched with the first image.
2. The method according to claim 1, wherein obtaining the feature point distribution of the first image acquired by the camera device in the first pose comprises:
acquiring the first image with the camera device in the first pose;
performing feature point extraction on the first image to obtain a plurality of first feature points;
and determining the feature point distribution based on the distribution of the plurality of first feature points in the first image.
3. The method of claim 2, further comprising:
determining, according to the set redundancy information, whether the second image meets a set condition for stitching with the first image;
in response to the second image meeting the set condition, stitching the first image and the second image to obtain a stitched third image;
and in response to the second image not meeting the set condition, iteratively performing the following until the set condition is met: updating the pose change amount, re-acquiring an updated second image, and determining whether the updated second image meets the set condition for stitching with the first image.
4. The method of claim 3, wherein updating the pose change amount to re-acquire an updated second image comprises:
reducing the pose change amount to obtain an updated pose change amount, and determining a third pose based on the updated pose change amount;
and controlling the camera device to acquire the updated second image in the third pose.
5. The method according to claim 3 or 4, wherein determining, according to the set redundancy information, whether the second image meets the set condition for stitching with the first image comprises:
determining the number of matched feature point pairs between the second image and the first image;
determining that the second image meets the set condition in response to the number of feature point pairs being greater than or equal to the set redundancy information;
and determining that the second image does not meet the set condition in response to the number of feature point pairs being smaller than the set redundancy information.
6. The method of claim 5, wherein determining the number of matched feature point pairs between the second image and the first image comprises:
extracting feature points from the second image to obtain a plurality of second feature points corresponding to the second image;
determining correspondences between the plurality of first feature points and the plurality of second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points;
and determining the number of feature point pairs based on the first feature points and the second feature points for which a correspondence exists.
7. The method of claim 6, wherein stitching the first image and the second image to obtain the stitched third image comprises:
determining a plurality of feature point pairs shared by the first image and the second image;
determining a mapping relationship between the first image and the second image based on the plurality of feature point pairs;
and mapping the first image and the second image to a panoramic coordinate system based on the mapping relationship, and stitching the first image and the second image in the panoramic coordinate system to obtain the third image.
8. An image stitching device, comprising:
a feature distribution module, configured to obtain a feature point distribution of a first image acquired by a camera device in a first pose;
a pose change module, configured to determine a pose change amount of the camera device based on the feature point distribution of the first image and set redundancy information;
and an image stitching module, configured to determine a second pose based on the pose change amount and control the camera device to acquire, in the second pose, a second image to be stitched with the first image.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the image stitching method according to any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image stitching method according to any one of claims 1 to 7.
CN202110654609.0A 2021-06-11 2021-06-11 Image splicing method and device, computer readable storage medium and electronic equipment Pending CN113344835A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110654609.0A CN113344835A (en) 2021-06-11 2021-06-11 Image splicing method and device, computer readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110654609.0A CN113344835A (en) 2021-06-11 2021-06-11 Image splicing method and device, computer readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113344835A (en) 2021-09-03

Family

Family ID: 77477079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110654609.0A Pending CN113344835A (en) 2021-06-11 2021-06-11 Image splicing method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113344835A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102023167A (en) * 2009-09-16 2011-04-20 奥林巴斯株式会社 Method and appratus for image generation
JP2014192829A (en) * 2013-03-28 2014-10-06 Ntt Docomo Inc Imaging device and imaging method
US20160073021A1 (en) * 2014-09-05 2016-03-10 Htc Corporation Image capturing method, panorama image generating method and electronic apparatus
CN106530214A (en) * 2016-10-21 2017-03-22 微景天下(北京)科技有限公司 Image splicing system and image splicing method
CN111429354A (en) * 2020-03-27 2020-07-17 贝壳技术有限公司 Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN111429354B (en) Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment
CN114399597B (en) Method and device for constructing scene space model and storage medium
US11816865B2 (en) Extrinsic camera parameter calibration method, extrinsic camera parameter calibration apparatus, and extrinsic camera parameter calibration system
CN111432119B (en) Image shooting method and device, computer readable storage medium and electronic equipment
CN111428805B (en) Method for detecting salient object, model, storage medium and electronic device
CN112489114A (en) Image conversion method and device, computer readable storage medium and electronic equipment
CN115209031B (en) Video anti-shake processing method and device, electronic equipment and storage medium
CN112399188A (en) Image frame splicing method and device, readable storage medium and electronic equipment
CN114757301A (en) Vehicle-mounted visual perception method and device, readable storage medium and electronic equipment
CN113095228B (en) Method and device for detecting target in image and computer readable storage medium
CN113129211B (en) Optical center alignment detection method and device, storage medium and electronic equipment
CN113989376B (en) Method and device for acquiring indoor depth information and readable storage medium
CN113450258B (en) Visual angle conversion method and device, storage medium and electronic equipment
CN114882465A (en) Visual perception method and device, storage medium and electronic equipment
US20200118255A1 (en) Deep learning method and apparatus for automatic upright rectification of virtual reality content
CN112328150B (en) Automatic screenshot method, device and equipment, and storage medium
CN112102171B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN111429353A (en) Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment
CN113689508A (en) Point cloud marking method and device, storage medium and electronic equipment
CN116320765B (en) Method, apparatus, device, medium and program product for generating panoramic image
CN112770057A (en) Camera parameter adjusting method and device, electronic equipment and storage medium
CN113344835A (en) Image splicing method and device, computer readable storage medium and electronic equipment
CN113744339B (en) Method and device for generating panoramic image, electronic equipment and storage medium
CN114241029B (en) Image three-dimensional reconstruction method and device
CN115512046A (en) Panorama display method and device for model outer point positions, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211013

Address after: 100085 Floor 101 102-1, No. 35 Building, No. 2 Hospital, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: 101300 room 24, 62 Farm Road, Erjie village, Yangzhen Town, Shunyi District, Beijing

Applicant before: Beijing fangjianghu Technology Co.,Ltd.