CN111402136A - Panorama generation method and device, computer readable storage medium and electronic equipment


Info

Publication number
CN111402136A
CN111402136A
Authority
CN
China
Prior art keywords
image
determining
mapping
panorama
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010196117.7A
Other languages
Chinese (zh)
Other versions
CN111402136B (en)
Inventor
饶童
杨永林
陈昱彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
You Can See (Beijing) Technology Co., Ltd.
Original Assignee
Beike Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beike Technology Co Ltd
Priority to CN202010196117.7A
Publication of CN111402136A
Priority to US17/200,659 (US11146727B2)
Priority to US17/383,157 (US11533431B2)
Priority to US17/981,056 (US20230056036A1)
Application granted
Publication of CN111402136B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present disclosure disclose a panorama generation method and apparatus. The method includes: acquiring at least two image sequences for generating a panorama; determining effective images in the at least two image sequences based on the manner in which the sequences were shot, and determining the connection relationships between the effective images; determining the intrinsic parameters of the camera; determining the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships; and mapping the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain the panorama. The embodiments of the present disclosure can generate a panorama with a large field of view in different ways according to the habits of different users, improving both the flexibility and the efficiency of panorama generation.

Description

Panorama generation method and device, computer readable storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a panorama generating method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Panoramas are currently widely used in VR scenes. In fields such as maps, house rental, and interior decoration, a panorama can present the surrounding environment to the user in a nearly true-to-life manner. A panorama also contains a large amount of scene information and can be used effectively in depth-map estimation algorithms.
When shooting a panorama, the user is generally required to rotate the camera in place through one full turn about the vertical axis; as a result, the vertical field of view of the resulting panorama is limited.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides a panorama generating method and device, a computer readable storage medium and an electronic device.
An embodiment of the present disclosure provides a panorama generation method, including: acquiring at least two image sequences for generating a panorama, where the at least two image sequences include at least one image sequence shot in a discrete manner; determining effective images in the at least two image sequences based on the manner in which the sequences were shot, and determining the connection relationships between the effective images; determining the intrinsic parameters of the camera; determining the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships; and mapping the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama.
In some embodiments, determining the effective images in the at least two image sequences based on the manner in which they were shot, and determining the connection relationships between the effective images, includes: in response to determining that each of the at least two image sequences was shot in a laterally discrete manner, determining, for each image sequence, that every image in the sequence is an effective image, and determining the connection relationships between the images in the sequence through feature extraction and feature matching.
In some embodiments, mapping the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama includes: adjusting the yaw angles of vertically arranged images to be consistent; for each of the at least two image sequences, mapping each image in the sequence onto the mapping surface based on its yaw-adjusted camera pose angle to obtain a sub-panorama corresponding to the sequence; determining the features of each sub-panorama; and merging the sub-panoramas based on their features to obtain the final panorama.
In some embodiments, determining the effective images in the at least two image sequences based on the manner in which they were shot, and determining the connection relationships between the effective images, includes: in response to determining that each of the at least two image sequences was shot in a longitudinally discrete manner, determining, for each image sequence, the mapping relationships between a target image and the other images in the sequence; fusing the other images onto the target image based on the mapping relationships to obtain a fused image corresponding to the sequence as an effective image; and determining the connection relationships between the fused images through feature extraction and feature matching.
In some embodiments, mapping the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama includes: for each fused image, mapping the fused image onto the mapping surface to obtain a corresponding sub-panorama; determining the features of each sub-panorama; and merging the sub-panoramas based on their features to obtain the final panorama.
In some embodiments, determining the effective images in the at least two image sequences based on the manner in which they were shot, and determining the connection relationships between the effective images, includes: in response to determining that a first image sequence of the at least two image sequences was shot in a laterally discrete manner and the other image sequences were shot in a laterally continuous manner, determining each image included in the first image sequence to be an effective image; determining the connection relationships between the images in the first image sequence through feature extraction and feature matching; and, for each of the other image sequences, determining the key frame images in the sequence to be effective images and determining the connection relationships between the key frame images and the first image sequence through feature extraction and feature matching.
In some embodiments, mapping the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama includes: mapping each image in the first image sequence onto the mapping surface to obtain a sub-panorama corresponding to the first image sequence; mapping the key frame images onto the mapping surface to obtain mapped images; determining the features of the mapped images and the sub-panorama; and merging the mapped images and the sub-panorama based on their features to obtain the final panorama.
According to another aspect of the embodiments of the present disclosure, there is provided a panorama generating apparatus, including: an acquisition module configured to acquire at least two image sequences for generating a panorama, where the at least two image sequences include at least one image sequence shot in a discrete manner; a first determining module configured to determine effective images in the at least two image sequences based on the manner in which the sequences were shot and to determine the connection relationships between the effective images; a second determining module configured to determine the intrinsic parameters of the camera; a third determining module configured to determine the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships; and a mapping module configured to map the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama.
In some embodiments, the first determining module includes: a first determining unit configured to, in response to determining that each of the at least two image sequences was shot in a laterally discrete manner, determine, for each image sequence, that every image in the sequence is an effective image, and determine the connection relationships between the images in the sequence through feature extraction and feature matching.
In some embodiments, the mapping module includes: an adjusting unit configured to adjust the yaw angles of vertically arranged images to be consistent; a first mapping unit configured to, for each of the at least two image sequences, map each image in the sequence onto the mapping surface based on its yaw-adjusted camera pose angle to obtain a sub-panorama corresponding to the sequence; a second determining unit configured to determine the features of each sub-panorama; and a first merging unit configured to merge the sub-panoramas based on their features to obtain the final panorama.
In some embodiments, the first determining module includes: a fusion unit configured to, in response to determining that each of the at least two image sequences was shot in a longitudinally discrete manner, determine, for each image sequence, the mapping relationships between a target image and the other images in the sequence, and fuse the other images onto the target image based on the mapping relationships to obtain a fused image corresponding to the sequence as an effective image; and a third determining unit configured to determine the connection relationships between the fused images through feature extraction and feature matching.
In some embodiments, the mapping module includes: a second mapping unit configured to map each fused image onto the mapping surface to obtain a corresponding sub-panorama; a fourth determining unit configured to determine the features of each sub-panorama; and a second merging unit configured to merge the sub-panoramas based on their features to obtain the final panorama.
In some embodiments, the first determining module includes: a fifth determining unit configured to, in response to determining that a first image sequence of the at least two image sequences was shot in a laterally discrete manner and the other image sequences were shot in a laterally continuous manner, determine each image included in the first image sequence to be an effective image and determine the connection relationships between the images in the first image sequence through feature extraction and feature matching; and a sixth determining unit configured to, for each of the other image sequences, determine the key frame images in the sequence to be effective images and determine the connection relationships between the key frame images and the first image sequence through feature extraction and feature matching.
In some embodiments, the mapping module includes: a third mapping unit configured to map each image in the first image sequence onto the mapping surface to obtain a sub-panorama corresponding to the first image sequence; a fourth mapping unit configured to map the key frame images onto the mapping surface to obtain mapped images; a seventh determining unit configured to determine the features of the mapped images and the sub-panorama; and a third merging unit configured to merge the mapped images and the sub-panorama based on their features to obtain the final panorama.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described panorama generating method.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, including: a processor; and a memory for storing processor-executable instructions, the processor being configured to read the executable instructions from the memory and execute them to implement the panorama generation method described above.
With the panorama generation method and apparatus, computer-readable storage medium, and electronic device provided by the embodiments of the present disclosure, effective images arranged in different ways are determined from the shot image sequences according to the different shooting manners, and the connection relationships between the effective images are determined; the camera is then calibrated to obtain its intrinsic parameters, the camera pose angles of the effective images are determined based on the intrinsic parameters and the connection relationships, and finally each effective image is mapped into the panorama based on its camera pose angle. A panorama with a large field of view can thus be generated in different ways according to the habits of different users, improving both the flexibility and the efficiency of panorama generation.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a system diagram to which the present disclosure is applicable.
Fig. 2 is a flowchart illustrating a panorama generating method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a panorama generating method according to another exemplary embodiment of the present disclosure.
Fig. 4 is an exemplary schematic diagram of a landscape discrete mode photographing of the panorama generating method of the embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a panorama generating method according to another exemplary embodiment of the present disclosure.
Fig. 6 is an exemplary schematic diagram of longitudinal discrete mode shooting of a panorama generating method of an embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a panorama generating method according to another exemplary embodiment of the present disclosure.
Fig. 8 is an exemplary schematic diagram of a landscape continuous mode photographing of the panorama generating method of the embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of a panorama generating apparatus according to an exemplary embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of a panorama generating apparatus according to another exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. The described embodiments are merely a subset of the embodiments of the present disclosure, not all of them, and it should be understood that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another; they imply neither any particular technical meaning nor any necessary logical order between the elements.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" in the present disclosure generally indicates an "or" relationship between the preceding and following associated objects.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
Conventional panorama shooting approaches suffer from a small vertical field of view (VFOV), which degrades depth estimation and limits what can be rendered to the end user; the vertical field of view therefore needs to be expanded when shooting a panorama.
Exemplary System
Fig. 1 illustrates an exemplary system architecture 100 of a panorama generation method or panorama generation apparatus to which embodiments of the present disclosure may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or send messages and the like. Various communication client applications, such as a shooting application, a map application, a three-dimensional model application, and the like, may be installed on the terminal device 101.
The terminal device 101 may be various electronic devices including, but not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc.
The server 103 may be a server that provides various services, such as a background image processing server that processes an image sequence uploaded by the terminal apparatus 101. The background image processing server can process the received image sequence to obtain information such as a panoramic image.
It should be noted that the panorama generating method provided by the embodiment of the present disclosure may be executed by the server 103 or the terminal device 101, and accordingly, the panorama generating apparatus may be provided in the server 103 or the terminal device 101.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the image sequence does not need to be acquired from a remote location, the system architecture described above may not include a network, including only a server or a terminal device.
Exemplary method
Fig. 2 is a flowchart illustrating a panorama generating method according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device (such as the terminal device 101 or the server 103 shown in fig. 1), and as shown in fig. 2, the method includes the following steps:
Step 201, at least two image sequences for generating a panorama are acquired.
In this embodiment, the electronic device may acquire, locally or remotely, at least two image sequences for generating a panorama. The image sequences may be obtained by shooting the surrounding scene with a camera integrated into, or connected to, the electronic device, and the at least two image sequences include at least one sequence shot in a discrete manner. Discrete shooting means that the camera captures an image at one position and pose, then changes its pose and/or position to capture another image, repeating this operation to obtain an image sequence. The images in a sequence may be arranged horizontally or vertically.
As an example, a row of images arranged horizontally may be a sequence of images, or a column of images arranged vertically may be a sequence of images.
Step 202, determining effective images in the at least two image sequences based on the manner in which the sequences were shot, and determining the connection relationships between the effective images.
In this embodiment, the electronic device may determine the effective images in the at least two image sequences based on the manner in which the sequences were shot, and determine the connection relationships between the effective images. The camera may shoot discretely (i.e., stay at a position to capture a single image) or continuously (e.g., video). An effective image is an image to be mapped onto a three-dimensional mapping surface to generate the panorama, for example a key frame of a sequence shot in video mode.
Step 203, determining the intrinsic parameters of the camera.
In this embodiment, the electronic device may determine the intrinsic parameters of the camera, typically expressed as the camera intrinsic matrix K. The intrinsic parameters may be fixed, i.e., known in advance, in which case the electronic device can simply read the previously entered values; alternatively, they can be obtained through calibration, with the electronic device calibrating the camera using the image sequences acquired in step 201. Camera calibration is a widely used and well-known technique and is not described here again.
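As an illustration (not part of the patent), the following Python sketch builds a pinhole intrinsic matrix K with made-up focal-length and principal-point values, and notes how K could instead be estimated by standard checkerboard calibration with OpenCV:

```python
import numpy as np
import cv2

# Illustrative pinhole intrinsic matrix K for a 1920x1080 camera:
# fx, fy are focal lengths in pixels, (cx, cy) is the principal point.
# All values here are made up for the example.
fx, fy, cx, cy = 1500.0, 1500.0, 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# If K is not known in advance, it can be estimated by standard
# checkerboard calibration; object_points and image_points below are
# hypothetical arrays of 3D board corners and their 2D detections.
# ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
#     object_points, image_points, (1920, 1080), None, None)
```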
Step 204, determining the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships.
In this embodiment, the electronic device may determine the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships. The camera pose angle represents the shooting direction of the camera in a three-dimensional coordinate system, which may be a rectangular coordinate system with the camera position as its origin. The pose angles may include a pitch angle, a yaw angle, and a roll angle: pitch represents the deflection of the camera's optical axis in the vertical direction, yaw represents the deflection of the optical axis in the horizontal plane, and roll represents the rotation of the camera about its optical axis.
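For concreteness, one way the three pose angles could be composed into a rotation matrix is sketched below; the axis conventions and multiplication order are assumptions, since the text does not fix them:

```python
import numpy as np

def rotation_from_pose(pitch, yaw, roll):
    """Compose a rotation matrix from camera pose angles in radians.

    Axis conventions are assumptions, not given in the text: yaw about
    the vertical y-axis, pitch about the x-axis, roll about the optical
    z-axis, applied as R = Ry @ Rx @ Rz.
    """
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    return Ry @ Rx @ Rz
```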
The electronic device may determine the camera pose angles using various existing methods based on the intrinsic parameters and the connection relationships between images, for example by minimizing photometric error, reprojection error, or 3D geometric error, among others.
Step 205, mapping the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama.
In this embodiment, the electronic device may map the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain the panorama. In particular, the panorama may be an image mapped onto a mapping surface of various shapes (e.g., a sphere or a cylinder). A reference coordinate system is established at the center of the sphere (or cylinder) facing a certain direction, and there is a transformation between the coordinate system of the planar image captured by the camera and this reference coordinate system; the transformation can be characterized by the camera pose angle, which indicates the part of the panorama onto which the planar image is mapped. Mapping a two-dimensional image onto a three-dimensional surface is a well-known technique and is not described here again.
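The sketch below illustrates one possible implementation of this step for a spherical (equirectangular) mapping surface: for each panorama pixel, the corresponding ray on the unit sphere is rotated into the camera frame and projected through K, and the source image is sampled where the ray lands. The panorama size and axis conventions are assumptions, and no blending or seam handling is done:

```python
import numpy as np
import cv2

def project_to_equirect(image, K, R, pano_w=4096, pano_h=2048):
    """Inverse-map one pinhole image onto an equirectangular canvas.

    Conventions assumed here (not fixed by the text): world y-axis down
    to match image coordinates, z-axis forward at longitude 0, and R
    rotating camera coordinates into world coordinates.
    """
    xs, ys = np.meshgrid(np.arange(pano_w), np.arange(pano_h))
    lon = xs / pano_w * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2.0 - ys / pano_h * np.pi      # latitude in [-pi/2, pi/2]
    # Unit ray for every panorama pixel, in world coordinates.
    rays = np.stack([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),               # y down, matching image coords
                     np.cos(lat) * np.cos(lon)], axis=-1)
    cam = rays @ R          # row-vector form of R.T @ ray: world -> camera
    z = cam[..., 2]
    valid = z > 1e-6        # keep only rays in front of the camera
    u = K[0, 0] * cam[..., 0] / np.where(valid, z, 1.0) + K[0, 2]
    v = K[1, 1] * cam[..., 1] / np.where(valid, z, 1.0) + K[1, 2]
    h, w = image.shape[:2]
    inside = valid & (u >= 0) & (u < w - 1) & (v >= 0) & (v < h - 1)
    map_x = np.where(inside, u, -1.0).astype(np.float32)
    map_y = np.where(inside, v, -1.0).astype(np.float32)
    # Pixels whose ray misses the source image stay black.
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```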
According to the method provided by the embodiments of the present disclosure, effective images arranged in different ways are determined from the shot image sequences according to the different shooting manners, and the connection relationships between the effective images are determined; the camera is then calibrated to obtain its intrinsic parameters, the camera pose angles of the effective images are determined based on the intrinsic parameters and the connection relationships, and finally each effective image is mapped into the panorama based on its camera pose angle. A panorama with a large field of view can thus be generated in different ways according to the habits of different users, improving both the flexibility and the efficiency of panorama generation.
With further reference to fig. 3, a flowchart illustration of yet another embodiment of a panorama generation method is shown. As shown in fig. 3, the panorama generating method includes the steps of:
Step 301, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 301 is substantially the same as step 201 in the embodiment corresponding to fig. 2, and is not described here again.
Step 302, in response to determining that each of the at least two image sequences was shot in a laterally discrete manner, determining, for each image sequence, that every image in the sequence is an effective image, and determining the connection relationships between the images in the sequence through feature extraction and feature matching.
In this embodiment, the images shot in the laterally discrete manner may be distributed in at least two rows from top to bottom, each row being an image sequence. As an example, as shown in fig. 4, the camera rotates horizontally through 360° in the direction of the arrow, making three passes in three-dimensional space, and three rows of images, i.e., three image sequences, each corresponding to one pitch angle, are obtained.
In this embodiment, every image in each image sequence is an effective image; that is, each of the images is mapped into the panorama.
The electronic device may extract feature points from each image using an existing feature extraction method. As an example, the feature extraction algorithm may include, but is not limited to, at least one of: SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), and the like. After the feature points are obtained, they can be matched so that feature points representing the same point in space are linked, thereby determining the connection relationships between the images.
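A minimal sketch of this extract-and-match step, using OpenCV's ORB (one of the algorithms named above) with a brute-force Hamming matcher and Lowe's ratio test, might look as follows; SIFT or SURF could be substituted the same way, and the parameter values are illustrative:

```python
import cv2

def match_images(img_a, img_b, max_features=2000, ratio=0.75):
    """Extract ORB features from two adjacent images and match them.

    Returns the matched point coordinates in each image; these
    correspondences define the connection relationship between the
    two images.
    """
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in knn:
        # Lowe's ratio test filters ambiguous matches.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```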
Step 303, determining the intrinsic parameters of the camera.
In this embodiment, step 303 is substantially the same as step 203 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 304, determining the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships.
In this embodiment, step 304 is substantially the same as step 204 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 305, adjusting the yaw angle of the vertically arranged images to be consistent.
As shown in fig. 4, the yaw angles of the images within a column may deviate from one another, causing adjacent images to be misaligned. Adjusting the yaw angles aligns vertically adjacent images, which helps improve the accuracy of the generated panorama.
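One simple adjustment rule, assumed here since the text does not specify one, is to replace the yaw of every image in a column with the column's circular mean yaw:

```python
import numpy as np

def align_column_yaws(pose_grid):
    """pose_grid[row][col] = (pitch, yaw, roll) in radians.

    Give every image in a column the same yaw, here the circular mean
    of the column's estimated yaws. Averaging is only one possible rule;
    the text just requires the yaws to be made consistent.
    """
    n_rows, n_cols = len(pose_grid), len(pose_grid[0])
    for c in range(n_cols):
        yaws = [pose_grid[r][c][1] for r in range(n_rows)]
        # Circular mean avoids wrap-around trouble near +/- pi.
        mean_yaw = np.arctan2(np.mean(np.sin(yaws)), np.mean(np.cos(yaws)))
        for r in range(n_rows):
            pitch, _, roll = pose_grid[r][c]
            pose_grid[r][c] = (pitch, mean_yaw, roll)
    return pose_grid
```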
Step 306, for each of the at least two image sequences, mapping each image in the sequence onto the mapping surface based on its yaw-adjusted camera pose angle, so as to obtain a sub-panorama corresponding to the sequence.
In this embodiment, for a given image sequence, the mapping relationship between each image and the panorama (i.e., the positions on the panorama's mapping surface to which the image's pixels map) may be determined from the image's camera pose angle, and the sub-panorama corresponding to the sequence may be generated according to these mapping relationships.
Step 307, determining the features of each obtained sub-panorama.
In this embodiment, the electronic device may determine the features of each sub-panorama according to an existing feature extraction method (e.g., the various algorithms described in step 302 above).
Step 308, merging the sub-panoramas based on their features to obtain the final panorama.
In this embodiment, the electronic device may connect the sub-panoramas together based on their features and fuse the pixels where the connected sub-panoramas overlap, thereby obtaining the final panorama. As an example, the color values of pixels representing the same three-dimensional point in two connected sub-panoramas may be averaged (or summed with other weights) to obtain the color value of that pixel in the final panorama.
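A minimal sketch of this merge, assuming the sub-panoramas already share a common canvas and that unmapped pixels were left black, could average the overlap as follows:

```python
import numpy as np

def merge_subpanoramas(pano_a, pano_b):
    """Merge two aligned, same-sized sub-panoramas.

    Pixels covered by only one sub-panorama are copied; pixels covered
    by both are averaged, as in the example above. Coverage is inferred
    from non-black pixels, assuming unmapped regions were left black.
    """
    a = pano_a.astype(np.float32)
    b = pano_b.astype(np.float32)
    in_a = a.sum(axis=-1, keepdims=True) > 0
    in_b = b.sum(axis=-1, keepdims=True) > 0
    out = np.where(in_a, a, b)                       # one-sided coverage
    out = np.where(in_a & in_b, (a + b) / 2.0, out)  # averaged overlap
    return out.astype(np.uint8)
```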
In the method provided by the embodiment corresponding to fig. 3, when the shooting mode is the laterally discrete mode, the sub-panorama corresponding to each image sequence is generated based on the connection relationships between the images in the sequence, and the sub-panoramas are then merged to obtain the final panorama.
With further reference to fig. 5, a flowchart illustration of yet another embodiment of a panorama generation method is shown. As shown in fig. 5, the panorama generating method includes the steps of:
Step 501, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 501 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
Step 502, in response to determining that each of the at least two image sequences was shot in a longitudinally discrete manner, determining, for each image sequence, the mapping relationships between a target image and the other images in the sequence, and fusing the other images onto the target image based on the mapping relationships to obtain a fused image corresponding to the sequence as an effective image.
In this embodiment, the images shot in the longitudinally discrete manner may be distributed in at least two columns from left to right, each column being an image sequence. As an example, as shown in fig. 6, the camera can rotate horizontally through 360° in the direction of the arrow in three-dimensional space, capturing multiple columns of images, each column being an image sequence.
The target image may be a pre-designated image; for example, for a column of images as shown in fig. 6, the middle image may serve as the target image. The electronic device can extract feature points from each image in a sequence using an existing feature extraction method and perform feature matching on them to obtain homography matrices between the images, thereby determining the mapping relationships between the other images and the target image.
Using these mapping relationships, the electronic device can fuse the other images onto the target image, thereby obtaining the fused image as the effective image.
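Combining the two preceding paragraphs, a sketch of this fusion step might estimate the homography from matched points with RANSAC and warp the other image onto the target's plane; the simple overwrite compositing here is an assumption, as the text does not specify the fusion rule:

```python
import numpy as np
import cv2

def fuse_to_target(target, other, pts_other, pts_target):
    """Warp `other` onto the plane of `target` and paste it in.

    pts_other / pts_target are matched feature points, e.g. from the
    match_images sketch above; RANSAC discards outlier matches. Pixels
    where the warped image has content simply overwrite the target.
    """
    H, _ = cv2.findHomography(np.float32(pts_other),
                              np.float32(pts_target), cv2.RANSAC, 3.0)
    h, w = target.shape[:2]
    warped = cv2.warpPerspective(other, H, (w, h))
    fused = target.copy()
    mask = warped.sum(axis=-1) > 0
    fused[mask] = warped[mask]
    return fused
```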
Step 503, determining the connection relationships between the obtained fused images through feature extraction and feature matching.
In this embodiment, the electronic device may determine the connection relationships between the obtained fused images according to the feature extraction and feature matching method described in step 302 of the embodiment corresponding to fig. 3.
Step 504, the intrinsic parameters of the camera are determined.
In this embodiment, step 504 is substantially the same as step 203 in the corresponding embodiment of fig. 2, and is not described here again.
Step 505, determining the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships.
In this embodiment, step 505 is substantially the same as step 204 in the corresponding embodiment of fig. 2, and is not described here again.
Step 506, for each fused image, mapping the fused image onto the mapping surface to obtain a corresponding sub-panorama.
In this embodiment, for a given fused image, the mapping relationship between the fused image and the panorama (i.e., the positions on the panorama's mapping surface to which the fused image's pixels map) may be determined from its camera pose angle, and the sub-panorama corresponding to the fused image may be generated according to this mapping relationship.
Step 507, determining the features of each obtained sub-panorama.
In this embodiment, the electronic device may determine the features of each sub-panorama according to an existing feature extraction method (e.g., the various algorithms described in step 302 above).
Step 508, merging the sub-panoramas based on their features to obtain the final panorama.
In this embodiment, step 508 is substantially the same as step 308 in the embodiment corresponding to fig. 3, and is not described here again.
In the method provided by the embodiment corresponding to fig. 5, when the shooting mode is the longitudinally discrete mode, the images in each image sequence are fused to obtain fused images, each fused image is mapped onto the panorama's mapping surface to generate the sub-panorama corresponding to its image sequence, and the sub-panoramas are merged to obtain the final panorama.
With further reference to fig. 7, a flowchart illustration of yet another embodiment of a panorama generation method is shown. As shown in fig. 7, the panorama generating method includes the steps of:
Step 701, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 701 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
Step 702, in response to determining that a first image sequence of the at least two image sequences was shot in a laterally discrete manner and the other image sequences were shot in a laterally continuous manner, determining each image included in the first image sequence to be an effective image, and determining the connection relationships between the images in the first image sequence through feature extraction and feature matching.
In this embodiment, the first image sequence is the sequence the camera shoots first; in fig. 8, 801 is the first image sequence. After the first image sequence is captured, shooting continues in a laterally continuous manner, typically as a sequence of video frames, while the pitch angle of the camera is changed. In fig. 8, 802 and 803 are image sequences shot in the laterally continuous manner.
The electronic device may determine the connection relationship between each image in the first sequence of images according to the feature extraction and feature matching method described in step 302.
Step 703, for each of the other image sequences, determining the key frame images in the sequence to be effective images, and determining the connection relationships between the key frame images and the first image sequence through feature extraction and feature matching.
In this embodiment, as shown in fig. 8, the frames marked with an "×" are key frames. A key frame (also called an I-frame) is a frame whose image data is fully retained in the compressed video, so it can be decoded from its own data alone. Key frames generally occur where the scene or the objects in the video change significantly; that is, a key frame carries the key information of the frames within some time range around it. The interval between temporally adjacent key frames should be reasonable, neither too long nor too short. Extracting key frames yields a small number of images from the many video frames; these images contain many feature points corresponding to distinct spatial points, and adjacent key frames share a sufficient number of matched feature points. The electronic device may extract key frames by various methods, such as color-feature-based, motion-analysis-based, or clustering-based methods.
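As one illustrative selection policy (an assumption; the text names color-, motion- and clustering-based alternatives), a new key frame could be declared whenever feature matches against the previous key frame fall below a threshold, keeping adjacent key frames well matched:

```python
import cv2

def select_key_frames(frames, min_matches=80):
    """Pick key frames from a list of video frames.

    A new key frame is declared whenever the number of ORB matches
    against the previous key frame drops below min_matches, so adjacent
    key frames keep enough matched feature points. The threshold value
    is illustrative.
    """
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    keys = [frames[0]]
    _, ref_des = orb.detectAndCompute(frames[0], None)
    for frame in frames[1:]:
        _, des = orb.detectAndCompute(frame, None)
        if des is None or ref_des is None:
            continue
        matches = matcher.match(ref_des, des)
        if len(matches) < min_matches:
            keys.append(frame)
            ref_des = des
    return keys
```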
The electronic device may determine the connection relationships between the key frame images and the first image sequence according to the feature extraction and feature matching method described in step 302.
Step 704, the intrinsic parameters of the camera are determined.
In this embodiment, step 704 is substantially the same as step 203 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 705, determining the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships.
In this embodiment, step 705 is substantially the same as step 204 in the corresponding embodiment of fig. 2, and is not described herein again.
Step 706, mapping each image in the first image sequence to a mapping surface to obtain a sub-panorama corresponding to the first image sequence.
In this embodiment, for each image in the first image sequence, the mapping relationship between the image and the panorama (i.e., the positions on the panorama's mapping surface to which the image's pixels map) may be determined from the image's camera pose angle, and the sub-panorama corresponding to the first image sequence may be generated according to these mapping relationships.
Step 707, mapping the key frame images onto the mapping surface to obtain mapped images.
In this embodiment, each key frame image may be mapped onto the mapping surface by the same method as in step 706, obtaining a mapped image corresponding to each key frame image.
Step 708, the features of the mapped images and the sub-panorama are determined.
In this embodiment, the electronic device may determine the features of the mapped images and the sub-panorama according to the feature extraction and feature matching method described in step 302 of the embodiment corresponding to fig. 3.
Step 709, merging the mapped images and the sub-panorama based on their features to obtain the final panorama.
In this embodiment, the electronic device may connect the mapped images and the sub-panorama together based on their features and fuse the pixels of the connected images, thereby obtaining the final panorama.
In the method provided by the embodiment corresponding to fig. 7, when the first image sequence is shot in a laterally discrete manner and the other image sequences are shot in a laterally continuous manner, a sub-panorama is generated from the first image sequence, key frame images are extracted from the other sequences and mapped onto the panorama's mapping surface, and finally the mapped images and the sub-panorama are merged to obtain the final panorama. Because a video carries far more information than discrete images and key frames can be selected flexibly, the success rate of generating the panorama by stitching can be improved.
Exemplary devices
Fig. 9 is a schematic structural diagram of a panorama generating apparatus according to an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device; as shown in fig. 9, the panorama generating apparatus includes: an acquisition module 901 configured to acquire at least two image sequences for generating a panorama, where the at least two image sequences include at least one image sequence shot in a discrete manner; a first determining module 902 configured to determine effective images in the at least two image sequences based on the manner in which the sequences were shot and to determine the connection relationships between the effective images; a second determining module 903 configured to determine the intrinsic parameters of the camera; a third determining module 904 configured to determine the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships; and a mapping module 905 configured to map the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama.
In this embodiment, the acquisition module 901 may acquire, locally or remotely, at least two image sequences for generating a panorama. The image sequences may be obtained by shooting the surrounding scene with a camera integrated into the panorama generating apparatus or connected to it, and the at least two image sequences include at least one sequence shot in a discrete manner. Discrete shooting means that the camera captures an image at one position and pose, then changes its pose and/or position to capture another image, repeating this operation to obtain an image sequence. The images in a sequence may be arranged horizontally or vertically.
As an example, a row of images arranged horizontally may be a sequence of images, or a column of images arranged vertically may be a sequence of images.
In this embodiment, the first determining module 902 may determine the effective images in the at least two image sequences based on the manner in which the sequences were shot, and determine the connection relationships between the effective images. The camera may shoot discretely (i.e., stay at a position to capture a single image) or continuously (e.g., video). An effective image is an image to be mapped onto a three-dimensional mapping surface to generate the panorama, for example a key frame of a sequence shot in video mode.
In this embodiment, the second determining module 903 may determine the intrinsic parameters of the camera, typically expressed as the camera intrinsic matrix K. The intrinsic parameters may be fixed, i.e., known in advance, in which case the second determining module 903 can read the previously entered values; alternatively, they can be obtained through calibration, with the camera being calibrated using the image sequences acquired in step 201. Camera calibration is a widely used and well-known technique and is not described here again.
In this embodiment, the third determining module 904 may determine the camera pose angle corresponding to each effective image based on the intrinsic parameters and the connection relationships. The camera pose angle represents the shooting direction of the camera in a three-dimensional coordinate system, which may be a rectangular coordinate system with the camera position as its origin. The pose angles may include a pitch angle, a yaw angle, and a roll angle: pitch represents the deflection of the camera's optical axis in the vertical direction, yaw represents the deflection of the optical axis in the horizontal plane, and roll represents the rotation of the camera about its optical axis.
The third determining module 904 may determine the camera pose angles using various existing methods based on the intrinsic parameters and the connection relationships between images, for example by minimizing photometric error, reprojection error, or 3D geometric error, among others.
In this embodiment, the mapping module 905 may map the effective images onto a mapping surface centered on the camera based on the camera pose angles to obtain a panorama. In particular, the panorama may be an image mapped onto a mapping surface of various shapes (e.g., a sphere or a cylinder). A reference coordinate system is established at the center of the sphere (or cylinder) facing a certain direction, and there is a transformation between the coordinate system of the planar image captured by the camera and this reference coordinate system; the transformation can be characterized by the camera pose angle, which indicates the part of the panorama onto which the planar image is mapped. Mapping a two-dimensional image onto a three-dimensional surface is a well-known technique and is not described here again.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a panorama generating apparatus according to another exemplary embodiment of the present disclosure.
In some optional implementations, the first determining module 902 may include: a first determining unit 9021 configured to, in response to determining that each of the at least two image sequences was shot in a laterally discrete manner, determine, for each image sequence, that every image in the sequence is an effective image, and determine the connection relationships between the images in the sequence through feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: an adjusting unit 90501 configured to adjust the yaw angles of vertically arranged images to be consistent; a first mapping unit 90502 configured to, for each of the at least two image sequences, map each image in the sequence onto the mapping surface based on its yaw-adjusted camera pose angle to obtain a sub-panorama corresponding to the sequence; a second determining unit 90503 configured to determine the features of each sub-panorama; and a first merging unit 90504 configured to merge the sub-panoramas based on their features to obtain the final panorama.
In some optional implementations, the first determining module 902 may include: a fusion unit 9022 configured to, in response to determining that each of the at least two image sequences was shot in a longitudinally discrete manner, determine, for each image sequence, the mapping relationships between a target image and the other images in the sequence, and fuse the other images onto the target image based on the mapping relationships to obtain a fused image corresponding to the sequence as an effective image; and a third determining unit 9023 configured to determine the connection relationships between the fused images through feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: a second mapping unit 90505 configured to map each fused image onto the mapping surface to obtain a corresponding sub-panorama; a fourth determining unit 90506 configured to determine the features of each sub-panorama; and a second merging unit 90507 configured to merge the sub-panoramas based on their features to obtain the final panorama.
In some optional implementations, the first determining module 902 may include: a fifth determining unit 9024 configured to, in response to determining that a first image sequence of the at least two image sequences was shot in a laterally discrete manner and the other image sequences were shot in a laterally continuous manner, determine each image included in the first image sequence to be an effective image and determine the connection relationships between the images in the first image sequence through feature extraction and feature matching; and a sixth determining unit 9025 configured to, for each of the other image sequences, determine the key frame images in the sequence to be effective images and determine the connection relationships between the key frame images and the first image sequence through feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: a third mapping unit 90508, configured to map each image in the first image sequence onto the mapping surface to obtain a sub-panorama corresponding to the first image sequence; a fourth mapping unit 90509, configured to map the key frame images onto the mapping surface to obtain mapped images; a seventh determining unit 90510, configured to determine characteristics of the mapped images and the sub-panorama; and a third merging unit 90511, configured to merge the mapped images and the sub-panorama based on those characteristics to obtain the final panorama.
The panorama generating apparatus according to the above embodiment of the present disclosure determines, according to the shooting mode used, effective images arranged in different manners from the captured image sequences, together with the connection relationship between the effective images; it then calibrates the camera to obtain the camera's intrinsic parameters, determines the camera attitude angle of each effective image based on the intrinsic parameters and the connection relationship, and finally maps each effective image into a panorama based on the camera attitude angle. In this way, panoramas with a large field of view can be generated under different shooting modes that suit the habits of different users, improving both the flexibility and the efficiency of panorama generation.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 11. The electronic device may be either or both of the terminal device 101 and the server 103 as shown in fig. 1, or a stand-alone device separate from them, which may communicate with the terminal device 101 and the server 103 to receive the collected input signals therefrom.
FIG. 11 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 11, the electronic device 1100 includes one or more processors 1101 and memory 1102.
The processor 1101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1100 to perform desired functions.
The memory 1102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 1101 to implement the panorama generation methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents, such as an image sequence, may also be stored in the computer-readable storage medium.
In one example, the electronic device 1100 may further include: an input device 1103 and an output device 1104, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the terminal device 101 or the server 103, the input device 1103 may be a camera, a mouse, a keyboard, or the like, for inputting the image sequence. When the electronic device is a stand-alone device, the input device 1103 may be a communication network connector for receiving the input image sequence from the terminal device 101 and the server 103.
The output device 1104 may output various information, including the generated panorama, to the outside. The output device 1104 may include, for example, a display, speakers, a printer, and a remote output device connected via a communication network.
Of course, for simplicity, only some of the components of the electronic device 1100 relevant to the present disclosure are shown in fig. 11, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 1100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the panorama generation method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a panorama generation method according to various embodiments of the present disclosure described in the "exemplary methods" section above of the specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. However, it should be noted that the advantages and effects mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. The specific details disclosed above are provided only for the purposes of illustration and ease of understanding, and are not intended to limit the disclosure to those specific details.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", "having", and the like are open-ended words that mean "including, but not limited to" and may be used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as, but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A panorama generation method, comprising:
acquiring at least two image sequences for generating a panorama, wherein the at least two image sequences comprise at least one image sequence shot in a discrete manner;
determining effective images in the at least two image sequences based on shooting modes of the at least two image sequences, and determining a connection relationship between the effective images;
determining intrinsic parameters of the camera;
determining a camera attitude angle corresponding to the effective image based on the intrinsic parameters and the connection relationship;
and mapping the effective image to a mapping surface centered on the camera based on the camera attitude angle to obtain a panorama.
2. The method according to claim 1, wherein the determining effective images in the at least two image sequences and determining a connection relationship between the effective images based on the shooting modes of the at least two image sequences comprises:
in response to determining that each of the at least two image sequences is shot in a transversely discrete manner, determining, for each of the at least two image sequences, that each image in the image sequence is an effective image, and determining the connection relationship between the images in the image sequence through feature extraction and feature matching.
3. The method of claim 2, wherein said mapping the effective image to a mapping surface centered on the camera based on the camera attitude angle to obtain a panorama comprises:
adjusting the yaw angles of the images arranged in the vertical direction to be consistent;
for each image sequence in the at least two image sequences, mapping each image in the image sequence to the mapping surface based on the camera attitude angle of each image after the yaw-angle adjustment, so as to obtain a sub-panorama corresponding to the image sequence;
determining characteristics of each obtained sub-panorama;
and merging the sub-panoramas based on the characteristics of the sub-panoramas to obtain a final panorama.
4. The method according to claim 1, wherein the determining effective images in the at least two image sequences and determining a connection relationship between the effective images based on the shooting modes of the at least two image sequences comprises:
in response to determining that each of the at least two image sequences is shot in a longitudinally discrete manner, determining, for each of the at least two image sequences, a mapping relationship between a target image and the other images in the image sequence; fusing the other images onto the target image based on the mapping relationship to obtain a fused image corresponding to the image sequence as an effective image;
and determining the connection relationship among the obtained fused images through feature extraction and feature matching.
5. The method of claim 4, wherein said mapping the effective image to a mapping surface centered on the camera based on the camera attitude angle to obtain a panorama comprises:
for each of the fused images, mapping the fused image to the mapping surface to obtain a sub-panorama corresponding to the fused image;
determining characteristics of each obtained sub-panorama;
and merging the sub-panoramas based on the characteristics of the sub-panoramas to obtain a final panorama.
6. The method according to claim 1, wherein the determining effective images in the at least two image sequences and determining a connection relationship between the effective images based on the shooting modes of the at least two image sequences comprises:
in response to determining that a first image sequence of the at least two image sequences is shot in a transversely discrete manner and the other image sequences are shot in a transversely continuous manner, determining that each image included in the first image sequence is an effective image, and determining the connection relationship between the images in the first image sequence through feature extraction and feature matching;
and for each image sequence in the other image sequences, determining that the key frame images in the image sequence are effective images, and determining, through feature extraction and feature matching, the connection relationship between the key frame images and between the image sequences.
7. The method of claim 6, wherein said mapping the effective image to a mapping surface centered on the camera based on the camera attitude angle to obtain a panorama comprises:
mapping each image in the first image sequence to the mapping surface to obtain a sub-panorama corresponding to the first image sequence;
mapping the key frame images to the mapping surface to obtain mapped images;
determining features of the mapped images and the sub-panorama;
and merging the mapped images and the sub-panorama based on the features of the mapped images and the sub-panorama to obtain a final panorama.
8. A panorama generating apparatus comprising:
the panoramic image generation device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring at least two image sequences used for generating a panoramic image, and the at least two image sequences comprise at least one image sequence shot in a discrete mode;
the first determining module is used for determining effective images in the at least two image sequences based on shooting modes of the at least two image sequences and determining the connection relation between the effective images;
a second determination module to determine an internal reference of the camera;
a third determining module, configured to determine a camera pose angle corresponding to the effective image based on the internal reference and the connection relationship;
and the mapping module is used for mapping the effective image to a mapping surface taking the camera as the center based on the camera attitude angle to obtain a panoramic image.
9. A computer-readable storage medium, the storage medium storing a computer program for performing the method of any one of claims 1-7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1 to 7.
CN202010196117.7A 2020-03-16 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment Active CN111402136B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010196117.7A CN111402136B (en) 2020-03-19 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment
US17/200,659 US11146727B2 (en) 2020-03-16 2021-03-12 Method and device for generating a panoramic image
US17/383,157 US11533431B2 (en) 2020-03-16 2021-07-22 Method and device for generating a panoramic image
US17/981,056 US20230056036A1 (en) 2020-03-16 2022-11-04 Method and device for generating a panoramic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010196117.7A CN111402136B (en) 2020-03-19 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111402136A true CN111402136A (en) 2020-07-10
CN111402136B CN111402136B (en) 2023-12-15

Family

ID=71431024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010196117.7A Active CN111402136B (en) 2020-03-16 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111402136B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001167249A (en) * 1999-12-06 2001-06-22 Sanyo Electric Co Ltd Method and device for synthesizing image and recording medium stored with image synthesizing program
CN101123722A (en) * 2007-09-25 2008-02-13 北京智安邦科技有限公司 Panorama video intelligent monitoring method and system
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
CN103176347A (en) * 2011-12-22 2013-06-26 百度在线网络技术(北京)有限公司 Method and device for shooting panorama and electronic device
CN103118230A (en) * 2013-02-28 2013-05-22 腾讯科技(深圳)有限公司 Panorama acquisition method, device and system
CN104463956A (en) * 2014-11-21 2015-03-25 中国科学院国家天文台 Construction method and device for virtual scene of lunar surface
US20160217611A1 (en) * 2015-01-26 2016-07-28 Uber Technologies, Inc. Map-like summary visualization of street-level distance data and panorama data
KR101642975B1 (en) * 2015-04-27 2016-07-26 주식회사 피씨티 Panorama Space Modeling Method for Observing an Object
CN105611169A (en) * 2015-12-31 2016-05-25 联想(北京)有限公司 Image obtaining method and electronic device
CN106357976A (en) * 2016-08-30 2017-01-25 深圳市保千里电子有限公司 Omni-directional panoramic image generating method and device
CN107451952A (en) * 2017-08-04 2017-12-08 追光人动画设计(北京)有限公司 A kind of splicing and amalgamation method of panoramic video, equipment and system
CN109076158A (en) * 2017-12-22 2018-12-21 深圳市大疆创新科技有限公司 Panorama photographic method, photographing device and machine readable storage medium
CN110874818A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
CN110111241A (en) * 2019-04-30 2019-08-09 北京字节跳动网络技术有限公司 Method and apparatus for generating dynamic image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈泗勇 (Chen Siyong): "Research and Implementation of a Panorama Stitching System" (in Chinese), vol. 35, no. 11, pages 21-24 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833250A (en) * 2020-07-13 2020-10-27 北京爱笔科技有限公司 Panoramic image splicing method, device, equipment and storage medium
CN113012290A (en) * 2021-03-17 2021-06-22 展讯通信(天津)有限公司 Terminal posture-based picture display and acquisition method and device, storage medium and terminal
CN113012290B (en) * 2021-03-17 2023-02-28 展讯通信(天津)有限公司 Terminal posture-based picture display and acquisition method and device, storage medium and terminal
CN113689482A (en) * 2021-10-20 2021-11-23 贝壳技术有限公司 Shooting point recommendation method and device and storage medium
CN113689482B (en) * 2021-10-20 2021-12-21 贝壳技术有限公司 Shooting point recommendation method and device and storage medium

Also Published As

Publication number Publication date
CN111402136B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
CN107566688B (en) Convolutional neural network-based video anti-shake method and device and image alignment device
CN111402136B (en) Panorama generation method and device, computer readable storage medium and electronic equipment
WO2019238114A1 (en) Three-dimensional dynamic model reconstruction method, apparatus and device, and storage medium
CN108958469B (en) Method for adding hyperlinks in virtual world based on augmented reality
CN111432119B (en) Image shooting method and device, computer readable storage medium and electronic equipment
CN112489114B (en) Image conversion method, image conversion device, computer readable storage medium and electronic equipment
CN111008985B (en) Panorama picture seam detection method and device, readable storage medium and electronic equipment
CN115690382B (en) Training method of deep learning model, and method and device for generating panorama
US11533431B2 (en) Method and device for generating a panoramic image
CN111402404B (en) Panorama complementing method and device, computer readable storage medium and electronic equipment
CN114399597A (en) Method and device for constructing scene space model and storage medium
WO2023005170A1 (en) Generation method and apparatus for panoramic video
CN112102199A (en) Method, device and system for filling hole area of depth image
CN112995491B (en) Video generation method and device, electronic equipment and computer storage medium
CN111415386A (en) Shooting equipment position prompting method and device, storage medium and electronic equipment
CN111445518A (en) Image conversion method and device, depth map prediction method and device
CN113129211B (en) Optical center alignment detection method and device, storage medium and electronic equipment
CN111712857A (en) Image processing method, device, holder and storage medium
KR102261544B1 (en) Streaming server and method for object processing in multi-view video using the same
CN116708862A (en) Virtual background generation method for live broadcasting room, computer equipment and storage medium
CN111429353A (en) Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment
WO2018150086A2 (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses
WO2018100230A1 (en) Method and apparatuses for determining positions of multi-directional image capture apparatuses
CN115278049A (en) Shooting method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200921

Address after: 100085 Floor 102-1, Building No. 35, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: 300457 Unit 5, Room 112, Floor 1, Office Building C, Nangang Industrial Zone, Binhai New Area Economic and Technological Development Zone, Tianjin

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220328

Address after: 100085 8th floor, building 1, Hongyuan Shouzhu building, Shangdi 6th Street, Haidian District, Beijing

Applicant after: As you can see (Beijing) Technology Co.,Ltd.

Address before: 100085 Rooms 101 and 102-1, Building No. 35, Courtyard No. 2, Xierqi West Road, Haidian District, Beijing

Applicant before: Seashell Housing (Beijing) Technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant