CN116866722A - Panoramic image generation method, device, computer equipment and storage medium

Panoramic image generation method, device, computer equipment and storage medium

Info

Publication number
CN116866722A
Authority
CN
China
Prior art keywords
image
initial
aligned
target
mapping information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310327816.4A
Other languages
Chinese (zh)
Inventor
曲超 (Qu Chao)
苏坦 (Su Tan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd filed Critical Insta360 Innovation Technology Co Ltd
Priority to CN202310327816.4A
Publication of CN116866722A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/04 Supports for telephone transmitters or receivers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a panoramic image generation method, a panoramic image generation device, a computer device, and a storage medium. The method includes: determining, according to each initial image frame and initial mapping information, a reference image and an image to be aligned for each image alignment; performing alignment processing on the corresponding image to be aligned according to the reference image of each image alignment; and determining target mapping information corresponding to each initial image frame after each image alignment, so as to generate a target panoramic image according to each initial image frame and the corresponding target mapping information. The mapping information is used for indicating the area of the target panoramic image onto which each initial image frame is projected, and the reference image and the image to be aligned of each image alignment have an overlapping area. With this method, stitching misalignment can be reduced.

Description

Panoramic image generation method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to a panoramic image generation method, a panoramic image generation device, a computer device, and a storage medium.
Background
Panoramic images are increasingly used in daily life because they can show more of the surrounding environment. In the related art, a panoramic image is generated as follows: the intrinsic and extrinsic parameters of the camera are first calculated from multiple images, and the captured images are then projected directly using these parameters to obtain the final panoramic image. However, the camera intrinsic and extrinsic parameters obtained in this process may be wrong or insufficiently accurate, so the resulting panoramic image still exhibits misalignment.
Disclosure of Invention
In view of the above, it is desirable to provide a panoramic image generation method, apparatus, computer device, and storage medium capable of reducing misalignment.
In a first aspect, the present application provides a panoramic image generation method. The method comprises the following steps:
determining a reference image and an image to be aligned for each image alignment according to each initial image frame and initial mapping information; the reference image and the image to be aligned of each image alignment have an overlapping area;
performing alignment processing on the corresponding image to be aligned according to the reference image of each image alignment, and determining target mapping information corresponding to each initial image frame after each image alignment; the target mapping information is used for indicating the area of the target panoramic image onto which each initial image frame is projected after each image alignment;
and generating the target panoramic image according to each initial image frame and the corresponding target mapping information.
In a second aspect, the present application also provides a handheld cradle head, comprising: the device comprises a motor, a camera and a processor;
the motor is used for controlling the rotation of the cradle head so as to drive the rotation of the camera, and the processor is used for executing the method of any one of the above.
In a third aspect, the application also provides a panoramic image generation system, which comprises a cradle head and a terminal;
the cradle head is used for rotating according to a preset path;
the terminal comprises a shooting device and a processor, wherein the shooting device is used for shooting a plurality of initial image frames at a plurality of shooting angles while the cradle head rotates according to the preset path, and for sending the initial image frames to the processor, so that the processor executes the method of any one of the above.
In a fourth aspect, the present application also provides a panoramic image generation apparatus, including:
the first determining module is used for determining a reference image and an image to be aligned for each image alignment according to each initial image frame and initial mapping information;
the second determining module is used for carrying out alignment processing on the corresponding images to be aligned according to the reference images aligned each time, and determining target mapping information corresponding to each initial image frame after each time of image alignment; the target mapping information is used for indicating the area where each initial image frame is projected to the target panoramic image after each image alignment;
and the first generation module is used for generating the target panoramic image according to each initial image frame and the corresponding target mapping information.
In a fifth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of any of the methods described above when the processor executes the computer program.
In a sixth aspect, the present application also provides a computer readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of any of the methods described above.
In a seventh aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the methods described above.
According to the panoramic image generation method, the panoramic image generation device, the computer device, and the storage medium, the reference image and the image to be aligned for each image alignment are determined according to each initial image frame and the initial mapping information; the corresponding image to be aligned is aligned according to the reference image of each image alignment, and the target mapping information corresponding to each initial image frame after each image alignment is determined, so that the target panoramic image is generated according to each initial image frame and the corresponding target mapping information. In the present application, the target mapping information indicates the area of the target panoramic image onto which each initial image frame is projected after each image alignment. For each image alignment, a reference image and an image to be aligned that share an overlapping area are determined, and alignment processing is performed on the image to be aligned of the current alignment, so that the reference image and the image to be aligned are aligned after the processing. As a result, stitching misalignment in the finally generated target panoramic image is reduced.
Drawings
FIG. 1 is an application environment diagram of a panoramic image generation method in an embodiment of the present application;
fig. 2 is a schematic flow chart of a panoramic image generation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a preset path according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of determining a reference image and an image to be aligned according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for determining mapping information according to an embodiment of the present application;
FIG. 6 is a schematic diagram of coordinates of overlapping areas in two adjacent initial image frames;
FIG. 7 is a flowchart illustrating a control point determination process according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of generating a target panoramic image according to an embodiment of the application;
FIG. 9 is a schematic diagram of mask information;
FIG. 10 is a flowchart illustrating another method for generating a target panoramic image in an embodiment of the present application;
FIG. 11 is a schematic diagram of generating a target panoramic image according to an embodiment of the present application;
FIG. 12 is a schematic view of equatorial stitching;
FIG. 13 is a schematic flow chart of equatorial stitching in an embodiment of the application;
FIG. 14 is a schematic flow chart of top stitching or bottom stitching according to an embodiment of the present application;
FIG. 15 is a schematic diagram of top frame distortion;
FIG. 16 is a schematic view of the effect after top stitching;
FIG. 17 is a schematic flow chart of stitching on both sides in an embodiment of the application;
fig. 18 is a block diagram illustrating a configuration of a panoramic image generation apparatus in an embodiment of the present application;
fig. 19 is an internal structural diagram of a computer device in an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
At present, to obtain a panoramic image, a user usually shoots multiple images at different positions; the intrinsic and extrinsic parameters of the camera are then determined from these images, and the images are projected, in a certain order, to the corresponding positions in the panoramic image using these parameters, so as to obtain the final panoramic image. However, the panoramic image obtained in this way still exhibits misalignment. In view of this, it is desirable to provide a panoramic image generation method capable of reducing misalignment, which is described below.
In order to explain the panoramic image generation method in the present application more clearly, the following concepts are first introduced.
A cradle head (pan-tilt head) is used for mounting and fixing shooting equipment such as a mobile phone, a camera, or a video camera, and can rotate in at least one degree of freedom.
The field of view (FOV) is a parameter of the lens in the photographing apparatus; the larger the FOV, the larger the range of the scene that the photographing apparatus can capture.
Euler angles describe how the photographing apparatus rotates with the cradle head. They include the roll angle (roll), the pitch angle (pitch), and the yaw angle (yaw). Taking the viewpoint as the origin, the horizontal direction is defined as the X axis, the vertical direction as the Y axis, and the direction perpendicular to both as the Z axis. The roll angle indicates the rotation of the cradle head about the Z axis, the pitch angle indicates its rotation about the Y axis, and the yaw angle indicates its rotation about the X axis.
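For illustration, the sketch below composes a rotation matrix from the three Euler angles with NumPy, following the axis convention given above (roll about Z, pitch about Y, yaw about X). The rotation order and the helper name are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Compose a 3x3 rotation matrix from Euler angles given in radians.

    Axis convention follows the text above: roll about Z, pitch about Y,
    yaw about X.  The multiplication order is an illustrative assumption.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cy, -sy], [0, sy, cy]])   # yaw about X
    return Rz @ Ry @ Rx

# Example: pan-tilt rotated 30 degrees in yaw and 10 degrees in pitch
R = rotation_from_euler(0.0, np.radians(10), np.radians(30))
```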
A panoramic image is a picture that, when unfolded, has an aspect ratio of 2:1 and is typically stitched from multiple image frames. Following the longitude-latitude expansion, the width of the panoramic image spans a longitude range of 0-2π and its height spans a latitude range of 0-π. For example, panoramic images include 180° panoramic images, 270° panoramic images, 360° panoramic images, and the like. A 360° panoramic image records all information over 0-2π of longitude and 0-π of latitude, i.e. all information over 360° horizontally and 180° in pitch. The following description mainly takes the generation of a 360° panoramic image, that is, a spherical panoramic image, as an example.
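As a concrete reference for this longitude-latitude expansion, the following sketch maps a pixel of a 2:1 panorama to a longitude, a latitude and a unit viewing direction; the exact pixel-to-angle convention is an assumption made for illustration only.

```python
import numpy as np

def panorama_pixel_to_sphere(u, v, width, height):
    """Map pixel (u, v) of a 2:1 equirectangular panorama to (longitude,
    latitude) and a unit direction vector.

    Assumed convention: longitude in [0, 2*pi) across the width, latitude
    in [0, pi] down the height (latitude 0 at the top of the image).
    """
    lon = 2.0 * np.pi * u / width
    lat = np.pi * v / height
    direction = np.array([np.sin(lat) * np.cos(lon),
                          np.sin(lat) * np.sin(lon),
                          np.cos(lat)])                      # unit vector on the sphere
    return lon, lat, direction

# A 4096x2048 panorama: the centre pixel looks at the equator
lon, lat, d = panorama_pixel_to_sphere(2048, 1024, 4096, 2048)
```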
Fig. 1 is an application environment diagram of a panoramic image generation method in an embodiment of the present application. As shown in fig. 1 (a) and (b), an external photographing device 102 is connected to the pan-tilt 101. The photographing device 102 is mainly used for shooting objects or environments in a scene to obtain the corresponding multi-frame images; it may be a camera, a mobile phone equipped with a camera, or the like. As the pan-tilt 101 rotates via its motor, it drives the photographing device 102 to rotate, so that multi-frame images of different viewing angles are captured.
In addition, the photographing device 102 may have a built-in computer device (shown in the figure), which may be the computer device 103, a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device. The computer device mainly processes the multi-frame images captured by the photographing device 102.
Optionally, the pan-tilt 101 may further be provided with a control panel 103 and a control button 104, where the control panel 103 facilitates the user to operate the pan-tilt 101, and the control button 104 facilitates the user to send a shooting instruction to the shooting device 102 through the pan-tilt 101.
It should be noted that fig. 1 illustrates only one optional application environment, and in some application scenarios, the cradle head 101 may be provided with a photographing device 102, and the photographing device 102 may also be connected with an external computer device, which may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the portable wearable devices may be smart watches, smart bracelets, headsets, and the like.
Fig. 2 is a flow chart of a panoramic image generation method according to an embodiment of the present application, which can be applied to the computer device shown in fig. 1, and in one embodiment, as shown in fig. 2, the method includes the following steps:
s201, determining a reference image and an image to be aligned for each image alignment according to each initial image frame and initial mapping information; there is an overlap area between the reference image and the image to be aligned for each image alignment.
At present, when a panoramic image is generated by a computer device, internal and external parameters of a camera are determined according to a plurality of images shot by a user, and then the internal and external parameters of the camera are utilized to project the plurality of images to corresponding positions in the panoramic image according to a certain sequence, so that a final panoramic image is obtained.
However, when generating panoramic images, on one hand, a photographing device may inevitably perform translational motion during photographing, resulting in a situation where significant parallax occurs when a scene is close; on the other hand, there may be cases where errors or insufficient accuracy of the determined camera internal and external parameters occur. Both of the above cases may cause misalignment of the target panoramic image.
Therefore, in the present application, local alignment (hereinafter referred to as image alignment) is performed in the process of generating the panoramic image. Specifically, in this embodiment, when generating the target panoramic image according to each initial image frame and initial mapping information, the computer device first determines, according to each initial image frame and initial mapping information, a reference image and an image to be aligned for each image alignment. And, there is an overlapping area between the reference image and the image to be aligned for each image alignment.
The initial mapping information is used for mapping each initial image frame into a 2:1 panoramic expansion map according to the shooting sequence and the shooting position, so as to obtain the target panoramic image. That is, the initial mapping information is used to indicate the area of the target panoramic image onto which each initial image frame is projected.
Optionally, the computer device may cut out the overlapping area from each initial image frame according to the overlapping area between two adjacent initial image frames in each initial image frame, extract similar feature points between the two adjacent initial image frames based on the overlapping area, and optimize internal parameters and external parameters of the photographing device by using the similar feature points, so as to determine initial mapping information between each initial image frame and the target panoramic image according to the internal and external parameters of the photographing device.
For example, assuming that the computer device acquires the initial image frames 1 to 10 and that two adjacent image frames between the initial image frames 2 to 10 have overlapping areas, when the first image is aligned, the computer device uses a portion of the initial image frame 1 projected into the 2:1 panoramic expansion image as a reference image and uses a portion of the initial image frame 2 projected into the 2:1 panoramic expansion image as an image to be aligned; when the second image is aligned, the part of the initial image frame 2 projected to the 2:1 panoramic expansion image is used as a reference image, and the part of the initial image frame 3 projected to the 2:1 panoramic expansion image is used as an image to be aligned; in the third image alignment, the part of the initial image frame 3 after being projected into the 2:1 panoramic expansion is used as a reference image, and the part of the initial image frame 4 after being projected into the 2:1 panoramic expansion is used as an image to be aligned. And so on, until the last time of image alignment, the part of the initial image frame 9 after being projected into the 2:1 panoramic expansion is used as a reference image, and the part of the initial image frame 10 after being projected into the 2:1 panoramic expansion is used as an image to be aligned.
In some embodiments, to reduce the amount of computation and increase the computational efficiency, the computer device may determine the reference image and the image to be aligned for each image alignment in a 2:1 panoramic expansion of small resolution.
It should be noted that, the above-mentioned reference image and the image to be aligned for each image alignment are determined by the computer device in the order of the initial image frame 1 to the initial image frame 10. In some embodiments, each initial image frame may have a certain proportion of image frames overlapped with each other, and thus, the computer device may determine the reference image and the image to be aligned for each image alignment according to other sequences.
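The pairing logic described above can be summarised by the following hypothetical sketch, in which project and align stand in for the projection and alignment operations detailed in later sections; none of these function names come from the patent.

```python
import numpy as np

def sequential_alignment(initial_frames, initial_maps, project, align):
    """Illustrative pass over the frames in shooting order.

    project(frame, mapping) projects one frame into the small-resolution
    2:1 panoramic expansion; align(reference, to_align) returns the aligned
    image and the corrected (target) mapping information.  Both are
    placeholders for operations the patent describes elsewhere.
    """
    reference = project(initial_frames[0], initial_maps[0])
    target_maps = [initial_maps[0]]          # first reference keeps its initial mapping
    for frame, mapping in zip(initial_frames[1:], initial_maps[1:]):
        to_align = project(frame, mapping)
        aligned, target_map = align(reference, to_align)
        target_maps.append(target_map)
        # Per the text, the aligned image together with the previous reference
        # serves as the reference for the next alignment; since the projected
        # images share one panorama canvas, taking their element-wise maximum
        # is one simple way to accumulate that reference.
        reference = np.maximum(reference, aligned)
    return target_maps
```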
S202, aligning corresponding images to be aligned according to reference images aligned in each image, and determining target mapping information corresponding to each initial image frame after each image alignment; the target mapping information is used to indicate the area where each initial image frame is projected to the target panoramic image after each image alignment.
In this embodiment, the computer device performs alignment processing on the corresponding image to be aligned according to the reference image (and its mapping information) of each image alignment.
Wherein the target mapping information is also referred to as map information, and is used to indicate the area where each initial image frame is projected to the target panoramic image after each image alignment. It can be understood that the initial mapping information is only one mapping relation estimated according to each initial image frame, and the shake of the photographing device, the rotation error of the cradle head and other factors affect the accuracy of the initial mapping information, and the target mapping information is corrected by the image alignment process, so that the accuracy of the target mapping information is higher than that of the initial mapping information.
Further, when the computer equipment aligns the images each time, the computer equipment performs alignment processing on the images to be aligned, which are aligned at the current time, according to the reference image when the images are aligned at the current time. After the alignment processing is performed on the image to be aligned when the image is aligned at the current time, the computer equipment also obtains the target mapping information of the image to be aligned when the image is aligned at the current time.
It should be noted that, the target mapping information of the reference image when the first image is aligned may be directly determined according to the initial mapping information of the reference image when the first image is aligned, for example, the computer device directly uses the initial mapping information of the reference image as the target mapping information of the reference image when the first image is aligned.
Further, optionally, the computer device may modify the initial mapping information of the image to be aligned after the image alignment, so as to obtain the target mapping information of the image to be aligned for the current image alignment. The computer device may also, directly after the image alignment, determine the target mapping information of the image to be aligned for the current image alignment according to the portion of the image to be aligned in the 2:1 panoramic image.
Taking the first image alignment as an example, the computer device takes the portion of initial image frame 1 projected into the 2:1 panoramic expansion map as the reference image, and the portion of initial image frame 2 projected into the 2:1 panoramic expansion map as the image to be aligned. Then, the computer device performs alignment processing on the image to be aligned according to the reference image of the first image alignment, and modifies the initial mapping information of initial image frame 2 after the alignment processing to determine the target mapping information corresponding to initial image frame 2.
The alignment processing is used to align at least one side of the image to be aligned with the reference image of each image alignment, and may include stretching, rotation, translation, and the like.
And S203, generating a target panoramic image according to each initial image frame and the corresponding target mapping information.
In this embodiment, after each image alignment is completed, the computer device also obtains the target mapping information corresponding to each initial image frame. Further, the computer device may generate a target panoramic image from each initial image frame and corresponding target mapping information.
For example, after obtaining the target mapping information 1 to the target mapping information 10 corresponding to the initial image frame 1 to the initial image frame 10, the computer device may project the initial image frame 1 to the initial image frame 10 into the panoramic expansion image with the preset resolution of 2:1 according to the initial image frame 1 to the initial image frame 10 and the target mapping information 1 to the target mapping information 10 corresponding to the initial image frame 1 to the initial image frame 10, so as to obtain the target panoramic image. The preset resolution may be a resolution specified by the user, or may be a resolution automatically determined by the computer device according to its own performance, which is not limited in this embodiment.
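As one possible (assumed) realisation of this projection step, the mapping information can be stored as a per-pixel lookup table and applied with OpenCV's remap; the patent itself does not prescribe this representation.

```python
import cv2
import numpy as np

def project_frame(frame, map_x, map_y, panorama):
    """Project one initial frame into the panorama canvas.

    map_x / map_y are float32 arrays of the panorama's size that give, for
    every panorama pixel, the source coordinate in `frame` (negative where
    the frame does not cover that pixel).  This lookup-table form is an
    assumption about how the mapping information could be stored.
    """
    warped = cv2.remap(frame, map_x, map_y,
                       interpolation=cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    covered = (map_x >= 0) & (map_y >= 0)
    panorama[covered] = warped[covered]
    return panorama

# Usage: start from an empty 2:1 canvas and project every frame in turn.
# panorama = np.zeros((2048, 4096, 3), np.uint8)
# for frame, (mx, my) in zip(frames, target_maps):
#     panorama = project_frame(frame, mx, my, panorama)
```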
In some embodiments, the computer device projects the initial image frames 1 to 10 to the panoramic expansion map of 2:1 according to the initial image frames 1 to 10 and the corresponding target mapping information 1 to 10, and then performs post-processing on the obtained images to generate the target panoramic image.
The target panoramic image obtained by projecting each initial image frame to the 2:1 panoramic expansion map by using the target mapping information may be an unexpanded spherical panoramic image or an expanded 2:1 panoramic image. It should be noted that the unexpanded spherical panoramic image is a spherical picture that cannot be displayed on one plane, and the user can observe the unexpanded spherical panoramic image by adjusting the viewing angle.
According to the panoramic image generation method described above, the reference image and the image to be aligned for each image alignment are determined according to each initial image frame and the initial mapping information; the corresponding image to be aligned is aligned according to the reference image of each image alignment, and the target mapping information corresponding to each initial image frame after each image alignment is determined, so that the target panoramic image is generated according to each initial image frame and the corresponding target mapping information. In the current panoramic image generation process, the intrinsic and extrinsic parameters of the camera are first calculated from multiple images, and the captured images are then projected directly using these parameters to obtain the final panoramic image; however, the parameters obtained in this way may be wrong or insufficiently accurate, so the resulting panoramic image still exhibits misalignment. In the present application, since the target mapping information indicates the area of the target panoramic image onto which each initial image frame is projected after each image alignment, and since, for each image alignment, a reference image and an image to be aligned that share an overlapping area are determined and alignment processing is performed on the image to be aligned of the current alignment, the reference image and the image to be aligned are aligned after the processing. As a result, stitching misalignment in the finally generated target panoramic image is reduced.
In one embodiment, optionally, each of the initial image frames is an image frame of a plurality of shooting angles shot by a shooting device disposed on the pan-tilt during rotation of the pan-tilt.
In this embodiment, the pan-tilt rotates according to the preset path, and in the process of rotating the pan-tilt according to the preset path, the photographing device connected with the pan-tilt also rotates along with the pan-tilt. Further, the photographing apparatus can photograph a plurality of initial image frames at a plurality of photographing angles. The shooting angle comprises at least one angle of a rolling angle, a pitch angle and a yaw angle. Further, the photographing apparatus communicates with the computer apparatus, and the computer apparatus can acquire the plurality of initial image frames.
Optionally, the cradle head may send a shooting instruction to the shooting device after each rotation to a shooting angle according to the preset path, so that at least one initial image frame under the shooting angle is shot by the shooting device. Further alternatively, after the photographing device photographs at least one initial image frame under the photographing angle, a completion instruction may be returned to the pan-tilt to rotate to the next photographing angle after the pan-tilt receives the completion instruction, and so on, the photographing device may photograph the initial image frames of a plurality of photographing angles during the rotation of the pan-tilt according to the preset path.
Of course, the computer device may also set the shooting frequency matched with the rotation path of the pan-tilt in advance on the shooting device, so that the shooting device may automatically shoot a plurality of initial image frames in the process of rotating the pan-tilt according to the preset path according to the set shooting frequency.
Furthermore, the computer device may acquire the initial image frame captured by the capturing device in real time, or may acquire the initial image frame captured by the capturing device according to a certain period.
The preset path can be a path which is sent to the cradle head after the computer equipment is planned, can be a path which is sent to the cradle head after the user is customized through the computer equipment, and can be a preset path of the cradle head.
Fig. 3 is a schematic diagram of a preset path in an embodiment of the present application, a dotted line is a schematic diagram of a rotation path of a pan-tilt, and each dot is a shooting angle set in the path. As shown in fig. 3, the yaw angle of each row is the same and the pitch angle of each column is the same between dots 1 to 9.
As shown in fig. 3 (a) and 3 (b), the cradle head is first rotated from the initial position to the dot 1 so that the photographing apparatus photographs the initial image frame 1 at the photographing angle corresponding to the dot 1, and then rotated to the dot 2 according to the dotted line so that the photographing apparatus photographs the initial image frame 2 at the photographing angle corresponding to the dot 2. And so on, the photographing device photographs the initial image frames 1 to 9 at photographing angles corresponding to the dots 1 to 9.
The foregoing illustrates only some alternative forms of the preset path; it is understood that the preset path may rotate first transversely and then longitudinally, or first longitudinally and then transversely. In some embodiments, the preset path may take other forms, as long as the shooting device can shoot initial image frames at a plurality of shooting angles during the rotation of the pan-tilt. In addition, the above description takes shooting only one initial image frame at each shooting angle as an example; this embodiment is not limited thereto, and a plurality of initial image frames may be shot at each shooting angle.
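Purely as an illustration of such a preset path, the sketch below produces a serpentine grid of (yaw, pitch) shooting angles in which every dot of a row shares one yaw value and every dot of a column shares one pitch value, as in the description of Fig. 3; the concrete angle values are assumptions.

```python
def serpentine_path(yaw_values, pitch_values):
    """Generate shooting angles dot by dot, reversing direction on every
    other row as the dotted path in Fig. 3 suggests.  Every dot in a row
    shares the same yaw and every dot in a column shares the same pitch;
    the specific angle values passed in are illustrative assumptions."""
    angles = []
    for row, yaw in enumerate(yaw_values):
        pitches = list(pitch_values)
        if row % 2 == 1:                      # serpentine: reverse every other row
            pitches.reverse()
        angles.extend((yaw, pitch) for pitch in pitches)
    return angles

# e.g. a 3x3 grid of shooting angles (degrees), chosen so that neighbours overlap
path = serpentine_path(yaw_values=[-50, 0, 50], pitch_values=[-40, 0, 40])
```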
It should be noted that, in order to reduce the subsequent workload and improve the generation efficiency, the resolution size of each initial image frame should be kept the same. Of course, in some embodiments, the photographing device may also acquire each initial image frame with different resolution or different size, and the computer device may crop each initial image frame after acquiring each initial image frame.
In this embodiment, since each initial image frame is an image frame of a plurality of shooting angles shot by the shooting device arranged on the pan-tilt during the rotation of the pan-tilt, the shooting device can shoot a plurality of initial image frames during the rotation of the pan-tilt, and then after the computer device acquires the plurality of initial image frames, a target panoramic image can be generated. In the process of generating the target panoramic image, a user does not need to manually shoot a large number of images at a plurality of angles, so that the problem that the operation process is complicated in the current process of generating the panoramic image is avoided. Further, in the embodiment, since the initial image frame can be automatically shot in the rotation process of the cradle head, compared with a large number of images manually shot by a user at present, the initial image frame acquired in the embodiment is more accurate, and therefore the precision of the target panoramic image generated based on the initial image frame is also improved.
Fig. 4 is a schematic flow chart of determining a reference image and an image to be aligned according to an embodiment of the present application, and referring to fig. 4, this embodiment relates to an alternative implementation of how to determine the reference image and the image to be aligned for each image alignment from each initial image frame. On the basis of the above embodiment, the step S201 of determining the reference image and the image to be aligned for each image alignment according to each initial image frame and the initial mapping information includes the following steps:
s401, projecting at least two image frames in each initial image frame into a first preset panoramic image by utilizing initial mapping information so as to determine a reference image aligned with the current image and an image to be aligned.
In this embodiment, when determining the reference image and the image to be aligned for each image alignment according to each initial image frame and the initial mapping information, the computer device first projects at least two image frames in each initial image frame into the first preset panoramic image by using the initial mapping information to determine the reference image and the image to be aligned for the current image alignment. The resolution of the first preset panoramic image, that is, the 2:1 panoramic expansion image, may be set by a user, or may be determined by the computer device according to a preset rule.
Taking the first image alignment as the current alignment as an example, the computer device may project the first of the initial image frames into a small-resolution 2:1 panoramic expansion map using the initial mapping information, and take the projected portion as the reference image for the current image alignment. Further, the computer device also projects the second of the initial image frames into the same small-resolution 2:1 panoramic expansion map using the initial mapping information, and takes the projected portion as the image to be aligned for the current image alignment.
Of course, the computer device may also randomly select at least two image frames from the initial image frames and project the selected image frames to the first preset panoramic image to determine the reference image or the image to be aligned for the current image alignment, or may select at least two initial image frames of the designated area from the initial image frames as the reference image for the current image alignment by using the initial mapping information.
S402, after alignment processing is carried out on the corresponding images to be aligned according to the reference images aligned in the current image, the images to be aligned after the alignment processing and the reference images aligned in the last image are used as the reference images aligned in the next image, and the images to be aligned in the next image are determined from the initial image frames according to the initial image frames and the initial mapping information.
In this embodiment, assuming that, when the first image is aligned, the computer device projects the portion of the initial image frame 1 to the first preset panoramic image as the reference image and projects the portion of the initial image frame 2 to the first preset panoramic image as the image to be aligned, the computer device performs the alignment processing on the corresponding image to be aligned using the reference image when the first image is aligned, that is, performs the alignment processing on the portion of the initial image frame 2 projected to the first preset panoramic image according to the portion of the initial image frame 1 projected to the first preset panoramic image, and after the first image alignment processing, the computer device determines the target mapping information 1 of the initial image frame 1 and the target mapping information 2 of the initial image frame 2.
Further, when the second image alignment is performed, the computer device uses the reference image after the first image alignment and the image to be aligned as the reference image when the second image alignment is performed, that is, the portion of the initial image frame 1 projected to the first preset panoramic image and the portion of the initial image frame 2 projected to the first preset panoramic image after the image alignment are used as the reference image when the second image alignment is performed. Then, the computer device determines an initial image frame 3 from the initial image frames, and projects the initial image frame 3 to a portion in the first preset panoramic image as an image to be aligned for the second image alignment by using the initial mapping relationship. Still further, the computer device continues to align the images to be aligned when the second image is aligned using the reference image when the second image is aligned. After the second image alignment process, the computer device also determines the target mapping information 3 corresponding to the initial image frame 3.
And so on, after the computer device performs alignment processing on the corresponding images to be aligned according to the reference image aligned by the current image, the aligned images to be aligned and the reference image aligned by the previous image are used as the reference image aligned by the next image, and the aligned images to be aligned by the next image are determined from each initial image frame, which is not repeated here.
The embodiment utilizes initial mapping information to project at least two image frames in each initial image frame into a first preset panoramic image so as to determine a reference image aligned with a current image and an image to be aligned, and after the corresponding image to be aligned is aligned according to the reference image aligned with the current image, the aligned image to be aligned and the reference image aligned with a previous image are used as reference images aligned with a next image, and the next image aligned to be aligned is determined from each initial image frame according to each initial image frame and the initial mapping information. When the images are aligned each time, the images to be aligned after the previous alignment processing and the reference image aligned the previous image are used as the reference image of this time, so that the accuracy and precision of the image alignment are ensured.
Fig. 5 is a schematic flow chart of determining mapping information in an embodiment of the present application, and referring to fig. 5, this embodiment relates to an alternative implementation of determining mapping information corresponding to each initial image frame after each image alignment. On the basis of the above embodiment, two adjacent initial image frames in the plurality of initial image frames have overlapping areas, and S202 described above, according to the reference image aligned by each image, performs alignment processing on the corresponding image to be aligned, and determines target mapping information corresponding to each initial image frame after each image alignment, includes the following steps:
s501, determining a control point corresponding to an overlapping region between a reference image aligned with each image and a corresponding image to be aligned.
In the present embodiment, in order to ensure the accuracy and effect of generating the target panoramic image, adjacent two initial image frames among the plurality of initial image frames have an overlapping area in the present embodiment.
Note that, here, two adjacent initial image frames refer to: each initial image frame at two different consecutive photographing angles. For example, in connection with fig. 3, the photographing apparatus has an overlapping region between the initial image frame 1 corresponding to the dot 1 and the initial image frame 2 corresponding to the dot 2, that is, the adjacent two initial image frames, that is, the initial image frame 1 and the initial image frame 2. Similarly, the initial image frame 2 corresponding to the dot 2 needs to have an overlap region with the initial image frame 3 corresponding to the dot 3, and so on.
Optionally, when the computer device acquires each initial image frame, the adjacent relation between the initial image frames can be acquired because the shooting sequence and the shooting position are known.
Taking an example of capturing an initial image frame at each capturing angle, the computer device may also determine and store the neighboring relationship edge_list of each initial image frame after capturing each initial image frame.
Illustratively, edge_list = [(a, b), (b, c), (c, d), ...], where a, b, c, and d are integers greater than or equal to 1. Each pair represents a set of adjacent initial image frames that overlap; for example, (a, b) indicates that initial image frame a overlaps initial image frame b, and (b, c) indicates that initial image frame b overlaps initial image frame c.
Further, since the rotation path of the pan/tilt head is known, information of the overlapping area between the initial image frames can be determined during the photographing of the initial image frames by the photographing apparatus.
Alternatively, the computer device may determine and store information roi_list of overlapping areas in two adjacent initial image frames.
For example, roi_list = [((x_a, y_a, w_a, h_a), (x_b, y_b, w_b, h_b)), ((x_b, y_b, w_b, h_b), (x_c, y_c, w_c, h_c)), ...]. Here x and y denote the coordinates of the upper-left corner of the overlapping area in the initial image frame, and w and h denote the width and height of the overlapping area as ratios of the width and height of the initial image frame.
It should be noted that only two adjacent initial image frames having overlapping areas are correlated. For example, the overlapping area of initial image frame a and initial image frame b is not related to the overlapping area of initial image frame b and initial image frame c, the overlapping area of initial image frame b and initial image frame c is not related to the overlapping area of initial image frame c and initial image frame d, and so on.
Specifically, in ((x_a, y_a, w_a, h_a), (x_b, y_b, w_b, h_b)), the entry (x_a, y_a, w_a, h_a) means: with the coordinate system established at the upper-left corner of initial image frame a as a reference, the upper-left corner of the overlapping area of initial image frame a and initial image frame b is at (x_a, y_a), and the width and height of the overlapping area relative to initial image frame a are w_a and h_a, respectively.
The entry (x_b, y_b, w_b, h_b) means: with the coordinate system established at the upper-left corner of initial image frame b as a reference, the upper-left corner of the overlapping area of initial image frame a and initial image frame b is at (x_b, y_b), and the width and height of the overlapping area relative to initial image frame b are w_b and h_b, respectively.
For example, fig. 6 is a schematic diagram illustrating coordinates of overlapping areas in two adjacent initial image frames. As shown in fig. 6, the initial image frame a (thin dotted line) and the initial image frame b (thick dotted line) have overlapping areas (oblique lines), and according to the rotation path of the pan/tilt head, the computer device knows the relative position of the initial image frame a in the initial image frame b, and the initial image frame a and the initial image frame b are the same in size, and the overlapping areas each occupy 50% of the initial image frame a and the initial image frame b.
In the roi_list, this overlapping area can be denoted ((0.5, 0, 0.5, 1), (0, 0, 0.5, 1)), i.e. the overlapping area lies in the right half of initial image frame a and in the left half of initial image frame b, respectively.
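Following the roi_list layout above, a small sketch (the frame size used is an assumption) that converts a normalised ROI entry into a pixel rectangle inside its initial image frame:

```python
def roi_to_pixels(roi, frame_width, frame_height):
    """Convert one normalised (x, y, w, h) entry of roi_list into a pixel
    rectangle (left, top, right, bottom) within its initial image frame."""
    x, y, w, h = roi
    left, top = int(x * frame_width), int(y * frame_height)
    right, bottom = int((x + w) * frame_width), int((y + h) * frame_height)
    return left, top, right, bottom

# The example above: frames a and b overlap over the right half of a
# and the left half of b (frames assumed here to be 1920x1080).
roi_a, roi_b = (0.5, 0.0, 0.5, 1.0), (0.0, 0.0, 0.5, 1.0)
rect_a = roi_to_pixels(roi_a, 1920, 1080)   # (960, 0, 1920, 1080)
rect_b = roi_to_pixels(roi_b, 1920, 1080)   # (0, 0, 960, 1080)
```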
Further, the computer device may determine an overlap region between the reference image for each image alignment and the corresponding image to be aligned.
Alternatively, in one implementation, the computer device may determine an overlap region between the reference image for each image alignment and the corresponding image to be aligned based on the initial mapping information. For example, according to the initial mapping information, the computer device uses a portion of the initial image frame 1 projected into the first preset panoramic image as a reference image, and uses a portion of the initial image frame 2 projected into the first preset panoramic image as an image to be aligned, and then the computer device determines an overlapping area between the projected portion of the initial image frame 1 and the projected portion of the initial image frame 2 in the first preset panoramic image.
In one embodiment, the computer device may also determine the overlapping region between the reference image for each image alignment and the corresponding image to be aligned based on information of the overlapping region of the adjacent two initial image frames. For example, when the first image is aligned, the computer device determines the overlapping area of the initial image frame 1 and the initial image frame 2 according to the roi_list, and uses the overlapping area of the initial image frame 1 and the initial image frame 2 directly as the overlapping area between the reference image and the image to be aligned when the first image is aligned.
Further, after the computer device determines the overlapping area between the reference image for each image alignment and the corresponding image to be aligned, the control point corresponding to the overlapping area may be determined.
Optionally, the computer device may sample the corresponding control point from each pixel in the overlapping area according to the pixel value of each pixel in the overlapping area between the reference image aligned with each image and the corresponding image to be aligned. For example, after the computer device determines the overlapping area of the initial image frame 1 and the initial image frame 2, 100 pixel points may be uniformly sampled from the overlapping area of the initial image frame 1 and the initial image frame 2 to obtain 100 control points.
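The uniform sampling mentioned in this example could look like the following sketch; the 10x10 grid layout is an illustrative assumption that yields the 100 control points of the example.

```python
import numpy as np

def sample_control_points(left, top, right, bottom, points_per_axis=10):
    """Uniformly sample control-point coordinates inside an overlap rectangle.
    A 10x10 grid gives 100 control points; the grid layout is an assumption."""
    xs = np.linspace(left, right, points_per_axis)
    ys = np.linspace(top, bottom, points_per_axis)
    grid_x, grid_y = np.meshgrid(xs, ys)
    return np.stack([grid_x.ravel(), grid_y.ravel()], axis=1)   # shape (100, 2)

control_points = sample_control_points(960, 0, 1919, 1079)
```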
It should be noted that, in some embodiments, optionally, if there is no overlapping area between the reference image aligned with each image and the corresponding image to be aligned due to an error operation or other reasons during the shooting process, the computer device may directly use the initial mapping information as the target mapping information of the initial image frame corresponding to the reference image aligned with the image to be aligned and then proceed with the image alignment of the subsequent time.
Of course, in order to ensure the accuracy and effect of image alignment, in some embodiments the computer device may require that an overlapping area exist between the reference image and the image to be aligned for each image alignment; further optionally, the computer device may determine the reference image and the image to be aligned for each image alignment according to two adjacent initial image frames and the initial mapping information. For example, when the computer device knows that there is an overlapping area between initial image frame 1 and initial image frame 2, and between initial image frame 2 and initial image frame 3, the computer device uses the initial mapping information to project the portion of initial image frame 1 onto the first preset panoramic image as the reference image for the first image alignment, and the portion of initial image frame 2 onto the first preset panoramic image as the image to be aligned for the first image alignment. By analogy, for the second image alignment, the computer device projects initial image frame 3 onto the first preset panoramic image and takes the projected portion as the image to be aligned for the second image alignment.
S502, according to the control points corresponding to each image alignment, the corresponding images to be aligned are aligned, and target mapping information corresponding to each initial image frame after each image alignment is determined.
In this embodiment, after determining the control point corresponding to the overlapping area between the reference image aligned with each image and the corresponding image to be aligned, the computer device may perform alignment processing on the corresponding image to be aligned according to the control point aligned with each image, and determine the target mapping information corresponding to each initial image frame after each image alignment.
Taking the first image alignment as an example, the computer device uses the initial mapping information to project the portion of initial image frame 1 into the 2:1 panoramic expansion map as the reference image, and to project the portion of initial image frame 2 into the 2:1 panoramic expansion map as the image to be aligned. Then, the computer device may determine 100 control points corresponding to the overlapping area between the image to be aligned and the reference image, and use these 100 control points to perform alignment processing on the portion of initial image frame 2 projected into the 2:1 panoramic expansion map, so that it is aligned with the portion of initial image frame 1 projected into the 2:1 panoramic expansion map, thereby determining the target mapping information of initial image frame 2 after the first image alignment.
Optionally, the computer device may interpolate the corresponding images to be aligned with the corresponding control points for each image alignment, so as to implement alignment processing for the corresponding images to be aligned.
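One possible form of such an interpolation-based alignment is sketched below: the sparse control-point displacements are interpolated into a dense warp field, and the image to be aligned is resampled with it. The use of SciPy's griddata and OpenCV's remap is an illustrative choice, not the method mandated by the patent.

```python
import numpy as np
import cv2
from scipy.interpolate import griddata

def warp_with_control_points(to_align, control_points, displacements):
    """Warp the image to be aligned using sparse control-point displacements.

    control_points: (N, 2) array of (x, y) positions in the output image.
    displacements:  (N, 2) array of (dx, dy) offsets such that output pixel p
                    should sample the image to be aligned at p + displacement.
    """
    h, w = to_align.shape[:2]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # Interpolate the sparse offsets into dense per-pixel offset fields
    dx = griddata(control_points, displacements[:, 0], (grid_x, grid_y),
                  method='linear', fill_value=0.0)
    dy = griddata(control_points, displacements[:, 1], (grid_x, grid_y),
                  method='linear', fill_value=0.0)
    map_x = (grid_x + dx).astype(np.float32)
    map_y = (grid_y + dy).astype(np.float32)
    return cv2.remap(to_align, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```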
The embodiment determines control points corresponding to overlapping areas between the reference image aligned each time and the corresponding image to be aligned, performs alignment processing on the corresponding image to be aligned according to the control points corresponding to each time of image alignment, and determines target mapping information corresponding to each initial image frame after each time of image alignment. Because the alignment processing is carried out on the corresponding images to be aligned according to the control points corresponding to the overlapping areas between the reference images and the corresponding images to be aligned, which are determined to be aligned each time, the accuracy and precision of the image alignment are further ensured.
Fig. 7 is a schematic flow chart of determining a control point according to an embodiment of the present application, referring to fig. 7, this embodiment relates to an alternative implementation of how to determine a control point corresponding to an overlapping area between a reference image aligned with each image and a corresponding image to be aligned. On the basis of the above embodiment, S501 described above, determines a control point corresponding to an overlapping region between a reference image aligned with each image and a corresponding image to be aligned, including the steps of:
S701, determining an optical flow value of each pixel point in the overlapping region of the reference image and the image to be aligned.
In this embodiment, the computer device first determines the optical flow value of each pixel point in the overlapping region between the reference image and the image to be aligned for each image alignment. For example, after the computer device determines the overlapping area of the reference image and the image to be aligned at the time of the first image alignment, it further determines the optical flow value of each pixel point in that overlapping area.
The optical flow value is a two-dimensional vector that reflects the trend of gray-scale change at each point of the initial image frame, and can be regarded as an instantaneous velocity field generated by the movement of pixel points carrying gray-scale information on the image plane.
The computer device may determine the optical flow values of the pixels in the overlapping area of the reference image and the image to be aligned according to the DIS (Dense Inverse Search) optical flow algorithm, the RAFT (Recurrent All-Pairs Field Transforms for Optical Flow) optical flow algorithm, or the like.
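For illustration only, the following is a minimal sketch of S701 (in Python, using the DIS optical flow implementation in OpenCV), assuming that both images have already been projected into the panoramic expansion map and cropped to their overlapping region; the function and parameter names are assumptions for this example.

import cv2

def overlap_optical_flow(ref_overlap_bgr, align_overlap_bgr):
    ref_gray = cv2.cvtColor(ref_overlap_bgr, cv2.COLOR_BGR2GRAY)
    align_gray = cv2.cvtColor(align_overlap_bgr, cv2.COLOR_BGR2GRAY)
    dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
    # flow[y, x] = (dx, dy): a two-dimensional vector per pixel of the overlap area,
    # pointing from the reference image to the image to be aligned.
    flow = dis.calc(ref_gray, align_gray, None)
    return flow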
S702, determining control points corresponding to the overlapping area of the reference image and the image to be aligned according to the optical flow value of each pixel point.
In this embodiment, after the above S701, the computer device determines, according to the optical flow value of each pixel point, the control points corresponding to the overlapping area of the reference image and the image to be aligned.
The computer device may use an interpolation method to screen out control points in the overlapping area of the reference image and the image to be aligned according to the optical flow value of each pixel point; the computer device may also determine the control points from the overlapping area of the reference image and the image to be aligned according to a preset judgment condition on the optical flow value of each pixel point, which is not limited in this embodiment.
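For illustration only, the following is a minimal sketch of one possible screening rule (an assumption for this example, not the claimed judgment condition): candidate points are sampled on a regular grid inside the overlap area, and a point is kept as a control point only when the optical flow in its neighborhood is locally consistent.

import numpy as np

def select_control_points(flow, step=32, max_local_std=1.5):
    # flow: H x W x 2 optical flow of the overlap area (see the sketch after S701).
    h, w = flow.shape[:2]
    points, displacements = [], []
    for y in range(step // 2, h - step // 2, step):
        for x in range(step // 2, w - step // 2, step):
            patch = flow[y - 8:y + 8, x - 8:x + 8].reshape(-1, 2)
            if patch.std(axis=0).max() < max_local_std:  # reject pixels with unreliable flow
                points.append((x, y))
                displacements.append(flow[y, x])
    return np.array(points, dtype=np.float32), np.array(displacements, dtype=np.float32)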
In this embodiment, optical flow values of each pixel point in an overlapping region of a reference image and an image to be aligned are determined, and control points corresponding to the overlapping region of the reference image and the image to be aligned are determined according to the optical flow values of each pixel point. Because the control points are screened according to the optical flow values of the pixel points, the control points are utilized to align the images to be aligned, so that the accuracy and precision of image alignment can be improved.
Fig. 8 is a schematic flow chart of generating a target panoramic image according to an embodiment of the present application, and referring to fig. 8, this embodiment relates to an alternative implementation of how to generate a target panoramic image. Based on the above embodiment, S203, generating a target panoramic image according to each initial image frame and corresponding target mapping information, includes the following steps:
S801, determining mask information corresponding to each initial image frame according to the overlapping area of each initial image frame; the mask information is used to indicate the boundary of each initial image frame after it is projected to the target panoramic image.
In this embodiment, since the target mapping information of each initial image frame is redetermined in the image alignment process, in order to further ensure the stitching effect of the target panoramic image, the computer device further determines the mask information corresponding to each initial image frame according to the overlapping area of each initial image frame.
The mask information is used to indicate the boundary of each initial image frame after it is projected to the target panoramic image. Fig. 9 is a schematic diagram of mask information. The white area shown in fig. 9 indicates the boundary of the corresponding initial image frame after projection to the target panoramic image. In other words, the mask information is determined from the stitching seams between the initial image frames, and it instructs the computer device which initial image frame to use at which position when generating the target panoramic image. For example, if there is a seam between the initial image frame 1 and the initial image frame 2, the computer device will use the initial image frame 1 on the left side of the seam and the initial image frame 2 on the right side of the seam when generating the target panoramic image.
In one embodiment, after obtaining the target mapping information corresponding to each initial image frame, the computer device sequentially searches for the seam between every two adjacent initial image frames by using a dynamic seam planning algorithm, so as to determine the mask information corresponding to each initial image frame. For example, after obtaining the target mapping information 1 to the target mapping information 10 corresponding to the initial image frame 1 to the initial image frame 10, the computer device searches for a seam in the overlapping area of the initial image frame 1 and the initial image frame 2, so as to obtain the mask information 1 of the initial image frame 1 and the mask information 2 of the initial image frame 2; then, the computer device searches for a seam in the overlapping area of the initial image frame 2 and the initial image frame 3, so that the mask information 2 of the initial image frame 2 is updated and the mask information 3 of the initial image frame 3 is determined, and the computer device finally determines the mask information 1 to the mask information 10 corresponding to the initial image frames 1 to 10.
Alternatively, in one embodiment, the computer device may determine the mask information during each image alignment by using the dynamic seam planning algorithm. For example, after the first image alignment, the computer device searches for a seam in the overlapping area of the initial image frame 1 and the initial image frame 2 by using the dynamic seam planning algorithm, so that after the first image alignment, the computer device determines not only the target mapping information 1 and the target mapping information 2 corresponding to the initial image frame 1 and the initial image frame 2 respectively, but also the mask information 1 and the mask information 2 corresponding to the initial image frame 1 and the initial image frame 2 respectively. After the second image alignment, the computer device searches for a seam in the overlapping area of the initial image frame 2 and the initial image frame 3 by using the dynamic seam planning algorithm, so as to update the mask information 2 of the initial image frame 2 and determine the mask information 3 of the initial image frame 3, and so on, until the mask information 1 to the mask information 10 corresponding to the initial image frames 1 to 10 are finally determined.
Compared with planning seams for all initial image frames simultaneously, planning seams pairwise in turn as above makes it less likely that a moving object in the final target panoramic image is cut off.
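For illustration only, the following is a minimal sketch of a pairwise seam search by dynamic programming (in Python), one common way to realize dynamic seam planning; the present application does not prescribe this exact form. The seam is a connected vertical path through the overlap area that minimizes the accumulated color difference between the two frames.

import numpy as np

def find_vertical_seam(ref_overlap, align_overlap):
    # Per-pixel color difference inside the overlap area, used as the seam cost.
    cost = np.abs(ref_overlap.astype(np.float32) - align_overlap.astype(np.float32)).sum(axis=2)
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        left = np.roll(acc[y - 1], 1);  left[0] = np.inf
        right = np.roll(acc[y - 1], -1); right[-1] = np.inf
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.zeros(h, dtype=np.int32)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    # One seam column per row: pixels left of the seam take one frame, pixels right of it
    # take the other, which is what the mask information records.
    return seam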
S802, projecting each initial image frame into a second preset panoramic image according to the target mapping information and the mask information so as to generate a target panoramic image.
In this embodiment, after the image alignment is completed, the computer device also determines target mapping information corresponding to each initial image frame and mask information corresponding to each initial image frame. Further, the computer device may project each of the initial image frames into the second preset panoramic image based on the target mapping information and the mask information to generate a target panoramic image.
The resolution of the second preset panoramic image, i.e. the 2:1 panoramic expansion map, is set by the user or determined by the computer device according to a preset rule, following the same principle as the first preset panoramic image. The second preset panoramic image may be the same as or different from the first preset panoramic image.
For example, when the computer device acquires the initial image frames 1 to 10, after the image alignment is completed, the computer device also determines the target mapping information 1 to the target mapping information 10 corresponding to the initial image frames 1 to 10 and the mask information 1 to the mask information 10 corresponding to the initial image frames 1 to 10. Furthermore, the computer device projects the initial image frames 1 to 10 into a second preset panoramic image of 2:1 according to the target mapping information 1 to 10 and the mask information 1 to 10, respectively, so as to obtain a target panoramic image.
In some embodiments, considering that the processing speed and the memory of some computer devices are limited, the computer device projects each initial image frame into the high-resolution 2:1 spherical panoramic expansion map according to the target mapping information, performs weighted (alpha) fusion according to the weights derived from the mask information, and takes the fused image as the target panoramic image.
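For illustration only, the following is a minimal sketch of such a weighted (alpha) fusion (in Python), assuming each frame has already been remapped into the 2:1 expansion map and that its mask has been converted into a float weight in [0, 1], for example by feathering the mask boundary; these assumptions are made for this example only.

import numpy as np

def alpha_fuse(projected_frames, weights):
    # projected_frames: list of H x W x 3 images already placed in panorama coordinates.
    # weights: list of H x W float weights derived from the mask information.
    acc = np.zeros_like(projected_frames[0], dtype=np.float32)
    wsum = np.zeros(projected_frames[0].shape[:2], dtype=np.float32)
    for img, w in zip(projected_frames, weights):
        acc += img.astype(np.float32) * w[..., None]
        wsum += w
    wsum = np.maximum(wsum, 1e-6)  # avoid division by zero where no frame covers the panorama
    return (acc / wsum[..., None]).astype(np.uint8)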
In some embodiments, the computer device may also employ other fusion algorithms, such as multi-band fusion, to project each initial image frame into the second preset panoramic image based on the target mapping information, the mask information, and the initial mapping information to generate the target panoramic image.
According to this embodiment, the mask information corresponding to each initial image frame is determined according to the overlapping area of each initial image frame, and each initial image frame is projected into the second preset panoramic image according to the target mapping information and the mask information so as to generate the target panoramic image. Because the mask information indicates the boundary of each initial image frame after projection to the target panoramic image, and the computer device determines the mask information corresponding to each initial image frame according to the overlapping area of each initial image frame, the target panoramic image can be generated by projecting each initial image frame into the second preset panoramic image according to the target mapping information and the mask information.
Fig. 10 is a schematic flow chart of still another method for generating a target panoramic image according to an embodiment of the present application, and referring to fig. 10, this embodiment relates to an alternative implementation of how to generate a target panoramic image. Based on the above embodiment, the step S802 of projecting each initial image frame into the second preset panoramic image according to the target mapping information and the mask information to generate the target panoramic image includes the following steps:
S1001, projecting each initial image frame into a second preset panoramic image according to the target mapping information and the mask information, and generating a first panoramic image.
In this embodiment, when generating the target panoramic image, the computer device first projects each initial image frame into the second preset panoramic image according to the target mapping information and the mask information, and generates the first panoramic image. For example, the computer device projects the initial image frames 1 to 10 into the 2:1 second preset panoramic image respectively according to the target mapping information 1 to the target mapping information 10 and the mask information 1 to the mask information 10, so as to obtain the first panoramic image.
S1002, generating a target panoramic image from the first panoramic image.
In this embodiment, after step S1001, the computer device may continue to generate the target panoramic image from the first panoramic image. Optionally, the computer device may generate the target panoramic image after performing color-difference removal (achromatism) processing on the first panoramic image; the computer device may also generate the target panoramic image after performing bottom-filling processing on the first panoramic image; the computer device may also perform the bottom-filling processing before the achromatism processing on the first panoramic image to obtain the target panoramic image. Of course, the computer device may generate the target panoramic image from the first panoramic image in other manners or orders, which is not limited in this embodiment.
It will be appreciated that, because the light angle differs for each initial image frame, there are also color differences between the initial image frames; therefore, the computer device may perform achromatism processing on the first panoramic image after it is obtained.
Illustratively, the achromatism processing may proceed as follows:
First, the computer device projects each initial image frame into a small-resolution 2:1 panoramic expansion map according to the target mapping information and the mask information, and generates a second panoramic image. Then, the computer device performs gradient-domain fusion processing on the second panoramic image, for example using Poisson fusion or multi-band fusion, to obtain a third panoramic image that has the same size as the second panoramic image and is free of color differences.
Further, the computer device projects the first panoramic image, namely the generated original panoramic image, to the corresponding position of the third panoramic image to obtain a fourth panoramic image that has the same size as the third panoramic image but has not undergone color-difference removal.
Still further, the computer device divides a pixel value of each pixel in the third panoramic image by a pixel value of a pixel at the same location in the fourth panoramic image to obtain a first gain image. Further, the computer device performs optimization processing such as filtering processing on the first gain image, and then upsamples the optimized first gain image to obtain a second gain image with the same size as the first panoramic image.
Finally, the computer device processes the first panoramic image with the second gain image to obtain a final target panoramic image.
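For illustration only, the following is a minimal sketch of the gain-image idea above (in Python, using OpenCV): the per-pixel ratio between the color-consistent small panorama and the unprocessed small panorama is filtered and upsampled, then multiplied onto the full-resolution first panoramic image. The Gaussian filtering stands in for the optimization step and is an assumption for this example.

import numpy as np
import cv2

def remove_color_difference(first_pano, third_pano_small, fourth_pano_small):
    eps = 1e-6
    # First gain image: third panoramic image divided by fourth panoramic image, pixel by pixel.
    gain_small = third_pano_small.astype(np.float32) / (fourth_pano_small.astype(np.float32) + eps)
    gain_small = cv2.GaussianBlur(gain_small, (0, 0), sigmaX=3)  # filtering / optimization step
    h, w = first_pano.shape[:2]
    # Second gain image: upsampled to the size of the first panoramic image.
    gain = cv2.resize(gain_small, (w, h), interpolation=cv2.INTER_LINEAR)
    out = np.clip(first_pano.astype(np.float32) * gain, 0, 255)
    return out.astype(np.uint8)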
In some embodiments, the left and right ends of the target panoramic image need to be continuous. Optionally, both ends of the first panoramic image are therefore extended before the gradient-domain fusion, that is, a part of the image on the left side of the first panoramic image is copied to the right edge, and a part of the image on the right side is copied to the left edge, and the two extended ends are then fused respectively.
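For illustration only, the following is a minimal sketch of such a left/right extension (in Python); the strip width is an assumption for this example.

import numpy as np

def extend_for_wraparound(pano, strip=64):
    # Append a strip from the right edge on the left and a strip from the left edge on the
    # right, so the gradient-domain fusion can treat the 360-degree boundary as continuous.
    left = pano[:, :strip]
    right = pano[:, -strip:]
    return np.concatenate([right, pano, left], axis=1)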
Further, since the shooting angle is limited, a bottom blind area may exist in the final first panoramic image. Fig. 11 is a schematic diagram of generating a target panoramic image according to an embodiment of the present application. As shown in fig. 11, a brighter portion indicates that more initial image frames are projected there, i.e., more initial image frames overlap at that location; the greater the brightness, the lighter the color in fig. 11. In the process of generating the panoramic image, a small portion of the picture at the bottom is not covered by any initial image frame, so a black edge exists at the bottom of the finally generated panoramic image. Thus, the computer device may also perform bottom-filling processing on the first panoramic image after the first panoramic image is obtained.
Illustratively, the bottom-filling processing may proceed as follows:
First, after obtaining the first panoramic image, the computer device projects the bottom blind area of the first panoramic image to obtain a first intermediate image; then, the computer device fills the first intermediate image with a preset filling image to obtain the final target panoramic image.
In some embodiments, the computer device may also perform a filling process on the first intermediate image using a preset filling algorithm to obtain a final target panoramic image.
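For illustration only, the following is a simplified sketch of bottom filling (in Python, using OpenCV) that fills the uncovered bottom rows of the panorama directly with a preset filling image; it omits the projection to the first intermediate image described above, and the coverage map and fill_image are assumptions for this example.

import numpy as np
import cv2

def fill_bottom_blind_area(first_pano, coverage, fill_image):
    # coverage: H x W float map, 0 where no initial image frame was projected.
    uncovered_rows = np.where((coverage > 0).sum(axis=1) == 0)[0]
    if uncovered_rows.size == 0:
        return first_pano
    top = int(uncovered_rows.min())  # first fully uncovered row of the bottom blind area
    h, w = first_pano.shape[:2]
    patch = cv2.resize(fill_image, (w, h - top), interpolation=cv2.INTER_LINEAR)
    out = first_pano.copy()
    out[top:] = patch
    return out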
In some embodiments, the computer device may perform the achromatism processing and the bottom-filling processing sequentially after obtaining the first panoramic image, which will not be described herein.
According to this embodiment, each initial image frame is projected into the second preset panoramic image according to the target mapping information and the mask information to generate the first panoramic image, and the target panoramic image is generated from the first panoramic image. After the first panoramic image is generated according to the target mapping information and the mask information, it is further processed to obtain the target panoramic image, thereby improving the effect of the target panoramic image.
In the above embodiment, the stitching manner of the target panoramic image in the image alignment process is not limited, and the computer device may stitch frame by frame, so as to obtain the target panoramic image according to each initial image frame. In some other application scenarios, the computer device may also obtain the target panoramic image according to other stitching manners, which will be described below.
In an embodiment, optionally, the panoramic image generation method further includes:
determining a reference image and an image to be aligned for each image alignment according to each initial image frame and the initial mapping information by using an image stitching mode, performing alignment processing on the corresponding image to be aligned according to the reference image for each image alignment, and determining the target mapping information corresponding to each initial image frame after each image alignment; and

generating a target panoramic image according to each initial image frame and the image stitching mode; the image stitching mode comprises at least one of frame-by-frame stitching, equatorial stitching, top stitching, bottom stitching and two-side stitching.
In this embodiment, according to different image stitching manners, the sequence and process of projecting the initial image frames to the target panoramic image will also be different.
Illustratively, FIG. 12 is a schematic view of equatorial stitching. In the process of generating the target panoramic image from the initial images, the computer device first performs equatorial stitching: it directly projects every other initial image frame in the middle row of the target panoramic image into the 2:1 panoramic expansion map, as shown in fig. 12 (a) and 12 (b). The computer device then pastes the remaining initial image frames in the middle row of the target panoramic image into the 2:1 panoramic expansion map, as shown in fig. 12 (c) and 12 (d).
By combining different image stitching modes, the flexibility of generating the target panoramic image can be improved, and the generating efficiency of the target panoramic image is improved to a certain extent.
The equatorial stitching will be described in detail below. Fig. 13 is a schematic flow chart of equatorial stitching in an embodiment of the present application, and referring to fig. 13, this embodiment relates to an alternative implementation of equatorial stitching. On the basis of the above embodiment, the equatorial stitching includes the following steps:
S1301, determining a first initial image frame which is located at a first preset position and has no overlapping region from the initial image frames, and determining a reference image for the first image alignment according to the first initial image frame and the initial mapping information.
In this embodiment, in the process of performing equatorial stitching, the computer device first determines a first initial image frame located at a first preset position and having no overlapping area from among the initial image frames, and determines a reference image for first image alignment according to the first initial image frame and the initial mapping information. Optionally, the computer device projects the first initial image frame to a portion of the first preset panoramic image as the reference image for the first image alignment using the initial mapping information.
The first preset position is the middle portion of the target panoramic image; its coverage can be adjusted according to requirements, and it generally covers the initial image frames located in the middle row of the target panoramic image.
For example, assuming that the computer device acquires the initial image frames 1 to 20, according to the preset path of the pan-tilt it can be known that the initial image frames 6 to 10 are located in the middle row of the target panoramic image, and among the initial image frames 6 to 10 only adjacent frames overlap each other. Then, in the first image alignment, the computer device may first take the initial image frame 6, the initial image frame 8 and the initial image frame 10 as the first initial image frames, and project them into the 2:1 panoramic expansion map according to the initial mapping information, so as to obtain the reference image for the first image alignment.
S1302, determining an image to be aligned for the first image alignment according to the initial mapping information and other initial image frames positioned at a first preset position; the other initial image frames are image frames other than the first initial image frame among the initial image frames.
In this embodiment, after the above S1301, the computer device determines the image to be aligned for the first image alignment according to the initial mapping information and the other initial image frames located at the first preset position, where the other initial image frames are the image frames other than the first initial image frame among the initial image frames.
Continuing with the example above, at the time of first image alignment, the computer device projects portions of the initial image frame 6, the initial image frame 8, and the initial image frame 10 to the first preset panoramic image as reference images, and then projects the remaining portions of the initial image frame 7 and the initial image frame 9 to the first preset panoramic image as images to be aligned.
Further, the computer device performs alignment processing on the portions of the initial image frame 7 and the initial image frame 9 projected to the first preset panoramic image according to the portions of the initial image frame 6, the initial image frame 8 and the initial image frame 10 projected to the first preset panoramic image, so as to determine the target mapping information corresponding to the initial image frame 7 and the initial image frame 9 respectively. The image alignment process may refer to the steps in the above embodiments and will not be repeated here.
It should be noted that the foregoing is merely an example of an optional sequence and an optional number of equatorial stitching, and the computer device may also take the initial image frame 8 and the initial image frame 10 as reference images, take the initial image frame 9 as the images to be aligned, and then perform subsequent image alignment in the first image alignment, which is not limited in this embodiment.
When the equatorial stitching is performed, the first initial image frame which is positioned at the first preset position and does not have the overlapping area is determined from the initial image frames, the reference image aligned for the first time is determined according to the first initial image frame and the initial mapping information, and then the image to be aligned is determined according to the initial mapping information and other initial image frames positioned at the first preset position. In one aspect, the first initial image frame is directly used as a reference image, and image alignment is not required, so that the processing speed of the computer device is increased. On the other hand, the first initial image frame can be used as an anchor point for fixing, so that the whole panoramic image is prevented from being excessively deformed or changed in position when the images are aligned, and the quality of the target panoramic image is improved.
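For illustration only, the following is a minimal sketch of the anchoring order described above for equatorial stitching (in Python); the frame indices are purely illustrative.

def equatorial_plan(middle_row_frames):
    # Every other middle-row frame is projected directly and serves as a fixed reference
    # (e.g. frames 6, 8, 10 above); the frames between them are then aligned to those anchors.
    anchors = middle_row_frames[::2]
    to_align = middle_row_frames[1::2]
    return anchors, to_align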
The top splice and the bottom splice will be described below. Fig. 14 is a schematic flow chart of a top splice or a bottom splice according to an embodiment of the present application, and referring to fig. 14, this embodiment relates to an alternative implementation of the top splice or the bottom splice. On the basis of the above embodiment, the top stitching or the bottom stitching includes the following steps:
S1401, determining a second initial image frame located in a second preset area from the initial image frames, and projecting the second initial image frame to a first target area in the first preset panoramic image according to the initial mapping information.
It is unavoidable that, during the generation of the target panoramic image, the portions near the top and the bottom of the panoramic image are severely stretched or distorted. Fig. 15 is a schematic diagram of top frame distortion. As shown in fig. 15 (a) and 15 (b), if the initial image frame at the top is directly projected into the first preset panoramic image in the current manner, the top frame will be severely stretched, for example in the area outlined by the dashed line in fig. 15.
Therefore, in the process of performing the top stitching or the bottom stitching, the computer device projects the second initial image frame located in the second preset area to the first target area in the first preset panoramic image.
The second preset area is the top portion or the bottom portion of the target panoramic image; its coverage can be adjusted according to requirements, and it generally covers the initial image frames located in the topmost or bottommost row of the target panoramic image.
The first target area refers to an area in the panoramic image where stretching or distortion does not occur, for example, the first target area includes a middle portion in the target panoramic image.
For example, in the top stitching, it is assumed that the computer device acquires the initial image frames 1 to 20, and according to the preset path of the pan-tilt, it is known that the initial image frames 1 to 5 are located in the topmost row of the target panoramic image. The computer device then takes the initial image frames 1 to 5 as the second initial image frames and projects them to the first target area in the first preset panoramic image.
Similarly, in the bottom stitching, it is assumed that the computer device acquires the initial image frames 1 to 20, and according to the preset path of the pan-tilt, it is known that the initial image frames 11 to 20 are located in the bottommost row of the target panoramic image. The computer device then takes the initial image frames 11 to 20 as the second initial image frames and projects them to the first target area in the first preset panoramic image.
In this way, the initial image frames after projection largely eliminate the effect of distortion, facilitating subsequent image alignment.
S1402, determining a reference image and an image to be aligned for each image alignment according to a second initial image frame projected to the first target area, and performing alignment processing on the corresponding image to be aligned according to the reference image aligned for each image alignment to determine intermediate mapping information corresponding to the second initial image frame after each image alignment.
In this embodiment, further, the computer device may determine, according to the second initial image frame projected to the first target area, a reference image and an image to be aligned for each image alignment, and perform alignment processing on the corresponding image to be aligned according to the reference image aligned for each image alignment, so as to determine intermediate mapping information corresponding to the second initial image frame after each image alignment.
For example, when the first image is aligned by the top stitching, the computer device uses a portion of the initial image frame 1 projected to the first target area in the first preset panoramic image as a reference image, and uses a portion of the initial image frame 2 projected to the first target area in the first preset panoramic image as an image to be aligned, and then performs alignment processing on the corresponding image to be aligned according to the reference image aligned by the image to determine intermediate mapping information corresponding to the second initial image frame after the first image is aligned. That is, the computer device determines intermediate mapping information 1 corresponding to the initial image frame 1 and intermediate mapping information 2 corresponding to the initial image frame 2.
The image alignment process may refer to the above embodiments, and will not be described herein.
S1403, determining target mapping information of the second initial image frame according to the intermediate mapping information corresponding to the second initial image frame.
In this embodiment, after S1402, the computer device may determine the target mapping information of the second initial image frame according to the intermediate mapping information corresponding to the second initial image frame.
It will be appreciated that the intermediate mapping information obtained after image alignment is not the target mapping information required by the final computer device, since the second initial image frame located in the second preset area is projected onto the first target area in the first preset panoramic image.
The computer device further modifies the intermediate mapping information corresponding to the second initial image frames projected to the first target area, both for the reference images and for the images to be aligned, so as to obtain the target mapping information of the second initial image frames and ensure the accuracy of the target mapping information of each initial image frame.
For example, assuming that the computer device obtains the intermediate mapping information 1 corresponding to the initial image frame 1, the computer device further modifies the position recorded in the intermediate mapping information 1, that is, modifies part of its values, thereby obtaining the target mapping information 1, so that when the computer device finally generates the target panoramic image and projects the initial image frame 1 by using the target mapping information 1, the initial image frame 1 is still projected to the position corresponding to the second preset area rather than to the first target area.
Fig. 16 is a schematic view of the effect of the top stitching. In conjunction with fig. 15, as shown in fig. 16 (a) and 16 (b), the projected top picture largely eliminates the effect of distortion and facilitates the subsequent image alignment by the computer device.
In some embodiments, the computer device performs the equatorial splice first, and then performs the top splice and the bottom splice based on the equatorial splice. Further, image alignment is performed simultaneously during the equatorial stitching, top stitching, and bottom stitching.
When the top stitching or the bottom stitching is performed, first, a second initial image frame located in a second preset area is determined from all initial image frames, the second initial image frame is projected to a first target area in a first preset panoramic image according to initial mapping information, then, a reference image and an image to be aligned for each image alignment are determined according to the second initial image frame projected to the first target area, and corresponding images to be aligned are aligned according to the reference image aligned for each image alignment, so that intermediate mapping information corresponding to the second initial image frame after each image alignment is determined, and target mapping information of the second initial image frame is determined according to the intermediate mapping information corresponding to the second initial image frame.
The two-sided splice will be described below. Fig. 17 is a schematic flow chart of two-sided splicing in an embodiment of the present application, and referring to fig. 17, this embodiment relates to an alternative implementation manner of two-sided splicing. On the basis of the above embodiment, the above two-side splicing includes the following steps:
S1701, connecting third initial image frames of two side areas of the target panoramic image, and projecting the third initial image frames to a second target area in the first preset panoramic image according to the initial mapping information.
In this embodiment, in the 2:1 panoramic expansion, the left and right sides are connected, that is, the target panoramic image requires continuous pictures at both ends. Therefore, when the two-side stitching is performed, the computer device may connect the third initial image frames of the two-side areas of the target panoramic image, and project the third initial image frames to the second target area in the first preset panoramic image according to the initial mapping information. The size of the two side areas can be set according to actual requirements. The second target area may be any position in the first preset panorama, which is not limited in this embodiment. It will be appreciated that, in order to enhance the stitching effect, the second target area may be provided in a middle portion of the first preset panorama.
For example, referring to fig. 11, when the leftmost initial image frame 1 and the rightmost initial image frame 2 need to be processed, the computer device connects the initial image frame 1 and the initial image frame 2 together as a third initial image frame, and projects the third initial image frame to the second target area in the first preset panoramic image using the initial mapping information. That is, the computer device uses the initial mapping information to connect and project the initial image frame 1 and the initial image frame 2 to the middle area in the panoramic expansion chart of 2:1, so that the two ends of the target panoramic image can be connected to form a continuous picture, thereby ensuring that the target panoramic image still has left and right continuity after the subsequent images are aligned, and ensuring that the left and right ends of the searched splicing seam are connected.
S1702, determining a reference image and an image to be aligned for each image alignment according to a third initial image frame projected to a second target area, and performing alignment processing on the corresponding image to be aligned according to the reference image aligned for each image alignment to determine intermediate mapping information corresponding to the third initial image frame after each image alignment.
In this embodiment, after S1701 described above, the computer device may determine the reference image and the image to be aligned for each image alignment according to the third initial image frame projected to the second target area, and perform alignment processing on the corresponding image to be aligned according to the reference image aligned for each image alignment, so as to determine the intermediate mapping information corresponding to the third initial image frame after each image alignment.
The process of S1702 is the same as the principle of S1402, and the process of image alignment may refer to the above embodiments, and will not be described here again.
S1703, determining target mapping information of the third initial image frame according to the intermediate mapping information corresponding to the third initial image frame.
In this embodiment, following the same principle as the top stitching and the bottom stitching, the computer device further updates the intermediate mapping information corresponding to the third initial image frames projected to the second target area, both for the reference images and for the images to be aligned, so as to obtain the target mapping information of the third initial image frames and ensure the accuracy of the target mapping information of each initial image frame. That is, after the images are aligned, the computer device still needs to swap the left and right portions of the mapping information and the mask back to the positions that the corresponding initial image frames occupy in the panoramic image.
For example, assuming that the third initial image frame includes the initial image frame 5, after obtaining the intermediate mapping information 5 corresponding to the initial image frame 5, the computer device modifies the position recorded in the intermediate mapping information 5, that is, modifies part of its values, thereby obtaining the target mapping information 5, so that when the initial image frame 5 is projected by using the target mapping information 5 when the target panoramic image is finally generated, it can be ensured that the initial image frame 5 is still projected to its position at the side of the panoramic image.
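For illustration only, the following is a minimal sketch of swapping the mapping back after two-side stitching (in Python), assuming the side-area frames were temporarily placed half a panorama width away from their true positions during alignment; the actual offset depends on where the second target area was placed and is an assumption for this example.

import numpy as np

def swap_back_horizontal(intermediate_map_xy, pano_width):
    # intermediate_map_xy: H x W x 2 map of panorama (x, y) coordinates valid for the
    # temporary position inside the second target area.
    target_map = intermediate_map_xy.astype(np.float32).copy()
    # Shift the x-coordinates back by half the panorama width, wrapping around the 360-degree seam.
    target_map[..., 0] = np.mod(target_map[..., 0] + pano_width / 2.0, pano_width)
    return target_map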
When the two-side stitching is performed, the method includes the steps of firstly connecting third initial image frames of two side areas of a target panoramic image, projecting the third initial image frames to a second target area in a first preset panoramic image according to initial mapping information, then determining reference images aligned each time and images to be aligned according to the third initial image frames projected to the second target area, performing alignment processing on the corresponding images to be aligned according to the reference images aligned each time, determining intermediate mapping information corresponding to the third initial image frames after each time of image alignment, and finally determining target mapping information of the third initial image frames according to the intermediate mapping information corresponding to the third initial image frames. Therefore, images at two ends of the target panoramic image can be continuous, and the quality of the target panoramic image is improved.
In an embodiment, optionally, the panoramic image generation method may further include the following steps:
transmitting a rotation instruction to the cradle head; the rotation instruction instructs the cradle head to rotate according to the preset path, so that the shooting equipment shoots an initial image frame in the process that the cradle head rotates according to the preset path.
In this embodiment, the computer device may send a rotation instruction to the pan-tilt, so that the pan-tilt may rotate according to a preset path after receiving the rotation instruction, so that the photographing device photographs an initial image frame in a process that the pan-tilt rotates according to the preset path.
The rotation instruction may include a preset path. The rotation instruction may be an instruction that the computer device sends to the cradle head at regular time, or an instruction that the computer device sends to the cradle head after receiving the operation of the user.
In this embodiment, the computer device may send a rotation instruction to the pan-tilt, so that the pan-tilt may rotate according to a preset path after receiving the rotation instruction, so that the photographing device may photograph an initial image frame during rotation of the pan-tilt according to the preset path. Therefore, the target panoramic image can be automatically generated, and the operation process of a user is simplified.
In one embodiment, optionally, the preset path can enable the angle of view photographed by the photographing apparatus to cover 360 ° in the first direction and 180 ° in the second direction; the first direction is perpendicular to the second direction.
In this embodiment, the first direction being perpendicular to the second direction means that the difference between the included angle of the first direction and the second direction and 90° is smaller than a preset angle threshold, where the preset angle threshold is a number greater than or equal to 0°. Illustratively, the preset path enables the angle of view photographed by the photographing apparatus to cover 360° in the horizontal direction and 180° in the vertical direction.
In one embodiment, optionally, the preset path includes a plurality of parallel sub-paths, and a difference of target angle parameters between two adjacent shooting points in the sub-paths is smaller than a preset difference threshold, where the target angle parameters include at least one of yaw angle, pitch angle, and roll angle.
In this embodiment, referring to fig. 3, the preset path includes 3 parallel sub-paths, where the sub-paths may be parallel in the transverse direction or in the longitudinal direction. For each sub-path, the difference in pitch angle between every two adjacent shooting points is smaller than a preset difference threshold, and the difference in yaw angle between every two adjacent shooting points is smaller than a preset difference threshold. The preset difference threshold is set according to requirements and may be 0.
For example, the preset path includes 3 transversely parallel sub-paths 1 to 3, that is, the photographing device needs to photograph three rows of images, the sub-path 1 photographs 5 initial image frames at equal intervals or approximately equal intervals, and the sub-path 2 photographs 8 initial image frames at equal intervals or approximately equal intervals; the sub-path 3 captures 5 initial image frames at equal or approximately equal intervals.
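For illustration only, the following is a minimal sketch of one possible preset path matching the example above (in Python): three transversely parallel sub-paths with 5, 8 and 5 shooting points at approximately equal yaw intervals. The concrete pitch values are assumptions for this example and are not prescribed by the present application.

def build_preset_path():
    rows = [(45.0, 5), (0.0, 8), (-45.0, 5)]  # (pitch in degrees, number of frames) per sub-path
    path = []
    for pitch, count in rows:
        step = 360.0 / count
        # One (yaw, pitch) shooting point per frame; the middle row has an even frame count.
        path.append([(round(i * step, 1), pitch) for i in range(count)])
    return path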
In one embodiment, in order to facilitate subsequent stitching, optionally, a number of frames of the initial image frames corresponding to the target sub-path in the plurality of sub-paths is an even number, and at least one group of image frames having the same yaw angle exists in the initial image frames corresponding to the plurality of sub-paths; the target sub-path is a sub-path located in the middle row among the plurality of sub-paths. That is, the photographing apparatus needs to acquire an even number of initial image frames in the sub-paths located in the middle row, i.e., the equatorial row, and one initial image frame needs to be longitudinally aligned in each sub-path.
Optionally, in one embodiment, the photographing device is a non-panoramic photographing device.
In this embodiment, the photographing apparatus is a non-panoramic photographing apparatus. A dedicated panoramic photographing apparatus refers to a photographing apparatus in which a field angle horizontally covers 360 ° and vertically covers 180 °, for example, a dedicated panoramic photographing apparatus is a photographing apparatus including two fisheye cameras.
Whereas a non-panoramic photographing apparatus refers to a photographing apparatus in which the angle of view of a lens cannot cover 360 ° in a first direction and 180 ° in a second direction. For example, the photographing apparatus includes at least one of a standard lens having an angle of view of 45 °, a wide-angle lens having an angle of view of 60 to 80 °, an ultra-wide-angle lens having an angle of view of 80 to 120 °, and a fisheye lens having an angle of view of 180 to 220 °.
In some embodiments, the photographing device may also be a mobile phone, and the user uses the rear camera of the mobile phone to photograph the initial image frames.
In some application scenes at present, a user can only obtain a panoramic image by using a special panoramic shooting device, so that the application scenes of the current panoramic image generation method are limited. In this embodiment, the photographing device is a non-panoramic photographing device, so the application provides a panoramic image generation method applicable to the non-panoramic photographing device, and expands the application range and flexibility of the panoramic image generation method.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the application also provides a handheld cradle head. This handheld cloud platform includes: motor, camera and treater. The motor is used for controlling the cradle head to rotate so as to drive the camera to rotate, and the processor is used for executing the panoramic image generation method of any one of the above.
In one embodiment, optionally, in the handheld cradle head, an original upper limit of a depression angle of the cradle head is set as a new upper limit of an elevation angle of the cradle head, and the original upper limit of the elevation angle of the cradle head is set as the new upper limit of the depression angle of the cradle head.
Taking a mobile phone as an example of the photographing device: since the rear lens of a mobile phone generally has a wider field of view (FOV) and is more suitable for photographing scenes, a user usually uses the rear camera of the mobile phone to photograph the initial image frames to obtain the target panoramic image. In order to photograph the bottom as much as possible, the cradle head and the mobile phone need to reach a very low depression angle, which causes the cradle head body at the bottom to enter the picture of the rear camera of the mobile phone and affects the picture of the initial image frames at the low depression angle.
Therefore, in the embodiment of the application, the mobile phone is clamped on the cradle head upside down so as to exchange the elevation and depression limits of the cradle head. That is, the original upper limit of the depression angle of the cradle head is set as the new upper limit of the elevation angle of the cradle head, and the original upper limit of the elevation angle of the cradle head is set as the new upper limit of the depression angle of the cradle head. Assuming that the original upper limit of the depression angle of the cradle head is 60° and the original upper limit of the elevation angle is 70°, after the mobile phone is reversely clamped on the cradle head, the upper limit of the depression angle of the cradle head is updated to 70° and the upper limit of the elevation angle is updated to 60°.
In this way, even when the depression angle of the rear lens of the mobile phone is very low, the cradle head body is not displayed in the picture, which achieves the effect of hiding the cradle head in pictures close to the bottom and improves the quality of the generated target panoramic image. It should be noted that the above clamping method is needed because of the structural design of this cradle head; if another cradle head structure does not restrict the pitch angle in this way, the reverse clamping method need not be used.
In one embodiment, the embodiment of the application also provides a panoramic image generation system, which comprises a cradle head and a terminal. The cradle head is used for rotating according to a preset path; the terminal comprises a shooting device and a processor, wherein the shooting device is used for shooting a plurality of initial image frames of a plurality of shooting angles in the process of rotating the cradle head according to a preset path, and sending the initial image frames to the processor so that the processor can execute the panoramic image generation method of any one of the above. The specific limitation in the panoramic image generation system described above may be referred to as limitation of the panoramic image generation method described above, and will not be described herein.
Based on the same inventive concept, the embodiment of the application also provides a panoramic image generation device for realizing the panoramic image generation method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiments of the panoramic image generation apparatus provided below may be referred to the limitation of the panoramic image generation method hereinabove, and will not be described herein.
Fig. 18 is a block diagram of a panoramic image generation device according to an embodiment of the present application, and as shown in fig. 18, there is provided a panoramic image generation device 1800 according to an embodiment of the present application, including: a first determination module 1801, a second determination module 1802, and a first generation module 1803, wherein:
a first determining module 1801, configured to determine, according to each initial image frame and initial mapping information, a reference image and an image to be aligned for each image alignment.
A second determining module 1802, configured to perform alignment processing on the corresponding images to be aligned according to the reference images aligned in each image alignment, and determine target mapping information corresponding to each initial image frame after each image alignment; the target mapping information is used to indicate the area where each initial image frame is projected to the target panoramic image after each image alignment.
The first generation module 1803 is configured to generate a target panoramic image according to each initial image frame and corresponding target mapping information.
According to the panoramic image generation device provided by the embodiment, the reference image and the image to be aligned for each image alignment are determined according to each initial image frame and the initial mapping information, the corresponding image to be aligned is aligned according to the reference image aligned for each image, and the target mapping information corresponding to each initial image frame after each image alignment is determined, so that the target panoramic image is generated according to each initial image frame and the corresponding target mapping information. In the current panoramic image generation process, firstly, the internal and external parameters of a camera are calculated according to a plurality of images, and then the internal and external parameters of the camera are utilized to directly project a plurality of photographed images, so that a final panoramic image is obtained. However, in the process, the situation that the internal parameters and the external parameters of the camera are wrong or the precision is insufficient may occur, so that the currently obtained panoramic image still has a dislocation phenomenon. In the application, as the target mapping information can indicate the area of each initial image frame projected to the target panoramic image after each image alignment, each time the images are aligned, the reference image and the image to be aligned which are aligned by the current image with the overlapped area are determined, and the alignment processing is carried out on the image to be aligned which is aligned by the current image, the reference image and the image to be aligned can be aligned after the alignment processing. Further, the finally generated target panoramic image can reduce the splicing dislocation phenomenon.
Optionally, each initial image frame is an image frame of a plurality of shooting angles shot by shooting equipment arranged on the cradle head in the rotation process of the cradle head.
Optionally, the first determining module 1801 includes:
and the first determining unit is used for projecting at least two image frames in the initial image frames into a first preset panoramic image by utilizing the initial mapping information so as to determine the reference image aligned with the current image and the image to be aligned.
And the second determining unit is used for taking the aligned image to be aligned and the reference image aligned by the previous image as the reference image aligned by the next image after the corresponding image to be aligned is aligned according to the reference image aligned by the current image, and determining the aligned image to be aligned by the next image according to each initial image frame and the initial mapping information.
Optionally, the alignment process includes at least one of translation, rotation, and stretching.
Optionally, two adjacent initial image frames in the plurality of initial image frames have overlapping areas; the second determination module 1802 includes:
and a third determining unit for determining a control point corresponding to an overlapping region between the reference image aligned with each image and the corresponding image to be aligned.
And a fourth determining unit, for performing alignment processing on the corresponding images to be aligned according to the control points corresponding to each image alignment, and determining the target mapping information corresponding to each initial image frame after each image alignment.
Optionally, the third determining unit includes:
and the first determination subunit is used for determining the light value of each pixel point in the overlapping area of the reference image and the image to be aligned.
And the second determination subunit is used for determining control points corresponding to the overlapping areas of the reference image and the image to be aligned according to the light values of the pixel points.
Optionally, the first generating module 1803 includes:
a fifth determining unit, configured to determine mask information corresponding to each initial image frame according to the overlapping area of each initial image frame; the mask information is used to indicate the boundary of each initial image frame after it is projected to the target panoramic image.
And the generating unit is used for projecting each initial image frame into the second preset panoramic image according to the target mapping information and the mask information so as to generate a target panoramic image.
Optionally, the generating unit includes:
and the first generation subunit is used for projecting each initial image frame into the second preset panoramic image according to the target mapping information and the mask information to generate a first panoramic image.
And the second generation subunit is used for generating a target panoramic image according to the first panoramic image.
Optionally, the panoramic image generation apparatus 1800 further includes:
the third determining module is used for determining a reference image and an image to be aligned for each image alignment according to each initial image frame and initial mapping information by utilizing an image stitching mode, performing alignment processing on the corresponding image to be aligned according to the reference image aligned for each image, and determining target mapping information corresponding to each initial image frame after each image alignment; the image stitching mode comprises at least one of frame-by-frame stitching, equatorial stitching, top stitching, bottom stitching and two-side stitching.
Optionally, the equatorial stitching comprises:
determining a first initial image frame which is positioned at a first preset position and does not have an overlapping area from all initial image frames, and determining a reference image aligned for the first time according to the first initial image frame and initial mapping information;
determining an image to be aligned for the first image alignment according to the initial mapping information and other initial image frames positioned at a first preset position; the other initial image frames are image frames other than the first initial image frame among the initial image frames.
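The loop below is a hedged sketch of this equatorial pass: the first-position frame seeds the reference, and the remaining frames at the first preset position are aligned in yaw order against the growing reference; `project`, `align_pair`, and `merge` stand for hypothetical routines (for example, the sketches given earlier), so this is an outline of the control flow rather than the claimed procedure.

```python
def stitch_equator(equator_frames, initial_maps, project, align_pair, merge):
    """Align the equatorial frames one by one and collect their mappings.

    equator_frames: dicts with "name" and "yaw" keys (illustrative layout).
    project(frame, maps): projects a frame with its initial mapping.
    align_pair(reference, image): returns (aligned_image, updated_mapping).
    merge(reference, aligned): adds an aligned frame to the reference image.
    """
    frames = sorted(equator_frames, key=lambda f: f["yaw"])
    reference = project(frames[0], initial_maps)        # first reference image
    target_maps = {frames[0]["name"]: initial_maps[frames[0]["name"]]}
    for frame in frames[1:]:
        to_align = project(frame, initial_maps)         # image to be aligned
        aligned, mapping = align_pair(reference, to_align)
        target_maps[frame["name"]] = mapping
        reference = merge(reference, aligned)           # aligned frame joins the reference
    return target_maps
```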
Optionally, the top stitching or the bottom stitching comprises:
determining a second initial image frame positioned in a second preset area from the initial image frames, and projecting the second initial image frame to a first target area in a first preset panoramic image according to initial mapping information;
according to the second initial image frame projected to the first target area, determining a reference image and an image to be aligned for each image alignment, and according to the reference image aligned for each image alignment, performing alignment processing on the corresponding image to be aligned so as to determine intermediate mapping information corresponding to the second initial image frame after each image alignment;
and determining target mapping information of the second initial image frame according to the intermediate mapping information corresponding to the second initial image frame.
Optionally, the two-side stitching includes:
connecting the third initial image frames of the two side areas of the target panoramic image, and projecting the third initial image frames to the second target area in the first preset panoramic image according to the initial mapping information;
according to the third initial image frame projected to the second target area, determining a reference image and an image to be aligned for each image alignment, and according to the reference image aligned for each image alignment, performing alignment processing on the corresponding image to be aligned so as to determine intermediate mapping information corresponding to the third initial image frame after each image alignment;
And determining target mapping information of the third initial image frame according to the intermediate mapping information corresponding to the third initial image frame.
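Assuming, as in the earlier sketches, that mapping information is stored as remap grids, composing the intermediate mapping obtained inside the target region with the initial mapping to get the target mapping of such a frame could look like the following; this is an illustrative composition under that storage assumption, not a prescribed data layout.

```python
import cv2

def compose_maps(initial_map_x, initial_map_y, inter_map_x, inter_map_y):
    """For each panorama pixel, look up the target-region coordinate given by
    the intermediate mapping, then the source-frame coordinate given by the
    initial mapping at that position, yielding the target mapping."""
    target_map_x = cv2.remap(initial_map_x, inter_map_x, inter_map_y, cv2.INTER_LINEAR)
    target_map_y = cv2.remap(initial_map_y, inter_map_x, inter_map_y, cv2.INTER_LINEAR)
    return target_map_x, target_map_y
```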
Optionally, the panoramic image generation apparatus 1800 further includes:
The sending module is used for sending a rotation instruction to the cradle head; the rotation instruction instructs the cradle head to rotate along the preset path, so that the shooting device captures the initial image frames while the cradle head rotates along the preset path.
Optionally, the preset path can enable the angle of view shot by the shooting device to cover 360 ° in the first direction and 180 ° in the second direction; the first direction is perpendicular to the second direction.
Optionally, the preset path includes a plurality of parallel sub-paths, and a difference of target angle parameters between two adjacent shooting points in the sub-paths is smaller than a preset difference threshold, and the target angle parameters include a yaw angle or a pitch angle.
Optionally, the number of frames of the initial image frames corresponding to the target sub-path in the plurality of sub-paths is an even number, and at least one group of image frames with the same yaw angle exists in the initial image frames corresponding to the plurality of sub-paths; the target sub-path is a sub-path located in the middle row among the plurality of sub-paths.
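A hedged sketch of one preset path that satisfies these constraints is shown below: parallel pitch rows act as sub-paths, the yaw difference between adjacent shooting points stays within a threshold, and the middle row keeps an even number of shooting points; all concrete angles and counts are illustrative assumptions, not values taken from the application.

```python
def build_preset_path(pitch_rows=(-60, -30, 0, 30, 60), max_yaw_step=45):
    """Return a list of sub-paths, each a list of (yaw, pitch) shooting points."""
    path = []
    for row, pitch in enumerate(pitch_rows):
        shots = 360 // max_yaw_step            # keeps the yaw step at or below the threshold
        if row == len(pitch_rows) // 2 and shots % 2:
            shots += 1                         # middle row keeps an even shot count
        # All rows share the same yaw set, so frames with equal yaw exist across rows.
        yaws = [i * 360 / shots for i in range(shots)]
        path.append([(yaw, pitch) for yaw in yaws])
    return path
```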
Optionally, the photographing device is a non-panoramic photographing device.
The respective modules in the above panoramic image generation apparatus may be implemented in whole or in part by software, by hardware, or by a combination thereof. Each of the above modules may be embedded in the form of hardware in, or independent of, a processor in the computer device, or may be stored in the form of software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
Fig. 19 is an internal structure diagram of a computer device in an embodiment of the present application. In an embodiment, the computer device may be a server, and its internal structure may be as shown in Fig. 19. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing relevant data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a panoramic image generation method.
It will be appreciated by those skilled in the art that the structure shown in Fig. 19 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the flows in the above method embodiments may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the computer program may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. The volatile memory may include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered as falling within the scope of this specification.
The foregoing embodiments represent only a few implementations of the application and are described in detail, but they are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (23)

1. A panoramic image generation method, the method comprising:
determining a reference image and an image to be aligned for each image alignment according to each initial image frame and initial mapping information; an overlapping area is formed between the reference image and the image to be aligned of each image alignment;
according to the reference image aligned each time, aligning the corresponding images to be aligned, and determining target mapping information corresponding to each initial image frame after each time of image alignment; the target mapping information is used for indicating the area where each initial image frame is projected to the target panoramic image after each image alignment;
And generating the target panoramic image according to each initial image frame and the corresponding target mapping information.
2. The method of claim 1, wherein each of the initial image frames is an image frame of a plurality of photographing angles photographed by a photographing device provided on a cradle head during rotation of the cradle head.
3. The method of claim 2, wherein determining the reference image and the image to be aligned for each image alignment based on the initial image frames and the initial mapping information comprises:
projecting at least two image frames in each initial image frame into a first preset panoramic image by utilizing the initial mapping information so as to determine a reference image aligned with the current image and an image to be aligned;
and after the corresponding images to be aligned are aligned according to the reference image aligned with the current image, taking the aligned images to be aligned and the reference image aligned with the previous image as the reference image aligned with the next image, and determining the images to be aligned with the next image according to the initial image frames and the initial mapping information.
4. A method according to claim 3, wherein the alignment process comprises at least one of translation, rotation, stretching.
5. The method of claim 4, wherein two adjacent initial image frames of the plurality of initial image frames have overlapping areas; the performing alignment processing on the corresponding image to be aligned according to the reference image of each image alignment, and determining the target mapping information corresponding to each initial image frame after each image alignment, comprises:
determining control points corresponding to overlapping areas between the reference image aligned with each image and the corresponding image to be aligned;
and according to the control points corresponding to each image alignment, carrying out alignment processing on the corresponding images to be aligned, and determining target mapping information corresponding to each initial image frame after each image alignment.
6. The method of claim 5, wherein determining control points corresponding to overlapping areas between the reference image and the corresponding image to be aligned for each image alignment comprises:
determining the optical flow value of each pixel point in the overlapping area of the reference image and the image to be aligned;
and determining control points corresponding to the overlapping areas of the reference image and the image to be aligned according to the optical flow values of the pixel points.
7. The method of claim 6, wherein generating the target panoramic image from each of the initial image frames and corresponding target mapping information comprises:
Determining mask information corresponding to each initial image frame according to the overlapping area of each initial image frame; the mask information is used for indicating the boundary of the area to which each initial image frame is projected in the target panoramic image;
and projecting each initial image frame into a second preset panoramic image according to the target mapping information and the mask information so as to generate the target panoramic image.
8. The method of claim 7, wherein projecting each of the initial image frames into a second preset panoramic image based on the target mapping information and the mask information to generate the target panoramic image comprises:
projecting each initial image frame into a second preset panoramic image according to the target mapping information and the mask information to generate a first panoramic image;
and generating the target panoramic image according to the first panoramic image.
9. The method according to any one of claims 1-8, further comprising:
determining a reference image and an image to be aligned for each image alignment according to each initial image frame and the initial mapping information by using an image stitching mode, performing alignment processing on the corresponding image to be aligned according to the reference image aligned for each image, and determining target mapping information corresponding to each initial image frame after each image alignment;
The image stitching mode comprises at least one of frame-by-frame stitching, equatorial stitching, top stitching, bottom stitching and two-side stitching.
10. The method of claim 9, wherein the equatorial splice comprises:
determining a first initial image frame which is positioned at a first preset position and does not have an overlapping area from the initial image frames, and determining a reference image aligned with the first image according to the first initial image frame and the initial mapping information;
determining an image to be aligned for the first image alignment according to the initial mapping information and other initial image frames positioned at the first preset position; the other initial image frames are image frames other than the first initial image frame in the initial image frames.
11. The method of claim 9, wherein the top splice or bottom splice comprises:
determining a second initial image frame positioned in a second preset area from the initial image frames, and projecting the second initial image frame to a first target area in the first preset panoramic image according to the initial mapping information;
determining the reference image and the image to be aligned for each image alignment according to the second initial image frame projected to the first target area, and performing alignment processing on the corresponding image to be aligned according to the reference image aligned for each image alignment so as to determine intermediate mapping information corresponding to the second initial image frame after each image alignment;
And determining target mapping information of the second initial image frame according to the intermediate mapping information corresponding to the second initial image frame.
12. The method of claim 9, wherein the two-sided splice comprises:
connecting third initial image frames of two side areas of the target panoramic image, and projecting the third initial image frames to a second target area in the first preset panoramic image according to the initial mapping information;
determining the reference image and the image to be aligned for each image alignment according to a third initial image frame projected to the second target area, and performing alignment processing on the corresponding image to be aligned according to the reference image aligned for each image alignment so as to determine intermediate mapping information corresponding to the third initial image frame after each image alignment;
and determining target mapping information of the third initial image frame according to the intermediate mapping information corresponding to the third initial image frame.
13. The method according to claim 2, wherein the method further comprises:
transmitting a rotation instruction to the cradle head; the rotation instruction indicates the cradle head to rotate according to a preset path, so that the shooting equipment shoots the initial image frame in the process that the cradle head rotates according to the preset path.
14. The method according to claim 13, wherein the preset path enables a view angle photographed by the photographing apparatus to cover 360 ° in a first direction and 180 ° in a second direction; the first direction is perpendicular to the second direction.
15. The method of claim 14, wherein the preset path comprises a plurality of parallel sub-paths, a difference in a target angle parameter between two adjacent shooting points in the sub-paths is less than a preset difference threshold, and the target angle parameter comprises at least one of a yaw angle, a pitch angle, and a roll angle.
16. The method of claim 15, wherein the number of initial image frames corresponding to a target sub-path among the plurality of sub-paths is an even number, and at least one group of image frames having the same yaw angle exists among the initial image frames corresponding to the plurality of sub-paths; the target sub-path is a sub-path positioned in the middle row in the plurality of sub-paths.
17. The method of claim 2, wherein the photographing device is a non-panoramic photographing device.
18. A handheld cradle head, comprising: the device comprises a motor, a camera and a processor;
The motor is used for controlling the rotation of the cradle head so as to drive the rotation of the camera, and the processor is used for executing the method of any one of claims 1-17.
19. A panoramic image generation system, characterized by comprising a cradle head and a terminal;
the cradle head is used for rotating according to a preset path;
the terminal comprises a shooting device and a processor, wherein the shooting device is used for shooting a plurality of initial image frames of a plurality of shooting angles in the process that the cradle head rotates according to a preset path, and sending the plurality of initial image frames to the processor so that the processor can execute the method of any one of claims 1-17.
20. A panoramic image generation apparatus, the apparatus comprising:
the first determining module is used for determining a reference image and an image to be aligned for each image alignment according to each initial image frame and initial mapping information;
the second determining module is used for carrying out alignment processing on the corresponding images to be aligned according to the reference images aligned each time, and determining target mapping information corresponding to each initial image frame after each time of image alignment; the target mapping information is used for indicating the area where each initial image frame is projected to the target panoramic image after each image alignment;
And the first generation module is used for generating the target panoramic image according to each initial image frame and the corresponding target mapping information.
21. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 17 when the computer program is executed.
22. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 17.
23. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 17.
CN202310327816.4A 2023-03-24 2023-03-24 Panoramic image generation method, device, computer equipment and storage medium Pending CN116866722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310327816.4A CN116866722A (en) 2023-03-24 2023-03-24 Panoramic image generation method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310327816.4A CN116866722A (en) 2023-03-24 2023-03-24 Panoramic image generation method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116866722A true CN116866722A (en) 2023-10-10

Family

ID=88230982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310327816.4A Pending CN116866722A (en) 2023-03-24 2023-03-24 Panoramic image generation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116866722A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination