CN112308778A - Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information


Info

Publication number
CN112308778A
CN112308778A
Authority
CN
China
Prior art keywords
original image
dimensional
space
point
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011112436.1A
Other languages
Chinese (zh)
Other versions
CN112308778B (en)
Inventor
史文中
王牧阳
范文铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute HKPU
Original Assignee
Shenzhen Research Institute HKPU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute HKPU filed Critical Shenzhen Research Institute HKPU
Priority to CN202011112436.1A
Publication of CN112308778A
Application granted
Publication of CN112308778B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The invention discloses a method and a terminal for assisting panoramic camera stitching by using spatial three-dimensional information. The method comprises the following steps: obtaining a correspondence between spatial three-dimensional points and original image pixels according to the original image and the spatial three-dimensional information of the scene corresponding to the original image; obtaining spherical projection points on a projection sphere according to that correspondence, and constructing a target model for acquiring the color information of the original image pixels; and obtaining a panoramic image according to the target model. The problem of cliff-type errors in panoramic image stitching in the prior art is thereby effectively solved.

Description

Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information
Technical Field
The invention relates to the field of panoramic images, in particular to a method and a terminal for assisting splicing of a panoramic camera by utilizing spatial three-dimensional information.
Background
A panoramic camera is a camera system composed of multiple sub-cameras or sub-lenses that captures horizontal 360-degree image information; a panoramic image is obtained by stitching the original images acquired by the sub-cameras into a whole. However, the center of each sub-camera does not coincide with the center of the panoramic camera as a whole, so projection points and image pixels cannot correspond exactly. Because the deviation between each sub-camera center and the panoramic center is usually small, the projection area of each sub-camera shows no obvious visual error; at the stitching seam, however, the centers of the two adjacent sub-cameras deviate from each other, producing a cliff-type error in the stitched image.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and a terminal for assisting a panoramic camera in stitching by using spatial three-dimensional information, aiming at solving the problem of cliff-type errors in panoramic image stitching in the prior art.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a method for assisting stitching of a panoramic camera by using spatial three-dimensional information, where the method includes:
obtaining a corresponding relation between a space three-dimensional point and an original image pixel according to the original image and space three-dimensional information of a scene corresponding to the original image;
obtaining spherical projection points on a projection spherical surface according to the corresponding relation between the space three-dimensional points and the original image pixels, and constructing a target model for acquiring color information of the original image pixels;
and obtaining a panoramic image according to the target model.
In an implementation method, the obtaining of the correspondence between the spatial three-dimensional points and the original image pixels according to the original image and the spatial three-dimensional information of the scene corresponding to the original image includes:
acquiring an original image and space three-dimensional information of a scene corresponding to the original image, and obtaining a rotation matrix and a translation vector according to the space three-dimensional information;
and establishing a corresponding relation between the space three-dimensional point and the original image pixel according to the rotation matrix and the translation vector.
In one implementation, the establishing the correspondence between the spatial three-dimensional point and the original image pixel according to the rotation matrix and the translation vector includes:
obtaining a first conversion formula for representing the conversion relationship between coordinates in the spatial coordinate system and coordinates in the camera coordinate system according to the rotation matrix and the translation vector;
acquiring internal parameters of the sub-camera, and obtaining a second conversion formula for representing the conversion relationship between coordinates in the camera coordinate system and coordinates in the image plane coordinate system according to the internal parameters;
and establishing a corresponding relation between the space three-dimensional point and the original image pixel according to the first conversion formula and the second conversion formula.
In an implementation method, the obtaining spherical projection points on a projection spherical surface and constructing a target model according to the correspondence between the spatial three-dimensional points and the pixels of the original image includes:
acquiring a space coordinate of the space three-dimensional point;
obtaining a standard projection point according to the space coordinate of the space three-dimensional point and the corresponding relation between the space three-dimensional point and the original image pixel;
obtaining a spherical projection point according to the standard projection point;
and constructing a target model according to the spherical projection points.
In an implementation method, the obtaining a standard projection point according to a spatial coordinate of a spatial three-dimensional point and a corresponding relationship between the spatial three-dimensional point and an original image pixel includes:
obtaining an image plane coordinate point corresponding to the spatial three-dimensional point according to the spatial coordinates of the spatial three-dimensional point and the correspondence between the spatial three-dimensional point and the original image pixel;
obtaining the pixel depth of the original image pixel according to the image plane coordinate point;
and obtaining a standard projection point according to the center of the sub-camera and the pixel depth.
In one implementation, the obtaining the spherical projection point according to the standard projection point includes:
taking the center of the panoramic camera as a sphere center to obtain a projection spherical surface;
and projecting the standard projection point onto the projection spherical surface to obtain a spherical surface projection point.
In one implementation, the constructing the object model from the spherical projection points comprises:
acquiring longitude and latitude data of the spherical projection point and the distance from the standard projection point to the center of the sphere;
and constructing a target model according to the longitude and latitude data of the spherical projection point and the distance from the standard projection point to the sphere center.
In one embodiment, the obtaining a panoramic image according to the target model includes:
obtaining the corresponding relation between the original image pixel and the spherical projection point according to the corresponding relation between the space three-dimensional point and the original image pixel and the target model;
acquiring color information of an original image pixel, and converting the color information into color information of a spherical projection point corresponding to the original image pixel according to the corresponding relation between the original image pixel and the spherical projection point;
carrying out interpolation calculation on the color information of all spherical projection points according to the image resolution to obtain image color information;
and generating a panoramic image according to the image color information.
In a second aspect, an embodiment of the present invention further provides a terminal, where the terminal includes: a processor, a storage medium communicatively coupled to the processor, the storage medium adapted to store a plurality of instructions; the processor is adapted to call instructions in the storage medium to implement any of the above-mentioned steps of the method for assisting stitching of a panoramic camera by using spatial three-dimensional information.
In a third aspect, an embodiment of the present invention further provides a storage medium having a plurality of instructions stored thereon, where the instructions are adapted to be loaded and executed by a processor to implement any of the above steps of the method for assisting stitching of a panoramic camera by using spatial three-dimensional information.
The invention has the beneficial effects that: according to the embodiment of the invention, the correspondence between spatial three-dimensional points and original image pixels is obtained according to the original image and the spatial three-dimensional information of the scene corresponding to the original image; spherical projection points on a projection sphere are obtained according to that correspondence, and a target model for acquiring the color information of the original image pixels is constructed; and a panoramic image is obtained according to the target model. The problem of cliff-type errors in panoramic image stitching in the prior art is thereby effectively solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for assisting stitching of a panoramic camera by using spatial three-dimensional information according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart illustrating a process of establishing a correspondence between a spatial three-dimensional point and an original image pixel according to an embodiment of the present invention.
Fig. 3 is a detailed flowchart for establishing a correspondence relationship between a spatial three-dimensional point and an original image pixel according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of building a target model according to an embodiment of the present invention.
Fig. 5 is a schematic flowchart of acquiring a standard projection point according to an embodiment of the present invention.
Fig. 6 is a schematic flowchart of a process for obtaining spherical projection points according to an embodiment of the present invention.
FIG. 7 is a detailed flow chart for building an object model according to an embodiment of the present invention.
Fig. 8 is a schematic flowchart of generating a panoramic image according to an embodiment of the present invention.
Fig. 9 is a schematic diagram illustrating a reason why a cliff-type error occurs in panoramic image stitching according to an embodiment of the present invention.
Fig. 10 is a first panoramic image of a rooftop obtained by a conventional stitching method according to an embodiment of the present invention.
Fig. 11 is a second panoramic image of a rooftop obtained by a conventional stitching method according to an embodiment of the present invention.
Fig. 12 is a third panoramic image of a rooftop obtained by using a conventional stitching method according to an embodiment of the present invention.
Fig. 13 is a panoramic image of a rooftop obtained by a method for assisting stitching of a panoramic camera by using spatial three-dimensional information according to an embodiment of the present invention.
Fig. 14 is a functional block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the embodiments of the present invention, the directional indications are only used to explain the relative positional relationship, movement, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
The panoramic camera is a camera system which is composed of a plurality of sub-cameras or sub-lenses and can obtain horizontal 360-degree image information, and a panoramic image is obtained by splicing original images acquired by the plurality of sub-cameras into a whole.
The following two methods are commonly used for stitching: 1. Using the camera relationships: the three-dimensional transformation matrix of each photo is computed from the calibrated position of each sub-camera center and the internal parameters of the cameras, and stitching is performed accordingly. 2. Using image matching: homonymous points are matched in the overlapping areas between the camera images, and a three-dimensional transformation matrix is computed for stitching.
However, both panoramic camera stitching methods have defects that cause the panoramic image to show a stitching seam, i.e., the stitched area is dislocated and does not coincide. In the first, camera-relationship method, the center of each sub-camera does not coincide with the center of the whole panoramic camera, so stitching with the panoramic camera center as the origin produces projection errors; meanwhile, the accuracy of each photo's camera-center position obtained by calibration is limited by the calibration precision. In the second, image-matching method, the overlapping area of each camera is small, so enough uniformly distributed corresponding points cannot be obtained, and the overlapping area lies at the image edge where distortion is large; the transformation parameters obtained by image matching are therefore unreliable, again producing projection errors.
For example, as shown in fig. 9 (a top view), consider a panoramic camera composed of 5 sub-cameras evenly distributed over 360 degrees horizontally, where O is the center of the panoramic camera and O1 is the center of the first sub-camera. The 5 images are projected onto a sphere centered at O to generate the panoramic image. Taking one position on the panoramic image as an example, the ray L intersects projection spheres of different radii r1 and r2 at points P1 and P2, and P1 and P2 correspond to different rays from O1, i.e., to different pixels in the original image. Choosing different radii for the projection sphere therefore makes the same position of the panoramic image correspond to different image pixels. For the traditional spherical projection method it is thus important to select a proper radius, usually by estimating the average depth of the shooting environment: for example, 3 m to 5 m may be chosen when shooting in a classroom, and 10 m to 20 m when shooting street views from a mobile mapping vehicle.
However, in the existing panoramic image stitching technique, the pixels Q1 and Q2 on the O1 image that P1 and P2 correspond to are not in true correspondence, which causes color errors and stitching seams in the stitched area. The real spatial three-dimensional points P'1 and P'2 represented by the projected Q1 and Q2 lie on the rays from O1 through P1 and P2, at distances from O1 equal to the pixel depths of Q1 and Q2. The rays OP'1 and OP'2 in the panoramic image are the correct correspondences of the spherical model. A correct spherical projection is guaranteed only when O1 coincides with O; since the center of each sub-camera does not coincide with the panoramic center O, projection errors arise. Although the deviation between each sub-camera center and the panoramic center is usually small, so that each sub-camera's projection area shows no obvious visual error, at the stitching seam the centers of the two adjacent sub-cameras deviate from each other, and a cliff-type error appears in the stitching.
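To give a sense of the magnitude of this effect, the following sketch (illustrative only; the 5 cm offset and the scene depths are assumed values, not taken from this disclosure) estimates the angular error at the seam caused by the offset between a sub-camera center and the panoramic center for scenes at different depths.

```python
import numpy as np

# Angular stitching error caused by the offset between a sub-camera center O1
# and the panoramic center O, for a scene point at depth r from the camera.
# Uses the exact arctangent; for small offsets this is roughly offset/depth.
def parallax_error_deg(offset_m: float, depth_m: float) -> float:
    return np.degrees(np.arctan2(offset_m, depth_m))

offset = 0.05  # assumed 5 cm offset between O1 and O
for depth in (1.0, 3.0, 10.0):
    print(f"depth {depth:5.1f} m -> ~{parallax_error_deg(offset, depth):.2f} deg seam error")
```

At 1 m depth the error is close to 3 degrees, while at 10 m it drops below 0.3 degrees, which is why a single fixed projection radius can only be correct for one scene depth at a time.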
Based on the defects in the prior art, the invention establishes a target model that enables correct projection to obtain the panoramic image. The target model realizes the correct assignment of the color information of the pixels in the panoramic image, thereby solving the problems of projection errors and stitching seams.
As shown in fig. 1, the method comprises the steps of:
step S100, obtaining a corresponding relationship between a spatial three-dimensional point and an original image pixel according to the original image and spatial three-dimensional information of a scene corresponding to the original image.
The spatial three-dimensional information refers to spatial information described by a carrier with attributes such as shape and position, for example a three-dimensional point cloud or a three-dimensional model. In this embodiment, the three-dimensional spatial information of the scene photographed by the camera may be acquired by a LiDAR sensor rigidly connected to the camera. In actual operation, to correctly project an original image shot by the camera, a correspondence must be established between the real three-dimensional points and the pixels of the original image, so that the color information of each pixel of the panoramic image can be assigned correctly.
In one implementation, as shown in fig. 2, the step S100 specifically includes the following steps:
step S110, acquiring an original image and the spatial three-dimensional information of the scene corresponding to the original image;
step S120, obtaining a rotation matrix and a translation vector according to the spatial three-dimensional information;
step S130, establishing a corresponding relation between the space three-dimensional point and an original image pixel according to the rotation matrix and the translation vector.
First, the spatial three-dimensional information of the scene corresponding to the original image needs to be acquired. This embodiment takes the Ladybug5+ panoramic camera as an example; it comprises 6 sub-lenses (equivalent to the sub-cameras in this embodiment). The Ladybug5+ is rigidly connected to a LiDAR capable of line scanning. When taking panoramic imagery, the location of the LiDAR in space is known from LiDAR SLAM, and the relative position and orientation of the panoramic camera in the scene can be obtained from its relative relationship to the LiDAR. The spatial three-dimensional information may be derived from the three-dimensional point cloud data of the scene produced by the LiDAR SLAM.
Then the relative pose of the panoramic camera is acquired from the spatial three-dimensional information, i.e., the position T and orientation R of the panoramic camera in three-dimensional space, giving a rotation matrix and a translation vector. When the spatial three-dimensional information comes from a sensor rigidly connected to the panoramic camera, the internal parameters of each sub-camera and the position and rotation relationships between the cameras can be obtained by calibration; when the spatial three-dimensional information consists of discrete points, they are obtained by matching two-dimensional feature points with the corresponding three-dimensional feature points; when the spatial three-dimensional information consists of lines and surfaces, they are obtained by matching edge lines and corner points. After the rotation matrix and the translation vector are obtained, the spatial three-dimensional points are converted from the spatial coordinate system into the camera coordinate system according to them, thereby establishing the correspondence between spatial three-dimensional points and original image pixels.
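As a concrete illustration of this step, the sketch below converts a camera pose (position T and orientation R in the spatial coordinate system) into the rotation matrix and translation vector that map spatial coordinates into the camera coordinate system. The camera-to-world convention for R and T is an assumption for illustration; the disclosure itself only names the pose (T, R).

```python
import numpy as np

def pose_to_extrinsics(R_wc: np.ndarray, T_wc: np.ndarray):
    """Given a sub-camera pose (orientation R_wc, position T_wc, expressed
    camera-to-world), return the rotation matrix R_L and translation vector
    T_L that transform spatial coordinates into camera coordinates:
    P_cam = R_L @ P_world + T_L."""
    R_L = R_wc.T
    T_L = -R_wc.T @ T_wc
    return R_L, T_L

# Example (assumed values): a camera 1.5 m above the origin whose optical
# axis points along the world X axis.
R_wc = np.array([[0.0, 0.0, 1.0],
                 [-1.0, 0.0, 0.0],
                 [0.0, -1.0, 0.0]])
T_wc = np.array([0.0, 0.0, 1.5])
R_L, T_L = pose_to_extrinsics(R_wc, T_wc)
```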
In one implementation, as shown in fig. 3, the step S130 specifically includes the following steps:
s131, obtaining a first conversion expression for expressing the conversion relation between the coordinates in the space coordinate system and the coordinates in the camera coordinate system according to the rotation matrix and the translation vector;
step S132, obtaining internal parameters of the camera, and obtaining a second conversion formula for expressing the conversion relation between the coordinates under the camera coordinate system and the coordinates under the image plane coordinate system according to the internal parameters;
step S133, establishing a corresponding relationship between the spatial three-dimensional point and the original image pixel according to the first conversion formula and the second conversion formula.
In addition to the transformation between the spatial coordinate system and the camera coordinate system, this embodiment also involves the transformation between the camera coordinate system and the image plane coordinate system. The spatial coordinate system is the absolute coordinate system of the objective world; it represents the actual position of objects in space and is used to describe the positional relationship between objects and the camera in three-dimensional space. The camera coordinate system is established from the optical imaging principle of the camera lens: its origin is the optical center of the camera, and its Z axis coincides with the optical axis and is perpendicular to the imaging plane. The image plane coordinate system is a two-dimensional coordinate system on the photosensitive imaging surface of the camera; its X and Y axes are parallel to the X and Y axes of the camera coordinate system, and its origin is the intersection of the camera's optical axis with the image plane. The correspondence between image plane coordinate points and original image pixels is stored in the sub-camera, so coordinate points in the image plane coordinate system can be converted into the corresponding original image pixels when the unit pixel size of the camera is known.
Specifically, the transformation between the spatial coordinate system and the camera coordinate system relies on the rotation matrix and the translation vector. Rotating the spatial coordinate system by the rotation matrix and translating it by the translation vector converts it into the camera coordinate system of the sub-camera, giving the conversion formula between spatial coordinate points and camera coordinate points, i.e., the first conversion formula. For example, assume a coordinate point in the spatial coordinate system is (X_n, Y_n, Z_n) and the corresponding point in the camera coordinate system is (X'_n, Y'_n, Z'_n); then the first conversion formula is:

$$ (X'_n, Y'_n, Z'_n)^T = R_L \, (X_n, Y_n, Z_n)^T + T_L $$

where R_L is a 3 × 3 rotation matrix and T_L is a 3 × 1 translation vector. A spatial three-dimensional point in the spatial coordinate system can be converted into the camera coordinate system through the first conversion formula, yielding the corresponding camera coordinate point.
The conversion between the camera coordinate system and the image plane coordinate system relies on the internal parameters of the camera. According to the internal parameters of the sub-camera, camera coordinate points are projected onto the image plane coordinate system, giving the conversion formula between camera coordinate points and image plane coordinate points, i.e., the second conversion formula. For example, assume the coordinate point in the camera coordinate system is (X'_n, Y'_n, Z'_n) and the corresponding point in the image plane coordinate system is (x_n, y_n); then the second conversion formula is the standard perspective projection:

$$ Z'_n \, (x_n, y_n, 1)^T = K \, (X'_n, Y'_n, Z'_n)^T $$

where K is a 3 × 3 matrix belonging to the intrinsic parameters of the camera. Coordinates in the camera coordinate system can be converted into the image plane coordinate system through the second conversion formula. The image plane coordinate points correspond to the original image pixels captured by each sub-camera, so determining the correspondence between camera coordinate points and image plane coordinate points indirectly determines the correspondence between camera coordinate points and pixels. Since a camera coordinate point can be traced back to its spatial three-dimensional point through the first conversion formula, the correspondence between spatial three-dimensional points and original image pixels can be established from the first and second conversion formulas together.
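A minimal numpy sketch chaining the two conversion formulas (the intrinsic values, the test point, and the function name are illustrative assumptions, not from the patent):

```python
import numpy as np

def world_to_pixel(P_world, R_L, T_L, K):
    """First conversion: spatial coordinates -> camera coordinates;
    second conversion: camera coordinates -> image plane coordinates."""
    P_cam = R_L @ P_world + T_L          # (X'_n, Y'_n, Z'_n)
    uvw = K @ P_cam                      # homogeneous image coordinates
    x_n, y_n = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return np.array([x_n, y_n]), P_cam

# Illustrative intrinsics: focal length 1000 px, principal point (960, 600)
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 600.0],
              [   0.0,    0.0,   1.0]])
pixel, P_cam = world_to_pixel(np.array([1.0, 0.5, 4.0]),
                              np.eye(3), np.zeros(3), K)
# pixel -> [1210., 725.]  (x_n = 960 + 1000*1/4, y_n = 600 + 1000*0.5/4)
```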
After acquiring the corresponding relationship between the spatial three-dimensional point and the original image pixel, in order to obtain a target model for realizing correct projection, as shown in fig. 1, the method further includes the following steps:
and S200, obtaining a spherical projection point on a projection spherical surface according to the corresponding relation between the spatial three-dimensional point and the original image pixel, and constructing a target model for obtaining the color information of the original image pixel.
When multiple sub-cameras are used for shooting, the original images captured by all the sub-cameras are usually mapped onto a standard projection surface. Because a sphere visualizes well, a spherical surface is usually selected as the standard projection surface. In the prior art, however, when the images captured by multiple cameras are synthesized into a panoramic image on a spherical model, an obvious stitching seam often appears in the stitched area because pixels receive wrong color information; this embodiment therefore introduces a target model for acquiring correct pixel color information.
In one implementation, as shown in fig. 4, the step S200 further includes the following steps:
step S210, obtaining the space coordinate of the space three-dimensional point;
s220, obtaining a standard projection point according to the space coordinate of the space three-dimensional point and the corresponding relation between the space three-dimensional point and the original image pixel;
step S230, obtaining spherical projection points according to the standard projection points;
and S240, constructing a target model according to the spherical projection points.
Specifically, the spatial coordinates of the spatial three-dimensional points are first obtained, the original image pixels corresponding to the spatial three-dimensional points and the coordinates thereof in an image plane coordinate system can be obtained according to the spatial coordinates of the spatial three-dimensional points and the corresponding relationship between the spatial three-dimensional points and the original image pixels, and the standard projection points corresponding to the spatial three-dimensional points are obtained according to the image plane coordinates of the original image pixels. And then obtaining a spherical projection point corresponding to the standard projection point on the projection spherical surface, wherein the spherical projection point can correspond to a real space three-dimensional point. Therefore, the panoramic image pixels can correspond to the real three-dimensional space points according to the target model constructed by the spherical projection points, and the pixels in the subsequent panoramic image can obtain correct color information.
In one implementation, as shown in fig. 5, the step S220 specifically includes the following steps:
step S221, obtaining a pixel plane coordinate point corresponding to the space three-dimensional point according to the space coordinate of the space three-dimensional point and the corresponding relation between the space three-dimensional point and the original image pixel;
step S222, obtaining the pixel depth of the original image pixel according to the image plane coordinate point;
and step S223, obtaining a standard projection point according to the center of the camera and the pixel depth.
Specifically, when the spatial coordinates of a spatial three-dimensional point are known, the original image pixel corresponding to that point and its image plane coordinate point can be obtained through the correspondence between spatial three-dimensional points and original image pixels, and the pixel depth of the original image pixel can then be obtained from the image plane coordinate point according to a preset pixel depth calculation formula.
To illustrate the calculation of the pixel depth, assume the spatial coordinate of a spatial three-dimensional point is (X_n, Y_n, Z_n); through the correspondence between spatial three-dimensional points and original image pixels, the original image pixel corresponding to this point is (x_n, y_n). The pixel depth is then obtained from the pixel depth calculation formula, i.e., the distance from the sub-camera center to the point in the camera coordinate system:

$$ d_n = \sqrt{X_n'^2 + Y_n'^2 + Z_n'^2} $$

Because of the high redundancy of point cloud data, several spatial three-dimensional points may correspond to one original image pixel; in that case the smallest depth value is selected as the pixel depth of that pixel. Then, with the center of the sub-camera as the endpoint, the standard projection point corresponding to each pixel is found along the pixel's ray at its pixel depth; this standard projection point corresponds exactly to a real spatial three-dimensional point. The number of standard projection points follows from the number of sub-cameras and their resolution: for example, with 6 sub-cameras of resolution H × W each, a total of 6 × H × W standard projection points is obtained.
In one implementation, as shown in fig. 6, the step S230 specifically includes the following steps:
s231, taking the center of the panoramic camera as a sphere center to obtain a projection spherical surface;
and step S232, projecting the standard projection point onto the projection spherical surface to obtain a spherical projection point.
In this embodiment, the standard projection points corresponding to the real spatial three-dimensional points must be determined first, and the spherical projection points are then obtained from them, so that each spherical projection point correctly corresponds to a real spatial three-dimensional point. Specifically, a projection sphere for projecting the original image is obtained by taking the center of the panoramic camera as the sphere center and any length as the radius, and the standard projection points are then projected onto this sphere to obtain the spherical projection points. In one implementation, this is done by connecting each standard projection point to the sphere center; the intersection of the resulting segment with the projection sphere is the spherical projection point corresponding to that standard projection point. In one implementation, the radius of the projection sphere may be set to 1 to save computational resources.
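A sketch of this step under the radius-1 convention mentioned above (names are illustrative assumptions):

```python
import numpy as np

def spherical_projection(P_standard: np.ndarray, O_pano: np.ndarray):
    """Connect the standard projection point to the sphere center O and
    intersect the segment with the unit projection sphere centered at O."""
    v = P_standard - O_pano
    r = np.linalg.norm(v)        # distance from the standard point to the center
    return O_pano + v / r, r     # spherical projection point (radius 1) and r
```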
In one implementation, as shown in fig. 7, the step S240 specifically includes the following steps:
s241, acquiring longitude and latitude data of the spherical projection point and the distance from the standard projection point to the center of the sphere;
and S242, constructing a target model according to the longitude and latitude data of the spherical projection point and the distance from the standard projection point to the sphere center.
In this embodiment, the longitude and latitude data of the spherical projection point and the distance from the standard projection point to the sphere center need to be acquired, and the acquired data are used as the auxiliary information of each node in the target model, so as to construct the target model. Each node in the target model has a real space three-dimensional point corresponding to the node, and the original image pixel corresponding to each node in the target model can be obtained according to the corresponding relation between the space three-dimensional point and the original image pixel. On the other hand, each node in the target model also has a corresponding spherical projection point, so that the correct correspondence between the original image pixel and the spherical projection point can be realized through the target model.
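A sketch of encoding one node of the target model (the longitude/latitude axis conventions are assumptions; the disclosure does not fix them):

```python
import numpy as np

def target_model_node(P_standard: np.ndarray, O_pano: np.ndarray):
    """Return (phi, omega, r): longitude and latitude of the spherical
    projection point and the distance from the standard projection point
    to the sphere center, stored per node of the target model."""
    v = P_standard - O_pano
    r = np.linalg.norm(v)
    phi = np.arctan2(v[1], v[0])    # longitude in [-pi, pi]
    omega = np.arcsin(v[2] / r)     # latitude in [-pi/2, pi/2]
    return phi, omega, r
```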
After obtaining the correct corresponding relationship between the original image pixels and the spherical projection points, as shown in fig. 1, in order to obtain a correctly projected panoramic image, the method further includes the following steps:
and step S300, obtaining a panoramic image according to the target model.
The final purpose of this embodiment is to correct the color information of the projected panoramic image. Because the target model can realize the correct correspondence of the original image pixels and the spherical projection points, the correctly projected panoramic image can be generated by acquiring the color information of the spherical projection points through the target model.
In one implementation, as shown in fig. 8, the step S300 specifically includes the following steps:
step S310, obtaining the corresponding relation between the original image pixel and the spherical projection point according to the corresponding relation between the space three-dimensional point and the original image pixel and the target model;
step S320, obtaining color information of an original image pixel, and converting the color information into color information of a spherical projection point corresponding to the original image pixel according to the corresponding relation between the original image pixel and the spherical projection point;
s330, performing interpolation calculation on the color information of all spherical projection points according to the image resolution to obtain image color information;
and step S340, generating a panoramic image according to the image color information.
Specifically, the target model contains the correspondence between real three-dimensional points and spherical projection points; combining it with the correspondence between three-dimensional points and original image pixels yields the correspondence between original image pixels and spherical projection points. The color information of the original image pixels is then acquired and transferred to the corresponding spherical projection points. Finally, interpolation over the color information of all spherical projection points at the image resolution of the sub-cameras gives the correct color information of the panoramic image, from which the correctly projected panoramic image is generated. For example, each node in the target model may be represented as (phi, omega, r), where phi and omega are the longitude and latitude of the spherical projection point and r is the distance from the standard projection point to the sphere center. Each node (phi, omega, r) corresponds to a spatial three-dimensional point (X_n, Y_n, Z_n) and an original image pixel (x_n, y_n); the color information (R, G, B) of the original image pixel is assigned through the node to the corresponding spherical projection point, and the color of the panoramic image is obtained by interpolating all (phi, omega, R, G, B) tuples at the image resolution. In actual measurement, occlusion and other environmental factors may prevent parts of the scene in an original image from being scanned, i.e., they have no corresponding three-dimensional point cloud data; to guarantee the completeness of the generated panoramic image, the interpolation over all (phi, omega, R, G, B) at the image resolution also fills these areas. Common interpolation methods include nearest-point interpolation, bilinear interpolation, and color-shift interpolation.
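To tie the pieces together, the sketch below scatters the (phi, omega, R, G, B) samples into an equirectangular grid and fills unscanned areas by nearest-point interpolation, one of the interpolation methods named above; the grid mapping and names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def render_panorama(nodes, colors, H, W):
    """nodes: iterable of (phi, omega, r); colors: matching (R, G, B) rows.
    Scatter samples into an H x W equirectangular image, then fill holes
    (e.g. occluded areas with no point cloud) from the nearest sample."""
    pano = np.zeros((H, W, 3))
    filled = np.zeros((H, W), dtype=bool)
    for (phi, omega, _r), rgb in zip(nodes, colors):
        col = int((phi + np.pi) / (2 * np.pi) * (W - 1))
        row = int((np.pi / 2 - omega) / np.pi * (H - 1))
        pano[row, col] = rgb
        filled[row, col] = True
    # Nearest-point interpolation: index of the nearest filled pixel
    idx = distance_transform_edt(~filled, return_distances=False,
                                 return_indices=True)
    return pano[idx[0], idx[1]]
```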
Fig. 10, 11 and 12 are panoramic images of a rooftop acquired by the conventional stitching method, and fig. 13 is a panoramic image of the same rooftop acquired by the method of the present invention. As shown in fig. 10, with the radius of the projection sphere set to 1 meter, a stitching seam is clearly visible in the middle, and another is clearly visible on the enclosing wall on the right. As shown in fig. 11, with the radius set to 3 meters, the nearby wooden chairs are stitched well, but obvious errors appear on the tiles and the flower-bed wall, and a slight error remains on the right-hand wall. As shown in fig. 12, with the radius set to 10 meters, nearby objects are misaligned while the distant flower-bed wall is correct and the right-hand wall is basically correct. As shown in fig. 13, the panoramic image obtained by the method of the present invention is projected essentially correctly for scene content at all depths. The method provided by the invention can therefore effectively solve the problems of panoramic image projection errors and stitching seams in the prior art.
Based on the above embodiments, the present invention further provides an intelligent terminal, and a schematic block diagram thereof may be as shown in fig. 14. The intelligent terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein, the processor of the intelligent terminal is used for providing calculation and control capability. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the intelligent terminal is used for being connected and communicated with an external terminal through a network. The computer program is executed by a processor to implement a method of assisting stitching of a panoramic camera with spatial three-dimensional information. The display screen of the intelligent terminal can be a liquid crystal display screen or an electronic ink display screen.
It will be understood by those skilled in the art that the block diagram of fig. 14 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have different arrangements of components.
In one implementation, one or more programs are stored in the memory of the intelligent terminal and configured to be executed by one or more processors, and include instructions for performing the method of assisting panoramic camera stitching with spatial three-dimensional information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a method for assisting panoramic camera stitching by using spatial three-dimensional information, wherein the method comprises: obtaining a correspondence between spatial three-dimensional points and original image pixels according to the original image and the spatial three-dimensional information of the scene corresponding to the original image; obtaining spherical projection points on a projection sphere according to that correspondence, and constructing a target model for acquiring the color information of the original image pixels; and obtaining a panoramic image according to the target model. The problem of cliff-type errors in panoramic image stitching in the prior art is thereby effectively solved.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for assisting stitching of a panoramic camera by using spatial three-dimensional information is characterized by comprising the following steps:
obtaining a corresponding relation between a space three-dimensional point and an original image pixel according to the original image and space three-dimensional information of a scene corresponding to the original image;
obtaining spherical projection points on a projection spherical surface according to the corresponding relation between the space three-dimensional points and the original image pixels, and constructing a target model for acquiring color information of the original image pixels;
and obtaining a panoramic image according to the target model.
2. The method of claim 1, wherein the obtaining of the correspondence between the spatial three-dimensional points and the original image pixels according to the original image and the spatial three-dimensional information of the scene corresponding to the original image comprises:
acquiring an original image and space three-dimensional information of a scene corresponding to the original image, and obtaining a rotation matrix and a translation vector according to the space three-dimensional information;
and establishing a corresponding relation between the space three-dimensional point and the original image pixel according to the rotation matrix and the translation vector.
3. The method of claim 2, wherein the establishing the correspondence between the spatial three-dimensional points and the original image pixels according to the rotation matrix and the translation vector comprises:
obtaining a first conversion formula for representing the conversion relationship between coordinates in the spatial coordinate system and coordinates in the camera coordinate system according to the rotation matrix and the translation vector;
acquiring internal parameters of the sub-camera, and obtaining a second conversion formula for representing the conversion relationship between coordinates in the camera coordinate system and coordinates in the image plane coordinate system according to the internal parameters;
and establishing a corresponding relation between the space three-dimensional point and the original image pixel according to the first conversion formula and the second conversion formula.
4. The method for assisting in stitching of a panoramic camera by using spatial three-dimensional information according to claim 1, wherein the obtaining of spherical projection points on a projection spherical surface and the construction of a target model according to the correspondence between the spatial three-dimensional points and pixels of an original image comprises:
acquiring a space coordinate of the space three-dimensional point;
obtaining a standard projection point according to the space coordinate of the space three-dimensional point and the corresponding relation between the space three-dimensional point and the original image pixel;
obtaining a spherical projection point according to the standard projection point;
and constructing a target model according to the spherical projection points.
5. The method of claim 4, wherein obtaining a standard projection point according to the spatial coordinates of the spatial three-dimensional point and the corresponding relationship between the spatial three-dimensional point and the original image pixel comprises:
obtaining an image plane coordinate point corresponding to the spatial three-dimensional point according to the spatial coordinates of the spatial three-dimensional point and the correspondence between the spatial three-dimensional point and the original image pixel;
obtaining the pixel depth of the original image pixel according to the image plane coordinate point;
and obtaining a standard projection point according to the center of the sub-camera and the pixel depth.
6. The method of claim 4, wherein obtaining spherical projection points according to the standard projection points comprises:
taking the center of the panoramic camera as a sphere center to obtain a projection spherical surface;
and projecting the standard projection point onto the projection spherical surface to obtain a spherical surface projection point.
7. The method of claim 6, wherein the constructing of the target model according to the spherical projection points comprises:
acquiring longitude and latitude data of the spherical projection point and the distance from the standard projection point to the center of the sphere;
and constructing a target model according to the longitude and latitude data of the spherical projection point and the distance from the standard projection point to the sphere center.
8. The method of claim 1, wherein obtaining a panoramic image according to the target model comprises:
obtaining the corresponding relation between the original image pixel and the spherical projection point according to the corresponding relation between the space three-dimensional point and the original image pixel and the target model;
acquiring color information of an original image pixel, and converting the color information into color information of a spherical projection point corresponding to the original image pixel according to the corresponding relation between the original image pixel and the spherical projection point;
carrying out interpolation calculation on the color information of all spherical projection points according to the image resolution to obtain image color information;
and generating a panoramic image according to the image color information.
9. A terminal, comprising: a processor, a storage medium communicatively coupled to the processor, the storage medium adapted to store a plurality of instructions; the processor is adapted to invoke instructions in the storage medium to implement the steps of the method of assisting stitching of a panoramic camera with spatial three-dimensional information of any of the preceding claims 1-8.
10. A storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to perform the steps of the method of using spatial three-dimensional information to assist stitching of a panoramic camera of any of claims 1-8.
CN202011112436.1A 2020-10-16 2020-10-16 Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information Active CN112308778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011112436.1A CN112308778B (en) 2020-10-16 2020-10-16 Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011112436.1A CN112308778B (en) 2020-10-16 2020-10-16 Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information

Publications (2)

Publication Number Publication Date
CN112308778A true CN112308778A (en) 2021-02-02
CN112308778B CN112308778B (en) 2021-08-10

Family

ID=74328171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011112436.1A Active CN112308778B (en) 2020-10-16 2020-10-16 Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information

Country Status (1)

Country Link
CN (1) CN112308778B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134546A1 (en) * 2022-01-12 2023-07-20 如你所视(北京)科技有限公司 Scene space model construction method and apparatus, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243637A (en) * 2015-09-21 2016-01-13 武汉海达数云技术有限公司 Panorama image stitching method based on three-dimensional laser point cloud
CN108470370A (en) * 2018-03-27 2018-08-31 北京建筑大学 The method that three-dimensional laser scanner external camera joint obtains three-dimensional colour point clouds
CN109115186A (en) * 2018-09-03 2019-01-01 山东科技大学 A kind of 360 ° for vehicle-mounted mobile measuring system can measure full-view image generation method
CN109658511A (en) * 2018-12-11 2019-04-19 香港理工大学 A kind of calculation method and relevant apparatus of the adjacent interframe posture information based on image
US20190180714A1 (en) * 2017-12-08 2019-06-13 Topcon Corporation Device, method, and program for controlling displaying of survey image
CN109903227A (en) * 2019-02-21 2019-06-18 武汉大学 Full-view image joining method based on camera geometry site
CN110223226A (en) * 2019-05-07 2019-09-10 中国农业大学 Panorama Mosaic method and system
CN111275750A (en) * 2020-01-19 2020-06-12 武汉大学 Indoor space panoramic image generation method based on multi-sensor fusion

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243637A (en) * 2015-09-21 2016-01-13 武汉海达数云技术有限公司 Panorama image stitching method based on three-dimensional laser point cloud
US20190180714A1 (en) * 2017-12-08 2019-06-13 Topcon Corporation Device, method, and program for controlling displaying of survey image
CN108470370A (en) * 2018-03-27 2018-08-31 北京建筑大学 The method that three-dimensional laser scanner external camera joint obtains three-dimensional colour point clouds
CN109115186A (en) * 2018-09-03 2019-01-01 山东科技大学 A kind of 360 ° for vehicle-mounted mobile measuring system can measure full-view image generation method
CN109658511A (en) * 2018-12-11 2019-04-19 香港理工大学 A kind of calculation method and relevant apparatus of the adjacent interframe posture information based on image
CN109903227A (en) * 2019-02-21 2019-06-18 武汉大学 Full-view image joining method based on camera geometry site
CN110223226A (en) * 2019-05-07 2019-09-10 中国农业大学 Panorama Mosaic method and system
CN111275750A (en) * 2020-01-19 2020-06-12 武汉大学 Indoor space panoramic image generation method based on multi-sensor fusion

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134546A1 (en) * 2022-01-12 2023-07-20 如你所视(北京)科技有限公司 Scene space model construction method and apparatus, and storage medium

Also Published As

Publication number Publication date
CN112308778B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN110490916B (en) Three-dimensional object modeling method and apparatus, image processing device, and medium
CN110351494B (en) Panoramic video synthesis method and device and electronic equipment
US7899270B2 (en) Method and apparatus for providing panoramic view with geometric correction
CN111862302B (en) Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
Ha et al. Panorama mosaic optimization for mobile camera systems
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
JP2000350239A (en) Camera.calibration device and method, image processing unit and method, program serving medium and camera
CN112308778B (en) Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information
KR20060056050A (en) Creating method of automated 360 degrees panoramic image
CN115471619A (en) City three-dimensional model construction method based on stereo imaging high-resolution satellite image
CN110766731A (en) Method and device for automatically registering panoramic image and point cloud and storage medium
CN117522963A (en) Corner positioning method and device of checkerboard, storage medium and electronic equipment
CN112529006A (en) Panoramic picture detection method and device, terminal and storage medium
CN109579796B (en) Area network adjustment method for projected image
JP2016114445A (en) Three-dimensional position calculation device, program for the same, and cg composition apparatus
EP4266239A1 (en) Image splicing method, computer-readable storage medium, and computer device
JP4548228B2 (en) Image data creation method
CN115508814A (en) Camera and laser radar combined calibration method and device, medium and robot
CN114792343A (en) Calibration method of image acquisition equipment, and method and device for acquiring image data
CN114187415A (en) Topographic map generation method and device
CN110148086B (en) Depth filling method and device for sparse depth map and three-dimensional reconstruction method and device
CN115578466A (en) Camera calibration method and device, computer readable storage medium and electronic equipment
CN113160059A (en) Underwater image splicing method and device and storage medium
CN110779517A (en) Data processing method and device of laser radar, storage medium and computer terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant