CN115330888A - Three-dimensional panoramic image generation method and device and electronic equipment


Info

Publication number
CN115330888A
CN115330888A (application CN202210854523.7A)
Authority
CN
China
Prior art keywords
image
point cloud
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202210854523.7A
Other languages
Chinese (zh)
Inventor
蒋斌 (Jiang Bin)
梁兵 (Liang Bing)
曹杨 (Cao Yang)
刘欣 (Liu Xin)
Current Assignee (listing may be inaccurate; Google has not performed a legal analysis)
Beijing Jianzhi Technology Co ltd
Original Assignee
Beijing Jianzhi Technology Co ltd
Application filed by Beijing Jianzhi Technology Co ltd
Priority to CN202210854523.7A
Publication of CN115330888A
Pending legal-status Critical Current

Classifications

    • G Physics; G06 Computing, calculating or counting; G06T Image data processing or generation, in general
    • G06T 7/90 Determination of colour characteristics (G06T 7/00 Image analysis)
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images (G06T 3/40 Scaling of whole images or parts thereof)
    • G06T 7/593 Depth or shape recovery from stereo images
    • G06T 7/85 Stereo camera calibration
    • G06T 2207/10028 Range image; depth image; 3D point clouds (G06T 2207/10 Image acquisition modality)

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Computer Graphics
  • Geometry
  • Software Systems
  • Image Processing
  • Closed-Circuit Television Systems

Abstract

An embodiment of the invention provides a method and an apparatus for generating a three-dimensional panoramic image, and an electronic device. The method comprises: acquiring at least four images captured at the same moment by at least four depth cameras, together with depth data for each pixel in those images, where the scenes within the field angles of adjacent depth cameras overlap; generating a first point cloud image and a second point cloud image based on the depth data of each pixel in a first image and a second image, where the two point cloud images have an overlapping region and their coordinate origins are both at a first position; and overlaying the first point cloud image and the second point cloud image according to a first overlapping region, namely the overlapping region of the two point cloud images, to obtain a first stitched point cloud image. The resulting three-dimensional panoramic image can guide the user to judge the situation around the vehicle accurately, improving safety while driving.

Description

Three-dimensional panoramic image generation method and device and electronic equipment
Technical Field
The present invention relates to the technical field of vehicles, and in particular to a method and an apparatus for generating a three-dimensional panoramic image, and to an electronic device.
Background
With the growing number of vehicles on the road, driving safety receives ever more attention. At present, most vehicles can be fitted with on-board equipment such as a driving recorder and a reversing camera to improve safety while driving.
Because a driver has many blind zones while driving, a driving recorder and a reversing camera alone cannot cover all of them. The vehicle-mounted 360° panorama was therefore developed: video is collected in real time by 4 to 8 wide-angle fisheye cameras mounted on the vehicle and synthesized by an algorithm into a bird's-eye view of the vehicle's surroundings, so that the driver can view real-time image information fused into a 360° panorama around the vehicle. The driver's view is thereby widened and driving becomes safer.
However, as users' requirements on driving safety keep rising, the current vehicle-mounted 360° panorama gradually fails to meet them.
Disclosure of Invention
An embodiment of the invention provides a method and an apparatus for generating a three-dimensional panoramic image, and an electronic device, aiming to solve the problem that the prior-art vehicle-mounted 360° panorama cannot meet users' growing requirements on driving safety.
In a first aspect, an embodiment of the present invention provides a method for generating a three-dimensional panoramic image, the method comprising:
acquiring at least four images captured at the same moment by at least four depth cameras, together with depth data for each pixel in the images, wherein the scenes within the field angles of adjacent depth cameras overlap;
generating a first point cloud image and a second point cloud image based on the depth data of each pixel in the first image and the second image, wherein the first and second point cloud images have an overlapping region and their coordinate origins are both at a first position;
overlaying the first point cloud image and the second point cloud image according to a first overlapping region to obtain a first stitched point cloud image, wherein the first overlapping region is the overlapping region of the first and second point cloud images;
obtaining a panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images, wherein the remaining images are the images of the at least four images other than the first image and the second image;
and generating a three-dimensional panoramic image based on the panoramic point cloud image and the color data of each pixel in the at least four images.
Optionally, generating the first point cloud image and the second point cloud image based on the depth data of each pixel in the first image and the second image comprises:
mapping each pixel in the first image, based on its depth data, into a world coordinate system whose origin is at the first position, to obtain the first point cloud image;
mapping each pixel in the second image, based on its depth data and first joint calibration data, into the world coordinate system whose origin is at the first position, to obtain the second point cloud image;
wherein the first joint calibration data is obtained by jointly calibrating the depth cameras that captured the first image and the second image.
Optionally, obtaining the panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images comprises:
generating a third point cloud image based on the depth data of each pixel in the third image and second joint calibration data, wherein the second joint calibration data is obtained by jointly calibrating the depth cameras that captured the first image and the third image, and the third point cloud image and the first stitched point cloud image have an overlapping region with coordinate origins at the first position;
overlaying the first stitched point cloud image and the third point cloud image according to a second overlapping region to obtain a second stitched point cloud image, wherein the second overlapping region is the overlapping region of the third point cloud image and the first stitched point cloud image;
generating a fourth point cloud image whose coordinate origin is at a target position of the vehicle, based on the depth data of each pixel in the fourth image, wherein the fourth point cloud image and the second stitched point cloud image have an overlapping region;
and switching the coordinate origin of the second stitched point cloud image to the target position, and overlaying the origin-switched second stitched point cloud image and the fourth point cloud image according to a third overlapping region to obtain the panoramic point cloud image, wherein the third overlapping region is the overlapping region of the fourth point cloud image and the second stitched point cloud image.
Optionally, obtaining the panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images comprises:
generating a third point cloud image and a fourth point cloud image based on the depth data of each pixel in the third image and the fourth image, wherein the third and fourth point cloud images have an overlapping region and their coordinate origins are both at a second position;
overlaying the third point cloud image and the fourth point cloud image according to a second overlapping region to obtain a second stitched point cloud image, wherein the second overlapping region is the overlapping region of the third and fourth point cloud images;
and switching the coordinate origins of the first stitched point cloud image and the second stitched point cloud image to a target position of the vehicle and overlaying them according to a third overlapping region to obtain the panoramic point cloud image, wherein the third overlapping region is the overlapping region of the first and second stitched point cloud images.
Optionally, the target position is the position at which the head of the driver of the vehicle is located.
In a second aspect, an embodiment of the present invention further provides an apparatus for generating a three-dimensional panoramic image, the apparatus comprising:
an acquisition module, configured to acquire at least four images captured at the same moment by at least four depth cameras, together with depth data for each pixel in the images, wherein the scenes within the field angles of adjacent depth cameras overlap;
a point cloud module, configured to generate a first point cloud image and a second point cloud image based on the depth data of each pixel in the first image and the second image, wherein the first and second point cloud images have an overlapping region and their coordinate origins are both at a first position;
a first stitching module, configured to overlay the first point cloud image and the second point cloud image according to a first overlapping region to obtain a first stitched point cloud image, wherein the first overlapping region is the overlapping region of the first and second point cloud images;
a second stitching module, configured to obtain a panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images, wherein the remaining images are the images of the at least four images other than the first image and the second image;
and a panorama module, configured to generate a three-dimensional panoramic image based on the panoramic point cloud image and the color data of each pixel in the at least four images.
Optionally, the point cloud module comprises:
a first point cloud unit, configured to map each pixel in the first image, based on its depth data, into a world coordinate system whose origin is at the first position, to obtain the first point cloud image;
a second point cloud unit, configured to map each pixel in the second image, based on its depth data and first joint calibration data, into the world coordinate system whose origin is at the first position, to obtain the second point cloud image;
wherein the first joint calibration data is obtained by jointly calibrating the depth cameras that captured the first image and the second image.
Optionally, the second stitching module comprises:
a third point cloud unit, configured to generate a third point cloud image based on the depth data of each pixel in the third image and second joint calibration data, wherein the second joint calibration data is obtained by jointly calibrating the depth cameras that captured the first image and the third image, and the third point cloud image and the first stitched point cloud image have an overlapping region with coordinate origins at the first position;
a first stitching unit, configured to overlay the first stitched point cloud image and the third point cloud image according to a second overlapping region to obtain a second stitched point cloud image, wherein the second overlapping region is the overlapping region of the third point cloud image and the first stitched point cloud image;
a fourth point cloud unit, configured to generate a fourth point cloud image whose coordinate origin is at a target position of the vehicle, based on the depth data of each pixel in the fourth image, wherein the fourth point cloud image and the second stitched point cloud image have an overlapping region;
and a second stitching unit, configured to switch the coordinate origin of the second stitched point cloud image to the target position and overlay the origin-switched second stitched point cloud image and the fourth point cloud image according to a third overlapping region to obtain the panoramic point cloud image, wherein the third overlapping region is the overlapping region of the fourth point cloud image and the second stitched point cloud image.
In a third aspect, an embodiment of the present invention further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiment of the invention, images and depth data captured at the same moment by at least four depth cameras can be acquired; because the scenes within the field angles of adjacent depth cameras overlap, scene information all around the mounting position of the depth cameras can be collected. A first point cloud image and a second point cloud image are then generated based on the depth data of each pixel in the first image and the second image. The two point cloud images have an overlapping region and their coordinate origins are both at a first position; that is, they share the same coordinate origin in the world coordinate system, so they can be overlaid directly according to the first overlapping region (the overlapping region of the two) to obtain a first stitched point cloud image. A panoramic point cloud image is then obtained based on the first stitched point cloud image and the depth data of each pixel in the remaining images, and color data is added to it to obtain a three-dimensional panoramic image. The three-dimensional panoramic image contains color data and presents the scene around the vehicle to the user in three-dimensional form, so it can guide the user to judge the situation around the vehicle accurately, improving safety while driving and meeting users' growing requirements on driving safety.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating steps of a method for generating a three-dimensional panoramic image according to an embodiment of the present invention;
FIG. 2 is a schematic layout of the depth cameras of a vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a target location of a vehicle provided by an embodiment of the present invention;
fig. 4 is a flowchart of an actual application of the method for generating a three-dimensional panoramic image according to the embodiment of the present invention;
fig. 5 is a block diagram of a three-dimensional panoramic image generation apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not imply an execution order; the execution order of each process is determined by its function and internal logic, and should not limit the implementation of the embodiments in any way.
Referring to fig. 1, an embodiment of the present invention provides a method for generating a three-dimensional panoramic image, where the method may include:
step 101: at least four images shot by at least four depth cameras at the same moment and depth data of each pixel point in the images are obtained.
It should be noted that, in order to ensure timing synchronization between different images captured by different depth cameras, here, an image and depth data are obtained for a time instant, that is, for each time instant, at least four images captured by at least four depth cameras at the time instant and depth data of each pixel point in the images are obtained. It can be understood that, in the case that a three-dimensional panoramic image in a certain time period needs to be obtained, the data processing processes for the images and the depth data obtained at different times are the same, a three-dimensional panoramic image at each time in the time period is obtained, and finally, a three-dimensional panoramic image in the time period is generated. Here, a process of obtaining a three-dimensional panoramic image at one time will be described by taking only one time as an example. Preferably, when performing timing Synchronization, high-Precision data timing Synchronization can be achieved through Precision Time Synchronization Protocol (PTP) or Pulse Per Second (PPS), but is not limited thereto.
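As an illustration of this per-instant grouping, the sketch below collects frames whose timestamps agree within a tolerance; it is a minimal sketch assuming a simple Frame record and a 5 ms tolerance, neither of which is specified in this description.

```python
# Minimal sketch of per-instant frame grouping across depth cameras.
# The Frame record and the 5 ms tolerance are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int
    timestamp_ns: int  # stamped by PTP- or PPS-disciplined clocks

def group_synchronized(frames, num_cameras=4, tolerance_ns=5_000_000):
    """Group frames captured at (nearly) the same instant."""
    frames = sorted(frames, key=lambda f: f.timestamp_ns)
    groups, current = [], []
    for f in frames:
        # Start a new group once a frame falls outside the tolerance window.
        if current and f.timestamp_ns - current[0].timestamp_ns > tolerance_ns:
            groups.append(current)
            current = []
        current.append(f)
    if current:
        groups.append(current)
    # Keep only instants at which every camera contributed a frame.
    return [g for g in groups if len({f.camera_id for f in g}) == num_cameras]
```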
As for the depth data of each pixel in an image, an intrinsic parameter matrix and an extrinsic parameter matrix of the depth camera can first be generated through calibration, a left image and a right image are captured in real time, and disparity data is generated by a stereo matching algorithm. Finally, according to the principle of triangulation, depth = B (baseline) × F (equivalent focal length) / disparity, so the depth data of each pixel can be obtained.
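The triangulation step can be stated compactly in code. The following is a minimal sketch of the depth = B × F / disparity relation applied to a disparity map; the baseline and focal-length values in the example are illustrative, not taken from this description.

```python
import numpy as np

def disparity_to_depth(disparity, baseline_m, focal_px, eps=1e-6):
    """Triangulation: depth = B (baseline) * F (focal length) / disparity.
    Pixels with (near-)zero disparity are left at depth 0 to mark them invalid."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > eps
    depth[valid] = baseline_m * focal_px / disparity[valid]
    return depth

# Illustrative values: a 12 cm baseline and a 700 px equivalent focal length.
disp = np.array([[35.0, 0.0], [70.0, 14.0]])
print(disparity_to_depth(disp, baseline_m=0.12, focal_px=700.0))
# [[2.4 0.] [1.2 6.]]
```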
Preferably, the depth camera comprises a binocular camera. Each binocular camera captures a left image and a right image; the two images are rectified to a common plane in advance, and pixel-by-pixel feature matching is performed between them under the epipolar constraint to generate a full-resolution depth point cloud.
It can be understood that the at least four depth cameras may be mounted on the same object with the scenes within the field angles of adjacent depth cameras overlapping, so as to collect scene information all around the object. Specifically, when the method for generating a three-dimensional panoramic image provided by the embodiment of the present invention is applied to a vehicle, at least four depth cameras are installed around the vehicle and the scenes within the field angles of adjacent depth cameras overlap. It can be understood that a depth camera is a camera capable of collecting both color data and depth data; that is, a depth camera yields not only an image containing the scene information within its field angle but also the distance between each point in the image and the camera. Here, at least one depth camera is mounted on each of the front, rear, left, and right sides of the vehicle, where the front side means the head side, the rear side means the tail side, the left side means the side close to the driver, and the right side means the side close to the front passenger. The number of depth cameras installed depends on the field angle of each depth camera. For example, when the field angle of each depth camera is about 180°, one depth camera may be installed on each side, but this is not limiting. As shown in fig. 2, the vehicle 21 has a first depth camera 22 mounted on the front side, a second depth camera 23 on the left side, a third depth camera 24 on the right side, and a fourth depth camera 25 on the rear side. Accordingly, the scenes within the fields of view of adjacent cameras overlap: the first depth camera 22 overlaps with the second 23 and the third 24, and the fourth depth camera 25 likewise overlaps with the second 23 and the third 24.
Step 102: generate a first point cloud image and a second point cloud image based on the depth data of each pixel in the first image and the second image.
It should be noted that the first and second point cloud images have an overlapping region and their coordinate origins are both at the first position. Since the first point cloud image is generated from the first image and the second point cloud image from the second image, the first and second images also have an overlapping region. Specifically, the first and second images are two images with an overlapping region among the at least four images, i.e. images captured by two adjacent depth cameras whose field angles overlap. Preferably, the first position is the position, in the world coordinate system, of the depth camera that captured the first image or the second image, i.e. one of the two adjacent depth cameras. Continuing with fig. 2 as an example, if the first and second images were captured by the first depth camera 22 and the second depth camera 23 respectively, the first position may be the position of the first depth camera 22 or of the second depth camera 23 in the world coordinate system.
It can be understood that a point cloud image is an image generated from point cloud data; each point in it corresponds to a pixel and has three-dimensional coordinates. The image captured by each depth camera takes that camera as its reference, or coordinate origin. To unify the coordinate origins, the coordinate origin of one of the first and second images is kept unchanged and the image is mapped into the world coordinate system to generate a point cloud image. The other image is additionally processed while being mapped into the world coordinate system so that its coordinate origin coincides with that of the first point cloud image. That is, for one of the two images the origin of the camera coordinate system and the origin of the world coordinate system of the resulting point cloud image coincide, and only the generation of the other point cloud image is compensated so that the two point cloud images share the same coordinate origin in the world coordinate system.
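As a sketch of this mapping, the function below back-projects a depth map through the camera intrinsics and then applies camera-to-world extrinsics so that the resulting points are expressed in a world frame whose origin is the chosen first position. The pinhole model and the (K, R, t) parameterization are standard assumptions; this description does not prescribe a particular formulation.

```python
import numpy as np

def pixels_to_world(depth, K, R, t):
    """Back-project every pixel of an H x W depth map to world coordinates.
    K is the 3x3 intrinsic matrix; (R, t) map camera coordinates into the
    world frame whose origin is the unified first position."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(-1)  # rays scaled by depth
    world = R @ cam + t.reshape(3, 1)
    return world.T.reshape(h, w, 3)
```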
Step 103: overlay the first point cloud image and the second point cloud image according to the first overlapping region to obtain a first stitched point cloud image.
It should be noted that the first and second point cloud images have the same coordinate origin, and their overlapping region is the first overlapping region. The first stitched point cloud image obtained by overlaying them according to the first overlapping region can therefore be understood as a point cloud image generated from different images observed from the same position and at the same viewing angle. Continuing with fig. 2, if the first image was captured by the first depth camera 22 and the second image by the second depth camera 23, the first overlapping region is the region corresponding to the scene in the first field-angle overlap 26 of the two cameras. Because the two point cloud images have the same coordinate origin and the same scene information there, they can be overlaid according to the first overlapping region, and the resulting first stitched point cloud image corresponds to a point cloud image generated from the image and depth data of a single depth camera with a field angle of 270°. Specifically, assume the first point cloud image comprises the first overlapping region, corresponding to first scene information, and the region outside it, corresponding to second scene information; and the second point cloud image comprises the first overlapping region, corresponding to the first scene information, and the region outside it, corresponding to third scene information. The first stitched point cloud image then comprises the region corresponding to the second scene information, the first overlapping region corresponding to the first scene information, and the region corresponding to the third scene information.
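A minimal sketch of this overlay step is given below: because both clouds already share one coordinate origin, stitching reduces to concatenating the points and keeping a single copy of the points that coincide in the overlapping region. The voxel-based deduplication and the 2 cm voxel size are illustrative choices, not part of this description.

```python
import numpy as np

def stitch_point_clouds(cloud_a, cloud_b, voxel_m=0.02):
    """Overlay two N x 3 point clouds that already share a coordinate origin.
    Points that fall into the same voxel (duplicates from the overlapping
    region) are kept only once."""
    merged = np.vstack([cloud_a, cloud_b])
    keys = np.floor(merged / voxel_m).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(first_idx)]
```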
Step 104: obtain a panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images.
It should be noted that the panoramic point cloud image is a point cloud image covering 360° around the vehicle. The remaining images are the images of the at least four other than the first and second images. Another stitched point cloud image can be generated from the remaining images, and the two stitched point cloud images can then be overlaid or stitched again to obtain the panoramic point cloud image, but this is not limiting.
Step 105: generate a three-dimensional panoramic image based on the panoramic point cloud image and the color data of each pixel in the at least four images.
It should be noted that each pixel in an image has its own color data, and pixels correspond one-to-one to depth data. After a point cloud image is generated, each point in it corresponds to one pixel; therefore, when generating the three-dimensional panoramic image, the color data of the pixel corresponding to each point in the panoramic point cloud image is added to that point.
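As a sketch of this colorization, the helper below pairs the world coordinates produced by back-projection with the RGB values of the corresponding pixels; the H x W x 3 layout mirrors the pixels_to_world sketch above and is an assumption for illustration.

```python
import numpy as np

def colorize(points_xyz, image_rgb):
    """Attach each pixel's color data to its corresponding point, yielding
    an N x 6 array (x, y, z, r, g, b). points_xyz is the H x W x 3 output of
    back-projection; image_rgb is the H x W x 3 color image of the same
    camera, so points and colors correspond one-to-one."""
    assert points_xyz.shape == image_rgb.shape
    return np.concatenate([points_xyz.reshape(-1, 3),
                           image_rgb.reshape(-1, 3).astype(np.float64)], axis=1)
```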
In the embodiment of the invention, images and depth data captured at the same moment by at least four depth cameras can be acquired; because the scenes within the field angles of adjacent depth cameras overlap, scene information all around the mounting position of the depth cameras can be collected. A first point cloud image and a second point cloud image are then generated based on the depth data of each pixel in the first image and the second image. The two point cloud images have an overlapping region and their coordinate origins are both at a first position; that is, they share the same coordinate origin in the world coordinate system, so they can be overlaid directly according to the first overlapping region (the overlapping region of the two) to obtain a first stitched point cloud image. A panoramic point cloud image is then obtained based on the first stitched point cloud image and the depth data of each pixel in the remaining images, and color data is added to it to obtain a three-dimensional panoramic image. The three-dimensional panoramic image contains color data and presents the scene around the vehicle to the user in three-dimensional form, so it can guide the user to judge the situation around the vehicle accurately, improving safety while driving and meeting users' growing requirements on driving safety.
Optionally, generating the first and second point cloud images based on the depth data of each pixel in the first and second images comprises:
mapping each pixel in the first image, based on its depth data, into a world coordinate system whose origin is at the first position, to obtain the first point cloud image;
and mapping each pixel in the second image, based on its depth data and the first joint calibration data, into the world coordinate system whose origin is at the first position, to obtain the second point cloud image.
It should be noted that the first position may here be taken as the position, in the world coordinate system, of the depth camera that captured the first image, so that when the coordinate origins are unified they are unified to that position. Since the first image was captured with the first position as its reference, or coordinate origin, the first point cloud image with the first position as coordinate origin can be obtained using only the calibration data of the camera that captured the first image and the depth data of each pixel in the first image. The calibration data are the camera's position, pitch angle, degrees of freedom, and so on, obtained while calibrating the depth camera.
Because the second image does not take that position as its reference or coordinate origin, the depth cameras that captured the first and second images need to be jointly calibrated in advance to obtain the first joint calibration data; the second point cloud image, with the position of the first camera in the world coordinate system as coordinate origin, can then be obtained using the first joint calibration data and the depth data of each pixel in the second image.
In the embodiment of the invention, the first joint calibration data is obtained by jointly calibrating the depth cameras that captured the first and second images, and the second point cloud image, whose coordinate origin is at the camera that captured the first image, is obtained using the first joint calibration data and the depth data of each pixel in the second image.
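As a sketch of how the first joint calibration data might be applied, the function below re-expresses the second camera's points in the first camera's frame using a rigid transform (R, t) obtained from joint calibration; the parameterization as a rotation plus translation is an assumption, since this description only states that joint calibration data is used.

```python
import numpy as np

def apply_joint_calibration(cloud_cam2, R_21, t_21):
    """Re-express an N x 3 point cloud from camera 2's frame in camera 1's
    frame: X_cam1 = R_21 @ X_cam2 + t_21, where (R_21, t_21) come from
    jointly calibrating the two depth cameras."""
    return (R_21 @ cloud_cam2.T + t_21.reshape(3, 1)).T
```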
Optionally, obtaining the panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images comprises:
generating a third point cloud image based on the depth data of each pixel in the third image and the second joint calibration data, wherein the second joint calibration data is obtained by jointly calibrating the depth cameras that captured the first and third images, and the third point cloud image and the first stitched point cloud image have an overlapping region with coordinate origins at the first position.
It should be noted that the at least four images further include a third image having an overlapping region with the first image, and a fourth image having overlapping regions with both the second and third images. Here the first position is taken as the position, in the world coordinate system, of the depth camera that captured the first image. The third image is the image, other than the second image, that has an overlapping region with the first image. Continuing with fig. 2, if the first image was captured by the first depth camera 22 and the second image by the second depth camera 23, the third image was captured by the third depth camera 24. The process of generating the third point cloud image with coordinate origin at the first position from the depth data of each pixel in the third image and the second joint calibration data is analogous to the mapping of the second image described above and is not repeated here.
The first stitched point cloud image and the third point cloud image are then overlaid according to a second overlapping region to obtain a second stitched point cloud image, wherein the second overlapping region is the overlapping region of the third point cloud image and the first stitched point cloud image.
In this step, the coordinate origins of the first stitched point cloud image and the third point cloud image are both at the position of the camera that captured the first image; that is, they share a unified coordinate origin. Because the first-point-cloud part of the first stitched point cloud image overlaps with the third point cloud image, the two can be overlaid directly according to that overlapping region to obtain the second stitched point cloud image, but this is not limiting: the first and third point cloud images may instead be overlaid according to their overlapping region to obtain another stitched point cloud image, which is then stitched with the first stitched point cloud image to obtain the second stitched point cloud image.
Continuing with fig. 2, assume the first image was captured by the first depth camera 22, the second image by the second depth camera 23, and the third image by the third depth camera 24. Correspondingly, the first, second, and third point cloud images correspond to the first, second, and third images, and all three have the position of the first depth camera 22 in the world coordinate system as coordinate origin. While the first stitched point cloud image is obtained from the first and second point cloud images, the first point cloud image is copied, and another stitched point cloud image is obtained from the copy and the third point cloud image. Stitching or overlaying these two stitched point cloud images produces the second stitched point cloud image. It can be understood that the two stitched point cloud images respectively contain all the content of the first and second point cloud images and all the content of the first and third point cloud images; both contain the content of the first point cloud image, so they have an overlapping region and can be overlaid according to it to obtain the second stitched point cloud image, or they can be split at the same position and the parts stitched separately to form the second stitched point cloud image.
A fourth point cloud image, whose coordinate origin is at the target position of the vehicle, is then generated based on the depth data of each pixel in the fourth image; the fourth point cloud image and the second stitched point cloud image have an overlapping region.
In this step, when the depth cameras are all mounted on the vehicle, the target position may be any position on the vehicle. Before the fourth point cloud image is generated, the depth camera that captures the fourth image is calibrated against the target position to obtain calibration data, which is then used when generating the fourth point cloud image. The fourth image is the image, other than the first image, that has overlapping regions with both the second and third images.
The coordinate origin of the second stitched point cloud image is then switched to the target position, and the origin-switched second stitched point cloud image and the fourth point cloud image are overlaid according to a third overlapping region to obtain the panoramic point cloud image, wherein the third overlapping region is the overlapping region of the fourth point cloud image and the second stitched point cloud image.
It should be noted that, when the target position is not the position of the camera that captured the first image in the world coordinate system, the coordinate origins of the second stitched point cloud image and the fourth point cloud image are unified first, and the panoramic point cloud image is then obtained by overlaying them on their overlapping region. Continuing with fig. 2, the second stitched point cloud image contains all the content of the second and third point cloud images, and both of those have overlapping regions with the fourth point cloud image, namely the regions corresponding to the scenes in the second and third field-angle overlaps 27 and 28 of the respective camera pairs. The second stitched point cloud image and the fourth point cloud image therefore have an overlapping region and can be overlaid according to it.
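Switching the coordinate origin is itself a rigid transform; a minimal sketch under that assumption:

```python
import numpy as np

def switch_origin(cloud, new_origin, R=None):
    """Re-express an N x 3 cloud relative to a new coordinate origin, e.g.
    the target position on the vehicle. With R omitted this is a pure
    translation; pass a 3x3 rotation to also re-orient the axes."""
    shifted = cloud - np.asarray(new_origin, dtype=np.float64).reshape(1, 3)
    return shifted if R is None else shifted @ np.asarray(R).T
```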
In the embodiment of the invention, the data of the third and fourth point cloud images are stitched into the first stitched point cloud image in sequence; this improves the accuracy of the point cloud data while the panoramic point cloud image is being obtained, achieves a better overlay in the overlapping regions, and improves the quality of the final image.
Optionally, obtaining the panoramic point cloud image based on the first stitched point cloud image and the depth data of each pixel in the remaining images comprises:
generating a third point cloud image and a fourth point cloud image based on the depth data of each pixel in the third and fourth images, wherein the third and fourth point cloud images have an overlapping region and their coordinate origins are both at the second position.
It should be noted that the at least four images further include a third image having an overlapping region with the first image, and a fourth image having overlapping regions with both the second and third images. That is, the third image is the image, other than the second image, that has an overlapping region with the first image, and the fourth image is the image, other than the first image, that has overlapping regions with both the second and third images. The second position is the position, in the world coordinate system, of the depth camera that captured the third image or the fourth image. The process of generating the third and fourth point cloud images from the depth data of each pixel in the third and fourth images is analogous to the generation of the first and second point cloud images described above and is not repeated here.
The third and fourth point cloud images are then overlaid according to a second overlapping region to obtain a second stitched point cloud image, wherein the second overlapping region is the overlapping region of the third and fourth point cloud images.
In this step, the third and fourth point cloud images have the same coordinate origin, so the second stitched point cloud image obtained by overlaying them according to the second overlapping region can be understood as a point cloud image generated from different images observed from the same position and at the same viewing angle. The specific process is similar to step 103 and is not repeated here.
The coordinate origins of the first stitched point cloud image and the second stitched point cloud image are then switched to the target position of the vehicle, and the two are overlaid according to a third overlapping region to obtain the panoramic point cloud image, wherein the third overlapping region is the overlapping region of the first and second stitched point cloud images.
It should be noted that, since the origins of the first and second stitched point cloud images differ, a unified coordinate origin is required. When the depth cameras are mounted on the vehicle, the coordinate origins of the two stitched point cloud images are switched to the same position on the vehicle, namely the target position, so that the resulting panoramic point cloud image can be understood as a point cloud image generated from different images observed from the same position and at the same viewing angle. Since the first image overlaps the third image and the second image overlaps the fourth image, the first and second stitched point cloud images also have an overlapping region, i.e. the third overlapping region. Continuing with fig. 2, assume the first image was captured by the first depth camera 22, the second by the second depth camera 23, the third by the third depth camera 24, and the fourth by the fourth depth camera 25. Correspondingly, the first through fourth point cloud images correspond to the first through fourth images. After the coordinate origins of the first and second stitched point cloud images are switched to the same position on the vehicle, the two have an overlapping region, namely the regions corresponding to the scenes in the second and fourth field-angle overlaps 27 and 29 of the respective camera pairs, so they can be overlaid according to this third overlapping region.
In the embodiment of the invention, the third and fourth point cloud images are first overlaid according to the second overlapping region to generate the second stitched point cloud image, and the first and second stitched point cloud images are then overlaid according to the third overlapping region to obtain the panoramic point cloud image; this improves the accuracy of the point cloud data, achieves a better overlay in the overlapping regions, and improves the quality of the final image.
Optionally, the target position is the position at which the head of the driver of the vehicle is located.
It should be noted that the position of the driver's head is understood as the position of the head of a person sitting in the driving seat; as shown in fig. 3, position A is the position of the driver's head. Here, for a given interior design, a fixed three-dimensional coordinate in the vehicle can be taken as the position of the driver's head. Preferably, different interior designs correspond to different three-dimensional coordinates.
In the embodiment of the invention, taking the target position to be the position of the driver's head allows the three-dimensional panoramic image to be presented from the driver's viewing angle, improving the driver's experience.
The method for generating a three-dimensional panoramic image provided by the embodiment of the present invention is described below with a specific example, as shown in fig. 4. Depth camera No. 1 is mounted on the left side of the vehicle, No. 2 on the head, No. 3 on the right side, and No. 4 on the rear. The horizontal field angle of each depth camera's original imaging is greater than 180°, the resolution is greater than 1920×1080, and each camera group reaches centimetre-level maximum ranging accuracy, but this is not limiting. The original image is used for higher-resolution calibration, so that the field angle of the calibrated large image is close to that of the original imaging, with 180° as the target. The front and rear depth cameras can also serve as a driving recorder and a reversing camera. The depth cameras are RGBD cameras, but this is not limiting; a combination of several monocular cameras and a lidar may also serve as a depth camera. Preferably, all four camera groups are vehicle-grade, compact, highly reliable, highly durable, and highly adaptable. The calibration, matching, and rendering computations for the RGBD cameras are all performed in a high-performance programmable chip.
Specifically, the data collected by depth cameras No. 1 and No. 2 are stitched in image and three-dimensional coordinates to obtain a stitched image referenced to camera No. 2 as coordinate origin (corresponding to the first stitched point cloud image in the embodiment above). The data collected by cameras No. 2 and No. 3 are likewise stitched to obtain a stitched image referenced to camera No. 2 (corresponding to the second stitched point cloud image above). The two stitched point cloud images are then stitched into a front stitched image with camera No. 2 as coordinate origin, containing the scenes in front of, to the left of, and to the right of the vehicle, and the coordinate origin of this front stitched image is switched to point A.
Depth camera No. 4 collects a point cloud image of the area behind point A (corresponding to the fourth point cloud image above), and its coordinate origin is likewise switched to point A. The front stitched image and the rear point cloud image, both with point A as coordinate origin, are then stitched and fused into 360° image and three-dimensional-coordinate fusion data with point A as coordinate origin, which corresponds to the three-dimensional panoramic image in the embodiment above. Finally, the three-dimensional panoramic image is output to a display module, which displays it.
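Tying the example together, the sketch below chains the illustrative helpers from the earlier sketches (pixels_to_world, stitch_point_clouds, switch_origin) into the four-camera flow of fig. 4; the frames and calib containers and their layout are assumptions for illustration, not part of this description.

```python
def build_panorama(frames, calib, point_a):
    """Hypothetical composition of the fig. 4 flow from the earlier sketches.
    frames[i].depth is camera i's depth map; calib[i] = (K, R, t) expresses
    camera i's points with camera No. 2 (the head camera) as origin."""
    cloud_left  = pixels_to_world(frames[1].depth, *calib[1]).reshape(-1, 3)
    cloud_front = pixels_to_world(frames[2].depth, *calib[2]).reshape(-1, 3)
    cloud_right = pixels_to_world(frames[3].depth, *calib[3]).reshape(-1, 3)
    # Front stitched image with camera No. 2 as origin, then moved to point A.
    stitched_12 = stitch_point_clouds(cloud_front, cloud_left)
    stitched_23 = stitch_point_clouds(cloud_front, cloud_right)
    front_a = switch_origin(stitch_point_clouds(stitched_12, stitched_23),
                            point_a)
    # Rear cloud from camera No. 4, also re-expressed at point A, then fused.
    rear_a = switch_origin(pixels_to_world(frames[4].depth,
                                           *calib[4]).reshape(-1, 3), point_a)
    return stitch_point_clouds(front_a, rear_a)
```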
In the embodiment of the invention, several RGBD cameras can achieve real-time scene reconstruction with high precision, high frame rate, and high resolution. With high-precision joint calibration of the RGBD cameras, the error at the stitching seams can be kept as low as centimetre level when the point clouds from the depth cameras are stitched, and the hardware system can maintain its original accuracy after the equipment has undergone vibration, ageing of structural parts, and high- and low-temperature shocks in normal use.
Having described the method for generating a three-dimensional panoramic image according to the embodiment of the present invention, an apparatus for generating a three-dimensional panoramic image according to the embodiment of the present invention is described below with reference to the accompanying drawings.
Referring to fig. 5, an embodiment of the present invention further provides an apparatus for generating a three-dimensional panoramic image, where the apparatus includes:
the acquiring module 51 is configured to acquire at least four images captured by at least four depth cameras at the same time and depth data of each pixel point in the images, where scenes in field angles of adjacent depth cameras overlap;
the point cloud module 52 is configured to generate a first point cloud image and a second point cloud image based on the depth data of each pixel point in the first image and the second image, where the first point cloud image and the second point cloud image have an overlapping area and the origin of coordinates are both at a first position;
the first splicing module 53 is configured to overlap the first point cloud image and the second point cloud image according to a first overlapping area to obtain a first spliced point cloud image, where the first overlapping area is an overlapping area of the first point cloud image and the second point cloud image;
the second stitching module 54 is configured to obtain a panoramic point cloud image based on the first stitching point cloud image and depth data of each pixel point in the remaining image; wherein the remaining image is an image other than the first image and the second image among the at least four images;
and the panoramic module 55 is configured to generate a three-dimensional panoramic image based on the panoramic cloud image and color data of each pixel point in the at least four images.
Optionally, the point cloud module 52 comprises:
a first point cloud unit, configured to map each pixel in the first image, based on its depth data, into a world coordinate system whose origin is at the first position, to obtain the first point cloud image;
a second point cloud unit, configured to map each pixel in the second image, based on its depth data and first joint calibration data, into the world coordinate system whose origin is at the first position, to obtain the second point cloud image;
wherein the first joint calibration data is obtained by jointly calibrating the depth cameras that captured the first image and the second image.
Optionally, the second splicing module 54 comprises:
the third point cloud unit is used for generating a third point cloud picture based on the depth data and the second combined calibration data of all the pixel points in the third image; the second combined calibration data is obtained by performing combined calibration on a depth camera for shooting the first image and the third image; the third point cloud picture and the first spliced point cloud picture have an overlapping area, and the origin of coordinates is at a first position;
the first splicing unit is used for overlapping the first splicing point cloud picture and the third splicing point cloud picture according to a second overlapping area to obtain a second splicing point cloud picture, wherein the second overlapping area is the overlapping area of the third splicing point cloud picture and the first splicing point cloud picture;
the fourth cloud unit is used for generating a fourth cloud image of the coordinate origin at the target position of the vehicle based on the depth data of each pixel point in the fourth image, wherein the fourth cloud image and the second splicing point cloud image have an overlapping region;
and the second splicing unit is used for switching the coordinate origin of the second spliced point cloud picture to the target position, and overlapping the second spliced point cloud picture after the coordinate origin is switched with the fourth point cloud picture according to a third overlapping area to obtain a panoramic point cloud picture, where the third overlapping area is the overlapping area of the fourth point cloud picture and the second spliced point cloud picture.
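Continuing the illustrative aside: the origin switch described above is itself a rigid transform, and overlapping according to an overlapping area can be approximated by deduplicating coincident points. The sketch below assumes the pose of the old origin expressed in the new frame is known; the helper names and the voxel-grid deduplication are illustrative stand-ins, not the patent's prescribed overlap handling.

import numpy as np

def switch_origin(points, R_new, t_new):
    # Re-express an (N, 3) point cloud, currently in a frame at the first
    # position, in a frame whose origin is the target position; R_new and
    # t_new give the pose of the old frame as seen from the new frame.
    return points @ R_new.T + t_new

def overlap_and_merge(cloud_a, cloud_b, voxel=0.05):
    # Concatenate two clouds that share a coordinate origin, then keep one
    # point per occupied voxel so the overlapping area is counted once.
    merged = np.vstack([cloud_a, cloud_b])
    keys = np.round(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]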
Optionally, the second splicing module 54 comprises:
the third point cloud unit is used for generating a third point cloud picture and a fourth point cloud picture based on the depth data of each pixel point in the third image and the fourth image; the third point cloud picture and the fourth point cloud picture have an overlapping area, and the coordinate origins of both are at a second position;
the first splicing unit is used for overlapping the third point cloud picture and the fourth point cloud picture according to a second overlapping area to obtain a second spliced point cloud picture, where the second overlapping area is the overlapping area of the third point cloud picture and the fourth point cloud picture;
and the panoramic unit is used for switching the coordinate origins of the first spliced point cloud picture and the second spliced point cloud picture to a target position of the vehicle and overlapping them according to a third overlapping area to obtain the panoramic point cloud picture, where the third overlapping area is the overlapping area of the first spliced point cloud picture and the second spliced point cloud picture.
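Reusing the hypothetical helpers sketched above, the pairwise order of this variant might read as follows; the pose inputs pose1 and pose2, which re-express each spliced half at the vehicle's target position, are assumed rather than taken from the patent.

def pairwise_panorama(c1, c2, c3, c4, pose1, pose2):
    # c1/c2 share the first coordinate origin; c3/c4 share the second.
    (R1, t1), (R2, t2) = pose1, pose2
    first = overlap_and_merge(c1, c2)     # first spliced point cloud
    second = overlap_and_merge(c3, c4)    # second spliced point cloud
    # Switch both origins to the target position, then merge their overlap.
    return overlap_and_merge(switch_origin(first, R1, t1),
                             switch_origin(second, R2, t2))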
Optionally, the target position is the position at which the head of the driver of the vehicle is located.
The device for generating a three-dimensional panoramic image provided by the embodiment of the present invention can implement each process implemented by the method for generating a three-dimensional panoramic image in the method embodiments of fig. 1 to 4; details are not repeated here.
In the embodiment of the invention, images and depth data shot by at least four depth cameras at the same moment can be acquired, and because the scenes in the field angles of adjacent depth cameras overlap, scene information around the installation positions of the depth cameras can be captured. A first point cloud picture and a second point cloud picture are then generated based on the depth data of each pixel point in the first image and the second image. The two point cloud pictures have an overlapping area, and their coordinate origins are both at the first position, that is, they share the same coordinate origin in the world coordinate system. Because of this shared origin, the first point cloud picture and the second point cloud picture can be directly overlapped according to the first overlapping area (the overlapping area of the two point cloud pictures) to obtain the first spliced point cloud picture. A panoramic point cloud picture is then obtained based on the first spliced point cloud picture and the depth data of each pixel point in the remaining images, and color data is added to the panoramic point cloud picture to obtain the three-dimensional panoramic image. Because the three-dimensional panoramic image contains color data and presents the scene around the vehicle to the user in three-dimensional form, it can guide the user to judge the conditions around the vehicle accurately, improve safety during driving, and thereby meet the user's growing demand for driving safety.
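To make this flow concrete, a hypothetical end-to-end sketch from four synchronized RGB-D captures to a colored panoramic point cloud is given below, reusing the depth_to_point_cloud and switch_origin sketches from earlier; every name and pose here is an illustrative assumption, not the patent's reference implementation.

import numpy as np

def build_colored_panorama(depths, colors, intrinsics, poses, head_pose):
    """depths     : four (H, W) depth images taken at the same moment.
    colors     : four (H, W, 3) color images from the same cameras.
    intrinsics : four 3x3 intrinsic matrices.
    poses      : four (R, t) pairs from joint calibration, mapping each
                 camera into one shared world frame.
    head_pose  : (R, t) re-expressing that frame at the driver's head.
    """
    points, rgb = [], []
    for d, c, K, (R, t) in zip(depths, colors, intrinsics, poses):
        points.append(depth_to_point_cloud(d, K, R, t))   # sketch above
        rgb.append(c.reshape(-1, 3))                      # per-pixel color
    pts = np.vstack(points)
    col = np.vstack(rgb)
    Rh, th = head_pose
    pts = switch_origin(pts, Rh, th)   # origin at the target position
    return pts, col                    # colored panoramic point cloud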
In another aspect, an embodiment of the present application further provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the method for generating a three-dimensional panoramic image provided by the embodiments of the present application is implemented.
In still another aspect, an embodiment of the present application further provides a readable storage medium; when instructions in the readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for generating a three-dimensional panoramic image provided in the foregoing embodiments.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of acts or a combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of the acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, or by hardware. Based on this understanding, the above technical solutions, in essence or in the parts contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the various embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for generating a three-dimensional panoramic image, the method comprising:
acquiring images shot by at least four depth cameras at the same moment and depth data of each pixel point in the images, wherein scenes in the field angles of adjacent depth cameras overlap;
generating a first point cloud picture and a second point cloud picture based on the depth data of each pixel point in a first image and a second image, wherein the first point cloud picture and the second point cloud picture have an overlapping region, and the coordinate origins of both are at a first position;
overlapping the first point cloud picture and the second point cloud picture according to a first overlapping area to obtain a first spliced point cloud picture, wherein the first overlapping area is the overlapping area of the first point cloud picture and the second point cloud picture;
obtaining a panoramic point cloud picture based on the first spliced point cloud picture and the depth data of each pixel point in remaining images, wherein the remaining images are the images of the at least four images other than the first image and the second image;
and generating a three-dimensional panoramic image based on the panoramic point cloud picture and the color data of each pixel point in the at least four images.
2. The method of claim 1, wherein generating the first point cloud picture and the second point cloud picture based on the depth data of each pixel point in the first image and the second image comprises:
mapping each pixel point in the first image to a world coordinate system with the first position as a coordinate origin to obtain the first point cloud picture based on the depth data of each pixel point in the first image;
mapping each pixel point in the second image to a world coordinate system with the first position as a coordinate origin to obtain a second point cloud picture based on the depth data of each pixel point in the second image and first combined calibration data;
wherein the first combined calibration data is obtained by jointly calibrating the depth cameras that shoot the first image and the second image.
3. The method of claim 1, wherein obtaining the panoramic point cloud picture based on the first spliced point cloud picture and the depth data of each pixel point in the remaining images comprises:
generating a third point cloud picture based on the depth data of each pixel point in a third image and second combined calibration data; the second combined calibration data is obtained by jointly calibrating the depth cameras that shoot the first image and the third image; the third point cloud picture and the first spliced point cloud picture have an overlapping area, and the coordinate origin of the third point cloud picture is at the first position;
overlapping the first spliced point cloud picture and the third point cloud picture according to a second overlapping area to obtain a second spliced point cloud picture, wherein the second overlapping area is the overlapping area of the third point cloud picture and the first spliced point cloud picture;
generating a fourth point cloud picture whose coordinate origin is at a target position of a vehicle based on the depth data of each pixel point in a fourth image, wherein the fourth point cloud picture and the second spliced point cloud picture have an overlapping region;
and switching the coordinate origin of the second spliced point cloud picture to the target position, and overlapping the second spliced point cloud picture after the coordinate origin is switched with the fourth point cloud picture according to a third overlapping area to obtain the panoramic point cloud picture, wherein the third overlapping area is the overlapping area of the fourth point cloud picture and the second spliced point cloud picture.
4. The method of claim 1, wherein obtaining the panoramic point cloud picture based on the first spliced point cloud picture and the depth data of each pixel point in the remaining images comprises:
generating a third point cloud picture and a fourth point cloud picture based on the depth data of each pixel point in a third image and a fourth image; the third point cloud picture and the fourth point cloud picture have an overlapping region, and the coordinate origins of both are at a second position;
overlapping the third point cloud picture and the fourth point cloud picture according to a second overlapping area to obtain a second spliced point cloud picture, wherein the second overlapping area is the overlapping area of the third point cloud picture and the fourth point cloud picture;
and switching the coordinate origins of the first spliced point cloud picture and the second spliced point cloud picture to a target position of a vehicle, and overlapping them according to a third overlapping area to obtain the panoramic point cloud picture, wherein the third overlapping area is the overlapping area of the first spliced point cloud picture and the second spliced point cloud picture.
5. The method according to claim 3 or 4, wherein the target position is the position where the head of the driver of the vehicle is located.
6. An apparatus for generating a three-dimensional panoramic image, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring at least four images shot by the at least four depth cameras at the same moment and depth data of each pixel point in the images, and scenes in the field angles of the adjacent depth cameras are overlapped;
the point cloud module is used for generating a first point cloud picture and a second point cloud picture based on the depth data of each pixel point in a first image and a second image, wherein the first point cloud picture and the second point cloud picture have an overlapping area, and the coordinate origins of both are at a first position;
the first splicing module is used for overlapping the first point cloud picture and the second point cloud picture according to a first overlapping area to obtain a first spliced point cloud picture, wherein the first overlapping area is the overlapping area of the first point cloud picture and the second point cloud picture;
the second splicing module is used for obtaining a panoramic point cloud picture based on the first spliced point cloud picture and the depth data of each pixel point in remaining images, wherein the remaining images are the images of the at least four images other than the first image and the second image;
and the panoramic module is used for generating a three-dimensional panoramic image based on the panoramic point cloud picture and the color data of each pixel point in the at least four images.
7. The apparatus of claim 6, wherein the point cloud module comprises:
the first point cloud unit is used for mapping each pixel point in the first image to a world coordinate system with the first position as a coordinate origin to obtain a first point cloud picture based on the depth data of each pixel point in the first image;
the second point cloud unit is used for mapping each pixel point in the second image to a world coordinate system taking the first position as a coordinate origin to obtain a second point cloud picture based on the depth data of each pixel point in the second image and the first combined calibration data;
the first combined calibration data is obtained by jointly calibrating the depth cameras that shoot the first image and the second image.
8. The apparatus of claim 6, wherein the second splicing module comprises:
the third point cloud unit is used for generating a third point cloud picture based on the depth data of each pixel point in the third image and second combined calibration data; the second combined calibration data is obtained by jointly calibrating the depth cameras that shoot the first image and the third image; the third point cloud picture and the first spliced point cloud picture have an overlapping area, and the coordinate origin of the third point cloud picture is at the first position;
the first splicing unit is used for overlapping the first spliced point cloud picture and the third point cloud picture according to a second overlapping area to obtain a second spliced point cloud picture, wherein the second overlapping area is the overlapping area of the third point cloud picture and the first spliced point cloud picture;
the fourth point cloud unit is used for generating a fourth point cloud picture whose coordinate origin is at a target position of a vehicle based on the depth data of each pixel point in the fourth image, wherein the fourth point cloud picture and the second spliced point cloud picture have an overlapping region;
and the second splicing unit is used for switching the coordinate origin of the second spliced point cloud picture to the target position, and overlapping the second spliced point cloud picture after the coordinate origin is switched with the fourth point cloud picture according to a third overlapping area to obtain a panoramic point cloud picture, wherein the third overlapping area is the overlapping area of the fourth point cloud picture and the second spliced point cloud picture.
9. An electronic device, comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the method of generating a three-dimensional panoramic image according to any one of claims 1 to 5 when executing the program.
10. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of generating a three-dimensional panoramic image according to any one of claims 1 to 5.
CN202210854523.7A 2022-07-15 2022-07-15 Three-dimensional panoramic image generation method and device and electronic equipment Pending CN115330888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210854523.7A CN115330888A (en) 2022-07-15 2022-07-15 Three-dimensional panoramic image generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210854523.7A CN115330888A (en) 2022-07-15 2022-07-15 Three-dimensional panoramic image generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115330888A (en) 2022-11-11

Family

ID=83916670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210854523.7A Pending CN115330888A (en) 2022-07-15 2022-07-15 Three-dimensional panoramic image generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115330888A (en)

Similar Documents

Publication Publication Date Title
US11303806B2 (en) Three dimensional rendering for surround view using predetermined viewpoint lookup tables
CN112224132B (en) Vehicle panoramic all-around obstacle early warning method
JP3286306B2 (en) Image generation device and image generation method
CN108765496A (en) A kind of multiple views automobile looks around DAS (Driver Assistant System) and method
JP6079131B2 (en) Image processing apparatus, method, and program
JP2014520337A (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
JP2008085446A (en) Image generator and image generation method
JP2006287892A (en) Driving support system
CN110796711B (en) Panoramic system calibration method and device, computer readable storage medium and vehicle
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
KR20130064169A (en) An apparatus for generating around view image of vehicle using multi look-up table
CN112655024A (en) Image calibration method and device
CN107972585A (en) Scene rebuilding System and method for is looked around with reference to the adaptive 3 D of radar information
WO2017043331A1 (en) Image processing device and image processing method
TW201605247A (en) Image processing system and method
DE112018003270T5 (en) IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM
CN105774657B (en) Single-camera panoramic reverse image system
CN110428361A (en) A kind of multiplex image acquisition method based on artificial intelligence
JP2001256482A (en) Device and method for generating parallax image
JP2006033282A (en) Image forming device and method thereof
CN115330888A (en) Three-dimensional panoramic image generation method and device and electronic equipment
JP6076083B2 (en) Stereoscopic image correction apparatus and program thereof
CN114219895A (en) Three-dimensional visual image construction method and device
CN112215917A (en) Vehicle-mounted panorama generation method, device and system
KR20170077331A (en) Arbitrary View Image Generation Method and System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination