CN115222793A - Method, device and system for generating and displaying depth image and readable medium - Google Patents

Method, device and system for generating and displaying depth image and readable medium

Info

Publication number
CN115222793A
Authority
CN
China
Prior art keywords
eye camera
image
panoramic image
depth
depth image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210841424.5A
Other languages
Chinese (zh)
Inventor
王森 (Wang Sen)
刘阳 (Liu Yang)
罗小伟 (Luo Xiaowei)
林福辉 (Lin Fuhui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN202210841424.5A priority Critical patent/CN115222793A/en
Publication of CN115222793A publication Critical patent/CN115222793A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T3/06
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G06T7/97 Determining parameters from multiple pictures
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

A method, a device, a system and a readable medium for generating and displaying a depth image are provided. The method for generating the depth image includes: acquiring images captured by a left-eye camera and a right-eye camera; generating a stereoscopic panoramic image based on the acquired images; and generating a depth image corresponding to the stereoscopic panoramic image based on an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera. By adding depth information on top of the stereoscopic panoramic video, the method allows real motion parallax to be formed during subsequent display, increasing the user's sense of immersion.

Description

Method, device and system for generating and displaying depth image and readable medium
This application is a divisional application of the patent application filed on December 22, 2017, with application number 201711414436.5 and the title "Method, device and system for generating and displaying depth image, and readable medium".
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a method, a device, a system and a readable medium for generating and displaying a depth image.
Background
Virtual Reality (VR) technology uses a head-mounted display to immerse the user in a virtual environment. Current VR systems provide a high-resolution, low-latency experience and can track the user's head position and orientation in real time.
For a VR scene shot with cameras, the cameras are pointed at different parts of the scene, the images captured by the cameras at the same moment are stitched into a panoramic picture, and the panoramic pictures at different moments are finally combined into a panoramic video. Such a panoramic video is a "pseudo-3D" panoramic video: there is only one spherical-projection (equirectangular) panoramic picture per moment. When a user watches it, the images are processed by an algorithm (for example, a simple left-right translation by a displacement) to produce a "pseudo-3D" effect. Because the user experience of "pseudo-3D" panoramic video is poor and discomfort such as dizziness easily arises, Facebook proposed the Surround360 technology for shooting "true 3D" panoramic video. Surround360 uses a left-eye camera and a right-eye camera to simulate the two eyes and, for each point on a circular ring, generates the content that a virtual camera at that point would capture, producing a panoramic image around the ring. When a user watches, the images generated for the left-eye camera and the right-eye camera only need to be displayed in their corresponding display areas.
A "pseudo-3D" panoramic video synthesized from camera footage has no depth effects such as parallax and occlusion, so the user experience is poor and discomfort such as dizziness easily arises. The "true 3D" panoramic video shot with Surround360, because the two eyes are simulated during capture, produces certain occlusion and left-right parallax effects, and the user experience is better.
Disclosure of Invention
The embodiment of the invention addresses the technical problem of how to form real motion parallax from different viewpoints for captured images, thereby increasing the user's sense of immersion.
In order to solve the above technical problem, an embodiment of the present invention provides a method for generating a depth image, the method including: acquiring images captured by a left-eye camera and a right-eye camera; generating a stereoscopic panoramic image based on the acquired images; and generating a depth image corresponding to the stereoscopic panoramic image based on an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
Optionally, the generating a depth image corresponding to the stereoscopic panoramic image based on an optical flow value of an overlapping area between adjacent images captured by a left-eye camera and a right-eye camera includes: converting world coordinates corresponding to pixels in the stereoscopic panoramic image into camera coordinates corresponding to the left-eye camera and the right-eye camera respectively; calculating the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera; calculating depth values corresponding to the pixels in the stereoscopic panoramic image based on the converted camera coordinates and the optical flow value; and generating the depth image corresponding to the stereoscopic panoramic image based on the calculated depth values.
Optionally, the calculating depth values corresponding to pixels in the stereoscopic panoramic image based on the converted camera coordinates and the optical flow values includes: obtaining the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center of the circle as R and the included angle of the z-axis of the camera coordinate system relative to the x-axis of the world coordinate system as δ, and calculating a first term t1 from them [formula image not reproduced]; obtaining the width W of the panoramic image and calculating a second term t2 from W and the optical flow value ψ [formula images not reproduced]; and calculating the sum of t1 and t2 as the depth value d corresponding to the pixel.
Optionally, the generating a stereoscopic panoramic image based on the acquired images includes: performing spherical or cylindrical projection on the acquired images, that is, converting pixel coordinates corresponding to pixels in the acquired images into spherical or cylindrical coordinates; calculating the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera; for each column of pixels in the overlapping area, generating a left-eye camera panoramic image based on the left-eye camera and its corresponding optical flow value, and generating a right-eye camera panoramic image based on the right-eye camera and its corresponding optical flow value; and synthesizing the left-eye camera panoramic image and the right-eye camera panoramic image based on a fusion algorithm to generate the stereoscopic panoramic image.
Optionally, the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera is calculated based on either of the following algorithms: a phase correlation algorithm or the Lucas-Kanade algorithm.
The embodiment of the invention provides a method for displaying a depth image, including: generating a stereoscopic panoramic image and a depth image corresponding to the stereoscopic panoramic image by adopting any of the above depth image generation methods; reconstructing point cloud images based on the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image, and acquiring a point cloud image corresponding to the left-eye camera panoramic image and a point cloud image corresponding to the right-eye camera panoramic image; projecting the point cloud images to a view plane and warping the depth images corresponding thereto; warping pixels in the panoramic images corresponding to the warped depth images to the view plane to generate a left-eye camera view plane image and a right-eye camera view plane image; and outputting and displaying the view plane image with the smaller corresponding depth value.
Optionally, the view plane is: a plane perpendicular to the gaze direction of the eye.
Optionally, the method for displaying the depth image further includes: compressing the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to generate compressed data of the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image; and decompressing the compressed data to obtain the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image.
An embodiment of the present invention provides a depth image generation device, including: an acquisition unit adapted to acquire images captured by a left-eye camera and a right-eye camera; a first generation unit adapted to generate a stereoscopic panoramic image based on the acquired images; and a second generation unit adapted to generate a depth image corresponding to the stereoscopic panoramic image based on an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
Optionally, the second generating unit includes: a first conversion subunit adapted to convert world coordinates corresponding to pixels in the stereoscopic panoramic image into camera coordinates corresponding to the left-eye camera and the right-eye camera respectively; a first calculating subunit adapted to calculate the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera; a second calculating subunit adapted to calculate depth values corresponding to the pixels in the stereoscopic panoramic image based on the converted camera coordinates and the optical flow value; and a first generating subunit adapted to generate the depth image corresponding to the stereoscopic panoramic image based on the calculated depth values.
Optionally, the second calculating subunit includes: a first calculating module, a second calculating module and a third calculating module, wherein: the first calculating module is adapted to obtain the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center of the circle as R and the included angle of the z-axis of the camera coordinate system relative to the x-axis of the world coordinate system as δ, and to calculate a first term t1 from them [formula image not reproduced]; the second calculating module is adapted to obtain the width W of the panoramic image and to calculate a second term t2 from W and the optical flow value ψ [formula images not reproduced]; and the third calculating module is adapted to calculate the sum of t1 and t2 as the depth value d corresponding to the pixel.
Optionally, the first generating unit includes: a second conversion subunit adapted to perform spherical or cylindrical projection on the acquired images, that is, to convert pixel coordinates corresponding to pixels in the acquired images into spherical or cylindrical coordinates; a first calculating subunit adapted to calculate the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera; a second generating subunit adapted to generate, for each column of pixels of the overlapping area, a left-eye camera panoramic image based on the left-eye camera and its corresponding optical flow value and a right-eye camera panoramic image based on the right-eye camera and its corresponding optical flow value; and a third generating subunit adapted to synthesize the left-eye camera panoramic image and the right-eye camera panoramic image based on a fusion algorithm to generate the stereoscopic panoramic image.
Optionally, the first calculating subunit is adapted to calculate the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera based on either of the following algorithms: a phase correlation algorithm or the Lucas-Kanade algorithm.
An embodiment of the present invention provides a display device for a depth image, including: a third generating unit adapted to generate a stereoscopic panoramic image and its corresponding depth image by using any of the above depth image generation methods; a reconstruction unit adapted to reconstruct point cloud images based on the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image, and to acquire a point cloud image corresponding to the left-eye camera panoramic image and a point cloud image corresponding to the right-eye camera panoramic image; a warping unit adapted to project the point cloud images to a view plane and to warp the depth images corresponding thereto; a fourth generating unit adapted to warp pixels in the panoramic images corresponding to the warped depth images to the view plane to generate a left-eye camera view plane image and a right-eye camera view plane image; and an output unit adapted to output and display the view plane image with the smaller corresponding depth value.
Optionally, the view plane is: a plane perpendicular to the gaze direction of the eye.
Optionally, the display device of the depth image further includes: a compression unit adapted to compress the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to generate compressed data; and a decompression unit adapted to decompress the compressed data to obtain the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image.
The embodiment of the invention provides a computer-readable storage medium on which computer instructions are stored; when executed, the computer instructions perform the steps of any one of the above depth image generation methods.
The embodiment of the invention provides a computer-readable storage medium on which computer instructions are stored; when executed, the computer instructions perform the steps of any one of the above depth image display methods.
The embodiment of the invention provides a depth image generation system, including a memory and a processor, the memory storing computer instructions executable on the processor; when executing the computer instructions, the processor performs the steps of any one of the above depth image generation methods.
The embodiment of the invention provides a depth image display system, including a memory and a processor, the memory storing computer instructions executable on the processor; when executing the computer instructions, the processor performs the steps of any one of the above depth image display methods.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
According to the embodiment of the invention, the depth image corresponding to the stereoscopic panoramic image is generated based on the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera, and depth information is added on top of the stereoscopic panoramic video, so that real motion parallax can be formed from different viewpoints during subsequent display, increasing the user's sense of immersion.
Furthermore, point cloud images are reconstructed based on the generated stereoscopic panoramic video and its corresponding depth images; the point cloud images are then projected onto the view plane and the depth images warped to generate and output view plane images, so that different viewpoints can be generated according to the position of the head, forming real motion parallax and increasing the user's sense of immersion.
Drawings
Fig. 1 is a detailed flowchart of a depth image generation method according to an embodiment of the present invention;
FIG. 2 is a top view of a camera coordinate system and a world coordinate system provided by an embodiment of the invention;
fig. 3 is a detailed flowchart of a method for displaying a depth image according to an embodiment of the present invention;
FIG. 4 is a detailed flow chart of an image capture and display method provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a depth image generating apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a display device for depth images according to an embodiment of the present invention.
Detailed Description
In the prior art, a "pseudo-3D" panoramic video shot and synthesized by cameras has no depth effects such as parallax and occlusion, so the user experience is poor and discomfort such as dizziness easily arises. The "true 3D" panoramic video shot with Surround360, because the two eyes are simulated during capture, produces certain occlusion and left-right parallax effects, and the user experience is better.
According to the embodiment of the invention, the depth image corresponding to the stereoscopic panoramic image is generated based on the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera, and depth information is added on top of the stereoscopic panoramic video, so that real motion parallax can be formed from different viewpoints during subsequent display, increasing the user's sense of immersion.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, an embodiment of the present invention provides a method for generating a depth image, which may include the following steps:
in step S101, images captured by the left-eye camera and the right-eye camera are acquired.
In a specific implementation, because the "pseudo-3D" panoramic video gives a poor user experience and easily causes vertigo, Facebook proposed the Surround360 technology for shooting "true 3D" panoramic video. Surround360 uses a left-eye camera and a right-eye camera to simulate the two eyes and, for each point on a circular ring, generates the content that a virtual camera at that point would capture, producing a panoramic image around the ring. The depth image can be generated based on the images captured by the left-eye camera and the right-eye camera of the Surround360 rig.
In step S102, a stereoscopic panoramic image is generated based on the acquired images.
In a specific implementation, a stereoscopic panoramic image may be generated based on the acquired images captured by the left and right eye cameras.
In a specific implementation, since the pixels of the acquired images captured by the left-eye camera and the right-eye camera correspond to pixel coordinates, the acquired images need to be spherically or cylindrically projected, that is, the pixel coordinates corresponding to pixels in the acquired images are mapped to spherical or cylindrical coordinates. The optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera is then calculated based on the spherical or cylindrical coordinates. For each column of pixels of the overlapping area, a new virtual camera output is generated based on the left-eye camera and its corresponding optical flow value to produce the left-eye camera panoramic image, and a new virtual camera output is generated based on the right-eye camera and its corresponding optical flow value to produce the right-eye camera panoramic image. Finally, the left-eye camera panoramic image and the right-eye camera panoramic image are synthesized based on a fusion algorithm to generate the stereoscopic panoramic image.
In one embodiment of the present invention, the conversion from spherical coordinates to Cartesian coordinates is as follows:
x = r sinθ cosφ
y = r sinθ sinφ
z = r cosθ (1)
where (x, y, z) are the Cartesian coordinates, (r, θ, φ) are the spherical coordinates, r is the radius of the sphere, θ is the zenith (polar) angle, and φ is the azimuthal angle.
From the camera extrinsic parameters, the conversion from world coordinates to camera coordinates is as follows:
Xc = R1·X + T (2)
where Xc are the camera coordinates, X are the world coordinates, R1 is the rotation matrix, and T is the translation vector.
Further, from the camera intrinsic parameters, the pixel coordinates are calculated as follows:
x′ = x/z
y′ = y/z
u = fx·x′ + cx
v = fy·y′ + cy (3)
where (u, v) are the pixel coordinates, fx and fy are the focal lengths in the horizontal and vertical directions respectively, and cx and cy are the horizontal and vertical offsets of the principal point (the imaging point of the optical center) relative to the image origin.
Based on the formulas (1), (2), and (3), pixel coordinates corresponding to pixels in the acquired captured image can be converted to spherical coordinates.
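To make the coordinate chain concrete, the following is a minimal Python sketch that maps a pixel onto the unit sphere by inverting formulas (3), (2) and (1) in turn. The calibration values (fx, fy, cx, cy, R1) are illustrative placeholders rather than values from the patent; in practice they come from camera calibration.

```python
import numpy as np

# Illustrative calibration values only; real ones come from calibration.
fx, fy = 500.0, 500.0        # focal lengths in pixels (formula (3))
cx, cy = 320.0, 240.0        # principal point offsets (formula (3))
R1 = np.eye(3)               # world-to-camera rotation matrix (formula (2))

def pixel_to_spherical(u, v):
    """Pixel (u, v) -> camera ray -> world direction -> spherical angles."""
    # Invert formula (3): pixel to normalized camera coordinates.
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Invert the rotation of formula (2); the translation T does not
    # apply to a pure direction vector.
    x, y, z = (R1.T @ ray_cam) / np.linalg.norm(ray_cam)
    # Invert formula (1) on the unit sphere (r = 1).
    theta = np.arccos(z)      # zenith (polar) angle
    phi = np.arctan2(y, x)    # azimuthal angle
    return theta, phi

print(pixel_to_spherical(100, 120))
```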
Spherical projection can provide an immersive 360x180-degree experience; cylindrical projection cannot cover the top and bottom of the scene, so its experience is inferior to that of spherical projection.
In particular implementations, the optical flow values of the overlap region may be calculated based on a phase correlation algorithm or the Lucas-Kanade algorithm.
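As an illustration of this step, the sketch below estimates the motion of the overlap strip between two adjacent camera images with OpenCV. The overlap width and the flow parameters are assumptions; the patent names Lucas-Kanade, which OpenCV exposes as a sparse tracker, so the dense gradient-based Farneback variant stands in for it here.

```python
import cv2
import numpy as np

def overlap_optical_flow(img_a, img_b, overlap_px):
    """Flow of the assumed overlap: img_a's right edge vs img_b's left edge.
    img_a, img_b: grayscale uint8 images from adjacent cameras."""
    strip_a = img_a[:, -overlap_px:]
    strip_b = img_b[:, :overlap_px]

    # Phase correlation: one global (dx, dy) shift for the whole strip.
    (dx, dy), _response = cv2.phaseCorrelate(
        strip_a.astype(np.float32), strip_b.astype(np.float32))

    # Dense per-pixel flow field, shape (H, overlap_px, 2).
    flow = cv2.calcOpticalFlowFarneback(
        strip_a, strip_b, None,
        0.5,   # pyr_scale
        3,     # pyramid levels
        15,    # window size
        3,     # iterations
        5,     # poly_n
        1.2,   # poly_sigma
        0)     # flags
    return (dx, dy), flow
```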
In a specific implementation, since the number of real cameras is limited, continuous virtual cameras need to be generated between the real left-eye camera and the real right-eye camera in order to produce a panoramic image. Moreover, because the image resolution is limited, each column of the panoramic image can be taken as the output of one virtual camera.
In an embodiment of the present invention, let the phase angle of a column in the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera be ξ, and let the phase angles corresponding to the positions of the optical centers of the left-eye camera and the right-eye camera be α1 and α2 respectively; the optical flow value ψ1 corresponding to the left-eye camera and the optical flow value ψ2 corresponding to the right-eye camera are then given by formula (4) [formula image not reproduced].
in a specific implementation, the left-eye camera panoramic image and the right-eye camera panoramic image may be subjected to distance-based alpha fusion to generate a stereoscopic panoramic image.
In step S103, the depth image corresponding to the stereoscopic panoramic image is generated based on the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
In a specific implementation, acquiring depth values based on methods such as structured light, Time of Flight (ToF) and laser scanning requires additional equipment, which complicates the structural design, raises cost and reduces portability. Calculating the depth value from the optical flow value therefore requires no additional equipment and can effectively reduce cost and design complexity.
In an embodiment of the present invention, the world coordinates corresponding to pixels in the stereoscopic panoramic image are first converted into the camera coordinates corresponding to the left-eye camera and the right-eye camera respectively; the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera is then calculated; next, the depth values corresponding to the pixels in the stereoscopic panoramic image are calculated based on the converted camera coordinates and the optical flow value, and the depth image corresponding to the stereoscopic panoramic image is generated based on the calculated depth values.
In a specific implementation, the step of calculating the optical flow value of the overlapping area between the adjacent images captured by the left-eye camera and the right-eye camera may refer to the description in step S102, and details are not repeated here.
To enable those skilled in the art to better understand and implement the present invention, embodiments of the present invention provide a top view of a camera coordinate system and a world coordinate system, as shown in fig. 2.
Referring to fig. 2, the coordinate system of the left-eye camera C1 is x1y1z1, that of the right-eye camera C2 is x2y2z2, and the world coordinate system with the center of the ring as origin is xyz; the included angle of the z-axis of the left-eye or right-eye camera coordinate system relative to the x-axis of the world coordinate system is δ, and the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center is R.
For a pixel in the panoramic image with corresponding point P(x, y, z) in the world coordinate system, the corresponding coordinates in the left-eye and right-eye camera coordinate systems are given by formulas (5) and (6) respectively:
x1 = x sinδ - y cosδ
y1 = z
z1 = -x cosδ - y sinδ + R (5)
x2 = -x sinδ - y cosδ
y2 = z
z2 = -x cosδ + y sinδ + R (6)
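For illustration, formulas (5) and (6) can be evaluated directly; the ring radius, angle and test point below are arbitrary example values.

```python
import numpy as np

def world_to_eye_cameras(P, R, delta):
    """Formulas (5) and (6): world point P = (x, y, z) expressed in the
    left-eye and right-eye camera coordinate systems, for ring radius R
    and camera angle delta (radians)."""
    x, y, z = P
    s, c = np.sin(delta), np.cos(delta)
    left = np.array([x * s - y * c, z, -x * c - y * s + R])    # (x1, y1, z1)
    right = np.array([-x * s - y * c, z, -x * c + y * s + R])  # (x2, y2, z2)
    return left, right

# A point 5 m along the world x-axis; cameras at delta = 30 degrees.
print(world_to_eye_cameras((5.0, 0.0, 0.0), R=0.15, delta=np.radians(30)))
```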
According to the camera model, the relationship of formula (7) holds [formula image not reproduced]. From formulas (5), (6) and (7), the relationship of formula (8) can be derived, and combining it with the relationship of formula (9) [formula images not reproduced], in an actual scene where x is much larger than y and z, the depth value d may be calculated according to formula (10) [formula image not reproduced], where W is the width of the panoramic image and ψ is the optical flow value ψ1 corresponding to the left-eye camera or the optical flow value ψ2 corresponding to the right-eye camera.
In an embodiment of the present invention, the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center of the circle may be obtained as R and the included angle of the z-axis of the camera coordinate system relative to the x-axis of the world coordinate system as δ, and a first term t1 calculated from them [formula image not reproduced]; the width W of the panoramic image is then obtained and a second term t2 calculated from W and the optical flow value ψ [formula images not reproduced]; finally, the sum of t1 and t2 is calculated as the depth value d corresponding to the pixel.
By applying the method, the depth image corresponding to the stereoscopic panoramic image is generated based on the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera, and depth information is added on top of the stereoscopic panoramic video, so that real motion parallax can be formed from different viewpoints during subsequent display, increasing the user's sense of immersion.
In order to make the present invention more understandable and practical for those skilled in the art, an embodiment of the present invention provides a method for displaying a depth image, as shown in fig. 3.
Referring to fig. 3, the method for displaying the depth image may include the steps of:
step S301 is to generate a stereoscopic panoramic image and a depth image corresponding to the stereoscopic panoramic image by using any one of the above-described depth image generation methods.
In a specific implementation, any one of the above methods for generating a depth image may be used to generate a stereoscopic panoramic image and a depth image corresponding to the stereoscopic panoramic image, which is not described herein again.
In step S302, point cloud images are reconstructed based on the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image, and a point cloud image corresponding to the left-eye camera panoramic image and a point cloud image corresponding to the right-eye camera panoramic image are acquired.
In a specific implementation, a point cloud image may be reconstructed based on the stereoscopic panorama image and its corresponding depth image.
In an embodiment of the present invention, each pixel p(h, w) in the panoramic image, where h and w are the pixel indices, corresponds to a point S on the imaging sphere [formula image not reproduced]. Its world coordinates are calculated from OP = (d/f)·OS, where O is the origin of the spherical coordinate system, OS denotes the vector pointing from the origin to S, d is the depth value for p(h, w), and f is the radius of the sphere.
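A sketch of this back-projection, assuming an equirectangular parameterization in which row h maps to the zenith angle and column w to the azimuth (the patent's exact sphere mapping is in the formula image that is not reproduced here):

```python
import numpy as np

def panorama_to_point_cloud(pano_rgb, depth):
    """Push every panorama pixel along its viewing ray to its depth:
    OP = (d / f) * OS, with |OS| = f, i.e. P = d * unit_direction."""
    H, W = depth.shape
    h, w = np.mgrid[0:H, 0:W]
    theta = np.pi * (h + 0.5) / H              # zenith angle per row
    phi = 2.0 * np.pi * (w + 0.5) / W          # azimuth per column
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)  # unit vectors OS / f
    points = depth[..., None] * dirs
    return points.reshape(-1, 3), pano_rgb.reshape(-1, 3)
```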
In step S303, the point cloud images are projected to a view plane, and the depth images corresponding thereto are warped.
In a specific implementation, for a given eye position, the plane perpendicular to its gaze direction is the view plane of that viewpoint.
Because the embodiment of the invention supports head rotation, the positions of the eyes are variable, as long as they remain within the supported range of head movement; the center of the line connecting the two eyes need not coincide with the center of the circle.
In step S304, the pixels in the panoramic images corresponding to the warped depth images are warped to the view plane, and a left-eye camera view plane image and a right-eye camera view plane image are generated.
In step S305, the view plane image with the smaller corresponding depth value is output and displayed.
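The sketch below combines steps S303 to S305 for one eye: the point cloud is projected through a hypothetical pinhole camera on the view plane, and for each pixel the point with the smaller depth value is kept. The intrinsic matrix K and the pose (R, t) are assumptions standing in for the tracked eye position, not parameters from the patent.

```python
import numpy as np

def render_view_plane(points, colors, K, R, t, height, width):
    """Forward-warp a colored point cloud to a view plane with a depth
    test that keeps the nearer point per pixel."""
    cam = (R @ points.T).T + t                  # world -> eye coordinates
    keep = cam[:, 2] > 1e-6                     # points in front of the eye
    cam, cols = cam[keep], colors[keep]
    proj = (K @ cam.T).T
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, cols = u[ok], v[ok], cam[ok, 2], cols[ok]

    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    for ui, vi, zi, ci in zip(u, v, z, cols):   # explicit z-buffer test
        if zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi
            image[vi, ui] = ci
    return image
```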
In a specific implementation, step S301 and steps S302 to S305 may be implemented in different modules; for example, step S301 may be implemented in a shooting module and steps S302 to S305 in a display module, with the shooting module and the display module communicating through an interface or a transmission line. When step S301 and steps S302 to S305 are implemented in different modules, the following may further be performed between step S301 and step S302:
in the module corresponding to step S301, compressing the stereoscopic panoramic image and the depth image corresponding thereto, and generating compressed data of the stereoscopic panoramic image and the depth image corresponding thereto;
transmitting the compressed data of the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to the module corresponding to the step S302 through an interface or a transmission line between the module corresponding to the step S301 and the module corresponding to the step S302;
in the module corresponding to step S302, the compressed data of the stereoscopic panoramic image and the depth image corresponding thereto are decompressed, and the stereoscopic panoramic image and the depth image corresponding thereto are obtained again.
In particular implementations, to reduce the transmission overhead, the depth image may be compressed using a compression algorithm with a higher compression ratio.
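One plausible realization, assuming the depth map is quantized to 16 bits and stored as lossless PNG; the patent does not name a specific codec.

```python
import cv2
import numpy as np

def compress_depth(depth_m, max_depth=100.0):
    """Quantize metric depth to uint16 and PNG-encode it losslessly."""
    q = np.clip(depth_m / max_depth, 0.0, 1.0)
    depth16 = (q * 65535.0).astype(np.uint16)
    ok, buf = cv2.imencode(".png", depth16)
    assert ok
    return buf.tobytes()
```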
By applying the method, point cloud images are reconstructed based on the generated stereoscopic panoramic video and its corresponding depth images; the point cloud images are then projected onto the view plane and the depth images warped to generate and output view plane images, so that different viewpoints can be generated according to the position of the head, forming real motion parallax and increasing the user's sense of immersion.
In order to make the present invention more understandable and practical for those skilled in the art, an embodiment of the present invention further provides an image capturing and displaying method, as shown in fig. 4.
Referring to fig. 4, the image photographing and displaying method includes the steps of:
step S401 generates a stereoscopic panoramic image.
Step S402, generating a depth image corresponding to the stereoscopic panoramic image.
Step S403, compressing the stereoscopic panoramic image and the depth image corresponding thereto, generating a stereoscopic panoramic image and depth image compression data corresponding thereto, and transmitting the stereoscopic panoramic image and the depth image compression data to a display module.
In a specific implementation, the steps S401, S402 and S403 may be executed in a shooting module.
Step S404, decompressing the stereoscopic panoramic image and the depth image compression data corresponding thereto, and generating the stereoscopic panoramic image and the depth image corresponding thereto.
In step S405, point cloud reconstruction is carried out based on the stereoscopic panoramic image and its corresponding depth image.
And step S406, generating a new viewpoint image based on the point cloud picture, and outputting and displaying the new viewpoint image.
In a specific implementation, the steps S404, S405 and S406 may be executed in a display module.
In order to make those skilled in the art better understand and implement the present invention, the embodiment of the present invention further provides a device capable of implementing the above depth image generation method, as shown in fig. 5.
Referring to fig. 5, the depth image generation apparatus 50 includes: an acquisition unit 51, a first generation unit 52, and a second generation unit 53, wherein:
the acquiring unit 51 is adapted to acquire images captured by the left-eye camera and the right-eye camera.
The first generating unit 52 is adapted to generate a stereoscopic panorama image based on the acquired photographed image.
The second generating unit 53 is adapted to generate the depth image corresponding to the stereoscopic panoramic image based on the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
In a specific implementation, the second generating unit 53 includes: a first conversion subunit (not shown), a first calculation subunit (not shown), a second calculation subunit (not shown), and a first generation subunit (not shown), wherein:
the first conversion subunit is adapted to convert world coordinates corresponding to pixels in the stereoscopic panoramic image into camera coordinates corresponding to a left-eye camera and a right-eye camera, respectively.
The first calculating subunit is adapted to calculate the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
The second calculating subunit is adapted to calculate depth values corresponding to pixels in the stereoscopic panoramic image based on the converted camera coordinates and the optical flow values.
The first generating subunit is adapted to generate a depth image corresponding to the stereoscopic panorama image based on the calculated depth value.
In an embodiment of the present invention, the second calculating subunit includes: a first calculating module (not shown), a second calculating module (not shown), and a third calculating module (not shown), wherein:
the first calculating module is adapted to obtain the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center of the circle as R and the included angle of the z-axis of the camera coordinate system relative to the x-axis of the world coordinate system as δ, and to calculate a first term t1 from them [formula image not reproduced].
The second calculating module is adapted to obtain the width W of the panoramic image and to calculate a second term t2 from W and the optical flow value ψ [formula images not reproduced].
And the third calculating module is adapted to calculate the sum of t1 and t2 as the depth value d corresponding to the pixel.
In a specific implementation, the first generating unit 52 includes: a second conversion subunit (not shown), a first calculation subunit (not shown), a second generation subunit (not shown), and a third generation subunit (not shown), wherein:
the second conversion subunit is adapted to perform spherical or cylindrical projection on the acquired photographed image, that is, to convert pixel coordinates corresponding to pixels in the acquired photographed image into spherical coordinates or cylindrical coordinates.
The first calculating subunit is adapted to calculate the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
The second generation subunit is adapted to generate, for each column of pixels of the overlap area, a left-eye camera panorama based on a left-eye camera and its corresponding optical flow value, and a right-eye camera panorama based on a right-eye camera and its corresponding optical flow value.
And the third generating subunit is adapted to synthesize the left-eye camera panoramic image and the right-eye camera panoramic image based on a fusion algorithm to generate the stereoscopic panoramic image.
In an embodiment of the present invention, the first calculating subunit is adapted to calculate the optical flow value of the overlapping area between adjacent images captured by the left-eye camera and the right-eye camera based on either of the following algorithms: a phase correlation algorithm or the Lucas-Kanade algorithm.
In a specific implementation, the workflow and the principle of the generating device 50 may refer to descriptions in the methods provided in the foregoing embodiments, and are not described herein again.
In order to make those skilled in the art better understand and implement the present invention, the embodiment of the present invention further provides a device capable of implementing the above-mentioned depth image display method, as shown in fig. 6.
Referring to fig. 6, the display device 60 of the depth image includes: a third generation unit 61, a reconstruction unit 62, a warping unit 63, a fourth generation unit 64 and an output unit 65, wherein:
the third generating unit 61 is adapted to generate a stereoscopic panoramic image and a corresponding depth image by using any of the above depth image generating methods.
The reconstructing unit 62 is adapted to reconstruct a point cloud image based on the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image, and acquire a point cloud image corresponding to the left-eye camera panoramic image and a point cloud image corresponding to the right-eye camera panoramic image.
The warping unit 63 is adapted to project the point cloud images to a viewing plane and warp the depth images corresponding thereto.
The fourth generating unit 64 is adapted to warp the pixels in the panoramic image corresponding to the warped depth image to the viewing plane, and generate a left-eye camera viewing plane image and a right-eye camera viewing plane image.
The output unit 65 is adapted to output and display the view plane image with the smaller corresponding depth value.
In a specific implementation, the view plane is: a plane perpendicular to the gaze direction of the eye.
In a specific implementation, the display device 60 may further include: a compression unit (not shown) and a decompression unit (not shown), wherein:
the compression unit is suitable for compressing the three-dimensional panoramic image and the depth image corresponding to the three-dimensional panoramic image to generate compressed data of the three-dimensional panoramic image and the depth image corresponding to the three-dimensional panoramic image.
The decompression unit is suitable for decompressing the compressed data of the three-dimensional panoramic image and the depth image corresponding to the three-dimensional panoramic image to obtain the three-dimensional panoramic image and the depth image corresponding to the three-dimensional panoramic image.
In a specific implementation, the working procedure and principle of the display device 60 may refer to the description in the method provided in the foregoing embodiment, and are not described herein again.
An embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium is a non-volatile storage medium or a non-transitory storage medium, and a computer instruction is stored on the computer-readable storage medium, and when the computer instruction runs, the step corresponding to any one of the depth image generation methods is executed, which is not described herein again.
An embodiment of the present invention provides a computer-readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium, and on which a computer instruction is stored, where the computer instruction executes, when running, steps corresponding to any one of the depth image display methods described above, and details are not repeated here.
The embodiment of the present invention provides a depth image generation system, which includes a memory and a processor, where the memory stores a computer instruction capable of being executed on the processor, and the processor executes, when executing the computer instruction, a step corresponding to any one of the depth image generation methods, which is not described herein again.
The embodiment of the invention provides a depth image display system, which comprises a memory and a processor, wherein a computer instruction capable of being operated on the processor is stored in the memory, and when the processor operates the computer instruction, the corresponding steps of any one of the depth image display methods are executed, which is not described herein again.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic disks or optical disks, and the like.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (16)

1. A method for displaying a depth image, comprising:
generating a stereoscopic panoramic image and a depth image corresponding to the stereoscopic panoramic image by adopting a depth image generation method;
reconstructing point cloud images based on the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image, and acquiring a point cloud image corresponding to the left-eye camera panoramic image and a point cloud image corresponding to the right-eye camera panoramic image;
projecting the point cloud images to a view plane and warping the depth images corresponding thereto;
warping pixels in the panoramic images corresponding to the warped depth images to the view plane to generate a left-eye camera view plane image and a right-eye camera view plane image;
outputting and displaying the view plane image with the smaller corresponding depth value;
wherein the depth image generation method comprises the following steps: acquiring images captured by a left-eye camera and a right-eye camera; generating the stereoscopic panoramic image based on the acquired images; and generating the depth image corresponding to the stereoscopic panoramic image based on an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
2. The method for displaying a depth image according to claim 1, wherein the view plane is: a plane perpendicular to the gaze direction of the eye.
3. The method for displaying a depth image according to claim 1, further comprising:
compressing the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to generate compressed data of the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image;
and decompressing the compressed data of the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to obtain the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image.
4. The method for displaying the depth image according to claim 1, wherein the generating the depth image corresponding to the stereoscopic panorama image based on the optical flow value of the overlapping area between the adjacent images captured by the left-eye camera and the right-eye camera comprises:
converting world coordinates corresponding to pixels in the stereoscopic panoramic image into camera coordinates corresponding to a left-eye camera and a right-eye camera respectively;
calculating an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera;
calculating a depth value corresponding to a pixel in the stereoscopic panoramic image based on the converted camera coordinates and the optical flow value;
and generating a depth image corresponding to the stereoscopic panoramic image based on the calculated depth value.
5. The method for displaying the depth image according to claim 1, wherein the calculating the depth value corresponding to the pixel in the stereoscopic panoramic image based on the converted camera coordinates and the optical flow value comprises:
obtaining the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center of the circle as R and the included angle of the z-axis of the camera coordinate system relative to the x-axis of the world coordinate system as δ, and calculating a first term t1 from them [formula image not reproduced];
obtaining the width W of the panoramic image and calculating a second term t2 from W and the optical flow value ψ [formula images not reproduced];
and calculating the sum of t1 and t2 as the depth value d corresponding to the pixel.
6. The method for displaying a depth image according to claim 1, wherein the generating a stereoscopic panorama image based on the acquired photographed image includes:
performing spherical or cylindrical projection on the acquired shot image, namely converting pixel coordinates corresponding to pixels in the acquired shot image into spherical coordinates or cylindrical coordinates;
calculating an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera;
for each column of pixels in the overlapping area, generating a left-eye camera panoramic image based on a left-eye camera and an optical flow value corresponding to the left-eye camera, and generating a right-eye camera panoramic image based on a right-eye camera and an optical flow value corresponding to the right-eye camera;
and synthesizing the left-eye camera panoramic image and the right-eye camera panoramic image based on a fusion algorithm to generate the stereoscopic panoramic image.
7. The method for displaying a depth image according to claim 4 or 6, wherein the optical flow value of the overlapping area between the adjacent images captured by the left-eye camera and the right-eye camera is calculated based on either of the following algorithms: a phase correlation algorithm or the Lucas-Kanade algorithm.
8. A display device for a depth image, comprising:
a third generating unit, adapted to generate a stereoscopic panoramic image and a depth image corresponding to the stereoscopic panoramic image by adopting a depth image generation method;
a reconstruction unit, adapted to reconstruct point cloud images based on the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image, and acquire a point cloud image corresponding to the left-eye camera panoramic image and a point cloud image corresponding to the right-eye camera panoramic image;
a warping unit, adapted to project the point cloud images to a view plane and warp the depth images corresponding thereto;
a fourth generating unit, adapted to warp pixels in the panoramic images corresponding to the warped depth images to the view plane to generate a left-eye camera view plane image and a right-eye camera view plane image;
an output unit, adapted to output and display the view plane image with the smaller corresponding depth value;
wherein the third generating unit includes: an acquisition unit adapted to acquire images captured by the left-eye camera and the right-eye camera; a first generation unit adapted to generate the stereoscopic panoramic image based on the acquired images; and a second generation unit adapted to generate the depth image corresponding to the stereoscopic panoramic image based on the optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera.
9. The display device according to claim 8, wherein the view plane is: a plane perpendicular to the gaze direction of the eye.
10. The apparatus for displaying a depth image according to claim 8, further comprising:
a compression unit, adapted to compress the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to generate compressed data of the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image;
and a decompression unit, adapted to decompress the compressed data of the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image to obtain the stereoscopic panoramic image and the depth image corresponding to the stereoscopic panoramic image.
11. The display device of the depth image according to claim 8, wherein the second generation unit includes:
the first conversion subunit is suitable for converting world coordinates corresponding to pixels in the stereoscopic panoramic image into camera coordinates corresponding to a left-eye camera and a right-eye camera respectively;
a first calculating subunit, adapted to calculate an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera;
a second calculating subunit adapted to calculate depth values corresponding to pixels in the stereoscopic panorama image based on the converted camera coordinates and the optical flow values;
a first generating subunit, adapted to generate a depth image corresponding to the stereoscopic panorama image based on the calculated depth value.
12. The display device of the depth image according to claim 8, wherein the second calculating subunit includes: a first calculating module, a second calculating module, and a third calculating module, wherein:
the first calculating module is adapted to obtain the radius of the circle formed by the left-eye camera and the right-eye camera relative to the center of the circle as R and the included angle of the z-axis of the camera coordinate system relative to the x-axis of the world coordinate system as δ, and to calculate a first term t1 from them [formula image not reproduced];
the second calculating module is adapted to obtain the width W of the panoramic image and to calculate a second term t2 from W and the optical flow value ψ [formula images not reproduced];
and the third calculating module is adapted to calculate the sum of t1 and t2 as the depth value d corresponding to the pixel.
13. The display device of the depth image according to claim 8, wherein the first generating unit includes:
a second conversion subunit, adapted to perform spherical or cylindrical projection on the acquired images, namely converting pixel coordinates corresponding to pixels in the acquired images into spherical coordinates or cylindrical coordinates;
a first calculating subunit, adapted to calculate an optical flow value of an overlapping area between adjacent images captured by the left-eye camera and the right-eye camera;
a second generating subunit, adapted to generate, for each column of pixels of the overlapping area, a left-eye camera panoramic image based on the left-eye camera and its corresponding optical flow value, and a right-eye camera panoramic image based on the right-eye camera and its corresponding optical flow value;
and a third generating subunit, adapted to synthesize the left-eye camera panoramic image and the right-eye camera panoramic image based on a fusion algorithm to generate the stereoscopic panoramic image.
14. The display device of the depth image according to claim 11 or 13, wherein the first calculating subunit is adapted to calculate the optical flow value of the overlapping area between the adjacent images captured by the left-eye camera and the right-eye camera based on either of the following algorithms: a phase correlation algorithm or the Lucas-Kanade algorithm.
15. A computer readable storage medium having computer instructions stored thereon for execution by a processor to perform the steps of the method of any one of claims 1 to 7.
16. A display system for depth images, comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the method of any one of claims 1 to 7.
CN202210841424.5A 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium Pending CN115222793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210841424.5A CN115222793A (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711414436.5A CN109961395B (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium
CN202210841424.5A CN115222793A (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201711414436.5A Division CN109961395B (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium

Publications (1)

Publication Number Publication Date
CN115222793A true CN115222793A (en) 2022-10-21

Family

ID=67020271

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210841424.5A Pending CN115222793A (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium
CN201711414436.5A Active CN109961395B (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201711414436.5A Active CN109961395B (en) 2017-12-22 2017-12-22 Method, device and system for generating and displaying depth image and readable medium

Country Status (1)

Country Link
CN (2) CN115222793A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192312B (en) * 2019-12-04 2023-12-26 中广核工程有限公司 Depth image acquisition method, device, equipment and medium based on deep learning
CN113891061B (en) * 2021-11-19 2022-09-06 深圳市易快来科技股份有限公司 Naked eye 3D display method and display equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103974055B (en) * 2013-02-06 2016-06-08 城市图像科技有限公司 3D photo generation system and method
EP3080986A4 (en) * 2013-12-13 2017-11-22 8702209 Canada Inc. Systems and methods for producing panoramic and stereoscopic videos
CN105225241B (en) * 2015-09-25 2017-09-15 广州极飞科技有限公司 The acquisition methods and unmanned plane of unmanned plane depth image
CN106060523B (en) * 2016-06-29 2019-06-04 北京奇虎科技有限公司 Panoramic stereo image acquisition, display methods and corresponding device

Also Published As

Publication number Publication date
CN109961395A (en) 2019-07-02
CN109961395B (en) 2022-10-11

Similar Documents

Publication Publication Date Title
US20230328220A1 (en) System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
KR102054363B1 (en) Method and system for image processing in video conferencing for gaze correction
US20180192033A1 (en) Multi-view scene flow stitching
TWI619091B (en) Panorama image compression method and device
JP6406853B2 (en) Method and apparatus for generating optical field images
Thatte et al. Depth augmented stereo panorama for cinematic virtual reality with head-motion parallax
JP2009211335A (en) Virtual viewpoint image generation method, virtual viewpoint image generation apparatus, virtual viewpoint image generation program, and recording medium from which same recorded program can be read by computer
JP2010250452A (en) Arbitrary viewpoint image synthesizing device
US11812009B2 (en) Generating virtual reality content via light fields
KR101586249B1 (en) Apparatus and method for processing wide viewing angle image
KR20190062102A (en) Method and apparatus for operating 2d/3d augument reality technology
WO2018121401A1 (en) Splicing method for panoramic video images, and panoramic camera
US20200145695A1 (en) Apparatus and method for decoding a panoramic video
Lin et al. A low-cost portable polycamera for stereoscopic 360 imaging
CN109961395B (en) Method, device and system for generating and displaying depth image and readable medium
TWI615808B (en) Image processing method for immediately producing panoramic images
WO2018052100A1 (en) Image processing device, image processing method, and image processing program
JP2008217593A (en) Subject area extraction device and subject area extraction program
US10757345B2 (en) Image capture apparatus
JP5759439B2 (en) Video communication system and video communication method
CN104463958A (en) Three-dimensional super-resolution method based on disparity map fusing
CN114513646A (en) Method and device for generating panoramic video in three-dimensional virtual scene
Katayama et al. A method for converting three-dimensional models into auto-stereoscopic images based on integral photography
KR102558294B1 (en) Device and method for capturing a dynamic image using technology for generating an image at an arbitray viewpoint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination