CN114339191B - Naked eye three-dimensional display method based on multi-viewpoint reconstruction - Google Patents

Naked eye three-dimensional display method based on multi-viewpoint reconstruction


Publication number
CN114339191B
Authority
CN
China
Prior art keywords
layer
viewpoint
image
screen
display device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111252954.8A
Other languages
Chinese (zh)
Other versions
CN114339191A (en)
Inventor
戴天翊
夏军
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202111252954.8A priority Critical patent/CN114339191B/en
Publication of CN114339191A publication Critical patent/CN114339191A/en
Application granted granted Critical
Publication of CN114339191B publication Critical patent/CN114339191B/en


Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a naked eye three-dimensional display method based on multi-viewpoint reconstruction. A set of dense viewpoint images is acquired, and the display end uses several screens arranged in parallel, with a number of convergence viewpoints set in front of the screens. Each convergence viewpoint has one ray corresponding to each pixel on the last screen layer. When a ray passes through the display end, it forms an intersection point with each screen layer, and the brightness and color information at the intersection points formed by each ray is superimposed along the ray's direction. Each ray therefore reconstructs the brightness and color information of one pixel, and the rays at one convergence viewpoint reconstruct the brightness and color information of the two-dimensional image corresponding to a single viewpoint. The resolution of the viewed image is consistent with the resolution of the multi-layer screen, so the brightness and color information reconstructed at a convergence viewpoint is consistent with the sub-image information corresponding to the position of that convergence viewpoint.

Description

Naked eye three-dimensional display method based on multi-viewpoint reconstruction
Technical Field
The invention relates to a naked eye three-dimensional display technology, in particular to a naked eye three-dimensional display method based on multi-view reconstruction.
Background
Conventional three-dimensional display technologies mainly use binocular parallax and persistence of vision to give the human eye a three-dimensional image. However, these technologies display objects on a display screen, so the position of the object perceived by monocular accommodation lies in a two-dimensional plane, while the position on which binocular convergence focuses is a three-dimensional object in space. For the human eye, accommodation and vergence do not occur independently: they are directly correlated, any given accommodation state corresponding to a vergence state and vice versa. On such displays the two cues disagree, which causes the vergence-accommodation conflict.
Multi-layer light field display refers to the reconstruction of a spatial light field of a three-dimensional object to be displayed using a plurality of display planes. A light field refers to the collection of rays in all directions where light passes through any point in space. Generally, the light field is represented by two parallel planes, wherein the propagation direction of a ray is determined by two points in the two planes.
Existing multi-layer displays are often driven by non-negative matrix factorization methods, in which the pixel data of each display layer is generated by an iterative algorithm. The approach is widely applied, but generally suffers from a small depth of field, a small field of view and low resolution.
Deep learning methods are also commonly used to solve this problem: a neural network is trained, and the trained network together with the original multi-viewpoint images yields the pixel value data of the multiple display planes. The stereoscopic image obtained in this way is of good quality; the viewer clearly perceives the object floating outside the display panel. However, current deep learning methods have various problems that restrict their further development and application.
The main manifestations are as follows:
1. The convergence of the trained deep learning model is not good enough. Because of the way the data is trained and the limitations of the network architecture, the convergence of the model often falls short of expectations, and the reconstructed three-dimensional image tends to show streaks and insufficient brightness.
2. A deep learning model trained for specific multi-viewpoint images is not fast enough for real-time use. Limited by computer hardware and model performance, its computation speed is insufficient, making it difficult to process video streams with high real-time requirements.
3. The field of view of multi-layer displays is not large enough, and the depth-of-field range is relatively small. Previous light field displays optimized the reconstruction for only one viewpoint, so little correct reference viewing-angle information is available for comparison, and a traditional multi-layer display can carry only a limited amount of information, resulting in a small field of view, a small depth of field and large crosstalk.
Disclosure of Invention
In view of the above, the invention aims to provide a naked eye three-dimensional display method based on multi-viewpoint reconstruction that solves the viewing-angle limitation of multi-layer naked eye three-dimensional display.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a naked eye three-dimensional display method based on multi-viewpoint reconstruction comprises the following steps:
Step S1, acquiring dense viewpoint images of a three-dimensional object at different angles;
Step S2, adopting a display device comprising several screen layers arranged in parallel and setting a number of convergence viewpoints in front of the display device, where each convergence viewpoint has one ray corresponding to each pixel on the last screen layer of the display device; each ray forms an intersection point with each screen layer when passing through the display device, and the brightness and color information at the intersection points formed by each ray is superimposed along the ray's transmission direction.
Further, in the step S1, the dense viewpoint images of a three-dimensional object at different angles are obtained by direct shooting with a light field camera, by simulation with computer simulation software, or by generation with a 2D+depth algorithm.
Further, the screens adopted by the display device are specifically liquid-crystal, OLED or LED panels.
Further, in the step S2, the reconstructed sub-image is a multi-layer image in which at least one layer is a color layer. The color layer adopts an RGB display method, each pixel carrying the pixel values of the three RGB colors; the remaining layers are gray-scale layers, which display only gray values, each of their pixels carrying only gray-scale data.
Further, the innermost layer of the display device is a backlight layer, the outermost layer is a color layer, and a first gray layer and a second gray layer are arranged in between; a deep learning method is adopted to reconstruct the sub-images at all convergence viewpoints simultaneously, specifically comprising:
Step S201, establishing a coordinate system with the direction perpendicular to the display device as the z axis and the horizontal direction of the screen, from left to right, as the x axis;
Step S202, setting K convergence viewpoints in front of the display device and creating the respective viewing-angle images through the K convergence viewpoints simultaneously on the 3-layer screen of the display device, specifically:
let the pixel values (screen transmittances) at the intersections of a ray with the second gray layer, the first gray layer and the color layer when reconstructing the ith viewpoint be denoted f_i(x_1, z_1), f_i(x_2, z_2) and f_i(x_3, z_3) respectively.
Then the pixel value of the mth ray after passing through the first gray layer and the color layer is:
L_{i,m}(x, z) = f_i(x_1, z_1) × f_i(x_2, z_2) × f_i(x_3, z_3)    (1)
In the formula (1), L_{i,m}(x, z) represents the pixel value of the mth ray when reconstructing the ith viewpoint, where x and z are the abscissa and ordinate of the corresponding pixel position of the reconstructed image; f_i(x_1, z_1) represents the pixel value corresponding to the intersection of the mth ray with the second gray layer, (x_1, z_1) being the pixel position of that intersection; f_i(x_2, z_2) represents the pixel value corresponding to the intersection of the mth ray with the first gray layer, at position (x_2, z_2); f_i(x_3, z_3) represents the pixel value corresponding to the intersection of the mth ray with the color layer, at position (x_3, z_3).
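As an illustration of formula (1) (an editor's sketch, not part of the patent text; the function name `reconstruct_ray` and the scalar inputs are assumptions), the value carried by one ray is simply the product of the pixel values at its three layer intersections:

```python
def reconstruct_ray(f_gray2, f_gray1, f_color):
    """Pixel value of one ray per formula (1): the product of the
    values at the ray's intersections with the second gray layer,
    the first gray layer and the color layer."""
    return f_gray2 * f_gray1 * f_color

# Two gray-layer transmittances of 0.8 and 0.9 attenuate a
# color-layer value of 0.5 to 0.8 * 0.9 * 0.5 = 0.36.
value = reconstruct_ray(0.8, 0.9, 0.5)
```

A fully transmissive gray stack (both gray values 1) passes the color-layer value through unchanged.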
The reconstructed sub-image of the ith viewpoint is then L_i. If T_i is the original image corresponding to the ith viewpoint, the relationship between the original images and the reconstructed sub-images is expressed as:
arg min Σ_i ||L_i - T_i||²    (2)
Formula (2) is solved as a loss function by the deep learning method until the deep learning model converges, yielding the reconstructed multi-layer sub-image of the ith viewpoint.
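Formula (2) is an L2 loss summed over viewpoints. A minimal pure-Python sketch (hedged: images are flattened to lists of pixel values, and the names are illustrative):

```python
def multiview_loss(reconstructed, originals):
    """Sum over viewpoints i of ||L_i - T_i||^2, as in formula (2).
    Each image is a flat list of pixel values."""
    total = 0.0
    for l_img, t_img in zip(reconstructed, originals):
        total += sum((l - t) ** 2 for l, t in zip(l_img, t_img))
    return total

# Two 2-pixel viewpoints; only the second contributes to the loss.
loss = multiview_loss([[1.0, 1.0], [2.0, 2.0]],
                      [[1.0, 1.0], [0.0, 2.0]])
```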
The reconstructed multi-layer sub-image is then magnified by an interpolation algorithm until the magnified image is the same size as the original image, realizing the naked eye three-dimensional display.
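The magnification step can be sketched as a nearest-neighbor upscaler in pure Python (an assumed helper; the patent does not prescribe this exact routine, and the embodiment applies the magnification in one direction only):

```python
def upscale_nearest(image, factor):
    """Nearest-neighbor magnification of a 2-D image (list of rows)
    by an integer factor in both directions."""
    out = []
    for row in image:
        wide = [v for v in row for _ in range(factor)]  # repeat each column
        for _ in range(factor):                         # repeat each row
            out.append(list(wide))
    return out

# A 2x2 image becomes 4x4, every pixel duplicated into a 2x2 block.
big = upscale_nearest([[1, 2], [3, 4]], 2)
```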
Further, the deep learning method adopts an encoder-decoder architecture and trains for at least 50 epochs.
Further, K convergence viewpoints are set in front of the display device, and the position of a particular convergence viewpoint is determined as follows:
at a position outside the display device, a plane is determined that is parallel to the screens and on which all the viewpoints lie;
one point is selected on each screen such that the points are collinear; the points are connected into a straight line, and the intersection of this line with the plane is the position of the viewpoint.
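This construction can be sketched geometrically: two of the collinear screen points determine the line, and extending it to the viewpoint plane gives the viewpoint's horizontal position. Coordinates here use the depth along the axis perpendicular to the screens; the names are illustrative assumptions.

```python
def viewpoint_x(p_a, p_b, plane_depth):
    """Extend the line through two screen points (x, depth) to the
    parallel plane at plane_depth; return the x-coordinate there."""
    (xa, da), (xb, db) = p_a, p_b
    t = (plane_depth - da) / (db - da)  # line parameter at the plane
    return xa + t * (xb - xa)

# Points at (0, 0) and (1, 1) on two layers; a viewing plane at
# depth 2000 meets the line through them at x = 2000.
x = viewpoint_x((0.0, 0.0), (1.0, 1.0), 2000.0)
```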
Furthermore, all the multi-layer sub-images calculated by the deep learning method are loaded onto the multi-layer screen of the display device to realize the naked eye three-dimensional display effect; this is the 3D display mode. When a 2D display mode is needed, the information of the 2D image is loaded onto one screen layer and the other screen layers are set transparent, realizing the 2D display mode.
Further, in the step S2, the reconstructed sub-image is a multi-layer image in which each layer displays a single-color image; color display is realized by time-sequential dynamic refresh of R, G and B.
The beneficial effects of the invention are as follows:
1. In the prior art, multi-layer naked eye three-dimensional display has always suffered from the technical problem of a small field of view. The invention magnifies the original multi-viewpoint sub-images with a nearest-neighbor interpolation algorithm and reconstructs the multi-viewpoint images with a multi-layer display screen, so a reconstructed image of higher precision can be displayed over a larger field of view than in the prior art.
2. The invention processes the input and output data with a deep learning algorithm. Of the three layers of image data to be output, the outermost is a color layer and the middle and innermost are gray layers, so relatively little data needs to be output; the color layer is displayed in real time in a time-sequential color mode. Once model training is finished and the model is applied to other data, the multi-viewpoint image can be reconstructed in a very short time, realizing real-time display.
3. The multi-viewpoint images used by the invention generally contain at least 19 angles. This large amount of image data significantly reduces crosstalk when reconstructing the image, improving the reconstruction quality of the image.
Drawings
Fig. 1 shows a schematic view of reconstructing a sub-image for a converging viewpoint;
FIG. 2 is a top view of a display device and a convergence point;
FIG. 3 is a schematic diagram illustrating the determination of one of a plurality of converging view points;
fig. 4 shows a schematic diagram of capturing an image for a converging viewpoint.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1-4, the present embodiment provides a naked eye three-dimensional display method based on multi-viewpoint reconstruction, including the following steps:
step 1: firstly, dense viewpoint diagrams of different angles of a three-dimensional object are obtained through a 2D+depth algorithm, and the viewpoint diagrams are amplified by a certain multiple in the horizontal direction.
Step 2: the display end uses a mode that a plurality of layers of screens are arranged in parallel, and convergence viewpoints are set in front of the screens, wherein each convergence viewpoint corresponds to each pixel on the last layer of screen, and each convergence viewpoint has one light ray. When the light rays in different directions penetrate the multi-layer display screen, each light ray forms an intersection point with each layer of screen, the positions of the plurality of intersection points formed by each light ray are mutually overlapped in the direction corresponding to the light ray, so that the brightness and color information of a pixel can be reconstructed by each light ray, the brightness and color information of a two-dimensional image corresponding to a single view point can be reconstructed by the light rays on a convergence view point, the resolution of the observed image is consistent with the resolution of the multi-layer screen, and the brightness and color information reconstructed on the convergence view point is consistent with the sub-image information corresponding to the convergence view point position. The schematic diagram of the reconstruction is shown in fig. 1, and the schematic diagram of the converging viewpoint captured image is shown in fig. 4.
Step 3: a plurality of convergence viewpoints are provided in front of the screen, and the positions at which the convergence viewpoints are set are arbitrary, and may be set in the horizontal direction or in the vertical direction. The method for viewing and reconstructing the image after the acquisition of the plurality of convergence viewpoints is consistent with the method in the step 2, so that all the convergence viewpoints can acquire the reconstructed image, the convergence viewpoints multiplex pixel data on the multi-layer display screen, and the simultaneous reconstruction of the multi-viewpoint image in the three-dimensional space is realized by reducing the resolution of each viewpoint. An example of setting a plurality of convergence viewpoints is shown in fig. 3.
A coordinate system is established with the direction perpendicular to the 3-layer screen as the z axis and the horizontal direction of the screen, from left to right, as the x axis. The 3-layer screen is now used to reconstruct k viewing-angle images from different viewpoints. Let the pixel values at the intersections of a ray with gray layer 2, gray layer 1 and the color layer when reconstructing the ith viewpoint be denoted f_i(x_1, z_1), f_i(x_2, z_2) and f_i(x_3, z_3). Among the parallel rays exiting gray layer 2, the pixel value of the mth ray after passing through gray layer 1 and the color layer can be expressed as:
L_{i,m}(x, z) = f_i(x_1, z_1) × f_i(x_2, z_2) × f_i(x_3, z_3)
The points (x_1, z_1), (x_2, z_2), (x_3, z_3) are collinear, because the light propagates along a straight line. The reconstructed image of the ith viewpoint is thereby obtained as L_i. If T_i is the original image corresponding to the ith viewpoint, the relationship between the original and reconstructed images can be written as:
arg min Σ_i ||L_i - T_i||²
solving the formula, obtaining the minimum value meeting the condition by using a deep learning method, and amplifying the reconstructed three-layer light field image by a plurality of times in the vertical direction by using a nearest neighbor interpolation algorithm, wherein the amplification factor is the same as the horizontal amplification factor of the image after the viewpoint diagram is generated. Thereby, reconstruction of a multi-layer light field display can be achieved.
Specifically, in this embodiment, object images with 19 viewpoints and a parallax of 1.8 are generated by the 2D+depth method; the resolution of the original image is 1920×1080. The images are combined into a data image, which is magnified to 5 times its width in the horizontal direction by the nearest-neighbor interpolation algorithm; the size of the resulting data image is 19 × (1920×5) × 1080. Three screen layers are then arranged in parallel, and the data image is processed with the deep learning method.
The coordinate system is set in the manner of fig. 1, and the method of reconstructing the 1st, 2nd, ..., (k-1)th and kth viewpoints (k = 19) is shown in fig. 2; fig. 2 is a top view of the display screen and the viewpoints. The image range viewed from each viewpoint is the angle between the line from the viewpoint to the leftmost end of the innermost screen (i.e. gray layer 2) and the line from the viewpoint to its rightmost end; this angle lies between 0 and 180 degrees, as illustrated in fig. 4. Meanwhile, the distance d between the screens is set to 1 mm, and the horizontal distance s between the outermost screen and the viewpoints is set to 2 m. Since the viewing range of each viewpoint is the whole screen, and the pixel positions of the viewpoint map coincide with the pixel positions of gray layer 2, the intersections of the line from the viewpoint position through a pixel of gray layer 2 with gray layer 1 and with the color layer determine which pixels of gray layer 1 and the color layer are seen from a given viewpoint. At some viewpoints, part of the pixels of gray layer 1 and the color layer may not be visible, because the corresponding rays no longer pass through those two screens; in that case the transmittance of the rays in the two screens is set to 1. Let the intersection points of a ray with gray layer 2, gray layer 1 and the color layer be (x_2, z_2), (x_1, z_1) and (x_0, z_0); let the gray values of gray layer 2 and gray layer 1 be H(x_2, z_2) and H(x_1, z_1); and let the RGB components of the color layer be R(x_0, z_0), G(x_0, z_0) and B(x_0, z_0). The RGB components of the pixel value of a pixel of the reconstructed view are then:
L_R(x, z) = H(x_2, z_2) × H(x_1, z_1) × R(x_0, z_0)
L_G(x, z) = H(x_2, z_2) × H(x_1, z_1) × G(x_0, z_0)
L_B(x, z) = H(x_2, z_2) × H(x_1, z_1) × B(x_0, z_0)
Reconstructing the pixels one by one yields the complete reconstructed view map at a given viewpoint.
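The three component formulas can be applied pixel by pixel; a minimal sketch (illustrative names, with scalar values standing in for one pixel):

```python
def reconstruct_rgb(h2, h1, rgb):
    """RGB components of one reconstructed pixel:
    L_R = H2 * H1 * R, and likewise for G and B."""
    r, g, b = rgb
    return (h2 * h1 * r, h2 * h1 * g, h2 * h1 * b)

# Gray values of 0.5 on both gray layers quarter the color components.
pixel = reconstruct_rgb(0.5, 0.5, (1.0, 0.8, 0.4))
```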
In this embodiment, the position of a viewpoint is determined as follows. Since there are 19 viewpoints in total, the middle viewpoint is the 10th, positioned opposite the middle of the display screen. As shown in fig. 3, the xth viewpoint is determined as follows: if x ∈ [11, 19], a point is chosen on gray layer 2 located x-10 pixel units to the left of the screen middle line, a point on gray layer 1 on the screen middle line, and a point on the color layer x-10 pixel units to the right of the middle line; connecting these three points, the position at distance s = 2 m from the color layer is the position of the xth viewpoint. If x ∈ [1, 9], the point on gray layer 2 is 10-x pixel units to the right of the middle line, the point on gray layer 1 is on the middle line, and the point on the color layer is 10-x pixel units to the left of the middle line; connecting these three points, the position at distance s = 2 m from the color layer is the position of the xth viewpoint.
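The two cases of this rule collapse to one signed formula: for viewpoint x with middle viewpoint 10, gray layer 2 is offset by 10-x pixel units (right positive), gray layer 1 by 0, and the color layer by x-10. A sketch (the names are assumptions):

```python
def layer_offsets(x, middle=10):
    """Signed horizontal offsets (pixel units, right positive) of the
    three collinear construction points for viewpoint x:
    (gray layer 2, gray layer 1, color layer)."""
    return (middle - x, 0, x - middle)

# Viewpoint 13: gray layer 2 three units left, color layer three right.
offsets = layer_offsets(13)
```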
After the reconstruction is completed, let the reconstructed image of the ith viewpoint be L_i and the original picture image of the ith viewpoint be T_i. The relationship between the original and reconstructed images can then be expressed as:
arg min Σ_i ||L_i - T_i||²
Solving this formula realizes the reconstruction and display of the three-dimensional light field.
When solving with the deep learning method, the above formula is used as the loss function. The deep learning architecture is an encoder-decoder; after training for 50 or more epochs, the value of the loss function becomes almost unchanged, and the model can be regarded as having converged. Finally, the reconstructed three-layer image is magnified to 5 times its size in the vertical direction with an interpolation algorithm, so that the aspect ratio of the image is the same as that of the original image, realizing the multi-viewpoint naked eye three-dimensional display.
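The optimization can be illustrated on a single ray: with L = a*b*c and loss (L - T)^2, the gradient with respect to each layer value is 2(L - T) times the product of the other two, and a few gradient steps drive L toward T. This toy gradient descent is an editor's stand-in for the encoder-decoder training, not the patent's implementation:

```python
def train_single_ray(target, steps=200, lr=0.1):
    """Toy gradient descent: optimize three layer values a, b, c so
    that their product (the reconstructed ray value) matches target."""
    a = b = c = 0.5
    for _ in range(steps):
        ray = a * b * c
        g = 2.0 * (ray - target)  # d(loss)/d(ray)
        # chain rule: d(ray)/da = b*c, etc.; update all three at once
        a, b, c = a - lr * g * b * c, b - lr * g * a * c, c - lr * g * a * b
    return a * b * c

# With target 0.2, the product converges from 0.125 toward 0.2.
final = train_single_ray(0.2)
```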
In summary, in the naked eye three-dimensional display method based on multi-viewpoint reconstruction provided by the invention, dense viewpoint images of a three-dimensional object at different angles are first acquired, and the display end, composed of several screen layers arranged in parallel, sets convergence viewpoints at several positions in front of the screen; at each convergence viewpoint there is one ray for the position of each pixel on the last screen layer. When rays in different directions penetrate the multi-layer display screen, each ray forms an intersection point with each screen layer, and the information at these intersection points is superimposed along the ray's direction, so each ray reconstructs the brightness and color of one pixel, and the rays at one convergence viewpoint reconstruct the two-dimensional image corresponding to that single viewpoint. The resolution of the observed image is consistent with the resolution of the multi-layer screen, and the brightness and color information reconstructed at a convergence viewpoint is consistent with the sub-image information corresponding to its position. The positions of the convergence viewpoints are arbitrary; all convergence viewpoints obtain reconstructed images, the convergence viewpoints multiplex the pixel data on the multi-layer display screen, and the multi-viewpoint images are reconstructed simultaneously in three-dimensional space by reducing the resolution of each viewpoint.
Matters not described in detail in this application are well known to those skilled in the art.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (8)

1. The naked eye three-dimensional display method based on multi-viewpoint reconstruction is characterized by comprising the following steps of:
s1, acquiring dense viewpoint diagrams of a certain three-dimensional object under different angles;
s2, adopting a display device, wherein the display device comprises a plurality of layers of screens which are arranged in parallel, setting a plurality of convergence viewpoints in front of the display device, wherein the convergence viewpoints have a light ray at positions corresponding to each pixel on the last layer of screen of the display device, the light rays have an intersection point with each layer of screen when passing through the display device, and brightness and color information corresponding to each intersection point formed by each light ray on the screen are mutually overlapped in the transmission direction of the light ray;
the innermost layer of the display device is a backlight layer, the outermost layer is a color layer, a first gray level layer and a second gray level layer are arranged in the middle of the display device, and a sub-image is reconstructed at the same time at each convergence viewpoint by adopting a deep learning method, and the method specifically comprises the following steps:
step S201, a coordinate system is established by taking the direction vertical to the display device as the z axis and taking the left to right of the position of the screen as the x axis;
step S202, setting K convergence viewpoints in front of the display device, and creating respective view angle images through the K convergence viewpoints simultaneously on the 3-layer screen of the display device, specifically:
let the screen transmittance through the intersection of the light rays of the second gray layer, the first gray layer and the color layer and each layer when reconstructing the ith viewpoint be expressed as
Then, the expression of the pixel value of the light ray after the mth stripe passes through the first gray layer and the color layer is:
in the formula (1), L i,m (x, z) represents a pixel value of an mth ray at the time of reconstructing the ith viewpoint, wherein x represents an abscissa of a corresponding pixel position of the reconstructed image, and z represents an ordinate of the corresponding pixel position of the reconstructed image;representing a pixel value corresponding to an intersection point of an mth ray and a second gray level layer when reconstructing an ith viewpoint, wherein x is 1 An abscissa, z, representing a pixel position of an intersection with the second gray level layer 1 An ordinate representing a pixel position of an intersection with the second gray level layer; />Representing a pixel value corresponding to an intersection point of an mth ray and the first gray layer when reconstructing an ith viewpoint, wherein x 2 An abscissa, z, representing a pixel position of an intersection with the first gray layer 2 An ordinate representing a pixel position at an intersection with the first gray level;
representing a pixel value corresponding to an intersection point of an mth ray and a color layer when reconstructing an ith viewpoint, wherein x 3 An abscissa, z, representing the pixel position of the intersection with the color layer 3 An ordinate representing the pixel position at the intersection with the color layer;
the reconstructed sub-image of the ith viewpoint is then L_i; if T_i is the original image corresponding to the ith viewpoint, the relationship between the original image and the reconstructed sub-image is expressed as:
arg min Σ_i ||L_i - T_i||²    (2)
solving the formula (2) as a loss function by using a deep learning method until a deep learning model converges to obtain a reconstructed multi-layer sub-image of the ith viewpoint;
and amplifying the reconstructed multi-layer sub-image by a plurality of times by using an interpolation algorithm until the size of the amplified image is the same as that of the original image, thereby realizing naked eye three-dimensional display.
2. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein in the step S1, the method of obtaining the dense view map of a certain three-dimensional object under different angles is specifically a method of direct shooting by using a light field camera, a method of simulation by using computer simulation software, or a method of generation by using 2d+depth algorithm.
3. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein the screen adopted by the display device is specifically liquid crystal, OLED or LED.
4. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein in the step S2, the reconstructed sub-image is a multi-layer image, wherein at least one layer is a color layer, the color layer adopts an RGB display method, each pixel carries pixel values of RGB three colors, the other layers are gray-scale layers, the gray-scale layers only display gray-scale values, and each pixel only carries gray-scale data.
5. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein the deep learning method adopts an encoder-decoder architecture and is trained for at least 50 epochs.
6. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein K convergence viewpoints are provided in front of the display device, and the position of each convergence viewpoint is determined as follows:
a plane parallel to the screens is determined at a position outside the display device, on which all convergence viewpoints lie;
a point is selected at random on each screen such that the chosen points are collinear; the straight line through these points intersects the plane at the position of the viewpoint.
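The construction in claim 6 reduces to extending a straight line through the collinear screen points until it meets the viewing plane. A small sketch, with the screen and plane z coordinates chosen arbitrarily for illustration:

```python
import numpy as np

def viewpoint_position(screen_z, screen_x, plane_z):
    """Given one collinear point (x_l, z_l) per screen layer, extend the
    straight line through them to the viewing plane z = plane_z and return
    the x coordinate of the resulting convergence viewpoint."""
    # Fit x as a linear function of z; exact when the points are collinear.
    slope, intercept = np.polyfit(screen_z, screen_x, 1)
    return slope * plane_z + intercept

# Example: three parallel screens at z = 0, 1, 2 with collinear points
# x = 0, 1, 2, and a viewing plane at z = 500 in front of the stack.
x_view = viewpoint_position([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], 500.0)
print(x_view)  # 500.0 — the viewpoint lies on the extended line
```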
7. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein the multi-layer sub-images computed by the deep learning method are all loaded onto the multi-layer screens of the display device to realize the naked eye three-dimensional display effect, which is the 3D display mode; when a 2D display mode is needed, the information of the 2D image is loaded onto one layer of the screen and the other layers are set transparent, thereby realizing the 2D display mode.
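The 2D/3D switch of claim 7 can be sketched as follows, under the assumption of a multiplicative transmissive stack in which "transparent" means transmittance 1 (the claim itself does not specify the optical model):

```python
import numpy as np

def set_2d_mode(layers, image, active=0):
    """Switch the stack to 2D: load the 2D image onto one chosen layer and
    make every other layer fully transparent (transmittance 1, assuming a
    multiplicative transmissive stack)."""
    for l in range(len(layers)):
        layers[l][:] = image if l == active else 1.0
    return layers

stack = [np.zeros((2, 2)) for _ in range(3)]
img = np.array([[0.2, 0.8], [0.4, 0.6]])
stack = set_2d_mode(stack, img, active=1)

# The product along the stack now reproduces the 2D image unchanged.
out = stack[0] * stack[1] * stack[2]
print(np.allclose(out, img))  # True
```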
8. The naked eye three-dimensional display method based on multi-viewpoint reconstruction according to claim 1, wherein in step S2 the reconstructed sub-image is a multi-layer image in which each layer displays a single-color image, and color display is realized by dynamically refreshing the R, G, and B fields in sequence.
CN202111252954.8A 2021-10-27 2021-10-27 Naked eye three-dimensional display method based on multi-viewpoint reconstruction Active CN114339191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111252954.8A CN114339191B (en) 2021-10-27 2021-10-27 Naked eye three-dimensional display method based on multi-viewpoint reconstruction

Publications (2)

Publication Number Publication Date
CN114339191A CN114339191A (en) 2022-04-12
CN114339191B true CN114339191B (en) 2024-02-02

Family

ID=81045161

Country Status (1)

Country Link
CN (1) CN114339191B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115166993B (en) * 2022-05-31 2023-11-10 北京邮电大学 Self-adaptive three-dimensional light field display method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3413562A1 (en) * 2017-06-09 2018-12-12 Leyard Optoelectronic Co., Ltd 3d display device and method
CN111565308A (en) * 2020-07-15 2020-08-21 江苏奥斯汀光电科技股份有限公司 Naked eye 3D display method and device based on multilayer transparent liquid crystal screen
WO2020228282A1 (en) * 2019-05-15 2020-11-19 合肥工业大学 Light field display device and display method thereof
CN112866676A (en) * 2021-01-06 2021-05-28 东南大学 Naked eye three-dimensional display algorithm based on single-pixel multi-view reconstruction



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant