CN112087617A - Method, apparatus and computer readable medium for generating two-dimensional light field image - Google Patents

Method, apparatus and computer readable medium for generating two-dimensional light field image

Info

Publication number
CN112087617A
Authority
CN (China)
Prior art keywords
light field, dimensional, field image, virtual camera, plane
Legal status
Pending (the status listed is an assumption, not a legal conclusion)
Application number
CN201910506051.4A
Other languages
Chinese (zh)
Inventors
陈志强 (Chen Zhiqiang), 周磊 (Zhou Lei), 惠新标 (Hui Xinbiao)
Current Assignee
Shanghai Maijie Information Technology Co., Ltd.
Original Assignee
Shanghai Maijie Information Technology Co., Ltd.
Application filed by Shanghai Maijie Information Technology Co., Ltd.
Priority to CN201910506051.4A
Publication of CN112087617A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/32: Image reproducers for viewing without the aid of special glasses, using arrays of controllable light sources; using moving apertures or moving light sources
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for generating a two-dimensional light field image, where the two-dimensional light field image cooperates with a point light source array to present a stereoscopic light field of a virtual three-dimensional scene. The method comprises the following steps: determining the position, size and central point of a two-dimensional image plane where the two-dimensional light field image is located; determining the position and size of an orthogonal projection plane on which a virtual camera for shooting the virtual three-dimensional scene is located; determining a set of shooting position coordinates of the virtual camera within the orthogonal projection plane; placing the virtual camera at each position coordinate in the set, orienting its viewing angle toward the central point, and capturing orthogonal projection images from a plurality of viewing angles; extracting a plurality of light field image regions from the orthogonal projection images of the plurality of viewing angles; performing an affine transformation on the plurality of light field image regions to obtain a plurality of corrected light field image regions; and combining the plurality of corrected light field image regions to obtain the two-dimensional light field image.

Description

Method, apparatus and computer readable medium for generating two-dimensional light field image
Technical Field
The invention relates to a method and an apparatus for generating a two-dimensional image, and in particular to a method and an apparatus for generating two-dimensional image data that describes a stereoscopic light field in three-dimensional space.
Background
People have long wished to view truly stereoscopic images. Technologies such as stereoscopic movies, stereoscopic televisions, VR (virtual reality) and AR (augmented reality) have gradually emerged under this demand and satisfy it to a certain extent. However, stereoscopic imaging built purely around the viewpoint of the human eye has many limitations, and its implementation is not natural.
On the one hand, stereoscopic imaging with several of these technologies requires wearing stereoscopic glasses. Conventional stereoscopic glasses are large, heavy and inconvenient. The human eye is sensitive to certain unnatural aspects of such glasses, and long-term viewing may cause discomfort. Although some naked-eye 3D display devices dispense with glasses, their imaging quality depends heavily on factors such as viewing angle and distance, so when watched by multiple people they can hardly satisfy the visual perception of viewers in different positions, and their effect falls far short of normal viewing requirements.
On the other hand, technologies such as VR, AR and naked-eye 3D require relatively bulky, high-performance computing terminals in order to perform computation-intensive stereoscopic image or video processing. Image and video generation is slow and time-consuming, and cannot meet the needs of a large number of applications.
Disclosure of Invention
The technical problem addressed by the invention is to provide a method and an apparatus for quickly generating a two-dimensional light field image that describes a stereoscopic light field in three-dimensional space.
To solve this technical problem, the invention provides a method for generating a two-dimensional light field image, the two-dimensional light field image cooperating with a point light source array to present a stereoscopic light field of a virtual three-dimensional scene. The method comprises: determining the position, size and central point of a two-dimensional image plane where the two-dimensional light field image is located; determining the position and size of an orthogonal projection plane on which a virtual camera for shooting the virtual three-dimensional scene is located, the virtual three-dimensional scene being confined within the orthogonal projection shooting range; determining a set of shooting position coordinates of the virtual camera within the orthogonal projection plane; placing the virtual camera at each position coordinate in the set, orienting its viewing angle toward the central point, and capturing orthogonal projection images from a plurality of viewing angles; extracting a plurality of light field image regions from the orthogonal projection images of the plurality of viewing angles; performing an affine transformation on the plurality of light field image regions to obtain a plurality of corrected light field image regions; and combining the plurality of corrected light field image regions to obtain the two-dimensional light field image.
In an embodiment of the invention, a size of the orthogonal projection plane in a first direction is larger than a size of the two-dimensional image plane in the first direction, and a size of the orthogonal projection plane in a second direction is larger than the size of the two-dimensional image plane in the second direction, so that the orthogonal projection images of the plurality of view angles completely cover the two-dimensional image plane, wherein the first direction and the second direction are perpendicular.
In an embodiment of the present invention, the step of determining the set of shooting position coordinates of the virtual camera within the orthogonal projection plane comprises: determining the moving range of the virtual camera according to the target visible angle of the stereoscopic light field and the distance between the orthogonal projection plane and the two-dimensional image plane; determining the number of shooting position points of the virtual camera within the orthogonal projection plane according to the number of viewing angles of the stereoscopic light field; and determining the set of shooting position coordinates according to the lattice formed by the shooting position points and the moving range of the virtual camera.
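These three sub-steps can be sketched in code. This is a minimal illustration under assumptions the patent does not state: the moving range is taken as 2·G·tan(θ/2) for a target visible angle θ, the lattice is one shooting position per viewing angle, and the equal-interval embodiment is used. The function name and parameters are hypothetical.

```python
import math

def camera_position_grid(view_angle_deg, distance_g, cols, rows):
    """Sketch of determining the virtual camera's shooting positions.

    Moving range: derived from the target visible angle of the stereoscopic
    light field and the distance G between the orthogonal projection plane
    and the two-dimensional image plane (assumed: 2 * G * tan(angle / 2)).
    Lattice: one shooting position per viewing angle, evenly spaced.
    """
    half_angle = math.radians(view_angle_deg) / 2.0
    range_w = 2.0 * distance_g * math.tan(half_angle)  # horizontal moving range
    range_h = 2.0 * distance_g * math.tan(half_angle)  # vertical moving range
    step_x = range_w / (cols - 1) if cols > 1 else 0.0
    step_y = range_h / (rows - 1) if rows > 1 else 0.0
    positions = []
    for r in range(rows):        # N rows ...
        for c in range(cols):    # ... by M columns of shooting positions
            positions.append((-range_w / 2.0 + c * step_x,
                              -range_h / 2.0 + r * step_y))
    return positions
```

For example, a 60-degree visible angle at G = 500 yields a 5 × 5 lattice spanning about 577 units in each direction, centred on the projection plane's centre.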
In an embodiment of the invention, the distance between the virtual camera and the central point when the virtual camera faces the central point head-on is taken as the distance between the orthogonal projection plane and the two-dimensional image plane.
In an embodiment of the present invention, the step of combining the plurality of corrected light field image regions comprises: combining the pixels at the same position in each of the corrected light field image regions into an object pixel grid according to the regions' relative viewing-angle positions; and combining all object pixel grids to obtain the two-dimensional light field image.
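A minimal sketch of this combination rule, assuming every corrected region has the same pixel dimensions and the viewing angles form an N × M grid; the function name and nested-list pixel representation are illustrative, not from the patent:

```python
def combine_light_field_regions(regions):
    """Combine corrected light field image regions into one image.

    `regions` is an N x M grid (view rows x view columns) of equally
    sized regions, each a list of pixel rows.  The pixels at the same
    position (r, c) of every region are gathered into one "object
    pixel grid", ordered by each region's relative view-angle
    position; all object pixel grids tiled together form the
    two-dimensional light field image.
    """
    n = len(regions)           # view rows
    m = len(regions[0])        # view columns
    h = len(regions[0][0])     # region height in pixels
    w = len(regions[0][0][0])  # region width in pixels
    image = [[None] * (w * m) for _ in range(h * n)]
    for r in range(h):
        for c in range(w):
            # Object pixel grid for source pixel (r, c): an n x m block.
            for vr in range(n):
                for vc in range(m):
                    image[r * n + vr][c * m + vc] = regions[vr][vc][r][c]
    return image
```

With 2 × 2 views of 2 × 2-pixel regions, the result is a 4 × 4 image in which each 2 × 2 block holds the same source pixel seen from all four viewing angles.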
In an embodiment of the invention, each of the light field image regions is a parallelogram, and each of the corrected light field image regions is a rectangle.
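Because an affine transform is determined exactly by three point correspondences, the map that rectifies a parallelogram-shaped region into a rectangle can be computed from three of its corners. A self-contained sketch (hypothetical helper names; Cramer's rule for the 3 × 3 solves):

```python
def affine_from_points(src, dst):
    """Affine transform (a, b, tx, c, d, ty) mapping three src points to
    three dst points: x' = a*x + b*y + tx, y' = c*x + d*y + ty.
    Used here to rectify a parallelogram-shaped light field region into
    a rectangle (three corners fix the map)."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    base = [[x, y, 1.0] for x, y in src]
    d = det3(base)
    coeffs = []
    for axis in range(2):  # solve for (a, b, tx), then (c, d, ty)
        rhs = [p[axis] for p in dst]
        row = []
        for col in range(3):  # Cramer's rule: replace one column with rhs
            m = [list(base[i]) for i in range(3)]
            for i in range(3):
                m[i][col] = rhs[i]
            row.append(det3(m) / d)
        coeffs.append(row)
    return coeffs[0] + coeffs[1]

def apply_affine(t, x, y):
    a, b, tx, c, d, ty = t
    return a * x + b * y + tx, c * x + d * y + ty
```

Mapping the parallelogram corners (0,0), (4,0), (1,3) to the rectangle corners (0,0), (4,0), (0,3) sends the remaining corner (5,3) to (4,3), as an affine map of a parallelogram onto a rectangle must.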
In an embodiment of the invention, in the set of shooting position coordinates, the intervals between the shooting positions are equal.
In an embodiment of the present invention, in the set of shooting position coordinates, the distance between adjacent shooting positions monotonically increases in at least one of the horizontal direction and the vertical direction as the distance from the center of the orthogonal projection plane increases.
To solve the above technical problem, the present invention further provides an apparatus for generating a two-dimensional light field image, comprising: a memory for storing instructions; and a processor for executing the instructions to implement the method described above.
To solve the above technical problem, the present invention also proposes a computer readable medium storing computer program code, which when executed by a processor implements the method as described above.
The invention uses an orthogonal projection camera to generate a two-dimensional light field image that, in cooperation with a point light source array, achieves a vivid three-dimensional display of a stereoscopic object. By combining the light field image regions captured by the orthogonal projection camera at multiple shooting positions into a single two-dimensional light field image, the invention greatly reduces the number of operations and increases the image generation speed.
Drawings
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below, wherein:
fig. 1 is a schematic structural diagram of a stereoscopic light field display device according to an embodiment of the invention;
FIG. 2 is an exemplary flow chart of a method of generating a two-dimensional light field image in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a method of generating a two-dimensional light field image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a set of shooting position coordinates of a virtual camera in accordance with an embodiment of the present invention;
FIG. 5 is a diagram illustrating a virtual camera performing a capture operation according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of a light field image region in an embodiment of the invention;
FIG. 7 is a schematic diagram of a corrected light field image region in one embodiment of the invention;
FIG. 8 is a schematic of a two-dimensional light field image of an embodiment of the present invention;
FIG. 9 is an exemplary flow chart for determining a set of shooting position coordinates for a virtual camera within an orthographic projection plane in some embodiments of the invention;
FIG. 10A is a schematic diagram of a method for determining a set of coordinates of a shooting position of a virtual camera according to an embodiment of the present invention;
FIG. 10B is a second schematic diagram illustrating the determination of the coordinate set of the capturing position of the virtual camera according to an embodiment of the present invention;
FIG. 11 is an exemplary flow chart for combining corrected light field image regions to obtain a two-dimensional light field image in one embodiment of the present invention;
FIG. 12 is a schematic diagram of the principle of combining corrected light field image regions to obtain a two-dimensional light field image in an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments disclosed below.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
In describing the embodiments of the present application in detail, the cross-sectional views illustrating the device structure are not drawn to a uniform scale and are partially enlarged for convenience of illustration; the schematic drawings are only examples and should not limit the scope of the present application. In addition, the three dimensions of length, width and depth should be taken into account in actual fabrication.
For convenience of description, spatial relationship terms such as "below," "beneath," "under," "above," "over," and "upper" may be used herein to describe one element or feature's relationship to another element or feature as illustrated in the figures. It will be understood that these terms are intended to encompass orientations of the device in use or operation other than the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" them; the exemplary terms "below" and "beneath" can therefore encompass both the above and below orientations. The device may also be otherwise oriented (rotated 90 degrees or at other orientations), and the spatial relationship descriptors used herein should be interpreted accordingly. Further, it will also be understood that when a layer is referred to as being "between" two layers, it can be the only layer between the two layers, or one or more intervening layers may also be present.
In the context of this application, a structure described as having a first feature "on" a second feature may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features are formed in between the first and second features, such that the first and second features may not be in direct contact.
The two-dimensional light field image generated by the method for generating the two-dimensional light field image according to the invention can be applied to a stereoscopic light field display device. To assist in the description of the method of the present invention, a description of such a stereoscopic light field display device will first be given.
Fig. 1 is a schematic structural diagram of a stereoscopic light field display device according to an embodiment of the invention. Referring to fig. 1, the stereoscopic light field display device 100 includes a light field image layer 110 and a point light source array 120. The light field image layer 110 and the point light source array 120 in this embodiment are rectangular thin-layer structures, and the light field image layer 110 and the point light source array 120 are arranged in parallel with a distance S therebetween. Preferably, the light field image layer 110 and the point light source array 120 are the same size and shape.
The present invention is not intended to be limited to the thickness or shape of the structures shown. In other embodiments, the light-field image layer 110 and the point light source array 120 may have other thicknesses (the thickness may range from 0.1 mm to 20 mm), and may have other shapes, such as a circle, an ellipse, a square, and the like. In other embodiments, both the size and shape of the light-field image layer 110 and the point light source array 120 may be different.
The light field image layer 110 is used to display a two-dimensional light field image. The two-dimensional light field image is generated according to the method for generating the two-dimensional light field image, and comprises image information of different visual angles of the three-dimensional object model. The two-dimensional light field image may be a planar image or a curved image. The two-dimensional light field image includes, but is not limited to, a still image and a moving image.
Referring to FIG. 1, the point light source array 120 is a thin layer that includes a plurality of point light sources, indicated by the white dots in FIG. 1. The point light sources are distributed over the array 120 in a regular pattern. Light emitted from each point light source passes through the corresponding position of the light field image layer 110 and spreads within a certain solid-angle range. Through the two-dimensional light field image displayed on the light field image layer 110, the rays from each point light source acquire distinct, direction-dependent intensities, thereby simulating the stereoscopic light field that a virtual three-dimensional model would emit in space and realizing a three-dimensional display corresponding to the two-dimensional light field image. As shown in FIG. 1, the point light source array 120 lies on one side of the light field image layer 110; a user observes the stereoscopic light field of the virtual three-dimensional model in the space on the other side of the layer, and the region from which this light field can be observed is referred to as the visible range.
In some embodiments, the point light sources in the point light source array 120 may be light emitting diode lamps, optical fibers, or the like. The on, off and brightness of each point light source is independently controllable.
In some embodiments, the point light source array 120 may be a light-emitting panel with an aperture-array layer overlying it. The aperture-array layer carries a plurality of small holes distributed in a regular pattern and is made of an opaque material everywhere except at the holes, so that light emitted by the light-emitting panel can exit only through them. The holes may be through holes, or they may instead be filled with a light-transmitting material; in either case light from the light-emitting panel passes out through the holes. The present invention does not limit the shape of the holes, which may be circular, elliptical, rectangular, or the like. Preferably, the holes are circular.
In some embodiments, the light emitting panel may be an organic light emitting diode panel.
In some embodiments, a transparent layer is also included between the light field image layer 110 and the point light source array 120. The virtual three-dimensional image displayed by the stereoscopic light field display device 100 of the present invention can be optimized by adjusting the thickness of the transparent layer, the thickness of the light field image layer 110, and the thickness of the point light source array 120.
In some embodiments, the light field image layer 110 and/or the array of point light sources 120 may be a transparent material. For example, the light field image layer 110 and/or the array of point light sources 120 may be plastic, glass, or an organic transparent material. In some embodiments, the organic transparent material may be acryl, polyethylene terephthalate (PET), or Polystyrene (PS).
FIG. 2 is an exemplary flow chart of a method of generating a two-dimensional light field image in accordance with an embodiment of the present invention. Referring to fig. 2, the method includes the steps of:
step 210, determining the position, size and central point of the two-dimensional image plane where the two-dimensional light field image is located.
FIG. 3 is a schematic diagram of a method of generating a two-dimensional light field image according to an embodiment of the present invention. Referring to fig. 3, a two-dimensional image plane 310, in which a two-dimensional light field image is generated, is represented by a checkerboard. The position of the two-dimensional image plane 310 may be any position in the virtual space.
In this embodiment, the two-dimensional image plane 310 is rectangular. As shown in fig. 3, the first direction X and the second direction Y are perpendicular to each other. The length of the two-dimensional image plane 310 in the first direction X is referred to as the width, and the length of the two-dimensional image plane 310 in the second direction Y is referred to as the height. The dimensions of the two-dimensional image plane 310 are represented by a width I and a height J. The center point of the two-dimensional image plane 310 is the center point of the rectangle.
It should be noted that the method for generating a two-dimensional light field image according to the present invention can be performed in a virtual computation space, where all quantities used in the computation are virtual. For this step 210 and fig. 3, the two-dimensional image plane 310 does not actually exist. For convenience of calculation and presentation, the invention is illustrated with a rectangular two-dimensional image plane 310, but fig. 3 is not intended to limit its shape, size or position. In other embodiments, the two-dimensional image plane 310 may have other shapes, such as a circle, in which case its central point is the center of the circle.
Step 220, determining the position and size of the orthogonal projection plane where the virtual camera for shooting the virtual three-dimensional scene is located.
In this step, it is assumed that a virtual three-dimensional scene is photographed by the virtual camera 320. The virtual camera 320 is located on an orthogonal projection surface 330. In this embodiment, the virtual three-dimensional scene is defined between a two-dimensional image plane 310 and an orthogonal projection plane 330. Referring to FIG. 3, there is a three-dimensional virtual object 340 between the two-dimensional image plane 310 and the orthogonal projection plane 330. The invention does not limit the volume of the three-dimensional virtual object 340 in the virtual three-dimensional scene. The volume of the three-dimensional virtual object 340 may be beyond the limits defined by the virtual three-dimensional scene, in which case the virtual camera 320 takes an orthographic projection image of the portion of the three-dimensional virtual object 340 located in the virtual three-dimensional scene.
The virtual camera 320 in the present invention is an orthogonal projection camera, with which orthogonal projections of three-dimensional virtual objects in the virtual three-dimensional scene can be captured. An image formed by an orthogonal projection camera exhibits no perspective foreshortening: object proportions are not changed by the projection, and lines that are parallel in three-dimensional space remain parallel in the two-dimensional orthogonal projection.
In the method of generating a two-dimensional light field image of the present invention, all the light traveling from the two-dimensional image plane 310 to the orthogonal projection plane 330 can be regarded as parallel; when an object is present in the virtual three-dimensional scene, part of this light is blocked by the object and cannot reach the orthogonal projection plane 330. Conversely, capturing the three-dimensional virtual object 340 from the orthogonal projection plane 330 with the virtual camera 320 corresponds to capturing the orthogonal projection of the object onto the two-dimensional image plane 310. As shown in fig. 3, when the virtual camera 320 is located at the center of the orthogonal projection plane 330 and the three-dimensional virtual object 340 lies within its orthogonal projection shooting range, the camera photographs the object from the front view position, and the resulting orthogonal projection image contains the front-view projection of the three-dimensional virtual object 340 on the two-dimensional image plane 310, as shown in fig. 3.
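The parallel-ray behaviour described above can be stated in a few lines: an orthogonal (orthographic) projection keeps only a point's coordinates along the image plane's axes, so depth along the view direction is discarded and proportions are preserved. A sketch with hypothetical names:

```python
def ortho_project(point, right, up):
    """Orthographic projection used by the virtual camera: each 3-D
    point is carried by a parallel ray onto the image plane, i.e. only
    its coordinates along the plane's `right` and `up` axes survive.
    Depth along the view direction is discarded, so objects do not
    shrink with distance and parallel lines stay parallel."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return (dot(point, right), dot(point, up))
```

For instance, two points that differ only in depth project to the same image location, which is exactly the "no near-large, far-small effect" property of the orthogonal projection camera.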
In the present embodiment, the distance between the two-dimensional image plane 310 and the orthogonal projection plane 330 is set to G. The distance G may be a surface distance between the two-dimensional image plane 310 and the orthogonal projection plane 330. The virtual camera 320 may capture a three-dimensional virtual object at multiple locations on the orthographic projection plane 330.
In some embodiments, the two-dimensional image plane 310 and the orthogonal projection plane 330 are parallel to each other.
As shown in fig. 3, the length of the orthogonal projection surface 330 in the first direction X is referred to as the width, and the length of the orthogonal projection surface 330 in the second direction Y is referred to as the height. The dimensions of the orthographic projection surface 330 are represented by a width T and a height S.
The two-dimensional light field image generated by the present invention is meant to cooperate with the point light source array 120 of the stereoscopic light field display device 100 to present the stereoscopic light field of the virtual three-dimensional scene; all information of the two-dimensional light field image to be displayed should therefore be contained within the two-dimensional image plane 310. However, at some shooting angles of the virtual camera 320, the obtained orthogonal projection image may extend beyond the range defined by the two-dimensional image plane 310. Therefore, in some embodiments, the width T of the orthogonal projection plane 330 is greater than the width I of the two-dimensional image plane 310, and the height S of the orthogonal projection plane 330 is greater than the height J of the two-dimensional image plane 310, such that the orthogonal projection image taken by the virtual camera 320 from any position on the orthogonal projection plane 330 completely covers the two-dimensional image plane 310.
And step 230, determining a shooting position coordinate set of the virtual camera in the orthogonal projection plane.
In the method of generating a two-dimensional light field image of the present invention, a virtual camera 320 captures a three-dimensional virtual object at multiple locations within an orthogonal projection plane 330. The plurality of positions of the virtual camera 320 within the orthographic projection plane 330 constitute a set of shooting position coordinates of the virtual camera 320.
Fig. 4 is a schematic diagram of a shooting position coordinate set of a virtual camera according to an embodiment of the present invention. Referring to fig. 4, the black dots on the orthogonal projection plane 330 represent shooting positions of the virtual camera 320. In the embodiment shown in fig. 4, the shooting positions form a lattice of N rows and M columns, i.e., the shooting position coordinate set contains N × M shooting position points, and the intervals between adjacent points are equal. Fig. 4 is not intended to limit the size of the coordinate set, the shape of the position lattice, or the number of shooting positions of the virtual camera.
In some embodiments, the spacing of adjacent capture position points in the capture position coordinate set monotonically increases in at least one of the horizontal or vertical directions as one moves away from the center of the orthographic projection plane 330.
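One way to realize such a monotonically growing spacing is sketched below for a single axis; the per-step growth factor and function name are illustrative assumptions, since the patent only requires that the spacing increase monotonically away from the centre of the orthogonal projection plane:

```python
def graded_positions(count, base_step, growth):
    """Shooting positions along one axis whose spacing grows
    monotonically away from the centre of the orthographic projection
    plane.  `growth` (> 1) is a hypothetical per-step multiplier."""
    half = []
    x, step = 0.0, base_step
    for _ in range(count // 2):
        x += step
        half.append(x)
        step *= growth          # each gap is `growth` times the last
    centre = [0.0] if count % 2 else []
    return [-v for v in reversed(half)] + centre + half
```

With `count=5`, `base_step=1.0`, `growth=1.5` this yields positions [-2.5, -1.0, 0.0, 1.0, 2.5]: the gaps are 1.0 next to the centre and 1.5 at the edges.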
Step 240, setting a virtual camera on each position coordinate in the set of shooting position coordinates and shooting orthogonal projection images of multiple viewing angles by the viewing angles towards the central point.
In this step, the virtual camera 320 traverses each shooting position point in the shooting position coordinate set. The present invention does not limit the order of traversal. Preferably, each position coordinate in the shooting position coordinate set is traversed in the order of the storage positions of the shooting position coordinate set in the memory.
Fig. 5 is a schematic diagram of the virtual camera performing shooting according to an embodiment of the invention. Referring to fig. 5, the virtual camera 320 is located at one of the shooting positions in the shooting position coordinate set on the orthogonal projection plane 330; this shooting position may correspond to the position Q shown in fig. 4. The virtual camera 320 is directed from the position Q toward the center point O of the two-dimensional image plane 310. Clearly, in order to face the center point O, the virtual camera 320 itself needs to be rotated by a certain angle. In step 240, a three-dimensional virtual object (not shown) is set in the virtual three-dimensional scene between the two-dimensional image plane 310 and the orthogonal projection plane 330. The virtual camera 320 projects the captured image of the three-dimensional virtual object onto the two-dimensional image plane 310 to obtain the orthogonal projection image corresponding to that shooting position.
It should be noted that the two-dimensional image plane 310 is used for representing the projection position of the two-dimensional light field image in the method for generating the two-dimensional light field image of the present invention, such as the position of a display screen or a television, and is not an actually existing plane. In some embodiments, the three-dimensional virtual object may not be located in the virtual three-dimensional scene between the two-dimensional image plane 310 and the orthogonal projection plane 330, but rather on the other side of the two-dimensional image plane 310. In this case, the three-dimensional virtual object is photographed by the virtual camera 320, and the resulting image is back-projected at the position of the two-dimensional image plane 310, and the orthographic projection image can also be used to generate a corresponding two-dimensional light field image.
In some embodiments, in step 240, the rotation angle at which the virtual camera takes the orthographic projection images of the multiple viewing angles may also be recorded. Let the rotation angle of the virtual camera 320 be zero when it horizontally faces the center point O of the two-dimensional image plane 310.
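As a minimal sketch of how such rotation angles could be recorded (the helper name and the use of atan2 are illustrative assumptions, not taken from the invention), the two angles for a camera at position (qx, qy) on the orthographic projection plane, facing the center point O a distance G away, follow from simple trigonometry:

```python
import math

def rotation_angles(qx, qy, G):
    """Left-right and up-down rotation angles (radians) that point a
    virtual camera at (qx, qy) on the orthographic projection plane
    toward the center point O of the two-dimensional image plane,
    which lies a distance G away.  Both angles are zero when the
    camera sits at the center point O' and faces O horizontally."""
    theta = math.atan2(qx, G)  # left-right rotation, first direction X
    phi = math.atan2(qy, G)    # up-down rotation, second direction Y
    return theta, phi
```

For example, a camera at the center point O' (qx = qy = 0) has zero rotation, consistent with the zero-angle convention above.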
Step 250, a plurality of light field image regions are extracted from the orthographic projection images of the plurality of viewing angles.
Referring to fig. 5, when the virtual camera 320 is in different photographing positions, the rotation angle thereof may be different, and the obtained orthographic projection images may be different.
FIG. 6 is a schematic diagram of a light field image region in an embodiment of the invention. Referring to fig. 6, the virtual camera 320 obtains one orthographic projection image 620 at each photographing position in the shooting position coordinate set. The orthographic projection image 620 and the orthographic projection plane 330 are of equal size, ensuring that the orthographic projection image 620 completely covers the two-dimensional image plane 310.
In fig. 6, the light field image region 610 is located in the orthographic projection image 620. The light field image region 610 contains the two-dimensional image plane 310 and the projection of the three-dimensional virtual object thereon. Since the virtual camera 320 must rotate by a certain angle to photograph toward the center point O of the two-dimensional image plane 310 from the photographing position shown in fig. 5, the obtained light field image area 610 has a certain distortion compared with the front view angle. Referring to fig. 6, the light field image area 610 captured by the virtual camera 320 from the shooting position shown in fig. 5 is represented by a deformed checkerboard. The light field image region 610 may be described by four coordinates characterizing the positions of the four vertices of the region W', denoted W'[W1', W2', W3', W4'].
The procedure for obtaining the region W' will be explained below.
Assume that the virtual camera 320 is still at the position Q shown in fig. 4. In the present embodiment, the sampling range W of the virtual camera 320 for the light field in front of it at the position Q is described by a rectangle whose four vertex coordinates characterize the sampling range, denoted W[W1, W2, W3, W4]. In other embodiments, the sampling range may be described by other shapes.
To generate the two-dimensional light field image required by the present invention, the virtual camera 320 needs to rotate toward the center point of the two-dimensional image plane 310. The rotation angles of the virtual camera 320 include an up-down viewing angle rotation angle and a left-right viewing angle rotation angle. The up-down viewing angle here means a viewing angle in the second direction Y, and the left-right viewing angle means a viewing angle in the first direction X. Let the up-down viewing angle rotation angle of the virtual camera 320 be φ and the left-right viewing angle rotation angle be θ. Rotation transformation matrices M1 and M2 are obtained from these rotation angles, as shown in the following equations:

M1 = [ 1, 0, 0, 0;  0, cos φ, sin φ, 0;  0, -sin φ, cos φ, 0;  0, 0, 0, 1 ]

M2 = [ cos θ, 0, -sin θ, 0;  0, 1, 0, 0;  sin θ, 0, cos θ, 0;  0, 0, 0, 1 ]

Let the coordinates of the four vertices W1, W2, W3, W4 of the sampling range W be W1(x1, y1), W2(x2, y2), W3(x3, y3), W4(x4, y4), respectively, and denote each vertex as a row vector [x, y, 0, 0], that is,

W = [ x1, y1, 0, 0;  x2, y2, 0, 0;  x3, y3, 0, 0;  x4, y4, 0, 0 ]

The above W, M1 and M2 are substituted into the following coordinate matrix transformation formula:

W' = W * M1 * M2

The region-range coordinates [W1', W2', W3', W4'] of the region W' can thus be obtained.
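As an illustrative sketch of the transformation W' = W * M1 * M2 (the original formula figures are not reproduced in this text, so standard 4 × 4 axis-rotation matrices are assumed here for M1 and M2), the vertex coordinates of the region W' can be computed as follows:

```python
import numpy as np

def transform_region(W, phi, theta):
    """Compute W' = W @ M1 @ M2 for the four sampling-range vertices.

    W     : 4x4 array whose rows are the vertices [x, y, 0, 0].
    phi   : up-down viewing angle rotation (second direction Y).
    theta : left-right viewing angle rotation (first direction X).
    """
    c1, s1 = np.cos(phi), np.sin(phi)
    c2, s2 = np.cos(theta), np.sin(theta)
    M1 = np.array([[1,   0,  0, 0],    # rotation for the up-down viewing angle
                   [0,  c1, s1, 0],
                   [0, -s1, c1, 0],
                   [0,   0,  0, 1.0]])
    M2 = np.array([[c2, 0, -s2, 0],    # rotation for the left-right viewing angle
                   [0,  1,   0, 0],
                   [s2, 0,  c2, 0],
                   [0,  0,   0, 1.0]])
    return W @ M1 @ M2
```

With both angles zero the region is unchanged, matching the case of the camera at the center point O' facing the center point O horizontally.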
It will be appreciated that the region W' in which the light field image region 610 is located may extend beyond the range of the two-dimensional image plane 310, but not beyond the range of the orthographic projection plane 330.
In this step, for any one shooting position, there is a corresponding light field image region 610 in the orthogonal projection image 620, and the shape of each light field image region 610 is related to the rotation angle of the virtual camera 320.
In the embodiment of the present invention, the rotation angle of the virtual camera 320 is a parameter of the virtual camera 320 itself, and can be directly obtained.
Step 260, performing affine transformation on the plurality of light field image regions to obtain a plurality of corrected light field image regions.
Since the light field image area 610 obtained by the virtual camera 320 at some shooting positions is deformed, and the two-dimensional light field image to be generated by the present invention should be limited to the range of the two-dimensional image plane 310, any light field image area 610 that is deformed or extends beyond the range of the two-dimensional image plane 310 needs to be corrected. In this step, the light field image region 610 is transformed by affine transformation to the rectangular region where the two-dimensional image plane 310 is located, and the transformed light field image region 610 is referred to as a corrected light field image region.
FIG. 7 is a diagram of a corrected light field image region in an embodiment of the invention. Referring to fig. 7, the light field image region 610 is affine transformed in step 260 to obtain a corrected light field image region 710. The size of the corrected light field image region 710 is the same as the size of the two-dimensional image plane 310. As shown in FIG. 7, the checkerboard pattern on the two-dimensional image plane 310 is restored to its original shape after this transformation, whereas the orthographic projection of the three-dimensional virtual object is significantly changed from before the transformation.
The light field image area obtained at each shooting position of the virtual camera 320 is corrected in this step, yielding a corrected light field image area corresponding to that shooting position.
Referring to fig. 6 and 7, in some embodiments, the deformed light field image region 610 obtained by the virtual camera 320 is a parallelogram and the corrected light field image region 710 is a rectangle due to the characteristics of orthogonal projection.
In other embodiments, other mathematical transformations may be used to correct the light field image area 610.
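The affine correction of step 260 can be sketched as follows. Solving the transform by least squares from vertex correspondences is one possible realization; the function names and this particular formulation are illustrative assumptions, not prescribed by the invention:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the affine transform A (shape 3x2) that maps source points
    to destination points in the least-squares sense: dst ≈ [src | 1] @ A."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_aug = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_aug, dst, rcond=None)
    return A

def warp_points(pts, A):
    """Apply the affine transform A to an array of 2-D points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A
```

Mapping the four vertices of a sheared (parallelogram) light field image region onto the rectangle of the two-dimensional image plane recovers the correction exactly, because a parallelogram-to-rectangle map is affine.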
The pixel information contained in each corrected light field image region 710 corresponds to a projection of a three-dimensional virtual object obtained by the virtual camera 320 in a certain rotational direction onto the two-dimensional image plane 310. Because the orthogonal projection camera is adopted in the invention, all the light rays reaching the virtual camera 320 can be considered as parallel light and have the same incident angle, so that the virtual camera 320 only needs to shoot once to obtain the light field image area 610 and the corrected light field image area 710 corresponding to the shooting position.
Step 270, combining the plurality of corrected light field image regions to obtain a two-dimensional light field image.
FIG. 8 is a schematic of a two-dimensional light field image of an embodiment of the present invention. Referring to fig. 8, a two-dimensional light field image 810 includes a plurality of object pixel grids 811 therein. Each object pixel grid 811 includes N × M pixels corresponding to N × M capturing position coordinates of the virtual camera 320 in the capturing position coordinate set. As can be understood in connection with step 260, the pixels in the object pixel grid 811 come from the corrected light field image region corresponding to the location of the pixel.
In the embodiment shown in fig. 8, N = M = 5.
Taking object pixel grid a11 and object pixel grid a12 as examples, pixels 821 and 822, located at row 1, column 1 in the two object pixel grids, both come from the corrected light field image area obtained by the virtual camera 320 at the row 1, column 1 position coordinate in the shooting position coordinate set; pixels 823 and 824, located at row 4, column 1 in the two object pixel grids, both come from the corrected light field image area obtained by the virtual camera 320 at the row 4, column 1 position coordinate in the shooting position coordinate set. As to which pixel in the corrected light field image region the pixel values of pixels 821, 822, 823 and 824 specifically come from, there is no particular limitation, and various changes can be made within a reasonable range by those skilled in the art.
According to the method of generating a two-dimensional light field image of the present invention, a two-dimensional light field image may be generated such that the two-dimensional light field image is stereoscopically displayed in front of the stereoscopic light field display device 100 in cooperation with the point light source array 120.
Fig. 9 is an exemplary flow chart for determining a set of shooting position coordinates for a virtual camera within an orthographic projection plane in some embodiments of the invention. Referring to fig. 9, in some embodiments, the step 230 of generating a two-dimensional light field image of the present invention may be performed by:
and 231, determining the moving range of the virtual camera according to the target visual angle of the stereoscopic light field and the distance between the orthogonal projection plane and the two-dimensional image plane.
Fig. 10A is one of schematic diagrams of determining a set of shooting position coordinates of a virtual camera in an embodiment of the present invention. Referring to fig. 10A, when viewed from the center point O of the two-dimensional image plane 310 toward the orthogonal projection plane 330 where the virtual camera 320 is located, the range in which the virtual camera can be viewed is limited by the viewing angle. In the present embodiment, the viewing angles include a left-right viewing angle α and an up-down viewing angle β. Here, the left-right viewing angle α represents a viewing angle in the first direction X, and the up-down viewing angle β represents a viewing angle in the second direction Y.
Referring to fig. 10A, the distance between the center point O of the two-dimensional image plane 310 and the center point O' of the orthogonal projection plane 330 is G. The center point O' is a photographing position where the virtual camera 320 horizontally faces the center point O of the two-dimensional image plane 310. The center point O of the two-dimensional image plane 310 corresponds to the visible range on the orthogonal projection plane 330, which is determined by the left-right visible angle α, the up-down visible angle β, and the distance G. Specifically, the following formula can be used to calculate:
E(O)∈(-G*tan(α/2),G*tan(α/2)) (0°<α<180°)
F(O)∈(-G*tan(β/2),G*tan(β/2)) (0°<β<180°)
in the above formula, the center point O' of the orthogonal projection plane 330 is used as the origin of coordinates, e (O) represents the visible range in the first direction X, and f (O) represents the visible range in the second direction Y.
For any point on the two-dimensional image plane 310, there is a corresponding field of view on the orthographic projection plane 330. All of these visual ranges are used together to define the size of the orthogonal projection plane 330, as well as the range of motion of the virtual camera 320 at the location where the orthogonal projection plane 330 is located.
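The visible-range formulas above translate directly into the half-widths of the virtual camera's moving range; a minimal sketch follows (the function name is illustrative, and the angles are assumed to be given in degrees):

```python
import math

def movable_range(alpha_deg, beta_deg, G):
    """Half-widths of the visible range on the orthographic projection
    plane for the center point O, per
    E(O) ∈ (-G*tan(α/2), G*tan(α/2)) and F(O) ∈ (-G*tan(β/2), G*tan(β/2)),
    with the center point O' taken as the coordinate origin.
    Requires 0° < α < 180° and 0° < β < 180°."""
    half_x = G * math.tan(math.radians(alpha_deg) / 2)  # first direction X
    half_y = G * math.tan(math.radians(beta_deg) / 2)   # second direction Y
    return half_x, half_y
```

For instance, with α = β = 90° and G = 1, the camera may move within ±1 in both directions.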
And step 232, determining the number of shooting position points of the virtual camera in the orthogonal projection plane according to the number of the visual angles of the stereoscopic light field.
Fig. 10B is a second schematic diagram illustrating the principle of determining the coordinate set of the shooting position of the virtual camera according to an embodiment of the present invention. Referring to fig. 10B, a plurality of points representing the photographing positions of the virtual camera 320 are included in the orthogonal projection plane 330.
In this step, the number of shooting points is determined according to the number of viewing angles of the stereoscopic light field required to present the virtual three-dimensional scene. The number of viewing angles here may include a number M in the first direction X and a number N in the second direction Y. The number of the viewing angles is related to the resolution of the virtual three-dimensional scene to be presented, and also affects the amount of computation for generating the corresponding two-dimensional light field image. The larger the number of viewing angles, the higher the resolution of the virtual three-dimensional scene to be rendered, and the larger the amount of computation to generate the corresponding two-dimensional light field image.
Referring to fig. 10B, in the present embodiment, the virtual camera 320 has M shooting position points in the first direction X and N shooting position points in the second direction Y. The shot position points are arranged in a matrix form to form an M x N shot position lattice. The virtual camera 320 may photograph a virtual three-dimensional scene over the M x N photographing location points.
And 233, determining a shooting position coordinate set of the virtual camera in the orthogonal projection plane according to the shooting position dot matrix formed by the shooting position points and the moving range of the virtual camera.
As shown in fig. 10B, the plurality of shooting position points determined in step 232 constitute a shooting position lattice of the virtual camera 320. From the shooting position lattice and the movable range of the virtual camera 320, a shooting position coordinate set of the virtual camera 320 within the orthogonal projection plane 330 can be determined. The coordinate set may be represented by using the center point O' of the orthogonal projection plane 330 as the origin of coordinates, or may be represented by using an arbitrary position on the orthogonal projection plane 330 as the origin of coordinates.
In this embodiment, the shooting position lattice is a uniformly distributed rectangular array, i.e., the spacings between adjacent shooting positions in the shooting position coordinate set are equal. It should be noted that fig. 10A and 10B are not intended to limit the shape and size of the orthogonal projection plane 330, or the shape and number of shooting positions of the virtual camera 320. In other embodiments, the orthogonal projection plane 330 may have other shapes, and the shooting position lattice of the virtual camera 320 may have other distribution forms, such as a non-uniform matrix.
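Steps 231 to 233 can be sketched as generating a uniform lattice inside the moving range. The function name and the numpy representation below are assumptions for illustration:

```python
import numpy as np

def shooting_positions(M, N, half_x, half_y):
    """Uniform shooting position lattice of the virtual camera:
    M points in the first direction X and N points in the second
    direction Y, centered on the center point O' of the orthographic
    projection plane (used here as the coordinate origin).
    Returns an array of shape (N, M, 2) of (x, y) coordinates."""
    xs = np.linspace(-half_x, half_x, M)
    ys = np.linspace(-half_y, half_y, N)
    gx, gy = np.meshgrid(xs, ys)        # N rows by M columns
    return np.stack([gx, gy], axis=-1)
```

The equal spacing of `np.linspace` matches the uniformly distributed rectangular array of this embodiment; a non-uniform lattice would substitute a different spacing rule.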
FIG. 11 is an exemplary flow chart for combining corrected light field image regions to obtain a two-dimensional light field image in an embodiment of the present invention. Referring to fig. 11, in some embodiments, the step 270 of generating a two-dimensional light field image of the present invention may be performed by:
and 271, combining pixel points at the same position of the plurality of corrected light field image areas into an object pixel grid according to the relative view angle positions of the plurality of corrected light field image areas.
FIG. 12 is a schematic diagram of the principle of combining corrected light field image regions to obtain a two-dimensional light field image in an embodiment of the present invention. Referring to fig. 12, the region 801 includes N × M corrected light field image regions, which correspond to N × M capturing position points of the virtual camera 320 in its capturing position coordinate set, respectively.
Each corrected light field image region comprises n × m pixels. The pixel points at the same position in each corrected light field image region are combined to obtain an object pixel grid corresponding to that position, and the object pixel grid comprises N × M pixels. Finally, n × m object pixel grids can be obtained.
In the embodiment shown in fig. 12, M = N = 5, so the corresponding object pixel grid is square; in other embodiments, M may not be equal to N. Also in the embodiment shown in fig. 12, n = 5 and m = 8.
Step 272, combine all object pixel grids to obtain a two-dimensional light field image.
As previously described, the two-dimensional light field image 810 is composed of n × m object pixel grids. Each object pixel grid comprises N × M pixels. The number and division of the object pixel grids in the two-dimensional light field image 810 are determined by the shape and size of the two-dimensional image plane 310 and the resolution of the two-dimensional light field image to be obtained: a higher resolution corresponds to a greater number of object pixel grids, and a lower resolution to a smaller number.
Referring to FIG. 12, object pixel grid A11 is located at row 1, column 1 in the two-dimensional light field image 810. The pixel value of each pixel in object pixel grid A11 comes from the pixel at row 1, column 1 of one of the corrected light field image regions. Moreover, the positions of these pixel points within object pixel grid A11 correspond to the relative view angle positions of the corrected light field image areas from which they are taken, that is, to the positions of the virtual camera 320 in the shooting position lattice.
For example, the relative perspective position of the corrected light-field image region B11 is row 1, column 1 in the shooting position lattice, the relative perspective position of the corrected light-field image region B12 is row 1, column 2 in the shooting position lattice, the relative perspective position of the corrected light-field image region B13 is row 1, column 3 in the shooting position lattice, the relative perspective position of the corrected light-field image region B21 is row 2, column 1 in the shooting position lattice, and so on.
In the object pixel grid a11, the pixel P1 'located in row 1, column 1 is equal to the pixel P1 in row 1, column 1 in the corrected light-field image region B11, the pixel P2' located in row 1, column 2 is equal to the pixel P2 in row 1, column 1 in the corrected light-field image region B12, and so on; the pixel P3' located in row 2 and column 1 is equal to the pixel P3 in row 1 and column 1 in the corrected light-field image region B21.
In the object pixel grid a12, the pixel P4 'located in row 1, column 1 is equal to the pixel P4 in row 1, column 2 in the corrected light-field image region B11, the pixel P5' located in row 1, column 2 is equal to the pixel P5 in row 1, column 2 in the corrected light-field image region B12, and so on.
In the object pixel grid A4m, the pixel P6' located at row 1, column 3 is equal to the pixel P6 at row 4, column m in the corrected light-field image region B13. In the embodiment shown in fig. 8, m is 8.
It should be noted that, the two pixels being equal in this document means that the pixel values of the two pixels are equal.
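The pixel interleaving of steps 271 and 272 amounts to a transpose-and-reshape of the stack of corrected regions. Below is a sketch under the assumption that the regions are provided as a single numpy array; the indexing follows the convention described above, where pixel (r, c) of region (i, j) becomes pixel (i, j) of object pixel grid (r, c):

```python
import numpy as np

def combine_regions(regions):
    """Combine corrected light field image regions into the
    two-dimensional light field image.

    regions: array of shape (N, M, n, m) -- one n x m corrected region
    for each of the N x M shooting positions (viewing angles).
    Returns an (n*N, m*M) image in which output pixel
    (r*N + i, c*M + j) equals pixel (r, c) of region (i, j),
    i.e. each object pixel grid holds one pixel per viewing angle.
    """
    N, M, n, m = regions.shape
    return regions.transpose(2, 0, 3, 1).reshape(n * N, m * M)
```

For example, N = M = 5 with n = 5 and m = 8 yields a 25 × 40 image, consistent with a total of N × M × n × m pixels.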
According to the method of generating a two-dimensional light field image of the present embodiment, the number of pixels in the generated two-dimensional light field image 810 is N × M × n × m.
The invention also includes an apparatus for generating a two-dimensional light field image comprising a memory and a processor. The memory is used for storing instructions executable by the processor; the processor is configured to execute the instructions to implement the method of generating a two-dimensional light field image of the present invention.
According to the method and apparatus for generating a two-dimensional light field image of the present invention, a two-dimensional light field image may be generated such that the two-dimensional light field image is stereoscopically displayed in front of the stereoscopic light field display apparatus 100 after passing through the point light source array 120. The method and the device for generating the two-dimensional light field image have the following beneficial effects:
First, viewers can directly watch the corresponding three-dimensional images from different viewing angles with the naked eye, without wearing special glasses or other additional equipment, and the effect is vivid.
Second, the corrected light field image areas obtained by the orthographic projection camera at different shooting positions are used as the two-dimensional light field images in the corresponding view angle directions, so that the number of operations is greatly reduced and the image generation speed is increased.
Aspects of the methods and apparatus of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may be embodied as a computer product, including computer readable program code, on one or more computer readable media. For example, computer readable media can include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
The computer readable medium may comprise a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. The computer readable medium can be any computer readable medium that can communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, claimed subject matter may lie in fewer than all features of a single disclosed embodiment.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
This application uses specific words to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Although the present invention has been described with reference to the present specific embodiments, it will be appreciated by those skilled in the art that the above embodiments are merely illustrative of the present invention, and various equivalent changes and substitutions may be made without departing from the spirit of the invention, and therefore, changes and modifications to the above embodiments within the spirit of the invention are intended to fall within the scope of the claims of the present application.

Claims (10)

1. A method of generating a two-dimensional light field image for use in rendering a stereoscopic light field of a virtual three-dimensional scene in cooperation with an array of point light sources, the method comprising the steps of:
determining the position, the size and the central point of a two-dimensional image plane where the two-dimensional light field image is located;
determining the position and the size of an orthogonal projection plane where a virtual camera for shooting the virtual three-dimensional scene is located, wherein the virtual three-dimensional scene is limited in the orthogonal projection shooting range;
determining a shooting position coordinate set of the virtual camera in the orthogonal projection plane;
setting the virtual camera at each position coordinate in the set of shooting position coordinates and shooting orthogonal projection images of a plurality of view angles with view angles towards the central point;
extracting a plurality of light field image regions from the orthographic projection images of the plurality of view angles;
performing affine transformation on the plurality of light field image regions to obtain a plurality of corrected light field image regions;
combining the plurality of corrected light field image regions to obtain the two-dimensional light field image.
2. The method of claim 1, wherein the size of the orthographic projection plane in a first direction is greater than the size of the two-dimensional image plane in the first direction, and the size of the orthographic projection plane in a second direction is greater than the size of the two-dimensional image plane in the second direction, such that the orthographic projection images for the plurality of viewing angles completely cover the two-dimensional image plane, wherein the first direction and the second direction are perpendicular.
3. The method of claim 1, wherein determining the shooting position coordinate set of the virtual camera within the orthogonal projection plane comprises:
determining the moving range of the virtual camera according to the target visible angle of the stereoscopic light field and the distance between the orthogonal projection plane and the two-dimensional image plane;
determining the number of shooting position points of the virtual camera in the orthogonal projection plane according to the number of the visual angles of the stereoscopic light field; and
determining a shooting position coordinate set of the virtual camera in the orthogonal projection plane according to a shooting position lattice formed by the shooting position points and the moving range of the virtual camera.
4. The method of claim 3, further comprising determining the distance from the virtual camera to the center point, when the virtual camera horizontally faces the center point, as the distance between the orthogonal projection plane and the two-dimensional image plane.
5. The method of claim 1, wherein the step of combining the plurality of corrected light field image regions comprises:
combining pixel points at the same positions of the plurality of corrected light field image areas into an object pixel grid according to the relative view angle positions of the plurality of corrected light field image areas;
and combining all object pixel grids to obtain the two-dimensional light field image.
6. The method of claim 1 wherein each of the light field image regions is a parallelogram and each of the corrected light field image regions is a rectangle.
7. The method of claim 1, wherein the spacings between the shooting positions in the shooting position coordinate set are equal.
8. The method of claim 1, wherein a spacing between adjacent shooting positions in the shooting position coordinate set monotonically increases in at least one of a horizontal or vertical direction away from a center of the orthographic projection plane.
9. An apparatus for generating a two-dimensional light field image, comprising:
a memory for storing instructions executable by the processor;
a processor for executing the instructions to implement the method of any one of claims 1-8.
10. A computer-readable medium having stored thereon computer program code which, when executed by a processor, implements the method of any of claims 1-8.
CN201910506051.4A 2019-06-12 2019-06-12 Method, apparatus and computer readable medium for generating two-dimensional light field image Pending CN112087617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910506051.4A CN112087617A (en) 2019-06-12 2019-06-12 Method, apparatus and computer readable medium for generating two-dimensional light field image


Publications (1)

Publication Number Publication Date
CN112087617A 2020-12-15

Family

ID=73733326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910506051.4A Pending CN112087617A (en) 2019-06-12 2019-06-12 Method, apparatus and computer readable medium for generating two-dimensional light field image

Country Status (1)

Country Link
CN (1) CN112087617A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114007017A * 2021-11-18 2022-02-01 Zhejiang Bocai Media Co., Ltd. Video generation method and device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1977544A * 2004-05-12 2007-06-06 Setred AS 3D display method and apparatus
CN104297930A * 2014-10-09 2015-01-21 Shenzhen China Star Optoelectronics Technology Co., Ltd. Integrated imaging three-dimensional display device and system
CN108513123A * 2017-12-06 2018-09-07 Army Academy of Armored Forces Pattern array generation method for integral imaging light field display
CN108769462A * 2018-06-06 2018-11-06 Beijing University of Posts and Telecommunications Free-viewpoint scene walkthrough method and device
CN109410345A * 2018-10-15 2019-03-01 Sichuan Changhong Electric Co., Ltd. Unity3D-based target light field creation method
CN109829981A * 2019-02-16 2019-05-31 Shenzhen Future Perception Technology Co., Ltd. Three-dimensional scene rendering method, apparatus, device and storage medium



Similar Documents

Publication Publication Date Title
US7787009B2 (en) Three dimensional interaction with autostereoscopic displays
JP6636163B2 (en) Image display method, method of generating shaped sledge curtain, and head mounted display device
KR102415501B1 (en) Method for assuming parameter of 3d display device and 3d display device thereof
US20180063513A1 (en) Stitching frames into a panoramic frame
US11004267B2 (en) Information processing apparatus, information processing method, and storage medium for generating a virtual viewpoint image
CN107924556B (en) Image generation device and image display control device
CN107193124A Parameter design method for integral imaging high-density small-pitch LED display
WO2021197370A1 (en) Light field display method and system, storage medium and display panel
CN108513123A Pattern array generation method for integral imaging light field display
US20180184066A1 (en) Light field retargeting for multi-panel display
JP2019186762A (en) Video generation apparatus, video generation method, program, and data structure
KR101399274B1 (en) multi 3-DIMENSION CAMERA USING MULTI PATTERN BEAM AND METHOD OF THE SAME
CN110870304B (en) Method and apparatus for providing information to a user for viewing multi-view content
CN112087617A (en) Method, apparatus and computer readable medium for generating two-dimensional light field image
CN112087620B (en) Splicing generation method for multiple display devices for displaying stereoscopic light field
CN112087616A (en) Method, apparatus and computer readable medium for generating two-dimensional light field image
US20140347352A1 (en) Apparatuses, methods, and systems for 2-dimensional and 3-dimensional rendering and display of plenoptic images
CN112087618A (en) Method, device and computer readable medium for generating two-dimensional light field image
CN112087614A (en) Method, device and computer readable medium for generating two-dimensional light field image
Oishi et al. An instant see-through vision system using a wide field-of-view camera and a 3d-lidar
Park et al. Enhancement of viewing angle and viewing distance in integral imaging by head tracking
KR20230022153A (en) Single-image 3D photo with soft layering and depth-aware restoration
Lai et al. Exploring manipulation behavior on video see-through head-mounted display with view interpolation
KR101567002B1 (en) Computer graphics based stereo floting integral imaging creation system
CN114967170A (en) Display processing method and device based on flexible naked-eye three-dimensional display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201215