CN112399168A - Multi-viewpoint image generation method, storage medium and display device

Info

Publication number: CN112399168A (granted as CN112399168B)
Authority: CN (China)
Application number: CN202011287892.XA
Original language: Chinese (zh)
Prior art keywords: sub-pixel, viewpoint, viewpoints, determining
Inventors: 孙炎, 楚明磊, 史天阔, 习艳会, 姬治华, 侯一凡, 赵晨曦, 张硕, 彭项君, 张小牤, 孙伟
Assignees: BOE Technology Group Co Ltd; Beijing BOE Optoelectronics Technology Co Ltd
Application filed by BOE Technology Group Co Ltd and Beijing BOE Optoelectronics Technology Co Ltd
Legal status: Active (application granted)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/282: Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A multi-viewpoint image generation method, a storage medium, and a display device, the multi-viewpoint image generation method comprising: determining the number of viewpoints contained in a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction; determining the number of viewpoints contained by a single sub-pixel in a second direction according to the first length of the sub-pixel in the first direction and the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel and the number of viewpoints contained by a single sub-pixel in the first direction; determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained by the single sub-pixel in the first direction and the number of viewpoints contained by the single sub-pixel in the second direction; and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs. The scheme provided by the embodiment realizes the generation of the multi-view three-dimensional image.

Description

Multi-viewpoint image generation method, storage medium and display device
Technical Field
The disclosed embodiments relate to, but not limited to, display technologies, and more particularly, to a multi-view image generation method, a storage medium, and a display device.
Background
The naked eye three-dimensional (3D) display device includes a display panel and an image splitting device. The display panel is generally a liquid crystal display (LCD), an organic light-emitting diode (OLED) panel, or the like; the image splitting device includes a slit grating, a lenticular grating, or the like. The display panel includes a plurality of pixels, and each pixel generally includes sub-pixels of three colors, RGB. The image splitting device includes a plurality of image splitting units arranged in parallel and is attached to the front of the display panel at a certain angle, so that sub-pixels at different positions are split to different positions in space. Therefore, multi-viewpoint images are arranged and rendered according to the light splitting characteristic of the image splitting device and then displayed on the display panel; after passing through the image splitting device, the images of different viewpoints are seen by the viewer's two eyes, and the brain fuses them into a stereoscopic impression.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The disclosed embodiments provide a multi-viewpoint image generation method, a storage medium, and a display device, which implement multi-viewpoint three-dimensional display.
In one aspect, the present disclosure provides a multi-viewpoint image generation method applied to a display apparatus including a display panel and an image splitting device, the display panel including a plurality of pixels distributed in an array, each pixel including a plurality of sub-pixels, the image splitting device including a plurality of grating units parallel to each other, the method including:
determining the number of sub-pixels covered by the grating unit in the first direction, and determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction; the first direction is a pixel row direction, namely the arrangement direction of sub-pixels in the same pixel;
determining the number of viewpoints contained in a single sub-pixel in a second direction according to the first length of the sub-pixel in the first direction and the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel and the number of viewpoints contained in the single sub-pixel in the first direction, wherein the second direction is a pixel column direction;
determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained by the single sub-pixel in the first direction and the number of viewpoints contained by the single sub-pixel in the second direction;
and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs.
In an exemplary embodiment, the determining the number of sub-pixels covered by the grating unit in the first direction includes:

determining the number of sub-pixels covered by the grating unit in the first direction as

Px = P / (Sw * cos θ)

where P is the width of the grating unit, Sw is the length of the sub-pixel along the first direction, and θ is the attaching angle between the image splitting device and the display panel, that is, the included angle, on a plane parallel to the display panel, between the extending direction of the orthographic projection of the grating unit and the second direction; the sign of θ is determined according to the orientation from the extending direction of the orthographic projection of the grating unit to the second direction.
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the raster unit in the first direction includes:
the number of viewpoints contained by a single sub-pixel in the first direction is

Vx = V / Px

where V is the total number of viewpoints and Px is the number of sub-pixels covered by the grating unit in the first direction.
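As an illustrative sketch (not code from the patent), the two quantities above can be computed together; the parameter names mirror the document's symbols, and the reconstructed formula Px = P / (Sw * cos θ) is an assumption based on the surrounding definitions:

```python
import math

def views_per_subpixel_x(P, Sw, theta_deg, V):
    """Px: sub-pixels covered by one grating unit in the first direction,
    taken as P / (Sw * cos(theta)); Vx: viewpoints per sub-pixel, V / Px."""
    theta = math.radians(theta_deg)
    Px = P / (Sw * math.cos(theta))   # sub-pixels under one grating unit
    Vx = V / Px                       # viewpoints contained by a single sub-pixel
    return Px, Vx

# Illustrative numbers only: a 0.6 mm grating unit over 0.1 mm-wide
# sub-pixels at theta = 0 covers Px = 6 sub-pixels, so with V = 28
# viewpoints each sub-pixel advances the viewpoint number by 28/6.
Px, Vx = views_per_subpixel_x(P=0.6, Sw=0.1, theta_deg=0.0, V=28)
```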
In an exemplary embodiment, the determining the number of viewpoints included in a single sub-pixel in the second direction according to a first length of the sub-pixel in the first direction and a second length of the sub-pixel in the second direction, a fitting angle of the image splitting device with the display panel, and the number of viewpoints included in a single sub-pixel in the first direction includes:
determining an offset value according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction and the attaching angle between the image splitting device and the display panel; and determining the number of viewpoints contained by a single sub-pixel in the second direction according to the number of viewpoints contained by a single sub-pixel in the first direction and the offset value; wherein
the offset value is

Shiftx = (Sh * tan θ) / Sw

where Sw is the first length of the sub-pixel in the first direction, Sh is the second length of the sub-pixel in the second direction, and θ is the attaching angle between the image splitting device and the display panel, that is, the included angle, on a plane parallel to the display panel, between the extending direction of the orthographic projection of the grating unit and the second direction; the sign of θ is determined according to the orientation from the extending direction of the orthographic projection of the grating unit to the second direction.
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the second direction according to the number of viewpoints contained by a single sub-pixel in the first direction and the offset value includes:
the number of viewpoints contained by a single sub-pixel in the second direction is Vy = Vx * Shiftx,

where Vx is the number of viewpoints contained by a single sub-pixel in the first direction and Shiftx is the offset value.
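Continuing the illustrative sketch (not the patent's own code), the offset value and Vy follow directly from the formulas; the reconstructed formula Shiftx = Sh * tan θ / Sw is an assumption, and the numbers below are chosen so that Vx = 6 gives Vy = 3, matching the worked example later in the description:

```python
import math

def views_per_subpixel_y(Sw, Sh, theta_deg, Vx):
    """Shiftx: sub-pixel shift between adjacent rows, Sh * tan(theta) / Sw
    (a reconstruction of the placeholder formula); Vy = Vx * Shiftx."""
    theta = math.radians(theta_deg)
    Shiftx = Sh * math.tan(theta) / Sw
    Vy = Vx * Shiftx
    return Shiftx, Vy

# With Sh = 3 * Sw and tan(theta) = 1/6, each row shifts by half a
# sub-pixel (Shiftx = 0.5), so Vy = 6 * 0.5 = 3.
theta_deg = math.degrees(math.atan(1 / 6))
Shiftx, Vy = views_per_subpixel_y(Sw=0.1, Sh=0.3, theta_deg=theta_deg, Vx=6)
```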
In an exemplary embodiment, the determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction includes:
Vi,1 = (Vfirst - (i-1)*Vy) mod V; if Vi,1 = 0, then Vi,1 = V, i ∈ [1, M]

Vi,j = (Vi,1 + (j-1)*Vx) mod V; if Vi,j = 0, then Vi,j = V, j ∈ [1, N]

where V is the total number of viewpoints, Vi,j is the viewpoint to which the sub-pixel in the ith row and jth column belongs, Vfirst is the viewpoint to which the sub-pixel in the 1st row and 1st column belongs, M is the number of rows of sub-pixels, N is the number of columns of sub-pixels, Vx is the number of viewpoints contained by a single sub-pixel in the first direction, and Vy is the number of viewpoints contained by a single sub-pixel in the second direction.
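The two recurrences can be sketched as a short routine (an illustrative implementation, not code from the patent). Python's % operator already returns a non-negative remainder for a negative left operand, which matches the modulo convention the description uses:

```python
def viewpoint_map(V, Vx, Vy, Vfirst, M, N):
    """Assign each of the M x N sub-pixels a viewpoint number in 1..V.
    Column 1 of each row comes from Vi,1 = (Vfirst - (i-1)*Vy) mod V,
    the rest of the row from Vi,j = (Vi,1 + (j-1)*Vx) mod V; a result
    of 0 wraps around to V."""
    def wrap(v):
        return v if v != 0 else V
    view = []
    for i in range(1, M + 1):
        first = wrap((Vfirst - (i - 1) * Vy) % V)
        row = [wrap((first + (j - 1) * Vx) % V) for j in range(1, N + 1)]
        view.append(row)
    return view

# Example values used elsewhere in the description: V=28, Vx=6, Vy=3, Vfirst=2.
vm = viewpoint_map(V=28, Vx=6, Vy=3, Vfirst=2, M=4, N=18)
```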
In an exemplary embodiment, the determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs includes:
and determining the position of the sub-pixel, and taking the gray-scale value of the corresponding position in the image of the viewpoint to which the sub-pixel belongs as the gray-scale value of the sub-pixel.
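A minimal sketch of this sampling step, assuming each viewpoint image is stored at the panel's sub-pixel resolution and indexed by viewpoint number (the data layout is an assumption, not specified by the patent):

```python
def fill_grayscale(view_map, viewpoint_images):
    """For each sub-pixel position (i, j), take the gray-scale value at
    the same position in the image of the viewpoint the sub-pixel
    belongs to (viewpoint_images maps viewpoint number -> 2D gray image)."""
    return [[viewpoint_images[v][i][j] for j, v in enumerate(row)]
            for i, row in enumerate(view_map)]

# Tiny illustration: two 1x2 viewpoint images, with the two sub-pixels
# assigned to viewpoints 1 and 2 respectively.
images = {1: [[10, 11]], 2: [[20, 21]]}
out = fill_grayscale([[1, 2]], images)
```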
In an exemplary embodiment, after determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction, before determining the gray level value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs, the method further includes:
replacing the viewpoints to which part of the sub-pixels belong, where different sub-pixels belonging to the same viewpoint are replaced with the same viewpoint, so that the ratio of a first parallax, between the left-eye viewpoint and the right-eye viewpoint after replacement, to a second parallax, between the viewpoints of adjacent sub-pixels in the first direction before replacement, meets a preset condition.
In an exemplary embodiment, meeting the preset condition includes: the ratio of the first parallax to the second parallax lies within [-1, 1].
In an exemplary embodiment, replacing the viewpoints to which the partial sub-pixels belong includes:
the viewpoints include viewpoints 1 to V distributed in sequence along the first direction; when the viewpoint S to which a sub-pixel belongs lies in the range from viewpoint K to viewpoint V, the viewpoint S is replaced by viewpoint S-K, where S-K is smaller than or equal to V and larger than or equal to 1, and K is a preset value.
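The wording of this replacement rule is ambiguous in the translation; one possible reading, sketched below purely as an assumption, is that a sub-pixel whose viewpoint S falls in the range [K, V] is remapped to viewpoint S - K, provided the result stays within 1..V:

```python
def replace_viewpoints(view_map, K, V):
    """ASSUMED reading of the rule: viewpoint S in [K, V] becomes S - K
    when 1 <= S - K <= V; other viewpoints are kept.  Sub-pixels with
    the same original viewpoint thus receive the same replacement."""
    def remap(S):
        if K <= S <= V and 1 <= S - K <= V:
            return S - K
        return S
    return [[remap(S) for S in row] for row in view_map]

result = replace_viewpoints([[5, 2, 9]], K=3, V=28)
```

Under this reading, viewpoints 5 and 9 map to 2 and 6, while viewpoint 2 (below K) is unchanged.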
In another aspect, embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions for performing the above-described multi-view image generation method.
In another aspect, the present disclosure provides a display device, including a processor and a memory storing a computer program operable on the processor, wherein the processor executes the computer program to implement the steps of the multi-view image generation method.
The disclosed embodiments provide the multi-view image generation method, the storage medium, and the display device, the multi-view image generation method including: determining the number of viewpoints contained in a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction; determining the number of viewpoints contained by a single sub-pixel in a second direction according to the first length of the sub-pixel in the first direction and the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel and the number of viewpoints contained by a single sub-pixel in the first direction; determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained by the single sub-pixel in the first direction and the number of viewpoints contained by the single sub-pixel in the second direction; and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs. The scheme provided by the embodiment realizes the generation of the multi-view three-dimensional image.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, not to limit the invention.
FIG. 1 is a schematic diagram of a three-dimensional display according to an embodiment;
fig. 2 is a schematic diagram of a display device provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a display panel according to an embodiment;
FIG. 4 is a schematic diagram of an image splitter and a display panel according to an embodiment;
FIG. 5 is a flow diagram of a multi-view image generation method provided by an embodiment;
FIG. 6 is a schematic view illustrating a viewpoint of a sub-pixel according to an embodiment;
FIG. 7 is a diagram illustrating an embodiment of replacing a viewpoint to which a sub-pixel belongs;
fig. 8 is a schematic diagram of a display device according to an embodiment.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
In the drawings, the size of each component, the thickness of layers, or regions may be exaggerated for clarity. Therefore, the embodiments of the present disclosure are not necessarily limited to the dimensions, and the shapes and sizes of the respective components in the drawings do not reflect a true scale. Further, the drawings schematically show ideal examples, and the embodiments of the present disclosure are not limited to the shapes or numerical values shown in the drawings.
The ordinal numbers such as "first", "second", "third", etc., in this disclosure are provided to avoid confusion among the constituent elements, and do not indicate any order, number, or importance.
In the present disclosure, for convenience, terms indicating orientation or positional relationship such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like are used to explain positional relationship of constituent elements with reference to the drawings, only for convenience of description and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured in a specific orientation, and be operated, and thus, should not be construed as limiting the present disclosure. The positional relationship of the components is changed as appropriate in accordance with the direction in which each component is described. Therefore, the words described in the disclosure are not limited thereto, and may be replaced as appropriate.
In the present disclosure, "parallel" means a state in which an angle formed by two straight lines is-10 ° or more and 10 ° or less, and therefore, includes a state in which the angle is-5 ° or more and 5 ° or less. The term "perpendicular" refers to a state in which the angle formed by two straight lines is 80 ° or more and 100 ° or less, and therefore includes a state in which the angle is 85 ° or more and 95 ° or less.
Fig. 1 shows the principle of naked-eye 3D display using a lenticular lens: pixels at different positions on the display panel are refracted and split by the lenticular lens, and the light paths are changed to form different viewing regions in space, so that when a person's two eyes are located in the correct viewing regions (i.e., the left eye receives the left-viewpoint image and the right eye receives the right-viewpoint image), the person perceives a stereoscopic impression.
In the embodiments of the present disclosure, a multi-viewpoint image generation method (also called a layout method) is provided, which can fuse multi-viewpoint content according to the relevant parameters of the display panel and the image splitting device and the number of viewpoints to be displayed, so that a stereoscopic image can be displayed accurately.
Fig. 2 is a schematic diagram of a display device according to an exemplary embodiment. As shown in fig. 2, the display device includes a display panel 10 and an image splitting device 20, the image splitting device 20 includes a plurality of grating units 21 parallel to each other, and the width of each grating unit 21 is P. The extending direction Q of the grating unit 21 forms an angle θ with the pixel column direction of the display panel (the Y direction in the figure); that is, the image splitting device 20 is attached to the surface of the display panel 10 at the angle θ. The display panel 10 includes a plurality of pixels, and the pixels are located on the focal plane of the grating units 21. In this embodiment, a lenticular lens grating is shown as the image splitting device 20; the disclosed embodiments are not limited thereto, and another grating may be used.
As shown in fig. 3, the display panel 10 includes a plurality of pixels 11 distributed in an array, and each pixel 11 includes a plurality of sub-pixels 12, for example a red sub-pixel R, a green sub-pixel G and a blue sub-pixel B; the embodiments of the disclosure are not limited thereto, and there may be sub-pixels of other colors, or more than 3 sub-pixels, such as 4 sub-pixels. The sub-pixels 12 in the same pixel 11 are arranged along a first direction X, also called the pixel row direction; the second direction Y is also called the pixel column direction, and the first direction X and the second direction Y may be perpendicular. The length of the sub-pixel 12 along the first direction X is Sw (also referred to as the sub-pixel width), and the length of the sub-pixel 12 along the second direction Y is Sh (also referred to as the sub-pixel height).
As shown in fig. 4, the image splitting device 20 is attached to the surface of the display panel 10 at an angle θ, θ ∈ (-90°, 90°). The angle θ is, on a plane parallel to the display panel 10, the included angle between the extending direction Q of the orthographic projection of the grating unit 21 and the second direction Y. The sign of θ is determined according to the orientation from the extending direction Q of the orthographic projection of the grating unit 21 to the second direction Y: θ is positive when the rotation from Q to Y is clockwise, as shown in fig. 4, and negative when it is counterclockwise.
Because the design parameters of different 3D display devices differ, the multi-viewpoint image can be determined according to parameters matched to the specific 3D display device. This gives strong compatibility and covers more application scenarios, and devices with the same design parameters only need to be configured once. The parameters may be as shown in Table 1.
TABLE 1 display Panel and image splitting device parameters
(Table 1 is not reproduced here; it lists the display panel and image splitting device parameters referenced in the text, such as the grating unit width P, the sub-pixel width Sw, the sub-pixel height Sh, the attaching angle θ, and the total number of viewpoints V.)
Fig. 5 is a flowchart of a multi-view image generation method according to an exemplary embodiment. As shown in fig. 5, the multi-viewpoint image generating method provided by the present embodiment is applied to the display apparatus described above, and includes:
Step 501: determining the number Px of sub-pixels covered by the grating unit 21 in the first direction X, and determining the number Vx of viewpoints contained by a single sub-pixel in the first direction X according to the total number of viewpoints V and the number Px of sub-pixels covered by the grating unit 21 in the first direction; the first direction X is the pixel row direction, that is, the arrangement direction of the sub-pixels in the same pixel;

Step 502: determining the number Vy of viewpoints contained by a single sub-pixel in the second direction Y according to the first length Sw of the sub-pixel in the first direction X, the second length Sh of the sub-pixel in the second direction Y, the attaching angle θ between the image splitting device 20 and the display panel 10, and the number Vx of viewpoints contained by a single sub-pixel in the first direction X; the second direction Y is the pixel column direction;

Step 503: determining the viewpoint to which each sub-pixel belongs according to the number Vx of viewpoints contained by a single sub-pixel in the first direction X and the number Vy of viewpoints contained by a single sub-pixel in the second direction Y;

Step 504: determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs.
The scheme provided by this embodiment can determine the viewpoint to which each sub-pixel belongs according to the relevant parameters of the display panel and the image splitting device and the number of viewpoints to be displayed, so that a stereoscopic image can be displayed accurately with a continuous 3D effect; the method has strong compatibility and is applicable to various devices.
In an exemplary embodiment, the image of a viewpoint may be obtained by shooting with an imaging device; alternatively, after the images of some viewpoints are shot, the images of the other viewpoints are generated from the images of the existing viewpoints.
In an exemplary embodiment, the determining the number Px of sub-pixels covered by the grating unit 21 in the first direction X includes:

Px = P / (Sw * cos θ)

where P is the width of the grating unit, Sw is the length of the sub-pixel along the first direction X, and θ ∈ (-90°, 90°) is the attaching angle between the image splitting device and the display panel, that is, the included angle, on a plane parallel to the display panel, between the extending direction of the orthographic projection of the grating unit 21 and the second direction Y; the sign of θ is determined according to the orientation from the extending direction Q of the orthographic projection of the grating unit to the second direction Y.
In an exemplary embodiment, the determining the number Vx of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints V and the number Px of sub-pixels covered by the grating unit in the first direction includes:

the number of viewpoints contained by a single sub-pixel in the first direction is

Vx = V / Px

Sub-pixels at different positions within a grating unit 21 are split into different directions, and every grating unit covers the display panel in parallel at the angle θ, so sub-pixels at the same position in different grating units 21 have the same viewpoint direction. Therefore, when the number of input viewpoints is V and the grating unit 21 covers Px sub-pixels in the first direction, the number Vx of viewpoints contained by a single sub-pixel in the first direction X (which can also be understood as the difference Vx between the viewpoint numbers of horizontally adjacent sub-pixels) is V / Px.
In an exemplary embodiment, the determining the number Vy of viewpoints contained by a single sub-pixel in the second direction Y according to the first length Sw of the sub-pixel in the first direction X, the second length Sh of the sub-pixel in the second direction Y, the attaching angle θ between the image splitting device 20 and the display panel 10, and the number Vx of viewpoints contained by a single sub-pixel in the first direction X includes:

determining an offset value Shiftx, also called the sub-pixel shift between adjacent rows, according to the first length Sw of the sub-pixel in the first direction X, the second length Sh of the sub-pixel in the second direction Y, and the attaching angle θ between the image splitting device 20 and the display panel 10;

determining the number of viewpoints contained by a single sub-pixel in the second direction Y according to the number Vx of viewpoints contained by a single sub-pixel in the first direction X and the offset value Shiftx.
In an exemplary embodiment, the determining an offset value according to the first length Sw of the sub-pixel in the first direction X, the second length Sh of the sub-pixel in the second direction Y, and the attaching angle θ between the image splitting device and the display panel may include:

the offset value is

Shiftx = (Sh * tan θ) / Sw
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the second direction Y according to the number Vx of viewpoints contained by a single sub-pixel in the first direction X and the offset value includes:

the number of viewpoints contained by a single sub-pixel in the second direction is Vy = Vx * Shiftx,

where the number Vy of viewpoints contained by a single sub-pixel in the vertical direction can also be understood as the difference Vy between the viewpoint numbers of vertically adjacent sub-pixels.
In an exemplary embodiment, the determining the viewpoint to which a sub-pixel belongs according to the number Vx of viewpoints contained by a single sub-pixel in the first direction X and the number Vy of viewpoints contained by a single sub-pixel in the second direction Y includes:

Vi,1 = (Vfirst - (i-1)*Vy) mod V; if Vi,1 = 0, then Vi,1 = V, i ∈ [1, M]

Vi,j = (Vi,1 + (j-1)*Vx) mod V; if Vi,j = 0, then Vi,j = V, j ∈ [1, N]
where mod is the modulo operation, Vi,j is the viewpoint to which the sub-pixel in the ith row and jth column belongs, Vfirst is the viewpoint to which the sub-pixel in the 1st row and 1st column belongs, M is the number of rows of sub-pixels, and N is the number of columns of sub-pixels. In this embodiment, the viewpoint Vi,1 to which the sub-pixel in the first column of each row belongs may be calculated first, and then the viewpoints of the sub-pixels in the remaining columns of that row are calculated from Vi,1. That is, the calculation may start from the sub-pixel in row 1, column 1; after the viewpoints of the sub-pixels in one row have been calculated, the viewpoints of the sub-pixels in the next row are calculated. The sub-pixel in row 1, column 1 may be the sub-pixel in the upper-left corner of the display panel. The number of rows of sub-pixels may be the same as the number of rows of pixels, and the number of columns of sub-pixels may be 3 times the number of columns of pixels (when each pixel includes 3 sub-pixels).
Vfirst may be specified as needed, for example as the 2nd viewpoint; the embodiments of the present disclosure are not limited thereto, and it may be another value. In another embodiment, Vfirst may be the viewpoint to which a sub-pixel in another row and column belongs.
In an exemplary embodiment, the Vi,j values of the same row may be cyclically shifted to obtain the viewpoints to which the sub-pixels of that row belong.
In an exemplary embodiment, at Vx=6,V y3, Vfirst 2 (i.e., V)1,12), V28 (the numbers of the viewpoints are 1 to 28) is taken as an example, and the viewpoint to which the sub-pixels belong is calculated. Wherein:
the viewpoint to which the sub-pixel in the 1st row and 2nd column belongs is V1,2 = (V1,1 + (j-1)*Vx) mod V = (2 + (2-1)*6) mod 28 = 8;
The viewpoint to which the sub-pixel in the 1st row and 3rd column belongs is V1,3 = (2 + (3-1)*6) mod 28 = 14;
By analogy, the viewpoint to which the sub-pixel in the 1st row and 14th column belongs is V1,14 = (2 + (14-1)*6) mod 28 = 24; the subsequent columns are calculated similarly and are not described again.
The viewpoint to which the sub-pixel in the 2nd row and 1st column belongs is V2,1 = (Vfirst - (i-1)*Vy) mod V = (2 - (2-1)*3) mod 28 = 27; the calculation for the sub-pixels in the 2nd row, columns 2 through N, is similar and is not repeated. The calculation results are shown in fig. 6. In fig. 6, only the viewpoints of 4 rows and 18 columns of sub-pixels are shown; the viewpoints of the other sub-pixels are not shown and can be calculated in a similar way. In addition, it can be seen that the viewpoints of the sub-pixels in the 1st and 15th columns are the same, and the viewpoints of the sub-pixels in the 2nd and 16th columns are the same. Therefore, among multiple columns of sub-pixels sharing the same viewpoints, only the viewpoints of one column need be calculated, and the calculated viewpoints of that column can be taken as the viewpoints of the other columns with the same viewpoints.
In this embodiment, the modulo operation on a negative number outputs a positive remainder, e.g., V2,1 = (2 - (2-1)*3) mod 28 = (-1) mod 28 = 27; V3,1 = (2 - (3-1)*3) mod 28 = (-4) mod 28 = 24; V4,1 = (2 - (4-1)*3) mod 28 = (-7) mod 28 = 21; V5,1 = (2 - (5-1)*3) mod 28 = (-10) mod 28 = 18; V12,1 = (2 - (12-1)*3) mod 28 = (-31) mod 28 = 25.
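This positive-remainder convention is worth checking when implementing: Python's % operator already behaves this way for negative dividends, whereas in C-family languages the remainder of a negative dividend is non-positive, so an expression like ((n % V) + V) % V would be needed there. A small sanity check of the values above:

```python
# Positive remainders for the negative intermediate values in the text.
V = 28
examples = {-1: 27, -4: 24, -7: 21, -10: 18, -31: 25}
for n, expected in examples.items():
    assert n % V == expected, (n, n % V)
print("all remainders match")
```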
In an exemplary embodiment, the determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs includes:
and determining the position of the sub-pixel, and taking the gray-scale value at the corresponding position in the image of the viewpoint to which the sub-pixel belongs as the gray-scale value of the sub-pixel. For example, the gray-scale value in the i-th row and j-th column of the image of the associated viewpoint is used as the gray-scale value of the sub-pixel in the i-th row and j-th column. This is only an example; alternatively, a weighted gray-scale value around the corresponding position in the image of the viewpoint may be used as the gray-scale value of the sub-pixel, and so on.
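The per-sub-pixel sampling step can be sketched as follows. This is an illustration under assumed data structures (viewpoint images as 2-D lists of gray levels and a caller-supplied viewpoint map); none of these names come from the disclosure.

```python
def compose_panel(view_images, viewpoint_of, M, N):
    """Fill the M x N sub-pixel grid by copying, for each sub-pixel (i, j),
    the gray-scale value at the same (i, j) position from the image of the
    viewpoint that the sub-pixel belongs to (rows/columns 1-indexed)."""
    return [[view_images[viewpoint_of(i, j)][i - 1][j - 1]
             for j in range(1, N + 1)]
            for i in range(1, M + 1)]

# Two tiny 2x2 "viewpoint images" and a toy alternating viewpoint map:
views = {1: [[10, 11], [12, 13]], 2: [[20, 21], [22, 23]]}
panel = compose_panel(views, lambda i, j: 1 if (i + j) % 2 == 0 else 2, 2, 2)
# panel == [[10, 21], [22, 13]]
```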
A 3D film in a cinema uses 2 viewpoints: through the image splitting of 3D glasses, the left eye receives the left-viewpoint image and the right eye receives the right-viewpoint image. A naked-eye 3D display device generally displays multi-viewpoint content; when the eyes move, they easily enter a reverse-viewing area or a dead zone, that is, the left eye receives a right-viewpoint image and the right eye receives a left-viewpoint image, which causes discomfort. In an exemplary embodiment, the viewpoint arrangement is optimized (the viewpoints to which the sub-pixels belong are changed) to weaken the reverse-viewing effect, relieve the dizziness and double-image sensation caused by the reverse-viewing area, make the viewpoints more continuous, and make the naked-eye 3D effect more comfortable. In the embodiment of the present disclosure, the degree of negative parallax can be alleviated by sacrificing the number of viewpoints.
In an exemplary embodiment, after determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction, before determining the gray level value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs, the method further includes:
and replacing the viewpoints to which some of the sub-pixels belong, with sub-pixels having the same viewpoint before replacement being replaced with the same viewpoint, so that the ratio of the first parallax between the left-eye viewpoint and the right-eye viewpoint after replacement to the second parallax between the viewpoints of adjacent sub-pixels in the first direction without replacement (also called the normalized first parallax) meets a preset condition. According to the scheme provided by this embodiment, the parallax between the left eye and the right eye is controlled by replacing some viewpoints, which helps improve the 3D experience. Taking fig. 6 as an example, the second parallax between the viewpoints of adjacent sub-pixels in the first direction is 6.
In an exemplary embodiment, meeting the preset condition may include: the ratio of the first parallax to the second parallax is within [-1, 1]. This is merely an example; other conditions that can mitigate reverse viewing may be set as needed. For example, the condition may be that the absolute value of the ratio of the first parallax to the second parallax at the same position is smaller than when viewpoint replacement is not performed. Taking 28 viewpoints as an example, without viewpoint replacement the second parallax is 6; when the left-eye viewpoint is 23, the right-eye viewpoint is 1, the first parallax is -22, and the normalized parallax is about -3.6. Some viewpoints may then be replaced so that the absolute value of the normalized parallax at the same position is less than 3.6.
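The normalized first parallax before any replacement can be computed as below. This is a sketch; the wrap-around of the right-eye viewpoint past V back to 1, and the defaults V = 28 and second parallax 6, are taken from the example in the text.

```python
def normalized_parallax(left_vp, V=28, second=6):
    """First parallax (right-eye viewpoint minus left-eye viewpoint) divided
    by the second parallax, with the right eye `second` viewpoints ahead of
    the left eye and viewpoint numbers wrapping from V back to 1."""
    right_vp = (left_vp + second - 1) % V + 1
    return (right_vp - left_vp) / second

# Example from the text: left eye at viewpoint 23 -> right eye at viewpoint 1,
# first parallax -22, normalized -22/6 (about -3.67).
```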
In an exemplary embodiment, replacing the viewpoints to which the partial sub-pixels belong includes:
the viewpoints comprise viewpoints 1 to V distributed sequentially along the first direction; when a viewpoint k to which a sub-pixel belongs lies in the range from viewpoint K to viewpoint V, the viewpoint k is replaced with viewpoint S-k, where S-k is less than or equal to V and greater than or equal to 1, and K is a preset value. In this embodiment, a reverse-order replacement is performed. For example, for V = 28 viewpoints and K = 16, with S = 30, viewpoints 16 to 28 may be replaced with viewpoints 14 to 2, respectively.
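The reverse-order replacement itself is a one-line mapping. In this sketch the parameter defaults follow the V = 28, K = 16, S = 30 example; the function name is an assumption.

```python
def replace_viewpoint(v, V=28, K=16, S=30):
    """Map viewpoints K..V to S-K..S-V in reverse order (here 16..28 -> 14..2);
    viewpoints below K are kept unchanged."""
    return S - v if K <= v <= V else v

# replace_viewpoint(16) -> 14, replace_viewpoint(28) -> 2, replace_viewpoint(15) -> 15
```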
Taking the 28 viewpoints illustrated in fig. 6 as an example, the difference between two horizontally adjacent sub-pixels is 6 viewpoints (the second parallax), which can be understood as one pair of left and right eyes every 6 viewpoints. Therefore, when the left eye is located in the viewing zones corresponding to viewpoints 1 to 22, the left eye is in the forward-viewing region and the right eye falls in the viewing zones of viewpoints 7 to 28; when the left eye moves into the viewing zones corresponding to viewpoints 23 to 28, it is in the reverse-viewing region, and the user perceives dizziness and double images. As shown in tables 2-1 and 2-2, the first two rows are the left-eye and right-eye positions, the third row is the right-eye viewpoint minus the left-eye viewpoint, i.e., the first parallax, and the fourth row is the result of normalizing the first parallax (first parallax/second parallax). It can be seen that when the left eye is at viewpoints 23 to 28, the first parallax is negative; the reverse-viewing region cannot be completely eliminated, only the degree of reverse viewing can be reduced.
In this embodiment, the degree of negative parallax is alleviated by sacrificing the number of viewpoints, so that the maximum negative parallax is consistent with the positive parallax. For example, the original viewpoints 16 to 28 are replaced with viewpoints 14 to 2, and the parallax is then recalculated. The results after replacement are shown in tables 3-1 and 3-2 (the bold entries in tables 3-1 and 3-2 are the replaced viewpoints): the first two rows are the left-eye and right-eye positions, the third row is the parallax between the right-eye viewpoint and the left-eye viewpoint, i.e., the first parallax, and the fourth row is the result of normalizing the first parallax (first parallax/second parallax). In tables 3-1 and 3-2, the positive and negative parallaxes in the fourth row are both within [-1, 1], and the degree of reverse viewing is reduced. In this embodiment, after the viewpoint to which each sub-pixel belongs is calculated, viewpoints 16 to 28 are replaced with viewpoints 14 to 2, respectively, to achieve the reverse-view optimization effect; that is, viewpoint 16 is replaced with viewpoint 14, viewpoint 17 with viewpoint 13, and so on, up to viewpoint 28 replaced with viewpoint 2, and the viewpoints to which the sub-pixels in fig. 6 belong become those shown in fig. 7.
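Combining the reverse-order replacement with the parallax computation from the tables, one can verify the claim that every normalized parallax lands in [-1, 1] after replacement. This is a sketch under the same assumed wrap-around convention as above; the function name is not from the disclosure.

```python
def normalized_parallax_after(left_vp, V=28, second=6, K=16, S=30):
    """Normalized first parallax when both eyes' viewpoints have been put
    through the reverse-order replacement (K..V mapped to S-K..S-V)."""
    remap = lambda v: S - v if K <= v <= V else v
    right_vp = (left_vp + second - 1) % V + 1
    return (remap(right_vp) - remap(left_vp)) / second

values = [normalized_parallax_after(v) for v in range(1, 29)]
assert all(-1.0 <= x <= 1.0 for x in values)  # every position now within [-1, 1]
```

For instance, with the left eye at viewpoint 23 (formerly the worst case), viewpoint 23 is remapped to 7 and the right eye's viewpoint 1 stays 1, giving a normalized parallax of -1 instead of about -3.67.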
In the above embodiment, the original viewpoints 16 to 28 are all replaced. In another embodiment, only some of them may be replaced, that is, the display effect of only part of the reverse-viewing area may be improved.
In the above embodiment, 28 viewpoints are taken as an example; for other numbers of viewpoints, the scheme is similar, and optimization of the reverse-viewing area can be achieved by replacing some of the viewpoints in reverse order so that the normalized parallax lies within [-1, 1].
TABLE 2-1 Parallax
(table reproduced as an image in the original publication)
TABLE 2-2 Parallax
(table reproduced as an image in the original publication)
TABLE 3-1 Parallax after reverse-view mitigation
(table reproduced as an image in the original publication)
TABLE 3-2 Parallax after reverse-view mitigation
(table reproduced as an image in the original publication)
The disclosed embodiments provide a computer-readable storage medium storing computer-executable instructions for performing the above-described multi-viewpoint image generation method.
As shown in fig. 8, the embodiment of the present disclosure provides a display device, which includes a processor 801 and a memory 802 storing a computer program operable on the processor 801. When executing the computer program, the processor 801 implements the steps of the multi-viewpoint image generation method. The display device may be any product or component with a display function, such as a mobile phone, tablet computer, television, monitor, notebook computer, digital photo frame, or navigator. The display device may be a naked-eye 3D display device.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media, as known to those skilled in the art.
It will be understood by those skilled in the art that various modifications and equivalent arrangements of the embodiments of the present disclosure may be made without departing from the spirit and scope of the embodiments of the present disclosure and shall be covered by the appended claims.

Claims (12)

1. A multi-viewpoint image generation method applied to a display device, the display device comprising a display panel and an image splitting device, wherein the display panel comprises a plurality of pixels distributed in an array, each pixel comprises a plurality of sub-pixels, and the image splitting device comprises a plurality of grating units parallel to each other, the method comprising:
determining the number of sub-pixels covered by the grating unit in the first direction, and determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction; the first direction is a pixel row direction, namely the arrangement direction of sub-pixels in the same pixel;
determining the number of viewpoints contained in a single sub-pixel in a second direction according to the first length of the sub-pixel in the first direction and the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel and the number of viewpoints contained in the single sub-pixel in the first direction, wherein the second direction is a pixel column direction;
determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained by the single sub-pixel in the first direction and the number of viewpoints contained by the single sub-pixel in the second direction;
and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs.
2. The method of claim 1, wherein the determining the number of sub-pixels covered by the grating unit in the first direction comprises:
the number of sub-pixels covered by the grating unit in the first direction is
(formula reproduced as an image in the original publication)
wherein P is the width of the grating unit, Sw is the first length of the sub-pixel in the first direction, and θ is the attaching angle between the image splitting device and the display panel, i.e., the included angle, in a plane parallel to the display panel, between the extending direction of the orthographic projection of the grating unit and the second direction, the sign of θ being determined by the direction from the extending direction of the orthographic projection of the grating unit to the second direction.
3. The method of claim 1, wherein determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction comprises:
the number of viewpoints contained by a single sub-pixel in the first direction is
(formula reproduced as an image in the original publication)
wherein V is the total number of viewpoints, and Px is the number of sub-pixels covered by the grating unit in the first direction.
4. The method according to claim 1, wherein the determining the number of viewpoints contained by a single sub-pixel in the second direction according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, the attaching angle between the image splitting device and the display panel, and the number of viewpoints contained by a single sub-pixel in the first direction comprises:
determining an offset value according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, and the attaching angle between the image splitting device and the display panel; and determining the number of viewpoints contained by a single sub-pixel in the second direction according to the number of viewpoints contained by a single sub-pixel in the first direction and the offset value; wherein
the offset value is
(formula reproduced as an image in the original publication)
wherein Sw is the first length of the sub-pixel in the first direction, Sh is the second length of the sub-pixel in the second direction, and θ is the attaching angle between the image splitting device and the display panel, i.e., the included angle, in a plane parallel to the display panel, between the extending direction of the orthographic projection of the grating unit and the second direction, the sign of θ being determined by the direction from the extending direction of the orthographic projection of the grating unit to the second direction.
5. The method according to claim 4, wherein the determining the number of viewpoints contained by a single sub-pixel in the second direction according to the number of viewpoints contained by a single sub-pixel in the first direction and the offset value comprises:
the number of viewpoints contained by a single sub-pixel in the second direction is Vy = Vx * Shiftx,
wherein Vx is the number of viewpoints contained by a single sub-pixel in the first direction, and Shiftx is the offset value.
6. The method according to any of claims 1 to 5, wherein the determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained by a single sub-pixel in the first direction and the number of viewpoints contained by a single sub-pixel in the second direction comprises:
Vi,1 = (Vfirst - (i-1)*Vy) mod V; if Vi,1 is 0, then Vi,1 = V, i ∈ [1, M]
Vi,j = (Vi,1 + (j-1)*Vx) mod V; if Vi,j is 0, then Vi,j = V, j ∈ [1, N]
wherein mod is the modulo operation, V is the total number of viewpoints, Vi,j is the viewpoint to which the sub-pixel in the i-th row and j-th column belongs, Vfirst is the viewpoint to which the sub-pixel in the 1st row and 1st column belongs, M is the number of rows of sub-pixels, N is the number of columns of sub-pixels, Vx is the number of viewpoints contained by a single sub-pixel in the first direction, and Vy is the number of viewpoints contained by a single sub-pixel in the second direction.
7. The method according to any of claims 1 to 5, wherein said determining a gray level value of said sub-pixel from the image of the viewpoint to which said sub-pixel belongs comprises:
and determining the position of the sub-pixel, and taking the gray-scale value of the corresponding position in the image of the viewpoint to which the sub-pixel belongs as the gray-scale value of the sub-pixel.
8. The method according to any one of claims 1 to 5, wherein after determining the viewpoint to which the sub-pixel belongs based on the number of viewpoints contained in a single sub-pixel in the first direction and the number of viewpoints contained in a single sub-pixel in the second direction, before determining the gray level value of the sub-pixel based on the image of the viewpoint to which the sub-pixel belongs, the method further comprises:
and replacing the viewpoints to which some of the sub-pixels belong, wherein sub-pixels having the same viewpoint before replacement are replaced with the same viewpoint, so that the ratio of the first parallax between the left-eye viewpoint and the right-eye viewpoint after replacement to the second parallax between the viewpoints of adjacent sub-pixels in the first direction without replacement meets a preset condition.
9. The multi-viewpoint image generation method according to claim 8,
the meeting of the preset condition comprises: the ratio of the first parallax to the second parallax being within [-1, 1].
10. The method according to claim 8, wherein said replacing the viewpoints to which the partial sub-pixels belong comprises:
the viewpoints comprise viewpoints 1 to V distributed sequentially along the first direction; when a viewpoint k to which a sub-pixel belongs lies in the range from viewpoint K to viewpoint V, the viewpoint k is replaced with viewpoint S-k, wherein S-k is less than or equal to V and greater than or equal to 1, and K is a preset value.
11. A computer-readable storage medium storing computer-executable instructions for performing the multi-view image generation method of any one of claims 1-10.
12. A display device comprising a processor and a memory storing a computer program operable on the processor, wherein the processor when executing the program performs the steps of the multi-view image generation method according to any of claims 1 to 10.
CN202011287892.XA 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device Active CN112399168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011287892.XA CN112399168B (en) 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device


Publications (2)

Publication Number Publication Date
CN112399168A true CN112399168A (en) 2021-02-23
CN112399168B CN112399168B (en) 2022-12-02

Family

ID=74605877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011287892.XA Active CN112399168B (en) 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device

Country Status (1)

Country Link
CN (1) CN112399168B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101626517A (en) * 2009-08-07 2010-01-13 四川大学 Method for synthesizing stereo image from parallax image in a real-time manner
CN102510515A (en) * 2011-12-29 2012-06-20 中航华东光电有限公司 Grating-type multi-viewpoint stereo image synthesis method
CN102843569A (en) * 2012-08-31 2012-12-26 上海易维视科技有限公司 Multi-viewpoint automatic stereoscopic display device
KR20140025783A (en) * 2012-08-22 2014-03-05 엘지디스플레이 주식회사 Stereoscopic image display device and method for driving the same
CN103686117A (en) * 2012-09-18 2014-03-26 上海易维视科技有限公司 Super multi-view automatic three-dimensional display device
CN105338336A (en) * 2015-10-28 2016-02-17 四川长虹电器股份有限公司 Lenticular grating and LCD screen-based stereo image fast synthesis method
CN106067969A (en) * 2016-06-30 2016-11-02 北京邮电大学 The multiple views 3D display packing of a kind of column mirror grating and device


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763473A (en) * 2021-09-08 2021-12-07 未来科技(襄阳)有限公司 Method and device for determining viewpoint width and storage medium
CN113763473B (en) * 2021-09-08 2024-03-29 未来科技(襄阳)有限公司 Viewpoint width determining method and device and storage medium
CN115022612A (en) * 2022-05-31 2022-09-06 北京京东方技术开发有限公司 Driving method and device of display device and display equipment
WO2023231700A1 (en) * 2022-05-31 2023-12-07 京东方科技集团股份有限公司 Driving method and apparatus for display apparatus, and display device
CN115022612B (en) * 2022-05-31 2024-01-09 北京京东方技术开发有限公司 Driving method and device of display device and display equipment

Also Published As

Publication number Publication date
CN112399168B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
KR102030830B1 (en) Curved multiview image display apparatus and control method thereof
JP3966830B2 (en) 3D display device
JP3944188B2 (en) Stereo image display method, stereo image imaging method, and stereo image display apparatus
KR102121389B1 (en) Glassless 3d display apparatus and contorl method thereof
TWI357987B (en) A three-dimension image display device and a displ
US20100046069A1 (en) Method and arrangement for three-dimensional representation
US20110242289A1 (en) Display apparatus and stereoscopic image display method
KR101329962B1 (en) Three-dimensional image display
US20100033556A1 (en) Three-Dimensional Image Display Device and Three-Dimensional Image Display Method
JP2008085503A (en) Three-dimensional image processing apparatus, method and program, and three-dimensional image display device
CN112399168B (en) Multi-viewpoint image generation method, storage medium and display device
US20150237334A1 (en) Stereoscopic display device
KR20120030005A (en) Image processing device and method, and stereoscopic image display device
KR20150055322A (en) multi view image display apparatus and multi view image display method thereof
CN111323935A (en) N-viewpoint three-dimensional display device and driving method thereof
CN110351544B (en) Three-dimensional image coding method and device for naked eye 3D display
US20160014400A1 (en) Multiview image display apparatus and multiview image display method thereof
KR102130133B1 (en) Back light apparatus and three dimentional image display apparatus comprising the same
CN110381305A (en) Naked eye 3D's removes crosstalk method, system, storage medium and electronic equipment
CN108271013B (en) Display device and display method
US20180152695A1 (en) Autostereoscopic 3-dimensional display
KR101852819B1 (en) An apparatus and a method for displaying a 3-dimensional image
Minami et al. Portrait and landscape mode convertible stereoscopic display using parallax barrier
WO2021147944A1 (en) 3d display device having a processor for correcting pseudo stereoscopic effect
US10735715B2 (en) Autostereoscopic 3-dimensional display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant