CN112399168B - Multi-viewpoint image generation method, storage medium and display device - Google Patents


Info

Publication number
CN112399168B
CN112399168B (application CN202011287892.XA)
Authority
CN
China
Prior art keywords: sub-pixel, viewpoint, viewpoints, determining
Prior art date
Legal status: Active
Application number: CN202011287892.XA
Other languages: Chinese (zh)
Other versions: CN112399168A (en)
Inventor
孙炎
楚明磊
史天阔
习艳会
姬治华
侯一凡
赵晨曦
张硕
彭项君
张小牤
孙伟
Current Assignee: BOE Technology Group Co Ltd; Beijing BOE Optoelectronics Technology Co Ltd
Original Assignee: BOE Technology Group Co Ltd; Beijing BOE Optoelectronics Technology Co Ltd
Application filed by BOE Technology Group Co Ltd and Beijing BOE Optoelectronics Technology Co Ltd
Priority to CN202011287892.XA
Publication of CN112399168A
Application granted
Publication of CN112399168B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/282Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems


Abstract

A multi-viewpoint image generation method, a storage medium, and a display device. The multi-viewpoint image generation method includes: determining the number of viewpoints contained by a single sub-pixel in a first direction according to the total number of viewpoints and the number of sub-pixels covered by a grating unit in the first direction; determining the number of viewpoints contained by a single sub-pixel in a second direction according to a first length of the sub-pixel in the first direction, a second length of the sub-pixel in the second direction, the attachment angle between the image splitting device and the display panel, and the number of viewpoints contained by a single sub-pixel in the first direction; determining the viewpoint to which a sub-pixel belongs according to the number of viewpoints contained by a single sub-pixel in the first direction and the number of viewpoints contained by a single sub-pixel in the second direction; and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs. The scheme provided by the embodiments realizes the generation of multi-viewpoint three-dimensional images.

Description

Multi-viewpoint image generation method, storage medium and display device
Technical Field
The embodiments of the present disclosure relate to, but are not limited to, display technologies, and more particularly to a multi-viewpoint image generation method, a storage medium, and a display device.
Background
A naked-eye three-dimensional (3D) display device includes a display panel and an image splitting device. The display panel is generally a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) panel, or the like; the image splitting device is a slit grating, a lenticular grating, or the like. The display panel includes a plurality of pixels, and each pixel generally includes sub-pixels of the three colors R, G, and B. The image splitting device comprises a plurality of splitting units arranged in parallel and is attached to the front of the display panel at a certain angle, so that sub-pixels at different positions are split to different positions in space. Multi-viewpoint images are therefore arranged and rendered according to the light-splitting characteristic of the image splitting device and then displayed on the display panel; after passing through the image splitting device, the viewer's two eyes see images of different viewpoints, which the brain processes into a sense of depth.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The disclosed embodiments provide a multi-viewpoint image generation method, a storage medium, and a display device, which implement multi-viewpoint three-dimensional display.
In one aspect, the present disclosure provides a multi-viewpoint image generation method applied to a display apparatus including a display panel and an image splitting device, the display panel including a plurality of pixels distributed in an array, each pixel including a plurality of sub-pixels, the image splitting device including a plurality of grating units parallel to each other, the method including:
determining the number of sub-pixels covered by the grating unit in the first direction, and determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction; the first direction is a pixel row direction, namely the arrangement direction of sub-pixels in the same pixel;
determining the number of viewpoints contained in a single sub-pixel in a second direction according to the first length of the sub-pixel in the first direction and the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel and the number of viewpoints contained in the single sub-pixel in the first direction, wherein the second direction is a pixel column direction;
determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained by the single sub-pixel in the first direction and the number of viewpoints contained by the single sub-pixel in the second direction;
and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs.
In an exemplary embodiment, the determining the number of sub-pixels covered by the grating unit in the first direction includes:
the number of sub-pixels covered by the grating unit in the first direction is
P_x = P / (S_w · cos θ)
where P is the width of the grating unit, S_w is the length of the sub-pixel along the first direction, and θ ∈ (−90°, 90°) is the attachment angle between the image splitting device and the display panel, i.e., on a plane parallel to the display panel, the angle between the extending direction of the orthographic projection of the grating unit and the second direction; the sign of θ is determined by the orientation from the extending direction of the orthographic projection of the grating unit to the second direction.
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction includes:
the number of viewpoints contained by a single sub-pixel in the first direction is
V_x = V / P_x
where V is the total number of viewpoints and P_x is the number of sub-pixels covered by the grating unit in the first direction.
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the second direction according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, the attachment angle between the image splitting device and the display panel, and the number of viewpoints contained by a single sub-pixel in the first direction includes:
determining an offset value according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, and the attachment angle between the image splitting device and the display panel; and determining the number of viewpoints contained by a single sub-pixel in the second direction according to the number of viewpoints contained by a single sub-pixel in the first direction and the offset value; wherein
the offset value is
Shift_x = (S_h · tan θ) / S_w
where S_w is the first length of the sub-pixel in the first direction, S_h is the second length of the sub-pixel in the second direction, and θ is the attachment angle between the image splitting device and the display panel, i.e., on a plane parallel to the display panel, the angle between the extending direction of the orthographic projection of the grating unit and the second direction; the sign of θ is determined by the orientation from the extending direction of the orthographic projection of the grating unit to the second direction.
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the second direction according to the number of viewpoints contained by a single sub-pixel in the first direction and the offset value includes:
the number of viewpoints contained by a single sub-pixel in the second direction is V_y = V_x · Shift_x,
where V_x is the number of viewpoints a single sub-pixel contains in the first direction and Shift_x is the offset value.
In an exemplary embodiment, the determining the viewpoint to which a sub-pixel belongs according to the number of viewpoints contained by a single sub-pixel in the first direction and the number of viewpoints contained by a single sub-pixel in the second direction includes:
V_{i,1} = (V_first − (i−1)·V_y) mod V; if V_{i,1} equals 0, then V_{i,1} = V, i ∈ [1, M];
V_{i,j} = (V_{i,1} + (j−1)·V_x) mod V; if V_{i,j} equals 0, then V_{i,j} = V, j ∈ [1, N];
where V is the total number of viewpoints, V_{i,j} is the viewpoint to which the sub-pixel in row i and column j belongs, V_first is the viewpoint of the sub-pixel in row 1, column 1, M is the number of sub-pixel rows, N is the number of sub-pixel columns, V_x is the number of viewpoints contained by a single sub-pixel in the first direction, and V_y is the number of viewpoints contained by a single sub-pixel in the second direction.
In an exemplary embodiment, the determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs includes:
and determining the position of the sub-pixel, and taking the gray value of the corresponding position in the image of the viewpoint to which the sub-pixel belongs as the gray value of the sub-pixel.
In an exemplary embodiment, after determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction, before determining the grayscale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs, the method further includes:
and replacing the viewpoints to which part of the sub-pixels belong, and when the viewpoints to which different sub-pixels belong are the same, replacing by using the same viewpoint, so that the ratio of the first parallax of the left-eye viewpoint and the right-eye viewpoint after replacement to the second parallax of the viewpoint to which the adjacent sub-pixels belong in the first direction meets the preset condition when the replacement is not performed.
In an exemplary embodiment, meeting the preset condition includes: the ratio of the first parallax to the second parallax lies in [−1, 1].
In an exemplary embodiment, replacing the viewpoints to which some of the sub-pixels belong includes:
the viewpoints comprise viewpoints 1 to V distributed in sequence along the first direction; when the viewpoint k to which a sub-pixel belongs lies in the range from viewpoint K to viewpoint V, viewpoint k is replaced by viewpoint S−k, where 1 ≤ S−k ≤ V, and K is a preset value.
In another aspect, the disclosed embodiments provide a computer-readable storage medium storing computer-executable instructions for performing the above-described multi-view image generation method.
In another aspect, the present disclosure provides a display device, including a processor and a memory storing a computer program operable on the processor, wherein the processor executes the computer program to implement the steps of the multi-view image generation method.
The embodiments of the disclosure provide the multi-viewpoint image generation method, the storage medium, and the display device. The multi-viewpoint image generation method includes: determining the number of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction; determining the number of viewpoints contained by a single sub-pixel in the second direction according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, the attachment angle between the image splitting device and the display panel, and the number of viewpoints contained by a single sub-pixel in the first direction; determining the viewpoint to which a sub-pixel belongs according to the number of viewpoints contained by a single sub-pixel in the first direction and the number of viewpoints contained by a single sub-pixel in the second direction; and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs. The scheme provided by the embodiments realizes the generation of multi-viewpoint three-dimensional images.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
The accompanying drawings are included to provide a further understanding of the present invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention, together with the description serve to explain the principles of the invention, and are not intended to limit the invention.
FIG. 1 is a schematic diagram of a three-dimensional display according to an embodiment;
fig. 2 is a schematic diagram of a display device provided in an embodiment of the disclosure;
FIG. 3 is a schematic view of a display panel according to an embodiment;
FIG. 4 is a schematic diagram of an image splitter and a display panel according to an embodiment;
FIG. 5 is a flow diagram of a multi-view image generation method provided by an embodiment;
FIG. 6 is a schematic view illustrating a viewpoint to which a sub-pixel belongs according to an embodiment;
FIG. 7 is a diagram illustrating an embodiment of replacing a viewpoint to which a sub-pixel belongs;
fig. 8 is a schematic diagram of a display device according to an embodiment.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
In the drawings, the size of each component, the thickness of a layer, or a region may be exaggerated for clarity. Therefore, the embodiments of the present disclosure are not necessarily limited to the dimensions, and the shapes and sizes of the respective components in the drawings do not reflect a true scale. Further, the drawings schematically show ideal examples, and the embodiments of the present disclosure are not limited to the shapes or numerical values shown in the drawings.
The ordinal numbers such as "first", "second", "third", etc., in this disclosure are provided to avoid confusion among the constituent elements, and do not indicate any order, number, or importance.
In the present disclosure, for convenience, the terms "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicating the orientation or positional relationship are used to explain the positional relationship of the constituent elements with reference to the drawings only for the convenience of description and simplification of description, but not to indicate or imply that the device or element referred to must have a specific orientation, be configured in a specific orientation, and operate, and thus, should not be construed as limiting the present disclosure. The positional relationship of the components is changed as appropriate in accordance with the direction in which each component is described. Therefore, the words described in the disclosure are not limited thereto, and may be replaced as appropriate.
In the present disclosure, "parallel" means a state in which an angle formed by two straight lines is-10 ° or more and 10 ° or less, and therefore, includes a state in which the angle is-5 ° or more and 5 ° or less. The term "perpendicular" refers to a state in which the angle formed by two straight lines is 80 ° or more and 100 ° or less, and therefore includes a state in which the angle is 85 ° or more and 95 ° or less.
Fig. 1 illustrates the principle of naked-eye 3D display using a lenticular lens: pixels at different positions on the display panel are refracted and split by the lenticular lens, and the light paths are changed to form different viewing zones in space, so that when the viewer's two eyes are located in the correct viewing zones (i.e., the left eye receives the left-viewpoint image and the right eye receives the right-viewpoint image), the viewer perceives a stereoscopic effect.
In the embodiments of the present disclosure, a multi-viewpoint image generation method (also called a layout method) is provided, which can fuse multi-viewpoint content according to the relevant parameters of the display panel and the image splitting device and the number of viewpoints to be displayed, so that a stereoscopic image can be displayed accurately.
Fig. 2 is a schematic diagram of a display device according to an exemplary embodiment. As shown in fig. 2, the display device comprises a display panel 10 and an image splitting device 20; the image splitting device 20 comprises a plurality of grating units 21 parallel to each other, and the width of each grating unit 21 is P. The extending direction Q of the grating units 21 forms an angle θ with the pixel column direction of the display panel (the Y direction in the figure); that is, the image splitting device 20 is attached to the surface of the display panel 10 at the angle θ. The display panel 10 includes a plurality of pixels located on the focal plane of the grating units 21. In this embodiment a lenticular lens grating is shown as the image splitting device 20, but the embodiments of the disclosure are not limited thereto, and other gratings may be used.
As shown in fig. 3, the display panel 10 includes a plurality of pixels 11 distributed in an array, and each pixel 11 includes a plurality of sub-pixels 12, for example a red sub-pixel R, a green sub-pixel G, and a blue sub-pixel B; the embodiments of the disclosure are not limited thereto, and sub-pixels of other colors, or more than 3 sub-pixels (e.g., 4 sub-pixels), may be used. The sub-pixels 12 in the same pixel 11 are arranged along a first direction X, also called the pixel row direction; a second direction Y is also called the pixel column direction, and the first direction X and the second direction Y may be perpendicular. The length of the sub-pixel 12 along the first direction X is S_w (also referred to as the sub-pixel width), and the length of the sub-pixel 12 along the second direction Y is S_h (also referred to as the sub-pixel height).
As shown in fig. 4, the image splitting device 20 is attached to the surface of the display panel 10 at an angle θ, where θ ∈ (−90°, 90°). Here θ is, on a plane parallel to the display panel 10, the angle between the extending direction Q of the orthographic projection of the grating unit 21 and the second direction Y. The sign of θ is determined by the orientation from the extending direction Q of the orthographic projection of the grating unit to the second direction Y: θ is positive when the rotation from Q to Y is clockwise, as shown in fig. 4, and negative when the rotation from Q to Y is counterclockwise.
Because different 3D display devices have different design parameters, the multi-viewpoint image can be determined from parameters matched to the specific 3D display device. This makes the method highly compatible and suitable for more application scenarios; moreover, the parameters need to be configured only once for devices with the same design. The parameters are shown in Table 1.
TABLE 1 display Panel and image splitting device parameters
[Table 1 appears as an image in the original; it lists the display-panel and image-splitting-device parameters used below, including the grating-unit width P, the sub-pixel width S_w and height S_h, the attachment angle θ, and the total number of viewpoints V.]
Fig. 5 is a flowchart of a multi-view image generation method according to an exemplary embodiment. As shown in fig. 5, the multi-viewpoint image generating method provided by the present embodiment is applied to the display apparatus described above, and includes:
step 501, determining the number P of sub-pixels covered by the grating unit 21 in the first direction X x According to the total number of viewpoints V and the number of sub-pixels P covered by the raster unit 21 in the first direction x Determining the number of views V contained by a single sub-pixel in a first direction X x (ii) a The first direction X is a pixel row direction, that is, an arrangement direction of sub-pixels in the same pixel;
step 502, according to the first length S of the sub-pixel in the first direction X w And a second length S in a second direction Y h A bonding angle θ between the image splitter 20 and the display panel 10, and a number of viewing points V included in a single sub-pixel in the first direction X x Determining the number of views V contained by a single sub-pixel in the second direction Y y (ii) a The second direction Y is a pixel column direction;
step 503, according to the number V of viewpoints contained by a single sub-pixel in the first direction X x And the number of views V contained by a single sub-pixel in the second direction Y y Determining a viewpoint to which the sub-pixel belongs;
step 504, determining the gray level value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs.
According to the scheme provided by this embodiment, the viewpoint to which each sub-pixel belongs can be determined from the relevant parameters of the display panel and the image splitting device and the number of viewpoints to be displayed, so that a three-dimensional image can be displayed accurately with a continuous 3D effect; the method is highly compatible and applicable to various devices.
In an exemplary embodiment, the images of the viewpoints may be captured by an imaging device; alternatively, after the images of some viewpoints are captured, the images of the other viewpoints are generated from the existing viewpoint images.
In an exemplary embodiment, the determining the number P_x of sub-pixels covered by the grating unit 21 in the first direction X includes:
P_x = P / (S_w · cos θ)
where P is the width of the grating unit, S_w is the length of the sub-pixel along the first direction X, and θ ∈ (−90°, 90°) is the attachment angle between the image splitting device and the display panel, i.e., on a plane parallel to the display panel, the angle between the extending direction of the orthographic projection of the grating unit 21 and the second direction Y; the sign of θ is determined by the orientation from the extending direction Q of the orthographic projection of the grating unit to the second direction Y.
In an exemplary embodiment, the determining the number V_x of viewpoints contained by a single sub-pixel in the first direction according to the total number of viewpoints V and the number P_x of sub-pixels covered by the grating unit in the first direction includes:
the number of viewpoints contained by a single sub-pixel in the first direction is
V_x = V / P_x
Sub-pixels at different positions within a grating unit 21 are split into different directions, and the grating units cover the display panel in parallel at the angle θ, so sub-pixels at the same position within different grating units 21 have the same viewpoint direction. Therefore, when the number of input viewpoints is V and the number of sub-pixels covered by the grating unit 21 in the first direction is P_x, the number V_x of viewpoints contained by a single sub-pixel in the first direction X (which can also be understood as the difference V_x between the viewpoint numbers of horizontally adjacent sub-pixels) is V_x = V / P_x.
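As a numerical sketch of the two formulas above (the helper name and the parameter values used below are illustrative assumptions, not values from the patent):

```python
import math

def views_per_subpixel_x(P, S_w, theta_deg, V):
    """Compute P_x = P / (S_w * cos(theta)) and V_x = V / P_x.

    P: grating-unit width; S_w: sub-pixel width; theta_deg: attachment
    angle in degrees; V: total number of viewpoints.
    """
    theta = math.radians(theta_deg)
    P_x = P / (S_w * math.cos(theta))  # sub-pixels covered horizontally
    V_x = V / P_x                      # viewpoints per sub-pixel, direction X
    return P_x, V_x
```

For example, with an unslanted grating (θ = 0), a grating width of 6 sub-pixel widths, and 24 viewpoints, each grating unit covers 6 sub-pixels and each sub-pixel carries 4 viewpoints.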
In an exemplary embodiment, the determining the number V_y of viewpoints contained by a single sub-pixel in the second direction Y according to the first length S_w of the sub-pixel in the first direction X, the second length S_h in the second direction Y, the attachment angle θ between the image splitting device 20 and the display panel 10, and the number V_x of viewpoints contained by a single sub-pixel in the first direction X includes:
determining an offset value Shift_x (also called the sub-pixel shift between adjacent rows) according to the first length S_w of the sub-pixel in the first direction X, the second length S_h in the second direction Y, and the attachment angle θ between the image splitting device 20 and the display panel 10;
determining the number of viewpoints contained by a single sub-pixel in the second direction Y according to the number V_x of viewpoints contained by a single sub-pixel in the first direction X and the offset value.
In an exemplary embodiment, the determining the offset value according to the first length S_w of the sub-pixel in the first direction X, the second length S_h in the second direction Y, and the attachment angle θ between the image splitting device and the display panel may include:
the offset value is
Shift_x = (S_h · tan θ) / S_w
In an exemplary embodiment, the determining the number of viewpoints contained by a single sub-pixel in the second direction Y according to the number V_x of viewpoints contained by a single sub-pixel in the first direction X and the offset value includes:
the number of viewpoints contained by a single sub-pixel in the second direction is V_y = V_x · Shift_x,
where the number V_y of viewpoints contained by a single sub-pixel in the vertical direction can also be understood as the difference V_y between the viewpoint numbers of vertically adjacent sub-pixels.
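A numerical sketch of Shift_x and V_y follows. The concrete values in the usage note (S_h = 3·S_w and tan θ = 1/6) are illustrative assumptions, not given in the patent, but they reproduce the V_x = 6, V_y = 3 used in the worked example later in the text:

```python
import math

def views_per_subpixel_y(S_w, S_h, theta_deg, V_x):
    """Compute Shift_x = S_h * tan(theta) / S_w and V_y = V_x * Shift_x.

    S_w, S_h: sub-pixel width and height; theta_deg: attachment angle in
    degrees; V_x: viewpoints per sub-pixel in the first direction.
    """
    theta = math.radians(theta_deg)
    shift_x = S_h * math.tan(theta) / S_w  # sub-pixel shift between adjacent rows
    V_y = V_x * shift_x                    # viewpoints per sub-pixel, direction Y
    return shift_x, V_y
```

With S_w = 0.1, S_h = 0.3, tan θ = 1/6, and V_x = 6, this gives Shift_x = 0.5 and V_y = 3.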
In an exemplary embodiment, the determining the viewpoint to which a sub-pixel belongs according to the number V_x of viewpoints contained by a single sub-pixel in the first direction X and the number V_y of viewpoints contained by a single sub-pixel in the second direction Y includes:
V_{i,1} = (V_first − (i−1)·V_y) mod V; if V_{i,1} equals 0, then V_{i,1} = V, i ∈ [1, M];
V_{i,j} = (V_{i,1} + (j−1)·V_x) mod V; if V_{i,j} equals 0, then V_{i,j} = V, j ∈ [1, N];
where mod is the modulo operation, V_{i,j} is the viewpoint to which the sub-pixel in row i and column j belongs, V_first is the viewpoint to which the sub-pixel in row 1, column 1 belongs, M is the number of sub-pixel rows, and N is the number of sub-pixel columns. In this embodiment, the viewpoint V_{i,1} of the first-column sub-pixel of each row may be calculated first, and the viewpoints of the remaining columns of that row are then calculated from V_{i,1}. That is, calculation may start from the sub-pixel in row 1, column 1; after the viewpoints of one row are calculated, those of the next row are calculated. The row-1, column-1 sub-pixel may be the sub-pixel in the upper-left corner of the display panel. The number of sub-pixel rows may equal the number of pixel rows, and the number of sub-pixel columns may be 3 times the number of pixel columns (when each pixel comprises 3 sub-pixels).
V_first may be specified as needed, for example as the 2nd viewpoint; the embodiments of the disclosure are not limited thereto, and other values may be used. In another embodiment, V_first may be the viewpoint to which a sub-pixel of another row and column belongs.
In an exemplary embodiment, the V_{i,j} of the same row may be cyclically shifted to obtain the viewpoints of the sub-pixels of that row.
In an exemplary embodiment, the viewpoints to which the sub-pixels belong are calculated taking V_x = 6, V_y = 3, V_first = 2 (i.e. V_{1,1} = 2), and V = 28 (viewpoints numbered 1 to 28) as an example. Wherein:
the viewpoint to which the sub-pixel of row 1, column 2 belongs is V_{1,2} = (V_{1,1} + (j-1) * V_x) mod V = (2 + (2-1) * 6) mod 28 = 8;
The viewpoint to which the sub-pixel of row 1, column 3 belongs is V_{1,3} = (2 + (3-1) * 6) mod 28 = 14;
By analogy, the viewpoint to which the sub-pixel of row 1, column 14 belongs is V_{1,14} = (2 + (14-1) * 6) mod 28 = 24; the subsequent columns are calculated similarly and are not described further.
The viewpoint to which the sub-pixel of row 2, column 1 belongs is V_{2,1} = (V_first - (i-1) * V_y) mod V = (2 - (2-1) * 3) mod 28 = 27; the viewpoints to which the sub-pixels of row 2, columns 2 through N belong are calculated similarly and are not described again. The calculation results are shown in fig. 6. In fig. 6, only the viewpoints of 4 rows and 18 columns of sub-pixels are shown; the viewpoints of the other sub-pixels, calculated in the same way, are omitted. In addition, it can be seen that the viewpoints of the column 1 and column 15 sub-pixels are the same, and the viewpoints of the column 2 and column 16 sub-pixels are the same. Therefore, only the viewpoint of one column among multiple columns of sub-pixels sharing the same viewpoints needs to be calculated, and the result can be taken as the viewpoint of the other columns with the same viewpoints.
In this embodiment, the modulo operation on a negative number outputs a positive remainder, e.g. V_{2,1} = (2 - (2-1)*3) mod 28 = (-1) mod 28 = 27; V_{3,1} = (2 - (3-1)*3) mod 28 = (-4) mod 28 = 24; V_{4,1} = (2 - (4-1)*3) mod 28 = (-7) mod 28 = 21; V_{5,1} = (2 - (5-1)*3) mod 28 = (-10) mod 28 = 18; V_{12,1} = (2 - (12-1)*3) mod 28 = (-31) mod 28 = 25.
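The two formulas above can be sketched in Python. Note that Python's % operator already returns a non-negative remainder for a positive modulus, which matches the positive-remainder convention used here; the function and variable names below are illustrative, not from the patent:

```python
def viewpoint_map(v_total, v_x, v_y, v_first, rows, cols):
    """Assign a viewpoint (1..v_total) to each sub-pixel.

    v_x / v_y: viewpoint step between horizontally / vertically
    adjacent sub-pixels; v_first: viewpoint of row 1, column 1.
    """
    grid = []
    for i in range(1, rows + 1):
        # First column of row i: step down by v_y per row.
        v_i1 = (v_first - (i - 1) * v_y) % v_total
        if v_i1 <= 0:          # Python % is non-negative, so only 0 is remapped to V
            v_i1 = v_total
        row = [v_i1]
        for j in range(2, cols + 1):
            v_ij = (v_i1 + (j - 1) * v_x) % v_total
            if v_ij <= 0:
                v_ij = v_total
            row.append(v_ij)
        grid.append(row)
    return grid

# Worked example from the text: V = 28, V_x = 6, V_y = 3, V_first = 2
g = viewpoint_map(28, 6, 3, 2, rows=12, cols=15)
print(g[0][0], g[0][1], g[0][2], g[0][13])   # 2 8 14 24
print(g[1][0], g[2][0], g[3][0], g[11][0])   # 27 24 21 25
```

The last column printed confirms the worked values above, and column 15 indeed repeats column 1 (the step 14 * 6 = 84 is a multiple of 28).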
In an exemplary embodiment, the determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs includes:
determining the position of the sub-pixel, and taking the gray-scale value of the corresponding position in the image of the viewpoint to which the sub-pixel belongs as the gray-scale value of the sub-pixel. For example, for the sub-pixel in row i, column j, the gray-scale value at row i, column j in the image of the viewpoint to which it belongs is used as its gray-scale value. This is only an example; alternatively, the gray-scale value at row i, column j in the image of the viewpoint may be weighted to obtain the gray-scale value of the sub-pixel, and so on.
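The synthesis step above can be sketched as follows, assuming each viewpoint image is a 2-D array of gray-scale values indexed the same way as the sub-pixel grid (the function name and data layout are assumptions for illustration):

```python
def synthesize(view_images, viewpoint_grid):
    """Build the display image: each sub-pixel takes the gray-scale
    value at its own (row, col) position in the image of the
    viewpoint it belongs to."""
    rows = len(viewpoint_grid)
    cols = len(viewpoint_grid[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            v = viewpoint_grid[i][j]            # viewpoints numbered from 1
            out[i][j] = view_images[v - 1][i][j]
    return out

# Toy example: two constant 2x2 viewpoint images
views = [[[10, 10], [10, 10]],   # image of viewpoint 1
         [[20, 20], [20, 20]]]   # image of viewpoint 2
grid = [[1, 2], [2, 1]]
print(synthesize(views, grid))   # [[10, 20], [20, 10]]
```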
A 3D film in a cinema has 2 viewpoints: through the image splitting of 3D glasses, the left eye receives the left-viewpoint image and the right eye receives the right-viewpoint image. A naked-eye 3D display device generally displays multi-viewpoint content; when the eyes move, they easily fall into a reverse viewing area or a dead zone, i.e. the left eye receives the right-viewpoint image and the right eye receives the left-viewpoint image, which causes discomfort. In an exemplary embodiment, the viewpoint arrangement is optimized (the viewpoints to which the sub-pixels belong are changed) to weaken the reverse viewing effect and relieve the dizziness and double-image feeling caused by the reverse viewing area, so that the viewpoints are more continuous and the naked-eye 3D effect is more comfortable. In the embodiment of the present disclosure, the degree of negative parallax can be alleviated by sacrificing the number of viewpoints.
In an exemplary embodiment, after determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction, before determining the gray level value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs, the method further includes:
replacing the viewpoints to which part of the sub-pixels belong, where sub-pixels belonging to the same viewpoint are replaced with the same viewpoint, so that the ratio of the first parallax between the left-eye viewpoint and the right-eye viewpoint after replacement to the second parallax between the viewpoints of adjacent sub-pixels in the first direction without replacement (also called the normalized first parallax) meets a preset condition. In the scheme provided by this embodiment, the parallax of the left and right eyes is controlled by replacing part of the viewpoints, which helps improve the 3D experience. Taking fig. 6 as an example, the second parallax between the viewpoints of adjacent sub-pixels in the first direction is 6.
In an exemplary embodiment, meeting the preset condition may include: the ratio of the first parallax to the second parallax lies within [-1, 1]. This is only an example; other conditions that can mitigate reverse viewing may be set as needed. For example, the condition may be that the ratio of the third parallax between the left-eye viewpoint and the right-eye viewpoint at the same position to the second parallax is smaller in absolute value than when no viewpoint replacement is performed. For example, taking 28 viewpoints with a second parallax of 6: without replacement, if the left-eye viewpoint is 23 and the right-eye viewpoint is 1, the first parallax is -22 and the normalized parallax is about -3.67; some viewpoints may then be replaced so that the absolute value of the normalized parallax at the same position is less than 3.67.
In an exemplary embodiment, replacing the viewpoints to which the part of the sub-pixels belong includes:
the viewpoints comprise viewpoints 1 to V distributed in sequence along the first direction; when a viewpoint k to which a sub-pixel belongs lies in the range from viewpoint K to viewpoint V, the viewpoint k is replaced by viewpoint S-k, where 1 ≤ S-k ≤ V and K is a preset value. In this embodiment, replacement is performed in reverse order. For example, for V = 28 viewpoints and K = 16, with S = 30 the viewpoints 16 to 28 are replaced with viewpoints 14 to 2, respectively.
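The reverse-order replacement can be sketched as a one-line mapping; the default values below are the V = 28, K = 16, S = 30 example from the text:

```python
def replace_viewpoint(v, k=16, s=30):
    """Reverse-order replacement: viewpoints k..V are mapped to
    s-k .. s-V (here 16..28 -> 14..2); viewpoints below k are kept."""
    return s - v if v >= k else v

print(replace_viewpoint(16))   # 14
print(replace_viewpoint(17))   # 13
print(replace_viewpoint(28))   # 2
print(replace_viewpoint(15))   # 15 (unchanged)
```

Applying this mapping to every entry of the viewpoint grid yields the arrangement described for fig. 7.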
Taking the 28 viewpoints illustrated in fig. 6 as an example, the difference between two horizontally adjacent sub-pixels is 6 viewpoints (the second parallax), which can be understood as the left and right eyes being 6 viewpoints apart. Thus, when the left eye is located in the viewing zones corresponding to viewpoints 1 to 22, it is in the front viewing area and the right eye must be located in the viewing zones of viewpoints 7 to 28; when the left eye moves into the viewing zones corresponding to viewpoints 23 to 28, it is in the reverse viewing area, and the user perceives dizziness and double images. As shown in table 2-1 and table 2-2, the first two rows are the left-eye and right-eye positions, the third row is the parallax between the right-eye viewpoint and the left-eye viewpoint, i.e. the first parallax, and the fourth row is the normalized first parallax (first parallax / second parallax). It can be seen that when the left eye is at viewpoints 23 to 28, the first parallax is negative; the reverse viewing area cannot be completely eliminated, only the degree of reverse viewing can be reduced.
In this embodiment, the degree of negative parallax is relieved by sacrificing the number of viewpoints, so that the maximum negative parallax is consistent with the maximum positive parallax. For example, the original viewpoints 16 to 28 are replaced with viewpoints 14 to 2, and the parallax is recalculated after the replacement, as shown in table 3-1 and table 3-2 (the bold entries are the replaced viewpoints). The first two rows are the left-eye and right-eye positions, the third row is the parallax between the right-eye viewpoint and the left-eye viewpoint, i.e. the first parallax, and the fourth row is the normalized first parallax (first parallax / second parallax). In table 3-1 and table 3-2, the positive and negative parallaxes in the fourth row are both within [-1, 1], so the degree of reverse viewing is reduced. In this embodiment, after the viewpoint to which each sub-pixel belongs is calculated, viewpoints 16 to 28 are replaced with viewpoints 14 to 2, respectively, to achieve the reverse-view optimization effect: viewpoint 16 is replaced with viewpoint 14, viewpoint 17 with viewpoint 13, and so on, up to viewpoint 28 being replaced with viewpoint 2. Fig. 7 shows fig. 6 after the viewpoints to which the sub-pixels belong are replaced.
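The parallax improvement in tables 3-1 and 3-2 can be checked numerically. The sketch below assumes the same V = 28, second parallax 6, and 16..28 -> 14..2 replacement as the example; the helper names are mine:

```python
V, STEP = 28, 6          # total viewpoints; second parallax

def remap(v, k=16, s=30):
    # Reverse-order replacement of viewpoints k..V with s-k..s-V.
    return s - v if v >= k else v

def norm_parallax(left, replaced=True):
    # Right eye sits STEP viewpoints after the left eye, wrapping 1..V.
    right = (left + STEP - 1) % V + 1
    if replaced:
        left, right = remap(left), remap(right)
    return (right - left) / STEP

before = [norm_parallax(v, replaced=False) for v in range(1, V + 1)]
after = [norm_parallax(v) for v in range(1, V + 1)]
print(round(min(before), 2))      # -3.67 in the reverse viewing region
print(min(after), max(after))     # -1.0 1.0 -> all within [-1, 1]
```

Without replacement the normalized parallax reaches -22/6 ≈ -3.67; after replacement every left/right-eye pair falls within [-1, 1], matching the statement that the maximum negative parallax becomes consistent with the positive parallax.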
In the above embodiment, all of the original viewpoints 16 to 28 are replaced. In another embodiment, only a part of the viewpoints may be replaced, i.e. the display effect of only part of the reverse viewing area is improved.
The above embodiment is described taking 28 viewpoints as an example; the scheme is similar for other numbers of viewpoints, and optimization of the reverse viewing area can be achieved by replacing part of the viewpoints in reverse order so that the normalized parallax lies within [-1, 1].
TABLE 2-1 Parallax (data shown as an image in the original)
TABLE 2-2 Parallax (data shown as an image in the original)
TABLE 3-1 Reverse-viewing mitigation scheme (data shown as an image in the original)
TABLE 3-2 Reverse-viewing mitigation scheme, continued (data shown as an image in the original)
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions for executing the above multi-viewpoint image generation method.
As shown in fig. 8, the embodiment of the present disclosure provides a display device, which includes a processor 801 and a memory 802 storing a computer program operable on the processor 801, wherein the processor 801 executes the computer program to implement the steps of the multi-view image generation method. The display device may be: any product or component with a display function, such as a mobile phone, a tablet computer, a television, a display, a notebook computer, a digital photo frame, a navigator and the like. The display device may be a naked eye 3D display device.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media as known to those skilled in the art.
It will be understood by those skilled in the art that various modifications and equivalent arrangements of the embodiments of the present disclosure may be made without departing from the spirit and scope of the embodiments of the present disclosure and shall be covered by the appended claims.

Claims (12)

1. A multi-viewpoint image generation method applied to a display apparatus comprising a display panel and an image splitting device, the display panel comprising a plurality of pixels arranged in an array, the pixels comprising a plurality of sub-pixels, the image splitting device comprising a plurality of grating units parallel to each other, the method comprising:
determining the number of sub-pixels covered by the grating unit in the first direction, and determining the number of viewpoints contained in a single sub-pixel in the first direction according to the total viewpoint number and the number of sub-pixels covered by the grating unit in the first direction; the first direction is a pixel row direction, namely the arrangement direction of sub-pixels in the same pixel;
determining the number of viewpoints contained in a single sub-pixel in a second direction according to the first length of the sub-pixel in the first direction and the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel and the number of viewpoints contained in the single sub-pixel in the first direction, wherein the second direction is a pixel column direction;
determining the viewpoint to which the sub-pixel belongs according to the number of viewpoints contained in the single sub-pixel in the first direction and the number of viewpoints contained in the single sub-pixel in the second direction;
and determining the gray-scale value of the sub-pixel according to the image of the viewpoint to which the sub-pixel belongs.
2. The multi-viewpoint image generation method of claim 1, wherein said determining the number of sub-pixels covered by the grating unit in the first direction comprises:
the number of sub-pixels covered by the grating unit in the first direction
(formula shown as an image in the original)
wherein P is the width of the grating unit, S_w is the length of the sub-pixel along the first direction, and θ ∈ (-90°, 90°) is the attaching angle of the image splitting device and the display panel, i.e. the included angle between the extending direction of the orthographic projection of the grating unit and the second direction on a plane parallel to the display panel; the sign of θ is determined according to the direction from the extending direction of the orthographic projection of the grating unit to the second direction.
3. The method of claim 1, wherein determining the number of viewpoints contained in a single sub-pixel in the first direction according to the total number of viewpoints and the number of sub-pixels covered by the grating unit in the first direction comprises:
number of views contained by a single sub-pixel in a first direction
(formula shown as an image in the original)
wherein V is the total number of viewpoints, and P_x is the number of sub-pixels covered by the grating unit in the first direction.
4. The method according to claim 1, wherein determining the number of viewpoints contained in a single sub-pixel in the second direction according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, the attaching angle of the image splitting device and the display panel, and the number of viewpoints contained in a single sub-pixel in the first direction comprises:
determining an offset value according to the first length of the sub-pixel in the first direction, the second length of the sub-pixel in the second direction, and the attaching angle between the image splitting device and the display panel; and determining the number of viewpoints contained in a single sub-pixel in the second direction according to the number of viewpoints contained in a single sub-pixel in the first direction and the offset value; wherein,
the offset value
(formula shown as an image in the original)
wherein S_w is the first length of the sub-pixel in the first direction, S_h is the second length of the sub-pixel in the second direction, and θ is the attaching angle of the image splitting device and the display panel, i.e. the included angle between the extending direction of the orthographic projection of the grating unit and the second direction on a plane parallel to the display panel; the sign of θ is determined according to the direction from the extending direction of the orthographic projection of the grating unit to the second direction.
5. The method according to claim 4, wherein the determining the number of views contained by a single sub-pixel in a second direction from the number of views contained by a single sub-pixel in the first direction and the offset value comprises:
number of views V contained by a single sub-pixel in the second direction y =V x *Shift x
Wherein, V x For the number of views, shift, contained by a single sub-pixel in said first direction x Is the offset value.
6. The method according to any of claims 1 to 5, wherein said determining the viewpoint to which the sub-pixel belongs based on the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction comprises:
V_{i,1} = (V_first - (i-1) * V_y) mod V; if V_{i,1} is not greater than 0, then V_{i,1} = V, i ∈ [1, M]
V_{i,j} = (V_{i,1} + (j-1) * V_x) mod V; if V_{i,j} is not greater than 0, then V_{i,j} = V, j ∈ [1, N]
wherein V is the total number of viewpoints, V_{i,j} is the viewpoint to which the sub-pixel of the i-th row and j-th column belongs, V_first is the viewpoint to which the sub-pixel of row 1, column 1 belongs, M is the number of rows of sub-pixels, N is the number of columns of sub-pixels, V_x is the number of viewpoints contained in a single sub-pixel in the first direction, and V_y is the number of viewpoints contained in a single sub-pixel in the second direction.
7. The method according to any one of claims 1 to 5, wherein said determining the gray-scale values of the sub-pixels from the image of the viewpoint to which the sub-pixels belong comprises:
determining the position of the sub-pixel, and taking the gray-scale value of the corresponding position in the image of the viewpoint to which the sub-pixel belongs as the gray-scale value of the sub-pixel.
8. The method according to any one of claims 1 to 5, wherein after determining the viewpoint to which the sub-pixel belongs based on the number of viewpoints included in a single sub-pixel in the first direction and the number of viewpoints included in a single sub-pixel in the second direction, before determining the grayscale value of the sub-pixel based on the image of the viewpoint to which the sub-pixel belongs, the method further comprises:
replacing the viewpoints to which part of the sub-pixels belong, where sub-pixels belonging to the same viewpoint are replaced with the same viewpoint, so that the ratio of the first parallax between the left-eye viewpoint and the right-eye viewpoint after replacement to the second parallax between the viewpoints of adjacent sub-pixels in the first direction without replacement meets a preset condition.
9. The multi-viewpoint image generation method according to claim 8,
meeting the preset condition comprises: the ratio of the first parallax to the second parallax lies within [-1, 1].
10. The multi-viewpoint image generation method according to claim 8, wherein the replacing viewpoints to which a part of sub-pixels belong includes:
the viewpoints comprise viewpoints 1 to V distributed in sequence along the first direction; when a viewpoint k to which a sub-pixel belongs lies in the range from viewpoint K to viewpoint V, the viewpoint k is replaced by viewpoint S-k, where 1 ≤ S-k ≤ V and K is a preset value.
11. A computer-readable storage medium storing computer-executable instructions for performing the multi-view image generation method of any one of claims 1-10.
12. A display device comprising a processor and a memory storing a computer program operable on the processor, wherein the processor when executing the program performs the steps of the multi-view image generation method of any of claims 1 to 10.
CN202011287892.XA 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device Active CN112399168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011287892.XA CN112399168B (en) 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011287892.XA CN112399168B (en) 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device

Publications (2)

Publication Number Publication Date
CN112399168A CN112399168A (en) 2021-02-23
CN112399168B true CN112399168B (en) 2022-12-02

Family

ID=74605877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011287892.XA Active CN112399168B (en) 2020-11-17 2020-11-17 Multi-viewpoint image generation method, storage medium and display device

Country Status (1)

Country Link
CN (1) CN112399168B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763473B (en) * 2021-09-08 2024-03-29 未来科技(襄阳)有限公司 Viewpoint width determining method and device and storage medium
CN115022612B (en) * 2022-05-31 2024-01-09 北京京东方技术开发有限公司 Driving method and device of display device and display equipment

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN101626517B (en) * 2009-08-07 2010-11-17 四川大学 Method for synthesizing stereo image from parallax image in a real-time manner
CN102510515B (en) * 2011-12-29 2015-08-05 中航华东光电有限公司 A kind of grating-type multi-viewpoint stereo image synthesis method
KR101990334B1 (en) * 2012-08-22 2019-06-18 엘지디스플레이 주식회사 Stereoscopic image display device and method for driving the same
CN102843569B (en) * 2012-08-31 2017-06-16 上海易维视科技有限公司 A kind of Automatic multiple-viewpoint stereoscopic display
CN103686117B (en) * 2012-09-18 2016-01-20 上海易维视科技有限公司 A kind of super Automatic multiple-viewpoint stereoscopic display
CN105338336B (en) * 2015-10-28 2017-03-22 四川长虹电器股份有限公司 Lenticular grating and LCD screen-based stereo image fast synthesis method
CN106067969B (en) * 2016-06-30 2018-02-27 北京邮电大学 A kind of multiple views 3D display method and device of column mirror grating

Also Published As

Publication number Publication date
CN112399168A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
JP3966830B2 (en) 3D display device
KR102030830B1 (en) Curved multiview image display apparatus and control method thereof
JP3944188B2 (en) Stereo image display method, stereo image imaging method, and stereo image display apparatus
CN106170084B (en) Multi-view image display apparatus, control method thereof, and multi-view image generation method
CN106604018B (en) 3D display device and control method thereof
TWI446007B (en) Three - dimensional image display device and three - dimensional image display method
US8427528B2 (en) Three-dimensional image display device and three-dimensional image display method
TWI357987B (en) A three-dimension image display device and a displ
KR101329962B1 (en) Three-dimensional image display
JP2010524309A (en) Method and configuration for three-dimensional display
KR102174258B1 (en) Glassless 3d display apparatus and contorl method thereof
US20160021365A1 (en) Image displaying method and image displaying device
CN112399168B (en) Multi-viewpoint image generation method, storage medium and display device
EP2070339A1 (en) Apparatus, method and computer program product for three-dimensional image processing
US20150237334A1 (en) Stereoscopic display device
CN111323935A (en) N-viewpoint three-dimensional display device and driving method thereof
KR101975246B1 (en) Multi view image display apparatus and contorl method thereof
US20140192047A1 (en) Stereoscopic image display device, image processing device, and image processing method
CN110351544B (en) Three-dimensional image coding method and device for naked eye 3D display
CN110381305A (en) Naked eye 3D's removes crosstalk method, system, storage medium and electronic equipment
CN108271013B (en) Display device and display method
TWI526717B (en) Naked eye stereoscopic display device and method for arranging pixel thereof
KR102233116B1 (en) Stereopsis image display device and method of driving the same
JP5810011B2 (en) Display device and electronic device
CN112188185B (en) Method for displaying stereoscopic image on display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant