US20220046226A1 - Method and device for operating a lenticular display - Google Patents
- Publication number
- US20220046226A1 (application US 17/509,346)
- Authority
- US
- United States
- Prior art keywords
- user
- content
- angle
- horizontal angle
- perspective
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g 3D video
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/368—Image reproducers using viewer tracking for two or more viewers
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/26—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
- G02B30/27—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
- G02B30/28—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays involving active lenticular arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/305—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N2013/40—Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene
- H04N2013/403—Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene the images being monoscopic
Definitions
- the present disclosure generally relates to lenticular displays and, in particular, to systems, methods, and devices for displaying different content to different users via a lenticular display.
- Lenticular displays are capable of displaying different content at different angles. For example, when viewing a lenticular display from a first angle, a video clip is seen, and when viewing the lenticular display from a second angle, a different video clip is seen.
- FIG. 1 illustrates a first perspective view of an example operating environment at a first time.
- FIG. 2 illustrates a second perspective view of the example operating environment at the first time.
- FIG. 3 illustrates a third perspective view of the example operating environment at the first time.
- FIG. 4 illustrates a top view of the example operating environment at the first time.
- FIG. 5 illustrates a top view of the example operating environment at a second time.
- FIG. 6 illustrates a top view of the example operating environment at a third time.
- FIG. 7 illustrates a top view of the example operating environment at the third time in which each eye of each user is presented different content.
- FIG. 8 illustrates a side view of the example operating environment at the third time.
- FIG. 9 is a flowchart representation of a method of operating a lenticular display in accordance with some implementations.
- FIG. 10 is a block diagram of an example of the device of FIG. 1 in accordance with some implementations.
- a method is performed at a device including a processor, non-transitory memory, an image sensor, and a lenticular display.
- the method includes capturing, using the image sensor, a first image.
- the method includes determining, from the first image, a first horizontal angle of a first user with respect to a line perpendicular to the lenticular display and a first horizontal angle of a second user with respect to the line perpendicular to the lenticular display.
- the method includes displaying, via the lenticular display, first content at the first horizontal angle of the first user and second content at the first horizontal angle of the second user.
- the method includes capturing, using the image sensor, a second image.
- the method includes determining, from the second image, a second horizontal angle of the first user with respect to the line perpendicular to the lenticular display and a second horizontal angle of the second user with respect to the line perpendicular to the lenticular display.
- the method includes displaying, via the lenticular display, the first content at the second horizontal angle of the first user and the second content at the second horizontal angle of the second user.
- a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors.
- the one or more programs include instructions for performing or causing performance of any of the methods described herein.
- a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein.
- a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
- Lenticular displays are capable of displaying different content at different angles. For example, when viewing a lenticular display from a first angle, a video clip is seen, and when viewing the lenticular display from a second angle, a different video clip is seen.
- a lenticular display includes a matrix of pixels over which a lenticular lens pattern is laid.
- a first set of the matrix of pixels is visible from a first angle
- a second set of the matrix of pixels is visible from a second angle
- a third set of pixels is visible from a third angle, and so on.
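The angle-dependent pixel-set behavior described above can be sketched in code. This is a minimal illustrative model, not the patent's implementation: it assumes a hypothetical layout in which each cylindrical lenslet covers one pixel column per view, repeating across the panel.

```python
import numpy as np

def interleave_views(views):
    """Interleave per-view images into one lenticular panel buffer.

    Assumed layout: each lenslet covers one pixel column per view, so
    panel column j is visible from the angle served by view j % N.
    views: list of N arrays of shape (H, W, C), one per viewing angle.
    """
    n = len(views)
    h, w, c = views[0].shape
    panel = np.zeros((h, w * n, c), dtype=views[0].dtype)
    for v, img in enumerate(views):
        # Columns v, v + n, v + 2n, ... under successive lenslets show view v.
        panel[:, v::n, :] = img
    return panel
```

Under this model, changing which source image feeds a given set of columns changes what is seen from the corresponding angle, without moving any optics.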
- this feature of lenticular displays is used to present different content to different users as they move with respect to the lenticular display. For example, in various implementations, a first user and a second user are tracked by an image sensor and first content and second content are displayed to the first user and the second user at whatever angle they are with respect to the lenticular display.
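The track-and-redisplay behavior described above can be sketched as a loop. Here `detect_angles` and `display` are hypothetical stand-ins for the face-detection step and the lenticular display driver; neither name comes from the patent.

```python
def track_and_display(frames, detect_angles, display,
                      first_content="first", second_content="second"):
    """For each captured frame, re-estimate both users' horizontal
    angles and re-route each user's content to that user's new angle.

    detect_angles(frame) -> (angle_of_first_user, angle_of_second_user)
    display(angle_deg, content) drives the lenticular display.
    Both callables are assumptions standing in for real components.
    """
    for frame in frames:
        a1, a2 = detect_angles(frame)
        display(a1, first_content)   # first content follows the first user
        display(a2, second_content)  # second content follows the second user
```

The key point of the loop is that content identity is bound to a user, not to a fixed angle: as the detected angles change, the same content is re-displayed at the new angles.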
- FIG. 1 illustrates a first perspective view of an example operating environment at a first time 100 A.
- the example operating environment at the first time 100 A includes an electronic device 101 having an image sensor 112 and a lenticular display 110 .
- the example operating environment at the first time 100 A includes a first user 120 A and a second user 120 B of different heights viewing the lenticular display.
- the first user 120 A is at a first location and the second user 120 B is at a second location.
- Although FIG. 1 illustrates only two users in the example operating environment, in various implementations, any number of users may be present in the example operating environment and have different content presented thereto.
- FIG. 2 illustrates a second perspective view of the example operating environment at the first time 100 A.
- the second perspective view is illustrated from a position behind the first user 120 A looking towards the device 101 . From this angle, first content 130 A can be seen on the lenticular display 110 .
- the first content 130 A includes a cylinder 140 viewed from a particular vertical angle. At the particular vertical angle, the top 141 of the cylinder 140 can be seen.
- FIG. 3 illustrates a third perspective view of the example operating environment at the first time 100 A.
- the third perspective view is illustrated from a position behind the second user 120 B looking towards the device 101 . From this angle, second content 130 B can be seen on the lenticular display 110 .
- the second content 130 B includes the same cylinder 140 viewed from a different vertical angle. At the different vertical angle, the top 141 of the cylinder 140 cannot be seen. However, at the different vertical angle, the bottom 142 of the cylinder 140 can be seen.
- the first content 130 A and the second content 130 B are two different images. In various implementations, the first content 130 A and the second content 130 B are two different videos. In various implementations, the first content 130 A and the second content 130 B are different versions of the same underlying content. For example, the second content 130 B may be a censored version of the first content 130 A.
- the first content 130 A and/or second content 130 B is based on metadata regarding the user viewing the content. For example, if the first user 120 A is associated with metadata indicating that the first user 120 A has permission to view certain content, but the second user 120 B is not associated with metadata indicating that the second user 120 B has permission to view the certain content, the first content 130 A may include that certain content whereas the second content 130 B may not include that certain content, but rather, other content.
- the first user 120 A may be associated with metadata indicating that the first user 120 A has permission to watch television shows rated TV-MA or less
- the second user 120 B is associated with metadata indicating that the second user 120 B has permission to watch television shows rated TV-PG or less.
- the first content 130 A may include a TV-MA rated television show and the second content may include a different show (rated TV-PG or less) or a censored version of the TV-MA rated television show.
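The rating-based selection in the example above can be sketched as follows. The rating scale and function names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical rating scale, least to most restricted.
RATING_ORDER = ["TV-Y", "TV-PG", "TV-14", "TV-MA"]

def select_content(show, show_rating, permitted_rating, fallback):
    """Return the requested show only if the viewer's permitted rating
    covers the show's rating; otherwise return a fallback (e.g., a
    censored version of the show, or a different show)."""
    if RATING_ORDER.index(show_rating) <= RATING_ORDER.index(permitted_rating):
        return show
    return fallback
```

With per-user metadata, the same display can then route the unrestricted version to one viewing angle and the fallback to another.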
- the first content 130 A and the second content 130 B are different perspective views of the same object or scene.
- the object is a virtual object.
- the object is a three-dimensional object.
- the scene is a virtual scene.
- the scene is a three-dimensional scene.
- FIG. 4 illustrates a top view of the example operating environment at the first time 100 A.
- the device 101 determines a first horizontal angle (θ1) of the first user 120 A with respect to a line 115 perpendicular to the lenticular display 110 and a second horizontal angle (θ2) of the second user 120 B with respect to the line 115 perpendicular to the lenticular display 110 .
- the first horizontal angle (θ1) is approximately −30 degrees and the second horizontal angle (θ2) is approximately 45 degrees.
- the device 101 determines the first horizontal angle (θ1) and the second horizontal angle (θ2) using the image sensor 112 to capture an image of the first user 120 A and the second user 120 B and detecting the first user 120 A and the second user 120 B in the captured image.
- the device 101 controls the lenticular display 110 to display the first content 130 A at the first horizontal angle (θ1) (e.g., to the first user 120 A) and the second content 130 B at the second horizontal angle (θ2) (e.g., to the second user 120 B).
- FIG. 5 illustrates a top view of the example operating environment at a second time 100 B.
- the first user 120 A has moved from the first position to a third position and the second user 120 B has moved from the second position to a fourth position.
- the device 101 determines a third horizontal angle (θ3) of the first user 120 A with respect to the line 115 perpendicular to the lenticular display 110 and a fourth horizontal angle (θ4) of the second user 120 B with respect to the line 115 perpendicular to the lenticular display 110 .
- the third horizontal angle (θ3) is approximately −60 degrees
- the fourth horizontal angle (θ4) is approximately −30 degrees.
- the device 101 controls the lenticular display 110 to display the first content 130 A at the third horizontal angle (θ3) (e.g., to the first user 120 A) and the second content 130 B at the fourth horizontal angle (θ4) (e.g., to the second user 120 B).
- FIG. 6 illustrates a top view of the example operating environment at a third time 100 C.
- the first user 120 A remains at the third position and the second user 120 B remains at the fourth position.
- the device 101 has moved (e.g., rotated), changing the horizontal angles of the users with respect to the line 115 perpendicular to the lenticular display 110 .
- the device 101 determines a fifth horizontal angle (θ5) of the first user 120 A with respect to the line 115 perpendicular to the lenticular display 110 and a sixth horizontal angle (θ6) of the second user 120 B with respect to the line 115 perpendicular to the lenticular display 110 .
- the device 101 determines the fifth horizontal angle (θ5) and the sixth horizontal angle (θ6) based on data from a pose estimation unit of the device 101 , such as an inertial measurement unit (IMU), IR encoder, or potentiometer.
- the device 101 controls the lenticular display 110 to display the first content 130 A at the fifth horizontal angle (θ5) (e.g., to the first user 120 A) and the second content 130 B at the sixth horizontal angle (θ6) (e.g., to the second user 120 B).
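The pose-based update at the third time can be sketched as follows. This assumes the users are far enough away that a pure device rotation dominates, and the sign convention is an assumption.

```python
def angles_after_rotation(angles_deg, device_yaw_delta_deg):
    """Update each user's horizontal angle relative to the display
    normal after the device rotates by device_yaw_delta_deg (e.g.,
    integrated from IMU gyroscope data). With stationary users, a
    device rotation of +d degrees shifts every viewing angle by -d."""
    return [a - device_yaw_delta_deg for a in angles_deg]
```

This lets the display keep routing content correctly between camera frames, since pose data typically arrives at a much higher rate than face detections.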
- FIG. 7 illustrates a top view of the example operating environment at the third time 100 C in which each eye of each user is presented different content.
- the first user 120 A remains at the third position
- the second user 120 B remains at the fourth position
- the device 101 has been moved.
- the device 101 determines the fifth horizontal angle (θ5) of a first eye of the first user 120 A with respect to the line 115 perpendicular to the lenticular display 110 , the sixth horizontal angle (θ6) of a first eye of the second user 120 B with respect to the line 115 perpendicular to the lenticular display 110 , a seventh horizontal angle (θ7) of a second eye of the first user 120 A with respect to the line 115 perpendicular to the lenticular display 110 , and an eighth horizontal angle (θ8) of a second eye of the second user 120 B with respect to the line 115 perpendicular to the lenticular display 110 .
- the device 101 controls the lenticular display 110 to display the first content 130 A at the fifth horizontal angle (θ5) (e.g., to the first eye of the first user 120 A) and the second content 130 B at the sixth horizontal angle (θ6) (e.g., to the first eye of the second user 120 B).
- the device 101 further controls the lenticular display 110 to display third content at the seventh horizontal angle (θ7) (e.g., to the second eye of the first user 120 A) and fourth content at the eighth horizontal angle (θ8) (e.g., to the second eye of the second user 120 B).
- the first content 130 A and the third content are different perspectives of the same object or scene.
- the second content 130 B and the fourth content are different perspectives of the same object or scene.
- FIG. 8 illustrates a side view of the example operating environment at the third time 100 C.
- the device 101 determines a first vertical angle (φ1) of the first user 120 A with respect to the line 115 perpendicular to the lenticular display 110 and a second vertical angle (φ2) of the second user 120 B with respect to the line 115 perpendicular to the lenticular display 110 .
- the device 101 controls the lenticular display 110 to display the first content 130 A at the fifth horizontal angle (θ5) (e.g., to the first user 120 A) based on the first vertical angle (φ1) and the second content 130 B at the sixth horizontal angle (θ6) (e.g., to the second user 120 B) based on the second vertical angle (φ2).
- the first content 130 A includes a virtual three-dimensional object (e.g., a cylinder) displayed from a first horizontal perspective based on the fifth horizontal angle (θ5) and a first vertical perspective based on the first vertical angle (φ1), the third content includes the same virtual object displayed from a second horizontal perspective based on the seventh horizontal angle (θ7) and the first vertical perspective based on the first vertical angle (φ1), the second content 130 B includes the same virtual object displayed from a third horizontal perspective based on the sixth horizontal angle (θ6) and a second vertical perspective based on the second vertical angle (φ2), and the fourth content includes the same virtual object displayed from a fourth horizontal perspective based on the eighth horizontal angle (θ8) and the second vertical perspective based on the second vertical angle (φ2).
- FIG. 9 is a flowchart representation of a method 900 of operating a lenticular display in accordance with some implementations.
- the method 900 is performed by a device with one or more processors, non-transitory memory, an image sensor, and a lenticular display (e.g., the device 101 of FIG. 1 ).
- the method 900 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
- the method 900 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
- the method 900 begins, in block 910 , with the device capturing, using the image sensor, a first image.
- the method 900 continues, in block 920 , with the device determining, from the first image, a first horizontal angle of a first user at a first time with respect to a line perpendicular to the lenticular display and a first horizontal angle of a second user at the first time with respect to the line perpendicular to the lenticular display.
- the first time is when the first image is captured.
- the device 101 determines the first horizontal angle (θ1) and the second horizontal angle (θ2) based on an image captured using the image sensor 112 .
- the device 101 employs a face detection algorithm on the first image to determine locations in the image of the first user and the second user. The locations in the image are mapped to horizontal (and, in various implementations, vertical) angles with respect to the line perpendicular to the lenticular display.
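The mapping from an image location to a horizontal angle can be sketched under a pinhole-camera assumption (and assuming the image sensor is mounted flush with, and oriented like, the display panel); the function name and parameters are illustrative.

```python
import math

def pixel_to_horizontal_angle(x_px, cx_px, focal_px):
    """Map a detected face's image x-coordinate to a horizontal angle,
    in degrees, with respect to the camera's optical axis; with the
    sensor aligned to the panel this approximates the angle to the line
    perpendicular to the lenticular display.

    x_px: detected face center column; cx_px: principal point column;
    focal_px: focal length in pixels. Pinhole model only: ignores lens
    distortion and any sensor/display offset.
    """
    return math.degrees(math.atan2(x_px - cx_px, focal_px))
```

A face detected left of the principal point yields a negative angle and one to the right a positive angle, matching the signed angles used in FIGS. 4-6.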
- the method 900 continues, in block 930 , with the device displaying, via the lenticular display, first content at the first horizontal angle of the first user and second content, different than the first content, at the first horizontal angle of the second user.
- the first content includes a first image and the second content includes a second image different than the first image. In various implementations, the first content includes a first video and the second content includes a second video different than the first video.
- the first content includes a first version of content and the second content includes a second version of the content different from the first version of the content.
- the second version of the content is a censored version of the first version of the content.
- displaying the first content at the first horizontal angle of the first user includes displaying an object or scene at a first perspective and displaying the second content at the first horizontal angle of the second user includes displaying the object or scene at a second perspective different than the first perspective.
- the device 101 displays, for the first user 120 A, a cylinder 140 from a first vertical perspective at which the top 141 is visible and displays, for the second user 120 B, the cylinder 140 from a second vertical perspective at which the top 141 is not visible, but at which the bottom 142 is visible.
- the method 900 further includes determining, based on the first image, an identity of the first user and an identity of the second user, wherein the first content is based on the identity of the first user and the second content is based on the identity of the second user.
- the first user is associated with a first subscription video service account and the second user is associated with a second subscription video service account and a respective “next episode” is displayed to each user.
- the method 900 further includes determining, from the first image, a first vertical angle of the first user at the first time with respect to the line perpendicular to the lenticular display and a first vertical angle of the second user at the first time with respect to the line perpendicular to the lenticular display.
- the device 101 determines the first vertical angle (φ1) of the first user and the second vertical angle (φ2) of the second user.
- displaying the first content at the first horizontal angle of the first user and the second content at the first horizontal angle of the second user is based on the first vertical angle of the first user and the first vertical angle of the second user. For example, in FIGS. 2 and 3, the device 101 displays, for the first user 120 A, a cylinder 140 from a first vertical perspective at which the top 141 is visible and displays, for the second user 120 B, the cylinder 140 from a second vertical perspective at which the top 141 is not visible, but at which the bottom 142 is visible.
- displaying the first content at the first horizontal angle of the first user includes displaying an object or scene at a first vertical perspective based on the first vertical angle and displaying the second content at the first horizontal angle of the second user includes displaying the object or scene at a second vertical perspective based on the second vertical angle.
- the first content and the second content are displayed at the same vertical perspective based on the first vertical angle and the second vertical angle.
- a virtual object is displayed at a vertical perspective based on the average of the first vertical angle and the second vertical angle.
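The averaging policy above is simple enough to state directly in code; the function name is illustrative, and averaging is only one possible policy for choosing a shared vertical perspective.

```python
def shared_vertical_angle(phi1_deg, phi2_deg):
    """One possible policy for a single shared vertical perspective:
    render the virtual object at the mean of the two users' detected
    vertical angles, so neither user's view is strongly favored."""
    return (phi1_deg + phi2_deg) / 2.0
```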
- the method 900 continues, at block 940 , with the device determining a second horizontal angle of the first user at a second time, different than the first time, with respect to the line perpendicular to the lenticular display and a second horizontal angle of the second user at the second time with respect to the line perpendicular to the lenticular display.
- the device 101 determines the first horizontal angle (θ1) of the first user 120 A at the first time and the second horizontal angle (θ2) of the second user 120 B at the first time
- the device 101 determines the third horizontal angle (θ3) of the first user 120 A at the second time and the fourth horizontal angle (θ4) of the second user 120 B at the second time.
- the second horizontal angle of the first user is different than the first horizontal angle of the first user.
- the third horizontal angle (θ3) is different than the first horizontal angle (θ1).
- the second horizontal angle of the second user is the same as the first horizontal angle of the first user.
- the fourth horizontal angle (θ4) is the same as the first horizontal angle (θ1).
- the method 900 includes capturing, using the image sensor, a second image, wherein determining the second horizontal angle of the first user and the second horizontal angle of the second user is based on the second image.
- the method 900 includes receiving data from a pose estimation unit (e.g., an IMU), wherein determining the second horizontal angle of the first user and the second horizontal angle of the second user is based on the data from the pose estimation unit.
- the method 900 continues, in block 950 , with the device displaying, via the lenticular display, the first content at the second horizontal angle of the first user and the second content at the second horizontal angle of the second user.
- the method 900 further includes determining a third horizontal angle of the first user at the first time with respect to the line perpendicular to the lenticular display and a third horizontal angle of the second user at the first time with respect to the line perpendicular to the lenticular display. For example, in FIG. 7 , the device 101 determines the seventh horizontal angle (θ7) of the first user 120 A and the eighth horizontal angle (θ8) of the second user 120 B. The method 900 further includes displaying, via the lenticular display, third content at the third horizontal angle of the first user and fourth content, different than the third content, at the third horizontal angle of the second user.
- displaying the first content at the first horizontal angle of the first user includes displaying an object or scene at a first horizontal perspective based on the first horizontal angle of the first user and displaying the third content at the third horizontal angle of the first user includes displaying the object or scene at a second horizontal perspective based on the third horizontal angle of the first user.
- FIG. 10 is a block diagram of an example of the device 101 of FIG. 1 in accordance with some implementations. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein.
- the device 101 includes one or more processing units 1002 (e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, and/or the like), one or more input/output (I/O) devices and sensors 1006 , one or more communication interfaces 1008 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, and/or the like type interface), one or more programming (e.g., I/O) interfaces 1010 , a lenticular display 1012 , an image sensor 1014 , a memory 1020 , and one or more communication buses 1004 for interconnecting these and various other components.
- the one or more communication buses 1004 include circuitry that interconnects and controls communications between system components.
- the one or more I/O devices and sensors 1006 include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more microphones, one or more speakers, one or more biometric sensors (e.g., blood pressure monitor, heart rate monitor, breathing monitor, electrodermal monitor, blood oxygen sensor, blood glucose sensor, etc.), a haptics engine, one or more depth sensors (e.g., a structured light, a time-of-flight, or the like), and/or the like.
- the lenticular display 1012 is configured to display different content to different users at different angles.
- the lenticular display 1012 includes holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electro-mechanical system (MEMS), and/or the like display types.
- the lenticular display 1012 corresponds to diffractive, reflective, polarized, holographic, etc. waveguide displays.
- the lenticular display 1012 is capable of presenting mixed reality and/or virtual reality content.
- the image sensor 1014 includes one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), one or more infrared (IR) cameras, one or more event-based cameras, and/or the like.
- the memory 1020 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices.
- the memory 1020 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
- the memory 1020 optionally includes one or more storage devices remotely located from the one or more processing units 1002 .
- the memory 1020 comprises a non-transitory computer readable storage medium.
- the memory 1020 or the non-transitory computer readable storage medium of the memory 1020 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 1030 and a content presentation module 1040 .
- the operating system 1030 includes procedures for handling various basic system services and for performing hardware dependent tasks.
- the content presentation module 1040 is configured to present different content to different users at different angles via the lenticular display 1012 .
- the content presentation module 1040 includes a user detection unit 1042 and a content presenting unit 1044 .
- the user detection unit 1042 is configured to determine a first horizontal angle of a first user with respect to a line perpendicular to the lenticular display 1012 and a second horizontal angle of a second user with respect to the line perpendicular to the lenticular display 1012 .
- the user detection unit 1042 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- the content presenting unit 1044 is configured to display, via the lenticular display, first content at the first horizontal angle of the first user and second content, different than the first content, at the first horizontal angle of the second user.
- the content presenting unit 1044 includes instructions and/or logic therefor, and heuristics and metadata therefor.
- although the user detection unit 1042 and the content presenting unit 1044 are shown as residing on a single device (e.g., the device 101 of FIG. 1 ), it should be understood that in other implementations, the user detection unit 1042 and the content presenting unit 1044 may be located in separate computing devices.
- FIG. 10 is intended more as a functional description of the various features that could be present in a particular implementation as opposed to a structural schematic of the implementations described herein.
- items shown separately could be combined and some items could be separated.
- some functional modules shown separately in FIG. 10 could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various implementations.
- the actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some implementations, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation.
- it will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without departing from the scope of the embodiments described herein.
- the first node and the second node are both nodes, but they are not the same node.
- the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context.
- the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Abstract
In one implementation, a method of operating a lenticular display is performed by a device including a processor, non-transitory memory, an image sensor, and a lenticular display. The method includes displaying, via the lenticular display, first content at a horizontal angle of a first user and second content, different than the first content, at a horizontal angle of a second user. The method further includes displaying, via the lenticular display, the first content at a second horizontal angle of the first user and the second content at a second horizontal angle of the second user.
Description
- This application is a continuation of U.S. patent application Ser. No. 16/922,350, filed on Jul. 7, 2020, which claims priority to U.S. Provisional Patent App. No. 62/906,946, filed on Sep. 27, 2019, which are both hereby incorporated by reference in their entirety.
- The present disclosure generally relates to lenticular displays and, in particular, to systems, methods, and devices for displaying different content to different users via a lenticular display.
- Lenticular displays are capable of displaying different content at different angles. For example, when viewing a lenticular display from a first angle, a video clip is seen and when viewing the lenticular display from a second angle, a different video clip is seen.
- While some lenticular displays display different content at different fixed angles, it may be desirable to display different content to different users even as the users move with respect to the lenticular display.
- So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
-
FIG. 1 illustrates a first perspective view of an example operating environment at a first time. -
FIG. 2 illustrates a second perspective view of the example operating environment at the first time. -
FIG. 3 illustrates a third perspective view of the example operating environment at the first time. -
FIG. 4 illustrates a top view of the example operating environment at the first time. -
FIG. 5 illustrates a top view of the example operating environment at a second time. -
FIG. 6 illustrates a top view of the example operating environment at a third time. -
FIG. 7 illustrates a top view of the example operating environment at the third time in which each eye of each user is presented different content. -
FIG. 8 illustrates a side view of the example operating environment at the third time. -
FIG. 9 is a flowchart representation of a method of operating a lenticular display in accordance with some implementations. -
FIG. 10 is a block diagram of an example of the device of FIG. 1 in accordance with some implementations. - In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
- Various implementations disclosed herein include devices, systems, and methods for displaying different content to different users via a lenticular display. In various implementations, a method is performed at a device including a processor, non-transitory memory, an image sensor, and a lenticular display. The method includes capturing, using the image sensor, a first image. The method includes determining, from the first image, a first horizontal angle of a first user with respect to a line perpendicular to the lenticular display and a first horizontal angle of a second user with respect to the line perpendicular to the lenticular display. The method includes displaying, via the lenticular display, first content at the first horizontal angle of the first user and second content at the first horizontal angle of the second user. The method includes capturing, using the image sensor, a second image. The method includes determining, from the second image, a second horizontal angle of the first user with respect to the line perpendicular to the lenticular display and a second horizontal angle of the second user with respect to the line perpendicular to the lenticular display. The method includes displaying, via the lenticular display, the first content at the second horizontal angle of the first user and the second content at the second horizontal angle of the second user.
- In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
- Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices, and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
- Lenticular displays are capable of displaying different content at different angles. For example, when viewing a lenticular display from a first angle, a video clip is seen and when viewing the lenticular display from a second angle, a different video clip is seen.
- In various implementations, a lenticular display includes a matrix of pixels over which a lenticular lens pattern is laid. In various implementations, a first set of the matrix of pixels is visible from a first angle, a second set of the matrix of pixels is visible from a second angle, a third set of pixels is visible from a third angle, and so on.
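- The per-angle partitioning described above can be sketched as a simple column interleave. This is an illustrative assumption only: the `interleave_views` helper, the round-robin column-to-view rule, and the two-view setup are hypothetical, since a real panel's column-to-view mapping depends on the lens pitch and alignment.

```python
def interleave_views(views):
    """Interleave per-angle views (each a height x width nested list) into a
    single frame in which pixel column c carries view (c % num_views).
    Under the lenticular lens, a viewer at the angle served by view k
    sees only view k's columns."""
    num_views = len(views)
    height, width = len(views[0]), len(views[0][0])
    frame = [[None] * width for _ in range(height)]
    for row in range(height):
        for col in range(width):
            frame[row][col] = views[col % num_views][row][col]
    return frame
```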
- In various implementations described below, this feature of lenticular displays is used to present different content to different users as they move with respect to the lenticular display. For example, in various implementations, a first user and a second user are tracked by an image sensor and first content and second content are displayed to the first user and the second user at whatever angle they are with respect to the lenticular display.
-
FIG. 1 illustrates a first perspective view of an example operating environment at a first time 100A. The example operating environment at the first time 100A includes an electronic device 101 having an image sensor 112 and a lenticular display 110. The example operating environment at the first time 100A includes a first user 120A and a second user 120B of different heights viewing the lenticular display. At the first time, the first user 120A is at a first location and the second user 120B is at a second location. - Although
FIG. 1 illustrates only two users in the example operating environment, in various implementations, any number of users may be present in the example operating environment and have different content presented thereto. -
FIG. 2 illustrates a second perspective view of the example operating environment at the first time 100A. The second perspective view is illustrated from a position behind the first user 120A looking towards the device 101. From this angle, first content 130A can be seen on the lenticular display 110. The first content 130A includes a cylinder 140 viewed from a particular vertical angle. At the particular vertical angle, the top 141 of the cylinder 140 can be seen. -
FIG. 3 illustrates a third perspective view of the example operating environment at the first time 100A. The third perspective view is illustrated from a position behind the second user 120B looking towards the device 101. From this angle, second content 130B can be seen on the lenticular display 110. The second content 130B includes the same cylinder 140 viewed from a different vertical angle. At the different vertical angle, the top 141 of the cylinder 140 cannot be seen. However, at the different vertical angle, the bottom 142 of the cylinder 140 can be seen. - In various implementations, the
first content 130A and the second content 130B are two different images. In various implementations, the first content 130A and the second content 130B are two different videos. In various implementations, the first content 130A and the second content 130B are different versions of the same underlying content. For example, the second content 130B may be a censored version of the first content 130A. - In various implementations, the
first content 130A and/or second content 130B is based on metadata regarding the user viewing the content. For example, if the first user 120A is associated with metadata indicating that the first user 120A has permission to view certain content, but the second user 120B is not associated with metadata indicating that the second user 120B has permission to view the certain content, the first content 130A may include that certain content whereas the second content 130B may not include that certain content, but rather, other content. For example, the first user 120A may be associated with metadata indicating that the first user 120A has permission to watch television shows rated TV-MA or less, whereas the second user 120B is associated with metadata indicating that the second user 120B has permission to watch television shows rated TV-PG or less. Thus, the first content 130A may include a TV-MA rated television show and the second content 130B may include a different show (rated TV-PG or less) or a censored version of the TV-MA rated television show. - In various implementations, as in
FIGS. 2 and 3 , the first content 130A and the second content 130B are different perspective views of the same object or scene. In various implementations, the object is a virtual object. In various implementations, the object is a three-dimensional object. In various implementations, the scene is a virtual scene. In various implementations, the scene is a three-dimensional scene. -
FIG. 4 illustrates a top view of the example operating environment at the first time 100A. The device 101 determines a first horizontal angle (θ1) of the first user 120A with respect to a line 115 perpendicular to the lenticular display 110 and a second horizontal angle (θ2) of the second user 120B with respect to the line 115 perpendicular to the lenticular display 110. In FIG. 4 , the first horizontal angle (θ1) is approximately −30 degrees and the second horizontal angle (θ2) is approximately 45 degrees. - In various implementations, the
device 101 determines the first horizontal angle (θ1) and the second horizontal angle (θ2) using the image sensor 112 to capture an image of the first user 120A and the second user 120B and detecting the first user 120A and the second user 120B in the captured image. The device 101 controls the lenticular display 110 to display the first content 130A at the first horizontal angle (θ1) (e.g., to the first user 120A) and the second content 130B at the second horizontal angle (θ2) (e.g., to the second user 120B). -
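- One common way to obtain such an angle from a detected user's position in a captured image is the pinhole-camera relation between a pixel coordinate and its bearing. The sketch below is a hedged illustration: the `pixel_to_horizontal_angle` helper, the field-of-view value, and the assumption that the camera's optical axis coincides with the line perpendicular to the display are not taken from the disclosure.

```python
import math

def pixel_to_horizontal_angle(x_pixel, image_width, horizontal_fov_deg):
    """Map a detected face's horizontal pixel coordinate to an angle in
    degrees relative to the display normal, assuming a pinhole camera whose
    optical axis lies along the line perpendicular to the display."""
    # Focal length in pixels, derived from the horizontal field of view.
    focal_px = (image_width / 2) / math.tan(math.radians(horizontal_fov_deg / 2))
    offset_px = x_pixel - image_width / 2
    return math.degrees(math.atan2(offset_px, focal_px))
```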
FIG. 5 illustrates a top view of the example operating environment at a second time 100B. At the second time, the first user 120A has moved from the first position to a third position and the second user 120B has moved from the second position to a fourth position. The device 101 determines a third horizontal angle (θ3) of the first user 120A with respect to the line 115 perpendicular to the lenticular display 110 and a fourth horizontal angle (θ4) of the second user 120B with respect to the line 115 perpendicular to the lenticular display 110. In FIG. 5 , the third horizontal angle (θ3) is approximately −60 degrees and the fourth horizontal angle (θ4) is approximately −30 degrees. - The
device 101 controls the lenticular display 110 to display the first content 130A at the third horizontal angle (θ3) (e.g., to the first user 120A) and the second content 130B at the fourth horizontal angle (θ4) (e.g., to the second user 120B). Thus, even though the first horizontal angle (θ1) and the fourth horizontal angle (θ4) are, coincidentally, the same, different content is displayed at that angle at the first time (e.g., the first content 130A to the first user 120A) and the second time (e.g., the second content 130B to the second user 120B). -
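- The behavior just described, where the same angle shows different content at different times, follows from keying content to the user rather than to the angle: the angle-to-content map is rebuilt from the users' current angles each time. A minimal sketch, with a hypothetical `angle_content_map` helper and degree values as dictionary keys purely for illustration:

```python
def angle_content_map(user_angles_deg, user_contents):
    """Rebuild the mapping from display angle to content for one moment in
    time. Because the map is keyed by each user's *current* angle, content
    follows the user instead of being fixed to an angle."""
    return dict(zip(user_angles_deg, user_contents))
```

Evaluated at the two times of FIGS. 4 and 5 (angles −30/45 and then −60/−30), the −30 degree angle receives the first user's content at the first time and the second user's content at the second time.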
FIG. 6 illustrates a top view of the example operating environment at a third time 100C. At the third time, the first user 120A remains at the third position and the second user 120B remains at the fourth position. However, the device 101 has moved (e.g., rotated), changing the horizontal angles of the users with respect to the line 115 perpendicular to the lenticular display 110. - The
device 101 determines a fifth horizontal angle (θ5) of the first user 120A with respect to the line 115 perpendicular to the lenticular display 110 and a sixth horizontal angle (θ6) of the second user 120B with respect to the line 115 perpendicular to the lenticular display 110. In various implementations, the device 101 determines the fifth horizontal angle (θ5) and the sixth horizontal angle (θ6) based on data from a pose estimation unit of the device 101, such as an inertial measurement unit (IMU), IR encoder, or potentiometer. In FIG. 6 , the fifth horizontal angle (θ5) is approximately −45 degrees and the sixth horizontal angle (θ6) is approximately −15 degrees. The device 101 controls the lenticular display 110 to display the first content 130A at the fifth horizontal angle (θ5) (e.g., to the first user 120A) and the second content 130B at the sixth horizontal angle (θ6) (e.g., to the second user 120B). -
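- Because the users have not moved, the new angles can be derived from the previous angles and the device's own rotation, which is one way the pose-estimation data mentioned above could be used without a new image capture. The helper name and the sign convention (a yaw delta shifting every user's angle by its negation) are assumptions for illustration:

```python
def update_angles_after_rotation(user_angles_deg, yaw_delta_deg):
    """Update each stationary user's horizontal angle relative to the
    display normal after the device rotates by yaw_delta_deg degrees.
    Assumed convention: a device rotation of yaw_delta_deg shifts every
    user's angle by -yaw_delta_deg."""
    return [angle - yaw_delta_deg for angle in user_angles_deg]
```

With the figures' values, a −15 degree rotation takes the second-time angles (−60, −30) to the third-time angles (−45, −15).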
FIG. 7 illustrates a top view of the example operating environment at the third time 100C in which each eye of each user is presented different content. At the third time, the first user 120A remains at the third position, the second user 120B remains at the fourth position, and the device 101 has been moved. - The
device 101 determines the fifth horizontal angle (θ5) of a first eye of the first user 120A with respect to the line 115 perpendicular to the lenticular display 110, the sixth horizontal angle (θ6) of a first eye of the second user 120B with respect to the line 115 perpendicular to the lenticular display 110, a seventh horizontal angle (θ7) of a second eye of the first user 120A with respect to the line 115 perpendicular to the lenticular display 110, and an eighth horizontal angle (θ8) of a second eye of the second user 120B with respect to the line 115 perpendicular to the lenticular display 110. - The
device 101 controls the lenticular display 110 to display the first content 130A at the fifth horizontal angle (θ5) (e.g., to the first eye of the first user 120A) and the second content 130B at the sixth horizontal angle (θ6) (e.g., to the first eye of the second user 120B). The device 101 further controls the lenticular display 110 to display third content at the seventh horizontal angle (θ7) (e.g., to the second eye of the first user 120A) and fourth content at the eighth horizontal angle (θ8) (e.g., to the second eye of the second user 120B). - In various implementations, the
first content 130A and the third content are different perspectives of the same object or scene. Similarly, in various implementations, the second content 130B and the fourth content are different perspectives of the same object or scene. -
FIG. 8 illustrates a side view of the example operating environment at the third time 100C. The device 101 determines a first vertical angle (φ1) of the first user 120A with respect to the line 115 perpendicular to the lenticular display 110 and a second vertical angle (φ2) of the second user 120B with respect to the line 115 perpendicular to the lenticular display 110. The device 101 controls the lenticular display 110 to display the first content 130A at the fifth horizontal angle (θ5) (e.g., to the first user 120A) based on the first vertical angle (φ1) and the second content 130B at the sixth horizontal angle (θ6) (e.g., to the second user 120B) based on the second vertical angle (φ2). - As a particular example, in various implementations, the
first content 130A includes a virtual three-dimensional object (e.g., a cylinder) displayed from a first horizontal perspective based on the fifth horizontal angle (θ5) and a first vertical perspective based on the first vertical angle (φ1), the third content includes the same virtual object displayed from a second horizontal perspective based on the seventh horizontal angle (θ7) and the first vertical perspective based on the first vertical angle (φ1), the second content 130B includes the same virtual object displayed from a third horizontal perspective based on the sixth horizontal angle (θ6) and a second vertical perspective based on the second vertical angle (φ2), and the fourth content includes the same virtual object displayed from a fourth horizontal perspective based on the eighth horizontal angle (θ8) and the second vertical perspective based on the second vertical angle (φ2). Thus, each user stereoscopically views a virtual three-dimensional object with the correct vertical perspective for that user. -
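- The pairing just described, a distinct horizontal perspective per eye combined with one vertical perspective per user, can be sketched as below. The dictionary field names and the degree-valued inputs are illustrative assumptions, not the disclosure's data model.

```python
def build_render_requests(users):
    """Emit one render request per eye. The horizontal perspective differs
    between a user's two eyes (producing a stereoscopic view), while the
    vertical perspective is shared by both of that user's eyes, matching
    the user's height."""
    requests = []
    for user in users:
        for eye_angle_deg in (user["left_eye_deg"], user["right_eye_deg"]):
            requests.append({
                "display_angle_deg": eye_angle_deg,
                "horizontal_perspective_deg": eye_angle_deg,
                "vertical_perspective_deg": user["vertical_deg"],
            })
    return requests
```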
FIG. 9 is a flowchart representation of a method 900 of operating a lenticular display in accordance with some implementations. In various implementations, the method 900 is performed by a device with one or more processors, non-transitory memory, an image sensor, and a lenticular display (e.g., the device 101 of FIG. 1 ). In some implementations, the method 900 is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method 900 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory). - The
method 900 begins, in block 910, with the device capturing, using the image sensor, a first image. - The
method 900 continues, in block 920, with the device determining, from the first image, a first horizontal angle of a first user at a first time with respect to a line perpendicular to the lenticular display and a first horizontal angle of a second user at the first time with respect to the line perpendicular to the lenticular display. In various implementations, the first time is when the first image is captured. For example, in FIG. 4 , the device (based on an image captured using the image sensor 112) determines the first horizontal angle (θ1) and the second horizontal angle (θ2). In various implementations, the device 101 employs a face detection algorithm on the first image to determine locations in the image of the first user and the second user. The locations in the image are mapped to horizontal (and, in various implementations, vertical) angles with respect to the line perpendicular to the lenticular display. - The
method 900 continues, in block 930, with the device displaying, via the lenticular display, first content at the first horizontal angle of the first user and second content, different than the first content, at the first horizontal angle of the second user.
- In various implementations, the first content includes a first version of content and the second content includes a second version of the content different from the first version of the content. For example, in various implementations, the second version of the content is a censored version of the first version of the content.
- In various implementations, displaying the first content at the first horizontal angle of the first user includes displaying an object or scene at a first perspective and displaying the second content at the first horizontal angle of the second user includes displaying the object or scene at a second perspective different than the first perspective. For example, in
FIGS. 2 and 3 , thedevice 101 displays, for thefirst user 120A, acylinder 140 from a first vertical perspective at which the top 141 is visible and displays, for thesecond user 120B, thecylinder 140 from a second vertical perspective at which the top 141 is not visible, but at which the bottom 142 is visible. - In various implementations, the
method 900 further includes determining, based on the first image, an identity of the first user and an identity of the second user, wherein the first content is based on the identity of the first user and the second content is based on the identity of the second user. For example, in various implementations, the first user is associated with a first subscription video service account and the second user is associated with a second subscription video service account and a respective “next episode” is displayed to each user. In various implementations, the method 900 further includes determining, from the first image, a first vertical angle of the first user at the first time with respect to the line perpendicular to the lenticular display and a first vertical angle of the second user at the first time with respect to the line perpendicular to the lenticular display. For example, in FIG. 8 , the device 101 determines the first vertical angle (φ1) of the first user and the second vertical angle (φ2) of the second user. In various implementations, displaying the first content at the first horizontal angle of the first user and the second content at the first horizontal angle of the second user is based on the first vertical angle of the first user and the first vertical angle of the second user. For example, in FIGS. 2 and 3 , the device 101 displays, for the first user 120A, a cylinder 140 from a first vertical perspective at which the top 141 is visible and displays, for the second user 120B, the cylinder 140 from a second vertical perspective at which the top 141 is not visible, but at which the bottom 142 is visible.
Accordingly, in various implementations, displaying the first content at the first horizontal angle of the first user includes displaying an object or scene at a first vertical perspective based on the first vertical angle and displaying the second content at the first horizontal angle of the second user includes displaying the object or scene at a second vertical perspective based on the second vertical angle. - In various implementations, the first content and the second content are displayed at the same vertical perspective based on the first vertical angle and the second vertical angle. For example, in various implementations, a virtual object is displayed at a vertical perspective based on the average of the first vertical angle and the second vertical angle.
- The
method 900 continues, at block 940, with the device determining a second horizontal angle of the first user at a second time, different than the first time, with respect to the line perpendicular to the lenticular display and a second horizontal angle of the second user at the second time with respect to the line perpendicular to the lenticular display. For example, whereas in FIG. 4 , the device 101 determines the first horizontal angle (θ1) of the first user 120A at the first time and the second horizontal angle (θ2) of the second user 120B at the first time, in FIG. 5 , the device 101 determines the third horizontal angle (θ3) of the first user 120A at the second time and the fourth horizontal angle (θ4) of the second user 120B at the second time.
FIGS. 4 and 5 , the third horizontal angle (θ3) is different than the first horizontal angle (θ1). In various implementations, the second horizontal angle of the second user is the same as the first horizontal angle of the first user. For example, inFIGS. 4 and 5 , the fourth horizontal angle (θ4) is the same as the first horizontal angle (θ1). - In various implementations, the
method 900 includes capturing, using the image sensor, a second image, wherein determining the second horizontal angle of the first user and the second horizontal angle of the second user is based on the second image. In various implementations, the method 900 includes receiving data from a pose estimation unit (e.g., an inertial measurement unit (IMU)), wherein determining the second horizontal angle of the first user and the second horizontal angle of the second user is based on the data from the pose estimation unit. - The
method 900 continues, in block 950, with the device displaying, via the lenticular display, the first content at the second horizontal angle of the first user and the second content at the second horizontal angle of the second user. - In various implementations, the
method 900 further includes determining a third horizontal angle of the first user at the first time with respect to the line perpendicular to the lenticular display and a third horizontal angle of the second user at the first time with respect to the line perpendicular to the lenticular display. For example, in FIG. 7 , the device 101 determines the seventh horizontal angle (θ7) of the first user 120A and the eighth horizontal angle (θ8) of the second user 120B. The method 900 further includes displaying, via the lenticular display, third content at the third horizontal angle of the first user and fourth content, different than the third content, at the third horizontal angle of the second user.
FIG. 10 is a block diagram of an example of the device 101 of FIG. 1 in accordance with some implementations. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein. To that end, as a non-limiting example, in some implementations the device 101 includes one or more processing units 1002 (e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, and/or the like), one or more input/output (I/O) devices and sensors 1006, one or more communication interfaces 1008 (e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, and/or the like type interface), one or more programming (e.g., I/O) interfaces 1010, a lenticular display 1012, an image sensor 1014, a memory 1020, and one or more communication buses 1004 for interconnecting these and various other components. - In some implementations, the one or
more communication buses 1004 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors 1006 include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more microphones, one or more speakers, one or more biometric sensors (e.g., blood pressure monitor, heart rate monitor, breathing monitor, electrodermal monitor, blood oxygen sensor, blood glucose sensor, etc.), a haptics engine, one or more depth sensors (e.g., structured light, time-of-flight, or the like), and/or the like. - In some implementations, the
lenticular display 1012 is configured to display different content to different users at different angles. In some implementations, the lenticular display 1012 includes holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electro-mechanical system (MEMS), and/or the like display types. In some implementations, the lenticular display 1012 corresponds to diffractive, reflective, polarized, holographic, etc. waveguide displays. In various implementations, the lenticular display 1012 is capable of presenting mixed reality and/or virtual reality content. - In various implementations, the
image sensor 1014 includes one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), one or more infrared (IR) cameras, one or more event-based cameras, and/or the like. - The
memory 1020 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some implementations, the memory 1020 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 1020 optionally includes one or more storage devices remotely located from the one or more processing units 1002. The memory 1020 comprises a non-transitory computer readable storage medium. In some implementations, the memory 1020 or the non-transitory computer readable storage medium of the memory 1020 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 1030 and a content presentation module 1040. - The
operating system 1030 includes procedures for handling various basic system services and for performing hardware-dependent tasks. In some implementations, the content presentation module 1040 is configured to present different content to different users at different angles via the lenticular display 1012. To that end, in various implementations, the content presentation module 1040 includes a user detection unit 1042 and a content presenting unit 1044. - In some implementations, the user detection unit 1042 is configured to determine a first horizontal angle of a first user with respect to a line perpendicular to the
lenticular display 1012 and a second horizontal angle of a second user with respect to the line perpendicular to the lenticular display 1012. To that end, in various implementations, the user detection unit 1042 includes instructions and/or logic therefor, and heuristics and metadata therefor. - In some implementations, the
content presenting unit 1044 is configured to display, via the lenticular display, first content at the first horizontal angle of the first user and second content, different than the first content, at the first horizontal angle of the second user. To that end, in various implementations, the content presenting unit 1044 includes instructions and/or logic therefor, and heuristics and metadata therefor. - Although the user detection unit 1042 and the
content presenting unit 1044 are shown as residing on a single device (e.g., the device 101 of FIG. 1), it should be understood that in other implementations, the user detection unit 1042 and the content presenting unit 1044 may be located in separate computing devices. - Moreover,
FIG. 10 is intended more as a functional description of the various features that could be present in a particular implementation as opposed to a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some functional modules shown separately in FIG. 10 could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various implementations. The actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some implementations, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation. - While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
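As a non-authoritative sketch of how a content presenting unit such as unit 1044 might drive the panel, the snippet below maps each user's horizontal angle to one of a small number of discrete lenticular views and interleaves the two users' content across display columns. The view count, angular range, and column-to-view assignment are all illustrative assumptions, not details taken from the disclosure.

```python
def view_index(angle_deg, num_views=9, min_angle=-30.0, max_angle=30.0):
    """Map a viewing angle to one of the display's discrete views.

    Angles outside the covered range clamp to the outermost views.
    The 9-view count and the +/-30 degree range are assumed values.
    """
    t = (angle_deg - min_angle) / (max_angle - min_angle)
    t = min(max(t, 0.0), 1.0)
    return round(t * (num_views - 1))

def interleave_row(first_content, second_content, column_views,
                   first_angle, second_angle, background=0):
    """Assemble one pixel row of the panel.

    Each column is visible only from the view its lenticule steers it
    toward, so columns whose view matches a user's view index carry
    that user's content; other columns get a background value.
    """
    fv = view_index(first_angle)
    sv = view_index(second_angle)
    row = []
    for col, view in enumerate(column_views):
        if view == fv:
            row.append(first_content[col])
        elif view == sv:
            row.append(second_content[col])
        else:
            row.append(background)
    return row
```

For example, with a (hypothetical) repeating column-to-view pattern `[0, 4, 8, 0, 4, 8]`, a first user at -30 degrees sees only the columns assigned to view 0 and a second user at +30 degrees sees only those assigned to view 8, so each user perceives different content on the same panel.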
- It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
- The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Claims (20)
1. A method comprising:
at a device including one or more processors, non-transitory memory, an image sensor, and a lenticular display:
capturing, using the image sensor, an image;
determining, from the image, a horizontal angle of a first user with respect to a line perpendicular to the lenticular display, a vertical angle of the first user with respect to the line perpendicular to the display, a horizontal angle of a second user with respect to the line perpendicular to the lenticular display, and a vertical angle of the second user with respect to the line perpendicular to the display; and
displaying, via the lenticular display, first content at the horizontal angle of the first user and second content, different than the first content, at the horizontal angle of the second user, wherein the first content is based on the vertical angle of the first user and the second content is based on the vertical angle of the second user.
2. The method of claim 1 , wherein the first content includes a first image and the second content includes a second image different than the first image.
3. The method of claim 1 , wherein the first content includes a first video and the second content includes a second video different than the first video.
4. The method of claim 1 , wherein the first content includes a first version of content and the second content includes a second version of the content different from the first version of the content.
5. The method of claim 4 , wherein the second version of the content is a censored version of the first version of the content.
6. The method of claim 1 , further comprising determining, based on the image, an identity of the first user and an identity of the second user, wherein the first content is based on the identity of the first user and the second content is based on the identity of the second user.
7. The method of claim 1 , wherein displaying the first content at the horizontal angle of the first user includes displaying an object or a scene at a first perspective and displaying the second content at the horizontal angle of the second user includes displaying the object or the scene at a second perspective different than the first perspective.
8. The method of claim 7 , wherein the first perspective is a first vertical perspective based on the vertical angle of the first user and the second perspective is a second vertical perspective based on the vertical angle of the second user.
9. The method of claim 7 , wherein the first perspective is a first horizontal perspective based on the horizontal angle of the first user and the second perspective is a second horizontal perspective based on the horizontal angle of the second user.
10. The method of claim 9 , wherein the first content and second content are displayed at a same vertical perspective based on the vertical angle of the first user and the vertical angle of the second user.
11. The method of claim 10 , wherein the same vertical perspective is based on an average of the vertical angle of the first user and the vertical angle of the second user.
12. A device comprising:
an image sensor;
a lenticular display;
a non-transitory memory; and
one or more processors to:
capture, using the image sensor, an image;
determine, from the image, a horizontal angle of a first user with respect to a line perpendicular to the lenticular display, a vertical angle of the first user with respect to the line perpendicular to the display, a horizontal angle of a second user with respect to the line perpendicular to the lenticular display, and a vertical angle of the second user with respect to the line perpendicular to the display; and
display, via the lenticular display, first content at the horizontal angle of the first user and second content, different than the first content, at the horizontal angle of the second user, wherein the first content is based on the vertical angle of the first user and the second content is based on the vertical angle of the second user.
13. The device of claim 12 , wherein the first content includes a first version of content and the second content includes a second version of the content different from the first version of the content.
14. The device of claim 13 , wherein the second version of the content is a censored version of the first version of the content.
15. The device of claim 12 , wherein the one or more processors are to display the first content at the horizontal angle of the first user by displaying an object or a scene at a first perspective and are to display the second content at the horizontal angle of the second user by displaying the object or the scene at a second perspective different than the first perspective.
16. The device of claim 15 , wherein the first perspective is a first vertical perspective based on the vertical angle of the first user and the second perspective is a second vertical perspective based on the vertical angle of the second user.
17. The device of claim 15 , wherein the first perspective is a first horizontal perspective based on the horizontal angle of the first user and the second perspective is a second horizontal perspective based on the horizontal angle of the second user.
18. The device of claim 17 , wherein the first content and second content are displayed at a same vertical perspective based on the vertical angle of the first user and the vertical angle of the second user.
19. The device of claim 18 , wherein the same vertical perspective is based on an average of the vertical angle of the first user and the vertical angle of the second user.
20. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with an image sensor and a lenticular display, cause the device to:
capture, using the image sensor, an image;
determine, from the image, a horizontal angle of a first user with respect to a line perpendicular to the lenticular display, a vertical angle of the first user with respect to the line perpendicular to the display, a horizontal angle of a second user with respect to the line perpendicular to the lenticular display, and a vertical angle of the second user with respect to the line perpendicular to the display; and
display, via the lenticular display, first content at the horizontal angle of the first user and second content, different than the first content, at the horizontal angle of the second user, wherein the first content is based on the vertical angle of the first user and the second content is based on the vertical angle of the second user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/509,346 US11765341B2 (en) | 2019-09-27 | 2021-10-25 | Method and device for operating a lenticular display |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962906946P | 2019-09-27 | 2019-09-27 | |
US16/922,350 US11184605B2 (en) | 2019-09-27 | 2020-07-07 | Method and device for operating a lenticular display |
US17/509,346 US11765341B2 (en) | 2019-09-27 | 2021-10-25 | Method and device for operating a lenticular display |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/922,350 Continuation US11184605B2 (en) | 2019-09-27 | 2020-07-07 | Method and device for operating a lenticular display |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220046226A1 true US20220046226A1 (en) | 2022-02-10 |
US11765341B2 US11765341B2 (en) | 2023-09-19 |
Family
ID=71729024
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/922,350 Active US11184605B2 (en) | 2019-09-27 | 2020-07-07 | Method and device for operating a lenticular display |
US17/509,346 Active 2040-09-11 US11765341B2 (en) | 2019-09-27 | 2021-10-25 | Method and device for operating a lenticular display |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/922,350 Active US11184605B2 (en) | 2019-09-27 | 2020-07-07 | Method and device for operating a lenticular display |
Country Status (5)
Country | Link |
---|---|
US (2) | US11184605B2 (en) |
EP (1) | EP4035409A1 (en) |
KR (2) | KR102503261B1 (en) |
CN (2) | CN114450970B (en) |
WO (1) | WO2021061256A1 (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060268240A1 (en) * | 2005-05-24 | 2006-11-30 | Miles Mark W | Multiple-view display for non-stereoscopic viewing |
US20120300046A1 (en) * | 2011-05-24 | 2012-11-29 | Ilya Blayvas | Method and System for Directed Light Stereo Display |
US20150149902A1 (en) * | 2013-11-26 | 2015-05-28 | At&T Intellectual Property I, Lp | Manipulation of media content to overcome user impairments |
US20160219268A1 (en) * | 2014-04-02 | 2016-07-28 | Telefonaktiebolaget L M Ericsson (Publ) | Multi-view display control |
US20160247324A1 (en) * | 2015-02-25 | 2016-08-25 | Brian Mullins | Augmented reality content creation |
US20180004000A1 (en) * | 2015-01-21 | 2018-01-04 | Tesseland Llc | Advanced refractive optics for immersive virtual reality |
US20180077384A1 (en) * | 2016-09-09 | 2018-03-15 | Google Inc. | Three-dimensional telepresence system |
US20180224661A1 (en) * | 2015-10-06 | 2018-08-09 | Fujifilm Corporation | Lenticular display |
US20190122638A1 (en) * | 2018-12-20 | 2019-04-25 | Intel Corporation | Methods and apparatus to control rendering of different content for different view angles of a display |
US10762809B1 (en) * | 2012-09-12 | 2020-09-01 | Delorean, Llc | Vehicle-mounted, motion-controlled sign |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090146915A1 (en) | 2007-12-05 | 2009-06-11 | Marathe Madhav V | Multiple view display device |
US8500284B2 (en) * | 2008-07-10 | 2013-08-06 | Real View Imaging Ltd. | Broad viewing angle displays and user interfaces |
US20120200676A1 (en) | 2011-02-08 | 2012-08-09 | Microsoft Corporation | Three-Dimensional Display with Motion Parallax |
CN104429065B (en) * | 2012-07-18 | 2017-03-15 | 皇家飞利浦有限公司 | Automatic stereo lenticular display device |
KR102143372B1 (en) * | 2013-09-30 | 2020-08-13 | 엘지디스플레이 주식회사 | Multi view display and method of fabricating the same |
US9294670B2 (en) | 2014-01-24 | 2016-03-22 | Amazon Technologies, Inc. | Lenticular image capture |
US10321126B2 (en) * | 2014-07-08 | 2019-06-11 | Zspace, Inc. | User input device camera |
KR101841719B1 (en) * | 2016-06-24 | 2018-03-23 | (주)피엑스디 | Multi-display apparatus and the operation method thereof |
WO2018027110A1 (en) * | 2016-08-05 | 2018-02-08 | University Of Rochester | Virtual window |
- 2020
- 2020-07-07 KR KR1020227014197A patent/KR102503261B1/en active IP Right Grant
- 2020-07-07 US US16/922,350 patent/US11184605B2/en active Active
- 2020-07-07 CN CN202080067776.9A patent/CN114450970B/en active Active
- 2020-07-07 CN CN202310459700.6A patent/CN116506715A/en active Pending
- 2020-07-07 WO PCT/US2020/040988 patent/WO2021061256A1/en active Application Filing
- 2020-07-07 EP EP20743554.6A patent/EP4035409A1/en active Pending
- 2020-07-07 KR KR1020237005993A patent/KR20230031986A/en active Application Filing
- 2021
- 2021-10-25 US US17/509,346 patent/US11765341B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US11184605B2 (en) | 2021-11-23 |
KR20230031986A (en) | 2023-03-07 |
WO2021061256A1 (en) | 2021-04-01 |
CN114450970A (en) | 2022-05-06 |
CN116506715A (en) | 2023-07-28 |
US11765341B2 (en) | 2023-09-19 |
KR102503261B1 (en) | 2023-02-24 |
EP4035409A1 (en) | 2022-08-03 |
CN114450970B (en) | 2023-05-23 |
US20210099692A1 (en) | 2021-04-01 |
KR20220058666A (en) | 2022-05-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9930315B2 (en) | Stereoscopic 3D camera for virtual reality experience | |
US9813693B1 (en) | Accounting for perspective effects in images | |
CN102572492B (en) | Image processing device and method | |
WO2018054267A1 (en) | Image display method and device utilized in virtual reality-based apparatus | |
US8922627B2 (en) | Image processing device, image processing method and imaging device | |
US11627390B2 (en) | Encoding method, playing method and apparatus for image stabilization of panoramic video, and method for evaluating image stabilization algorithm | |
US20230334684A1 (en) | Scene camera retargeting | |
US20140204083A1 (en) | Systems and methods for real-time distortion processing | |
GB2554925A (en) | Display of visual data with a virtual reality headset | |
US10198842B2 (en) | Method of generating a synthetic image | |
TWI613904B (en) | Stereo image generating method and electronic apparatus utilizing the method | |
US11373273B2 (en) | Method and device for combining real and virtual images | |
US11765341B2 (en) | Method and device for operating a lenticular display | |
CN114390186A (en) | Video shooting method and electronic equipment | |
CN110557552A (en) | Portable image acquisition equipment | |
TWI502271B (en) | Controlling method and electronic apparatus | |
US20240078640A1 (en) | Perspective Correction with Gravitational Smoothing | |
US10210624B2 (en) | Method for determining depth for stereoscopic reconstruction of three dimensional images | |
US11715220B1 (en) | Method and device for depth sensor power savings | |
US20240098232A1 (en) | Partial Perspective Correction with Mitigation of Vertical Disparity | |
US20240098243A1 (en) | Predictive Perspective Correction | |
US20240078743A1 (en) | Stereo Depth Markers | |
WO2023068087A1 (en) | Head-mounted display, information processing device, and information processing method | |
WO2023048940A1 (en) | Perspective correction with depth map clamping | |
CN117981293A (en) | Perspective correction with depth map truncation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |