WO2021110038A1 - 3D display device and 3D image display method - Google Patents

3D display device and 3D image display method

Info

Publication number: WO2021110038A1
Authority: WIPO (PCT)
Prior art keywords: user, eye, image, pixels, display
Application number: PCT/CN2020/133332
Other languages: English (en), French (fr)
Inventors: 刁鸿浩, 黄玲溪
Original Assignees: 北京芯海视界三维科技有限公司, 视觉技术创投私人有限公司
Application filed by 北京芯海视界三维科技有限公司 and 视觉技术创投私人有限公司
Priority to US17/781,058 (published as US20230007228A1)
Priority to EP20895613.6A (published as EP4068768A4)
Publication of WO2021110038A1

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/279: Image signal generators from 3D object models, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H04N 13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N 13/366: Image reproducers using viewer tracking
    • H04N 13/368: Image reproducers using viewer tracking for two or more viewers
    • H04N 13/383: Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H04N 13/398: Synchronisation thereof; Control thereof

Definitions

  • This application relates to the field of 3D display technology, for example, to 3D display devices and 3D image display methods.
  • 3D display technology has become a research hotspot in imaging technology because it can present lifelike visual experience to users.
  • the embodiments of the present disclosure provide a 3D display device, a 3D image display method, a computer-readable storage medium, and a computer program product to solve the technical problem of 3D display distortion.
  • a 3D display device is provided, including: a multi-viewpoint 3D display screen including a plurality of composite pixels, each of which includes a plurality of composite sub-pixels, where each composite sub-pixel includes multiple sub-pixels corresponding to the multiple viewpoints of the 3D display device; a viewing angle determining device configured to determine the user perspective of the user; and a 3D processing device configured to render, based on the user perspective and according to the depth of field information of the 3D model, the corresponding sub-pixels among the multiple composite sub-pixels.
  • the 3D processing device is configured to generate a 3D image from the depth information based on the user's perspective, and render corresponding sub-pixels according to the 3D image.
  • the 3D display device further includes: an eye positioning device configured to determine the user's eye space position; the 3D processing device is configured to determine the viewpoints of the user's eyes based on the eye space position, and to render, based on the 3D image, the sub-pixels corresponding to the viewpoints of the eyes.
  • the eye positioning device includes: an eye locator configured to take a user image of the user; an eye positioning image processor configured to determine the eye space position based on the user image; and an eye positioning data interface configured to transmit eye space position information indicating the eye space position.
  • the eye locator includes: a first camera configured to take a first image; and a second camera configured to take a second image; wherein the eye positioning image processor is configured to recognize the presence of eyes based on at least one of the first image and the second image, and to determine the eye space position based on the recognized eyes.
  • the eye locator includes: a camera configured to take an image; and a depth detector configured to obtain eye depth information of the user; wherein the eye positioning image processor is configured to recognize the presence of eyes based on the image, and to determine the eye space position based on the recognized eye position and the eye depth information.
  • the user perspective is the angle between the user and the display plane of the multi-viewpoint 3D display screen.
  • the user perspective is the angle between the user's line of sight and the display plane of the multi-viewpoint 3D display screen, where the user's line of sight is the line between the midpoint of the line of the user's eyes and the center of the multi-viewpoint 3D display screen.
  • the user perspective is: the angle between the user's line of sight and at least one of the horizontal, vertical, and depth directions of the display plane; or the angle between the user's line of sight and the projection of the user's line of sight on the display plane.
  • the 3D display device further includes: a 3D signal interface configured to receive a 3D model.
  • a 3D image display method is provided, which includes: determining the user perspective of the user; and rendering, based on the user perspective and according to the depth information of the 3D model, the corresponding sub-pixels among the composite sub-pixels of the composite pixels in the multi-viewpoint 3D display screen.
  • rendering the corresponding sub-pixels among the composite sub-pixels of the composite pixels in the multi-viewpoint 3D display screen according to the depth information of the 3D model includes: generating a 3D image from the depth information based on the user perspective, and rendering the corresponding sub-pixels according to the 3D image.
  • the 3D image display method further includes: determining the spatial position of the user's eyes; determining the viewpoint of the user's eyes based on the spatial position of the eyes; and rendering sub-pixels corresponding to the viewpoint of the eyes based on the 3D image.
  • determining the eye space position of the user includes: taking a user image of the user; determining the eye space position based on the user image; and transmitting eye space position information indicating the eye space position.
  • taking a user image of the user and determining the eye space position based on the user image includes: taking a first image; taking a second image; recognizing the presence of eyes based on at least one of the first image and the second image; and determining the eye space position based on the recognized eyes.
  • capturing a user image of the user and determining the eye space position based on the user image includes: capturing an image; acquiring the eye depth information of the user; recognizing the presence of eyes based on the image; and jointly determining the eye space position based on the recognized eye position and the eye depth information.
  • the user perspective is the angle between the user and the display plane of the multi-viewpoint 3D display screen.
  • the user perspective is the angle between the user's line of sight and the display plane of the multi-viewpoint 3D display screen, where the user's line of sight is the line between the midpoint of the line of the user's eyes and the center of the multi-viewpoint 3D display screen.
  • the user perspective is: the angle between the user's line of sight and at least one of the horizontal, vertical, and depth directions of the display plane; or the angle between the user's line of sight and the projection of the user's line of sight on the display plane.
  • the 3D image display method further includes: receiving a 3D model.
  • a 3D display device including: a processor; and a memory storing program instructions; the processor is configured to execute the method as described above when the program instructions are executed.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned 3D image display method.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the above-mentioned computer program includes program instructions.
  • when the above-mentioned program instructions are executed by a computer, the above-mentioned computer executes the above-mentioned 3D image display method.
  • the 3D display device, 3D image display method, computer readable storage medium, and computer program product provided by the embodiments of the present disclosure can achieve the following technical effects:
  • FIGS. 1A to 1C are schematic diagrams of a 3D display device according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of an eye positioning device according to an embodiment of the present disclosure;
  • FIG. 3 is a geometric relationship model of using two cameras to determine the spatial position of the eye according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of an eye positioning device according to another embodiment of the present disclosure;
  • FIG. 5 is a geometric relationship model for determining the spatial position of an eye using a camera and a depth detector according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of a user perspective according to an embodiment of the present disclosure;
  • FIG. 7 is a schematic diagram of a user perspective according to another embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of generating 3D images corresponding to different user perspectives according to an embodiment of the present disclosure;
  • FIGS. 9A to 9E are schematic diagrams of the correspondence between viewpoints and sub-pixels according to an embodiment of the present disclosure;
  • FIG. 10 is a flowchart of a display method of a 3D display device according to an embodiment of the present disclosure;
  • FIG. 11 is a schematic diagram of a 3D display device according to an embodiment of the present disclosure.
  • Za: the optical axis of the first camera; Zb: the optical axis of the second camera; 401a: the focal plane of the first camera; 401b: the focal plane of the second camera; Oa: the lens center of the first camera; Ob: the lens center of the second camera; XRa: the X-axis coordinate of the imaging of the user's right eye in the focal plane of the first camera; XRb: the X-axis coordinate of the imaging of the user's right eye in the focal plane of the second camera; XLa: the X-axis coordinate of the imaging of the user's left eye in the focal plane of the first camera; XLb: the X-axis coordinate of the imaging of the user's left eye in the focal plane of the second camera; T: the distance between the first camera and the second camera; DR: the distance between the right eye and the plane where the first and second cameras are located; DL: the distance between the left eye and the plane where the first and second cameras are located; α: the inclination angle between the line connecting the user's eyes and the plane where the first and second cameras are located.
  • a 3D display device is provided, which includes a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen), a viewing angle determining device configured to determine a user's perspective, and a 3D processing device configured to render, based on the user's perspective and according to the depth information of the 3D model or the 3D video, the corresponding sub-pixels in the composite sub-pixels in the composite pixels included in the multi-viewpoint 3D display screen.
  • the 3D processing device generates a 3D image based on the user's perspective and according to the depth information of the 3D model or the 3D video, for example, generates a 3D image corresponding to the user's perspective.
  • the correspondence between the user's perspective and the generated 3D image is similar to the situation in which a user looking at a real scene from different angles sees the scene appearance corresponding to each angle.
  • for different user perspectives, the 3D images generated from the depth information of the 3D model or the 3D video may be different.
  • a 3D image that follows the user’s perspective is generated.
  • the 3D images seen by the user at different perspectives are different, so that with the help of the multi-viewpoint 3D display screen the user can feel as if watching a real object, which improves the display effect and the user experience.
  • FIG. 1A shows a schematic diagram of a 3D display device 100 according to an embodiment of the present disclosure.
  • the 3D display device 100 includes a multi-viewpoint 3D display screen 110, a 3D processing device 130, an eye positioning device 150, a viewing angle determination device 160, a 3D signal interface 140 and a processor 120.
  • the multi-view 3D display screen 110 may include a display panel and a grating (not shown) covering the display panel.
  • the display panel may include m columns and n rows (m×n) of composite pixels 400, thereby defining a display resolution of m×n.
  • the display resolution of m ⁇ n may be, for example, a resolution above full high definition (FHD), including but not limited to: 1920 ⁇ 1080, 1920 ⁇ 1200, 2048 ⁇ 1280, 2560 ⁇ 1440, 3840 ⁇ 2160, and so on.
  • Each composite pixel includes a plurality of composite sub-pixels, and each composite sub-pixel includes i sub-pixels of the same color corresponding to i viewpoints, where i ⁇ 3.
  • for example, each composite pixel may include a red composite sub-pixel composed of i = 6 red sub-pixels R, a green composite sub-pixel 420 composed of i = 6 green sub-pixels G, and a blue composite sub-pixel 430 composed of i = 6 blue sub-pixels B.
  • each composite pixel has a square shape. Multiple composite sub-pixels in each composite pixel may be arranged in parallel to each other. The i sub-pixels in each composite sub-pixel may be arranged in rows.
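  • As an illustration of the composite pixel layout just described, the following is a minimal data-structure sketch in Python. All class and function names are hypothetical; the patent does not prescribe any software representation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CompositeSubPixel:
    """i same-color sub-pixels, one per viewpoint of the 3D display device (i >= 3)."""
    color: str                       # "R", "G" or "B"
    num_viewpoints: int              # i
    values: List[float] = field(default_factory=list)   # one value per viewpoint

    def __post_init__(self):
        if not self.values:
            self.values = [0.0] * self.num_viewpoints

@dataclass
class CompositePixel:
    """One square composite pixel: its composite sub-pixels are arranged in
    parallel, and the i sub-pixels of each composite sub-pixel form a row."""
    sub_pixels: List[CompositeSubPixel]

def make_screen(m: int, n: int, i: int) -> List[List[CompositePixel]]:
    """m columns x n rows of composite pixels, i.e. a display resolution of m x n."""
    return [[CompositePixel([CompositeSubPixel(c, i) for c in ("R", "G", "B")])
             for _ in range(m)] for _ in range(n)]

# Example: a toy screen with i = 6 viewpoints, matching the i = 6 example above.
screen = make_screen(m=4, n=3, i=6)
```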
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
  • the 3D display device 100 may also be provided with more than one 3D processing device 130, which render the sub-pixels of each composite sub-pixel of each composite pixel of the 3D display screen 110 in parallel, in series, or in a combination of series and parallel.
  • more than one 3D processing device may be allocated in other ways and process multiple rows and multiple columns of composite pixels or composite sub-pixels of the 3D display screen 110 in parallel, which falls within the scope of the embodiments of the present disclosure.
  • the 3D processing device 130 may also optionally include a buffer 131 to buffer the received 3D video image.
  • the processor is included in a computer or a smart terminal, such as a mobile terminal.
  • the processor can be used as a processor unit of a computer or an intelligent terminal.
  • the processor 120 may be disposed outside the 3D display device 100.
  • the 3D display device 100 may be a multi-view 3D display with a 3D processing device, such as a non-smart 3D TV.
  • the 3D display device includes a processor inside. Based on this, the 3D signal interface 140 is an internal interface connecting the processor 120 and the 3D processing device 130.
  • a 3D display device 100 may be, for example, a mobile terminal, and the 3D signal interface 140 may be a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface, or a Display Port interface.
  • the processor 120 of the 3D display device 100 may further include a register 121.
  • the register 121 can be configured to temporarily store instructions, data, and addresses.
  • the register 121 may be configured to receive information about the display requirements of the multi-view 3D display screen 110.
  • the 3D display device 100 may further include a codec configured to decompress and decode the compressed 3D video signal and send the decompressed 3D video signal to the 3D processing device 130 via the 3D signal interface 140.
  • the 3D display device 100 may include an eye positioning device configured to obtain/determine eye positioning data.
  • the 3D display device 100 includes an eye positioning device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
  • the eye positioning device 150 can be connected to the processor 120 and the 3D processing device 130 at the same time, so that on the one hand, the 3D processing device 130 can directly obtain eye positioning data from the eye positioning device 150, and on the other hand, the eye positioning Other information obtained by the device 150 from the processor 120 may be processed by the 3D processing device 130.
  • the eye positioning data includes eye space position information indicating the user's eye space position.
  • the eye space position information can be expressed in the form of three-dimensional coordinates, including, for example, the distance information between the user's eyes/face and the multi-viewpoint 3D display screen or the eye positioning device (that is, the depth information of the user's eyes/face), the horizontal position information of the user's eyes/face on the multi-viewpoint 3D display screen or the eye positioning device, and the vertical position information of the user's eyes/face on the multi-viewpoint 3D display screen or the eye positioning device. The eye space position can also be expressed in the form of two-dimensional coordinates containing any two of the distance information, the horizontal position information, and the vertical position information.
  • the eye positioning data may also include the viewpoint (viewpoint position) where the user's eyes (for example, both eyes) are located, the user's perspective, and the like.
  • the eye positioning device includes an eye locator configured to capture a user image (for example, a user's face image), an eye positioning image processor configured to determine the spatial position of the eye based on the captured user image, and An eye positioning data interface configured to transmit eye spatial position information, and the eye spatial position information indicates the eye spatial position.
  • a user image for example, a user's face image
  • an eye positioning image processor configured to determine the spatial position of the eye based on the captured user image
  • An eye positioning data interface configured to transmit eye spatial position information, and the eye spatial position information indicates the eye spatial position.
  • the eye locator includes a first camera configured to take a first image and a second camera configured to take a second image
  • the eye positioning image processor is configured to recognize the presence of eyes based on at least one of the first image and the second image, and to determine the eye space position based on the recognized eyes.
  • the eye positioning device 150 includes an eye locator 151, an eye positioning image processor 152, and an eye positioning data interface 153.
  • the eye locator 151 includes, for example, a first camera 151a, which is a black and white camera, and a second camera 151b, which is, for example, a black and white camera.
  • the first camera 151a is configured to capture a first image such as a black and white image
  • the second camera 151b is configured to capture a second image such as a black and white image.
  • the eye positioning device 150 may be placed in the front of the 3D display device 100, for example, in the multi-viewpoint 3D display screen 110.
  • the photographing objects of the first camera 151a and the second camera 151b may be the user's face.
  • at least one of the first camera and the second camera may be a color camera and is configured to capture color images.
  • the eye positioning data interface 153 of the eye positioning device 150 is communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can directly receive the eye positioning data.
  • the eye positioning image processor 152 of the eye positioning device 150 may be communicatively connected to or integrated with the processor 120, whereby the eye positioning data can be received from the processor 120 through the eye positioning data interface 153 Transmitted to the 3D processing device 130.
  • the eye locator 151 is also provided with an infrared emitting device 154.
  • the infrared emitting device 154 is configured to selectively emit infrared light to supplement the light when the ambient light is insufficient, for example when shooting at night, so that the first image and the second image in which the user's face and eyes can be recognized can still be captured even when the ambient light is weak.
  • the display device may be configured to, when the first camera or the second camera is working, control the infrared emitting device to turn on or adjust its intensity based on the received light sensing signal, for example when the light sensing signal is detected to be lower than a predetermined threshold.
  • the light sensing signal is received by an ambient light sensor integrated in the processing terminal or the display device. The above-mentioned operations for the infrared emitting device can also be completed by an eye positioning device or a processing terminal integrated with the eye positioning device.
  • the infrared emitting device 154 is configured to emit infrared light with a wavelength greater than or equal to 1.5 microns, that is, long-wave infrared light. Compared with short-wave infrared light, long-wave infrared light has a weaker ability to penetrate the skin, so it is less harmful to the eyes.
  • the captured first image and second image are transmitted to the eye positioning image processor 152.
  • the eye positioning image processor 152 may be configured to have a visual recognition function (such as a face recognition function), and may be configured to recognize eyes based on at least one of the first image and the second image and to determine the spatial position of the eyes based on the recognized eyes. Recognizing the eyes may include first recognizing the face based on at least one of the first image and the second image, and then recognizing the eyes through the recognized face.
  • the eye positioning image processor 152 may determine the viewpoint of the user's eye based on the spatial position of the eye. In some other embodiments, the 3D processing device 130 determines the viewpoint of the user's eyes based on the acquired spatial positions of the eyes.
  • the first camera and the second camera may be the same camera, for example, the same black and white camera, or the same color camera. In other embodiments, the first camera and the second camera may be different cameras, for example, different black and white cameras, or different color cameras. In the case where the first camera and the second camera are different cameras, in order to determine the spatial position of the eye, the first image and the second image can be calibrated or corrected.
  • At least one of the first camera and the second camera is a wide-angle camera.
  • Fig. 3 schematically shows a geometric relationship model for using two cameras to determine the spatial position of the eye.
  • the first camera and the second camera are the same camera, and therefore have the same focal length f.
  • the optical axis Za of the first camera 151a is parallel to the optical axis Zb of the second camera 151b, and the focal plane 401a of the first camera 151a and the focal plane 401b of the second camera 151b are in the same plane and perpendicular to the optical axes of the two cameras .
  • the line connecting the lens centers Oa and Ob of the two cameras is parallel to the focal planes of the two cameras.
  • taking the direction of the line connecting the lens centers Oa and Ob of the two cameras as the X-axis direction and the optical axis direction of the two cameras as the Z-axis direction, FIG. 3 shows the geometric relationship model in the XZ plane.
  • the X-axis direction is also a horizontal direction
  • the Y-axis direction is also a vertical direction
  • the Z-axis direction is a direction perpendicular to the XY plane (also referred to as a depth direction).
  • the lens center Oa of the first camera 151a is taken as the origin
  • the lens center Ob of the second camera 151b is taken as the origin
  • R and L represent the user's right eye and left eye
  • XRa and XRb are the X-axis coordinates of the imaging of the user's right eye R in the focal planes 401a and 401b of the two cameras, respectively.
  • XLa and XLb are the X-axis coordinates of the imaging of the user's left eye L in the focal planes 401a and 401b of the two cameras, respectively.
  • the distance T between the two cameras and their focal length f are also known. According to the geometric relationship of similar triangles, the distances DR and DL between the plane where the two cameras described above are located and the right eye R and the left eye L, respectively, can be obtained, as illustrated in the sketch below.
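  • The similar-triangles expressions themselves are not reproduced in the text above. The sketch below shows the standard rectified-stereo form of that relationship (depth = focal length × baseline / disparity) using the symbols of FIG. 3; it is an illustration under the stated parallel-optical-axis assumption, not necessarily the patent's exact formulation, and all numeric values are hypothetical.

```python
import math

def eye_depth(x_a: float, x_b: float, T: float, f: float) -> float:
    """Depth of one eye from the plane containing the two lens centers Oa and Ob.

    x_a, x_b: X-axis coordinates of the eye's imaging in focal planes 401a and 401b
              (same units as f).
    T:        distance between the two cameras (baseline Oa-Ob).
    f:        common focal length of the two cameras.
    """
    disparity = abs(x_a - x_b)
    if disparity == 0:
        raise ValueError("zero disparity: eye effectively at infinity")
    return f * T / disparity          # similar triangles: depth / T = f / disparity

# Hypothetical measurements (sensor coordinates in mm, baseline in mm):
DR = eye_depth(x_a=2.10, x_b=1.55, T=60.0, f=3.5)    # right eye depth
DL = eye_depth(x_a=-1.40, x_b=-1.90, T=60.0, f=3.5)  # left eye depth

# If DR and DL differ, the line joining the eyes is inclined to the camera plane;
# one possible way to express the inclination angle alpha (P_x is an assumed
# horizontal eye separation):
P_x = 62.0
alpha = math.degrees(math.atan2(DR - DL, P_x))
print(f"DR = {DR:.0f} mm, DL = {DL:.0f} mm, alpha = {alpha:.1f} deg")
```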
  • the line connecting the user's eyes (or the user's face) and the plane where the two cameras described above are located may be inclined to each other, and the inclination angle is α.
  • when the user's face is parallel to the plane where the two cameras are located, the inclination angle α is zero.
  • the 3D display device 100 may be a computer or a smart terminal, such as a mobile terminal. However, it is conceivable that, in some embodiments, the 3D image display device 100 may also be a non-intelligent display terminal, such as a non-intelligent 3D TV.
  • the eye positioning device 150 including the two cameras 151a and 151b is placed at the front of the multi-viewpoint 3D display screen, or in other words, is substantially located in the same plane as the display plane of the multi-viewpoint 3D display screen. Therefore, the distances DR and DL exemplarily obtained in the embodiment shown in FIG. 3 are the distances between the user's right eye R and left eye L and the multi-viewpoint 3D display screen (that is, the depths of the user's right and left eyes), and the inclination angle α between the user's face and the plane where the two cameras are located is the inclination angle of the user's face relative to the multi-viewpoint 3D display screen.
  • the eye positioning data interface 153 is configured to transmit the tilt angle or parallelism of the user's eyes relative to the eye positioning device 150 or the multi-view 3D display screen 110. This can help render 3D images more accurately.
  • the eye spatial position information DR, DL, ⁇ , and P obtained as an example above are transmitted to the 3D processing device 130 through the eye positioning data interface 153.
  • the 3D processing device 130 determines the viewpoints where the user's eyes are located based on the received eye spatial position information.
  • the 3D processing device 130 may pre-store a correspondence table between the spatial position of the eye and the viewpoint of the 3D display device. After obtaining the spatial position information of the eyes, the viewpoint of the user's eyes can be determined based on the correspondence table.
  • the correspondence table may also be received/read by the 3D processing device from other components (such as processors) with storage functions.
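  • A correspondence table of this kind could be as simple as a mapping from a quantized horizontal eye position to a viewpoint index. The sketch below is a hypothetical illustration of such a pre-stored table and lookup; the patent does not specify the table's format or values.

```python
from bisect import bisect_right

# Hypothetical calibration data: horizontal eye positions (mm, relative to the
# screen center at a nominal viewing distance) that separate the 8 viewpoints
# V1..V8 of the example device; 7 boundaries delimit 8 zones.
VIEWPOINT_BOUNDARIES_MM = [-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0]

def viewpoint_from_eye_position(x_mm: float) -> int:
    """Look up the 1-based viewpoint index for a horizontal eye position."""
    return bisect_right(VIEWPOINT_BOUNDARIES_MM, x_mm) + 1

# Example: the left and right eye positions map to two (possibly different) viewpoints.
left_viewpoint = viewpoint_from_eye_position(-50.0)
right_viewpoint = viewpoint_from_eye_position(20.0)
print(left_viewpoint, right_viewpoint)
```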
  • the eye spatial position information DR, DL, α, and P obtained as an example above can also be directly transmitted to the processor of the 3D display device 100, and the 3D processing device 130 receives/reads the eye spatial position information from the processor through the eye positioning data interface 153.
  • the first camera 151a is configured to capture a first image sequence including a plurality of first images arranged in time
  • the second camera 151b is configured to capture a second image sequence including a plurality of second images arranged in time.
  • the eye positioning image processor 152 may include a synchronizer 155.
  • the synchronizer 155 is configured to determine the first image and the second image that are time-synchronized in the first image sequence and the second image sequence. The first image and the second image determined to be time synchronized are used for eye recognition and determination of the spatial position of the eye.
  • the eye positioning image processor 152 includes a buffer 156 and a comparator 157.
  • the buffer 156 is configured to buffer the first image sequence and the second image sequence.
  • the comparator 157 is configured to compare a plurality of first images and second images in the first image sequence and the second image sequence. By comparison, it can be judged whether the spatial position of the eye has changed, and it can also be judged whether the eye is still in the viewing range, etc. Determining whether the eyes are still in the viewing range can also be performed by the 3D processing device.
  • the eye positioning image processor 152 is configured such that, when the presence of eyes is not recognized in the current first image and second image of the first image sequence and the second image sequence but is recognized in a previous or subsequent first image and second image, the eye spatial position information determined based on that previous or subsequent first image and second image is used as the current eye spatial position information. This situation may occur, for example, when the user briefly turns his head; in this case, the user's face and eyes may not be recognized for a short time. A sketch of this fallback behaviour follows below.
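  • The buffering and fallback behaviour described above can be sketched as follows. The helper and its parameters are hypothetical and assume a detector that returns None when no eyes are found in the time-synchronized image pair.

```python
from collections import deque
from typing import Optional, Tuple

EyePosition = Tuple[float, float, float]   # e.g. (x, y, depth) of the eyes' midpoint

class EyePositionTracker:
    """Keeps recent eye positions and reuses the last one when detection briefly fails."""

    def __init__(self, max_missed_frames: int = 15):
        self.history: deque = deque(maxlen=60)   # buffered recent positions
        self.missed = 0
        self.max_missed = max_missed_frames      # roughly 0.5 s at 30 frames/s

    def update(self, detected: Optional[EyePosition]) -> Optional[EyePosition]:
        if detected is not None:
            self.history.append(detected)
            self.missed = 0
            return detected
        # Eyes not recognized in the current image pair (e.g. the user briefly
        # turned the head): fall back to the previously determined position.
        self.missed += 1
        if self.history and self.missed <= self.max_missed:
            return self.history[-1]
        return None   # treat the user as outside the viewing range

tracker = EyePositionTracker()
print(tracker.update((10.0, 2.0, 550.0)))  # detection succeeded
print(tracker.update(None))                # falls back to the previous position
```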
  • the first camera and the second camera are configured to capture the first image sequence and the second image sequence at a frequency of 24 frames per second or more, for example at a frequency of 30 frames per second or at a frequency of 60 frames per second.
  • the first camera and the second camera are configured to shoot at the same frequency as the refresh frequency of the multi-view 3D display screen of the 3D display device.
  • the eye locator includes at least one camera configured to take at least one image and a depth detector configured to obtain the eye depth information of the user, and the eye positioning image processor is configured to recognize the presence of eyes based on the at least one captured image, and to determine the eye spatial position based on the recognized eyes and the eye depth information.
  • Fig. 4 shows an example in which the eye locator in the eye locating device is configured with a single camera and a depth detector.
  • the eye positioning device 150 includes an eye locator 151, an eye positioning image processor 152, and an eye positioning data interface 153.
  • the eye locator 151 includes a camera 155 such as a black-and-white camera and a depth detector 158.
  • the camera 155 is configured to capture at least one image, for example, a black and white image
  • the depth detector 158 is configured to acquire eye depth information of the user.
  • the eye positioning device 150 may be placed in the front of the 3D display device 100, for example, in the multi-viewpoint 3D display screen 110.
  • the camera 155 photographs the user's face, and recognizes the face or eyes based on the captured image.
  • the depth detector obtains the depth information of the eyes; it may also obtain the depth information of the face and then obtain the depth information of the eyes based on the face depth information.
  • the camera 155 may be a color camera, and is configured to capture color images.
  • two or more cameras 155 and depth detector 158 can also be used to determine the spatial position of the eye.
  • the eye positioning data interface 153 of the eye positioning device 150 is communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can directly receive the eye positioning data.
  • the eye positioning image processor 152 may be communicatively connected to or integrated with the processor 120 of the 3D display device 100, whereby the eye positioning data may be transmitted from the processor 120 through the eye positioning data interface 153 To the 3D processing device 130.
  • the eye locator 151 is also provided with an infrared emitting device 154.
  • the infrared emitting device 154 is configured to selectively emit infrared light to supplement the light when the ambient light is insufficient, for example when shooting at night, so that an image in which the user's face and eyes can be recognized can still be captured even when the ambient light is weak.
  • the display device may be configured to, when the camera is working, control the infrared emitting device to turn on or adjust its intensity based on the received light sensing signal, for example when the light sensing signal is detected to be lower than a predetermined threshold.
  • the light sensing signal is received by an ambient light sensor integrated in the processing terminal or the display device. The above-mentioned operations for the infrared emitting device can also be completed by an eye positioning device or a processing terminal integrated with the eye positioning device.
  • the infrared emitting device 154 is configured to emit infrared light with a wavelength greater than or equal to 1.5 microns, that is, long-wave infrared light. Compared with short-wave infrared light, long-wave infrared light has a weaker ability to penetrate the skin, so it is less harmful to the eyes.
  • the captured image is transmitted to the eye positioning image processor 152.
  • the eye positioning image processor can be configured to have a visual recognition function (such as a face recognition function), and can be configured to recognize the face and eyes based on the captured image, to determine the eye spatial position based on the recognized eye position and the user's eye depth information, and to determine the viewpoint of the user's eyes based on the eye spatial position.
  • the 3D processing device determines the viewpoint of the user's eyes based on the acquired spatial positions of the eyes.
  • the camera is a wide-angle camera.
  • the depth detector 158 is configured as a structured light camera or a TOF camera.
  • Fig. 5 schematically shows a geometric relationship model for determining the spatial position of an eye using a camera and a depth detector.
  • the camera has a focal length f, an optical axis Z, and a focal plane FP.
  • R and L represent the user's right eye and left eye, respectively
  • XR and XL represent the X-axis coordinates of the imaging of the user's right eye R and left eye L, respectively, in the focal plane FP of the camera.
  • through the image captured by the camera 155 that includes the left and right eyes of the user, the X-axis (horizontal direction) coordinates and the Y-axis (vertical direction) coordinates of the imaging of the left and right eyes in the focal plane FP of the camera 155 can be known. As shown in FIG. 5, taking the lens center O of the camera 155 as the origin, the X axis and the Y axis (not shown) perpendicular to the X axis form the camera plane MCP, which is parallel to the focal plane FP.
  • the optical axis direction Z of the camera 155 is also the depth direction. That is, in the XZ plane shown in FIG. 5, the X-axis coordinates XR and XL of the imaging of the right eye and the left eye in the focal plane FP are known.
  • the focal length f of the camera 155 is known.
  • the inclination angles βR and βL, with respect to the X axis, of the projections in the XZ plane of the lines connecting the right eye and the left eye with the lens center O can be calculated.
  • similarly, the Y-axis coordinates of the imaging of the left eye and the right eye in the focal plane FP are known, and combined with the known focal length f, the corresponding inclination angles, with respect to the Y axis, of the projections in the YZ plane of the lines connecting the left and right eyes with the lens center O can be calculated.
  • through the image taken by the camera 155 that includes the left and right eyes of the user, together with the depth information of the left and right eyes obtained by the depth detector 158, the spatial positions of the left and right eyes relative to the camera 155 can be known.
  • the angle ⁇ between the projection of the line connecting the left eye and the right eye in the XZ plane and the X axis can be calculated.
  • the angle between the projection of the line connecting the left eye and the right eye in the YZ plane and the Y axis can be calculated.
  • the distances DR and DL of the user's right eye R and left eye L relative to the camera plane MCP / the multi-viewpoint 3D display screen can be known. Based on this, the angle θ between the X axis and the projection, in the XZ plane, of the line connecting the user's eyes, as well as the interpupillary distance P, can be obtained, as sketched below.
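  • The closed-form expressions for θ and P are not reproduced above. One standard way to recover these quantities from the image coordinates, the focal length f, and the depths DR and DL returned by the depth detector is sketched below under the pinhole-camera assumption; function names and numeric values are hypothetical, and this is not necessarily the patent's exact formulation.

```python
import math

def lateral_position(x_img: float, depth: float, f: float) -> float:
    """Pinhole model: X-axis position of an eye in camera coordinates.

    x_img: X coordinate of the eye's imaging in the focal plane FP.
    depth: distance of the eye from the camera plane MCP (from the depth detector).
    f:     focal length of the camera 155.
    """
    return depth * x_img / f      # equivalently depth * tan(beta), with beta = atan(x_img / f)

def tilt_and_pupil_distance(xr_img, xl_img, DR, DL, f):
    """Angle theta of the eye-to-eye line projected into the XZ plane (relative to
    the X axis) and the XZ-projected interpupillary distance P."""
    xr = lateral_position(xr_img, DR, f)   # right eye X in camera coordinates
    xl = lateral_position(xl_img, DL, f)   # left eye X in camera coordinates
    theta = math.degrees(math.atan2(DR - DL, xr - xl))
    P = math.hypot(xr - xl, DR - DL)
    return theta, P

# Hypothetical values: image coordinates in mm on the sensor, depths in mm.
theta, P = tilt_and_pupil_distance(xr_img=0.20, xl_img=-0.21, DR=560.0, DL=540.0, f=3.5)
print(f"theta = {theta:.1f} deg, P = {P:.0f} mm")
```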
  • when the distances DR and DL are not equal and the included angle θ is not zero, it can be considered that the user faces the display plane of the multi-viewpoint 3D display screen at a certain inclination angle.
  • when the distances DR and DL are equal and the included angle θ is zero, it can be considered that the user squarely faces the display plane of the multi-viewpoint 3D display screen.
  • a threshold may be set for the included angle θ, and when the included angle θ does not exceed the threshold, it can be considered that the user squarely faces the display plane of the multi-viewpoint 3D display screen.
  • the user's perspective can be obtained based on the recognized eyes or the determined eye spatial position, and based on the user's perspective, a 3D image corresponding to the user's perspective can be generated from a 3D model or a 3D video including depth information, so that the 3D effect displayed according to the 3D image follows the user, and the user gets the feeling of viewing a real object or scene at the corresponding angle.
  • the user's perspective is the angle of the user relative to the camera.
  • the user's perspective may be the angle, with respect to the camera coordinate system, of the line connecting the user's eye (monocular) and the lens center O of the camera.
  • the included angle is, for example, the included angle θX between the line and the X axis (lateral) in the camera coordinate system, or the included angle θY between the line and the Y axis (vertical) in the camera coordinate system, or is expressed as θ(X, Y).
  • the included angle is, for example, the included angle between the projection of the line in the XY plane of the camera coordinate system and the line.
  • the included angle is, for example, the included angle θX between the projection of the line in the XY plane of the camera coordinate system and the X axis, or the included angle θY between that projection and the Y axis, or is expressed as θ(X, Y).
  • the user's perspective may be the angle, with respect to the camera coordinate system, of the line between the midpoint of the line connecting the user's eyes and the lens center O of the camera (i.e., the user's line of sight).
  • the included angle is, for example, the included angle θX between the user's line of sight and the X axis (lateral) in the camera coordinate system, or the included angle θY between the user's line of sight and the Y axis (vertical) in the camera coordinate system, or is expressed as θ(X, Y).
  • the included angle is, for example, the included angle between the projection of the user's line of sight in the XY plane of the camera coordinate system and the user's line of sight.
  • the included angle is, for example, the included angle θX between the projection of the user's line of sight in the XY plane of the camera coordinate system and the X axis (lateral), or the included angle θY between that projection and the Y axis (vertical), or is expressed as θ(X, Y).
  • the user's perspective may be the angle of the line connecting the user's eyes with respect to the camera coordinate system.
  • the included angle is, for example, the included angle θX between the binocular line and the X axis in the camera coordinate system, or the included angle θY between the binocular line and the Y axis in the camera coordinate system, or is expressed as θ(X, Y).
  • the included angle is, for example, the included angle between the projection of the line connecting the two eyes in the XY plane of the camera coordinate system and the line.
  • the included angle is, for example, the included angle θX between the projection of the binocular line in the XY plane of the camera coordinate system and the X axis, or the included angle θY between that projection and the Y axis, or is expressed as θ(X, Y).
  • the user's perspective may be the angle of the plane where the user's face is located with respect to the camera coordinate system.
  • the included angle is, for example, the included angle between the plane where the face is located and the XY plane in the camera coordinate system.
  • the plane where the face is located can be determined by extracting multiple facial features, and the facial features can be, for example, the forehead, eyes, ears, corners of the mouth, and chin.
  • the user's perspective may be the angle of the user with respect to the display plane of the multi-view 3D display screen or the multi-view 3D display screen.
  • the coordinate system of the multi-viewpoint 3D display screen or the display plane is defined herein, where the center o of the multi-viewpoint 3D display screen or the display plane is the origin, the horizontal (lateral) straight line is the x axis, the vertical straight line is the y axis, and the straight line perpendicular to the xy plane is the z axis (depth direction).
  • the user's perspective may be the angle, relative to the coordinate system of the multi-viewpoint 3D display screen or the display plane, of the line connecting the user's eye (monocular) and the center o of the multi-viewpoint 3D display screen or display plane.
  • the included angle is, for example, the included angle ⁇ x between the line and the x-axis in the coordinate system, or the included angle ⁇ y between the line and the y-axis in the coordinate system, or represented by ⁇ (x,y).
  • the included angle is, for example, the included angle between the projection of the line in the xy plane of the coordinate system and the line.
  • the included angle is, for example, the included angle θx between the projection of the line in the xy plane of the coordinate system and the x axis, or the included angle θy between that projection and the y axis, or is expressed as θ(x, y).
  • the user's perspective may be the angle, relative to the coordinate system of the multi-viewpoint 3D display screen or the display plane, of the line between the midpoint of the line connecting the user's eyes and the center o of the multi-viewpoint 3D display screen or display plane (i.e., the user's line of sight).
  • the included angle is, for example, the included angle θx between the user's line of sight and the x axis in the coordinate system, or the included angle θy between the user's line of sight and the y axis in the coordinate system, or is expressed as θ(x, y). In the figure, R indicates the user's right eye and L indicates the user's left eye.
  • the included angle is, for example, the included angle ⁇ k between the projection k of the user's line of sight in the xy plane of the coordinate system and the user's line of sight.
  • the included angle is, for example, the included angle θx between the projection of the user's line of sight in the xy plane of the coordinate system and the x axis, or the included angle θy between that projection and the y axis, or is expressed as θ(x, y).
  • the user's perspective may be the angle of the line connecting the user's eyes relative to the coordinate system of the multi-viewpoint 3D display screen or the display plane.
  • the included angle is, for example, the included angle ⁇ x between the line and the x-axis in the coordinate system, or the included angle ⁇ y between the line and the y-axis in the coordinate system, or represented by ⁇ (x,y).
  • the included angle is, for example, the included angle between the projection of the line in the xy plane of the coordinate system and the line.
  • the included angle is, for example, the included angle θx between the projection of the line in the xy plane of the coordinate system and the x axis, or the included angle θy between that projection and the y axis, or is expressed as θ(x, y).
  • the user's perspective may be the angle of the plane where the user's face is located with respect to the coordinate system of the multi-viewpoint 3D display screen or the display plane.
  • the included angle is, for example, the included angle between the plane where the face is located and the xy plane in the coordinate system.
  • the plane where the face is located can be determined by extracting multiple facial features, and the facial features can be, for example, the forehead, eyes, ears, corners of the mouth, and chin.
  • the camera is placed in front of the multi-view 3D display screen.
  • the camera coordinate system can be regarded as the coordinate system of a multi-view 3D display screen or a display plane.
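  • As a concrete illustration of one of the conventions listed above (the line of sight from the midpoint of the eyes to the screen center o, expressed as angles with the x and y axes of the screen coordinate system), the following is a hedged sketch with hypothetical names and values:

```python
import math
from typing import Tuple

def user_view_angles(eye_midpoint: Tuple[float, float, float]) -> Tuple[float, float]:
    """Angles of the user's line of sight in the screen coordinate system.

    eye_midpoint: (x, y, z) of the midpoint between the user's eyes, with the
                  screen center o as origin, x horizontal, y vertical and z the
                  depth direction.
    Returns (theta_x, theta_y) in degrees, i.e. one possible form of theta(x, y).
    """
    x, y, z = eye_midpoint
    norm = math.sqrt(x * x + y * y + z * z)
    theta_x = math.degrees(math.acos(x / norm))   # angle between the line of sight and the x axis
    theta_y = math.degrees(math.acos(y / norm))   # angle between the line of sight and the y axis
    return theta_x, theta_y

# Hypothetical midpoint of the eyes 600 mm in front of the screen, slightly to the
# right of and above the screen center:
print(user_view_angles((80.0, 40.0, 600.0)))
```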
  • the 3D display device may be provided with a viewing angle determining device.
  • the viewing angle determination device may be software, such as a computing module, program instructions, etc., or may be hardware.
  • the viewing angle determination device may be integrated in the 3D processing device, or it may be integrated in the eye positioning device, and it may also send user viewing angle data to the 3D processing device.
  • the viewing angle determination device 160 is in communication connection with the 3D processing device 130.
  • the 3D processing device can receive the user perspective data, generate a 3D image corresponding to the user perspective based on the user perspective data, and render, based on the generated 3D image, the sub-pixels in the composite sub-pixels that are associated with the viewpoints of the user's eyes (for example, both eyes) determined from the eye positioning data.
  • the 3D processing device may receive the spatial position information of the eye determined by the eye positioning device 150 and the user's perspective data determined by the viewing angle determination device 160.
  • the viewing angle determination device 160 may be integrated in the eye positioning device 150, for example integrated in the eye positioning image processor 152, and the eye positioning device 150 is communicatively connected with the 3D processing device, sending eye positioning data including the user perspective data and the eye spatial position information to the 3D processing device.
  • the viewing angle determination device may be integrated in the 3D processing device, and the 3D processing device receives the eye spatial position information and determines the user's viewing angle data based on the eye spatial position information.
  • the eye positioning device is respectively communicatively connected with the 3D processing device and the viewing angle determining device, and sends eye spatial position information to both, and the viewing angle determining device determines the user's viewing angle data based on the eye spatial position information and sends it to 3D processing device.
  • the 3D processing device After the 3D processing device receives or determines the user's perspective data, it can generate 3D images that match the perspective from the received 3D model or 3D video including depth information based on the user's perspective data, so that it can provide information to users in different user perspectives.
  • the user presents 3D images with different depth of field information and rendered images, so that the user obtains a visual experience similar to viewing real objects from different angles.
  • Fig. 8 schematically shows different 3D images generated based on the same 3D model for different user perspectives.
  • the 3D processing device receives the 3D model 600 with depth information, and also receives or confirms multiple different user perspectives.
  • for each user perspective, the 3D processing device generates a different 3D image from the 3D model 600, such as the 3D images 601 and 602.
  • R represents the user's right eye
  • L represents the user's left eye.
  • the sub-pixels corresponding to the corresponding viewpoints are respectively rendered according to the different 3D images 601 and 602 generated from the depth information for different user perspectives, where the corresponding viewpoints refer to the viewpoints of the user's eyes determined from the eye positioning data.
  • in this way, the obtained 3D display effect follows the user's perspective. Depending on how the user's perspective changes, this follow-up effect can be, for example, following in the horizontal direction, following in the vertical direction, following in the depth direction, or following components in the horizontal, vertical, and depth directions.
  • Multiple different user perspectives can be generated based on multiple users, or based on the movement or actions of the same user.
  • the user's perspective is detected and determined in real time.
  • the change of the user's perspective is detected and determined in real time, and when the change of the user's perspective is less than a predetermined threshold, the 3D image is generated based on the user perspective before the change. This situation may occur, for example, when the user briefly shakes his head or makes small posture adjustments within a small range, such as adjusting posture on a fixed seat. At this time, the user perspective before the change is still used as the current user perspective, and the 3D image is generated from the depth information corresponding to the current user perspective, as sketched below.
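  • The thresholding behaviour just described can be sketched like this; the class, the threshold value, and the scalar representation of the perspective are all assumptions made for illustration.

```python
class PerspectiveFilter:
    """Keeps the previous user perspective when the detected change is below a threshold."""

    def __init__(self, threshold_deg: float = 3.0):   # threshold value is assumed
        self.threshold = threshold_deg
        self.current = None                           # last accepted perspective, degrees

    def update(self, measured_deg: float) -> float:
        if self.current is None or abs(measured_deg - self.current) >= self.threshold:
            self.current = measured_deg   # real change: a new 3D image would be generated
        return self.current               # small jitter: keep the previous perspective

f = PerspectiveFilter()
print(f.update(30.0))   # accepted as the current perspective
print(f.update(31.5))   # below the threshold, stays at 30.0
print(f.update(36.0))   # above the threshold, updates to 36.0
```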
  • the viewpoint of the user's eyes may be determined based on the identified eyes or the determined spatial positions of the eyes.
  • the correspondence between the eye spatial position information and the viewpoint may be stored in the processor in the form of a correspondence table and received by the 3D processing device.
  • the correspondence between the eye spatial position information and the viewpoint may be stored in the 3D processing device in the form of a correspondence table.
  • a 3D display device can have multiple viewpoints.
  • the user's eyes can see the display of the corresponding sub-pixel in the composite sub-pixel of each composite pixel in the multi-view 3D display screen at each viewpoint position (spatial position).
  • the two different images seen by the user's eyes at different viewpoints form a parallax, and a 3D image is synthesized in the brain.
  • the 3D processing device may render the corresponding sub-pixel in each composite sub-pixel.
  • the correspondence between viewpoints and sub-pixels may be stored in the processor in the form of a correspondence table and received by the 3D processing device.
  • the correspondence between viewpoints and sub-pixels may be stored in the 3D processing device in the form of a correspondence table.
  • based on the generated 3D image, the processor or the 3D processing device generates two parallax images, such as a left-eye parallax image and a right-eye parallax image.
  • the generated 3D image is used as one of the two parallax images, for example as one of the left-eye parallax image and the right-eye parallax image, and the other of the two parallax images, for example the other of the left-eye parallax image and the right-eye parallax image, is generated based on the 3D image.
  • based on one of the two images, the 3D processing device renders at least one sub-pixel in each composite sub-pixel according to the determined viewpoint position of one of the user's eyes; and based on the other of the two images, it renders at least another sub-pixel in each composite sub-pixel according to the determined viewpoint position of the other eye.
  • the 3D display device has 8 viewpoints V1-V8.
  • Each composite pixel 500 in the multi-viewpoint 3D display screen of the 3D display device is composed of three composite sub-pixels 510, 520, and 530.
  • Each composite sub-pixel is composed of 8 sub-pixels of the same color corresponding to 8 viewpoints.
  • the composite sub-pixel 510 is a red composite sub-pixel composed of 8 red sub-pixels R
  • the composite sub-pixel 520 is a green composite sub-pixel composed of 8 green sub-pixels G
  • the composite sub-pixel 530 is a blue composite sub-pixel composed of 8 blue sub-pixels B.
  • Multiple composite pixels are arranged in an array in a multi-view 3D display screen. For clarity, only one composite pixel 500 in the multi-view 3D display screen is shown in the figure.
  • the structure of other composite pixels and the rendering of sub-pixels can refer to the description of the illustrated composite pixels.
  • the 3D processing device may render the corresponding sub-pixels in the composite sub-pixels according to the 3D image generated from the depth information of the 3D model or the 3D video corresponding to the user's perspective.
  • the user's left eye is at the viewpoint V2, and the right eye is at the viewpoint V5.
  • based on the 3D image, the left-eye and right-eye parallax images corresponding to the two viewpoints V2 and V5 are generated, and the sub-pixels of the composite sub-pixels 510, 520, and 530 that correspond to the two viewpoints V2 and V5 are rendered, as sketched below.
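  • The example just described (left eye at viewpoint V2, right eye at viewpoint V5) can be sketched as follows, using a simplified stand-in for the composite pixel structure; the representation and values are hypothetical, not the patent's implementation.

```python
from typing import List, Tuple

RGB = Tuple[float, float, float]
# screen[row][col][color][viewpoint] -> sub-pixel value: a simplified stand-in for
# composite pixels made of R, G and B composite sub-pixels with i sub-pixels each.
Screen = List[List[List[List[float]]]]

def blank_screen(m: int, n: int, i: int) -> Screen:
    return [[[[0.0] * i for _ in range(3)] for _ in range(m)] for _ in range(n)]

def render_parallax_pair(screen: Screen,
                         left_img: List[List[RGB]], right_img: List[List[RGB]],
                         left_vp: int, right_vp: int) -> None:
    """Write the left-eye parallax image into the sub-pixels of the left eye's
    viewpoint and the right-eye parallax image into those of the right eye's
    viewpoint; sub-pixels of all other viewpoints are left untouched."""
    for row, (l_row, r_row) in enumerate(zip(left_img, right_img)):
        for col, (l_rgb, r_rgb) in enumerate(zip(l_row, r_row)):
            for c in range(3):                       # R, G, B composite sub-pixels
                screen[row][col][c][left_vp] = l_rgb[c]
                screen[row][col][c][right_vp] = r_rgb[c]

# Toy 2 x 2 example with 8 viewpoints; eyes at V2 and V5 (zero-based indices 1 and 4).
scr = blank_screen(m=2, n=2, i=8)
left = [[(1.0, 0.0, 0.0)] * 2 for _ in range(2)]    # left-eye parallax image
right = [[(0.0, 0.0, 1.0)] * 2 for _ in range(2)]   # right-eye parallax image
render_parallax_pair(scr, left, right, left_vp=1, right_vp=4)
```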
  • the 3D processing device may, according to the 3D image generated from the depth information of the 3D model or the 3D video corresponding to the user's perspective, render the sub-pixels in the composite sub-pixels that correspond to the two viewpoints, and may also render the sub-pixels corresponding to the viewpoints adjacent to these two viewpoints.
  • the user's left eye is at the viewpoint V2
  • the right eye is at the viewpoint V6.
  • the left-eye and right-eye parallax images corresponding to the two viewpoints V2 and V6 are generated based on the 3D image, the sub-pixels of the composite sub-pixels 510, 520, and 530 that correspond to the two viewpoints V2 and V6 are rendered, and the sub-pixels corresponding to the viewpoints adjacent on both sides of V2 and V6 are also rendered.
  • alternatively, the sub-pixels corresponding to the viewpoints adjacent on only one side of each of the viewpoints V2 and V6 may be rendered at the same time.
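  • One way to picture these neighbour-rendering options is a helper that expands each detected viewpoint into the set of viewpoints actually rendered (a sketch with assumed names; the device may choose neighbours differently):
```python
def viewpoints_to_render(eye_viewpoint: int, num_viewpoints: int = 8,
                         neighbours: str = "both") -> set[int]:
    """Return the viewpoint indices (0-based) whose sub-pixels are rendered
    for one eye: the eye's own viewpoint plus, optionally, adjacent ones."""
    selected = {eye_viewpoint}
    if neighbours in ("both", "left") and eye_viewpoint - 1 >= 0:
        selected.add(eye_viewpoint - 1)
    if neighbours in ("both", "right") and eye_viewpoint + 1 < num_viewpoints:
        selected.add(eye_viewpoint + 1)
    return selected

# Left eye at V2 (index 1), right eye at V6 (index 5), neighbours on both sides:
print(viewpoints_to_render(1))  # {0, 1, 2} -> V1, V2, V3
print(viewpoints_to_render(5))  # {4, 5, 6} -> V5, V6, V7
```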
  • when it is determined, based on the eye spatial position information, that each of the user's eyes is located between two viewpoints, the 3D processing device may, according to the 3D image corresponding to the user's perspective generated from the depth information of the 3D model or the 3D video, render the sub-pixels in the composite sub-pixels corresponding to these four viewpoints.
  • the user's left eye is between viewpoints V2 and V3
  • the right eye is between viewpoints V5 and V6
  • the left-eye and right-eye parallax images corresponding to the viewpoints V2, V3 and V5, V6 are generated based on the 3D image, and the sub-pixels of the composite sub-pixels 510, 520, and 530 that correspond to the viewpoints V2, V3, V5, and V6 are rendered.
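  • The between-viewpoints case can be sketched as mapping a fractional eye position, expressed in viewpoint units, to the one or two viewpoints whose sub-pixels are rendered (the convention that position 1.4 lies between V2 and V3 is an assumption):
```python
def occupied_viewpoints(eye_position: float, num_viewpoints: int = 8) -> set[int]:
    """Map a (possibly fractional) eye position expressed in viewpoint units
    to the viewpoint indices whose sub-pixels should be rendered."""
    lower = int(eye_position)
    upper = min(lower + 1, num_viewpoints - 1)
    if abs(eye_position - lower) < 1e-6:        # exactly on a viewpoint
        return {lower}
    return {lower, upper}                        # between two viewpoints

# Left eye between V2 and V3 (indices 1 and 2), right eye between V5 and V6 (4 and 5):
print(occupied_viewpoints(1.4))  # {1, 2}
print(occupied_viewpoints(4.6))  # {4, 5}
```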
  • when it is determined, based on the eye spatial position information, that the viewpoint position of at least one of the user's eyes has changed, the 3D processing device may, according to the 3D image corresponding to the user's perspective generated from the depth information of the 3D model or the 3D video, switch from rendering the sub-pixels in the composite sub-pixels corresponding to the viewpoint position before the change to rendering the sub-pixels corresponding to the viewpoint position after the change.
  • the user's left eye moves from viewpoint V1 to viewpoint V3, and the right eye moves from viewpoint V5 to viewpoint V7.
  • the rendered sub-pixels of the composite sub-pixels 510, 520, and 530 are adjusted accordingly to adapt to the changed viewpoint positions.
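  • The switch can be pictured as clearing the sub-pixel of the old viewpoint and lighting the sub-pixel of the new one; the sketch below models one composite sub-pixel as a per-viewpoint list (names and values are assumptions):
```python
def switch_viewpoint(values: list[int], old_vp: int, new_vp: int, value: int) -> None:
    """values: per-viewpoint intensities of one composite sub-pixel.
    Stop rendering the sub-pixel of the old viewpoint and render the new one."""
    values[old_vp] = 0        # sub-pixel for the viewpoint before the change
    values[new_vp] = value    # sub-pixel for the viewpoint after the change

red_510 = [0] * 8
red_510[0] = 200              # left eye initially at V1
switch_viewpoint(red_510, old_vp=0, new_vp=2, value=200)   # left eye moves V1 -> V3
print(red_510)                # [0, 0, 200, 0, 0, 0, 0, 0]
```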
  • when it is determined, based on the eye spatial position information, that there is more than one user, the 3D processing device may, according to the 3D image corresponding to each user's perspective generated from the depth information of the 3D model or the 3D video, render the sub-pixels in the composite sub-pixels corresponding to the viewpoints at which each user's eyes are located.
  • the eyes of the first user are at viewpoints V2 and V4, respectively, and the eyes of the second user are at viewpoints V5 and V7, respectively.
  • a first 3D image corresponding to the first user's perspective and a second 3D image corresponding to the second user's perspective are generated from the depth information of the 3D model or the 3D video; the left-eye and right-eye parallax images corresponding to the viewpoints V2 and V4 are generated based on the first 3D image, and the left-eye and right-eye parallax images corresponding to the viewpoints V5 and V7 are generated based on the second 3D image.
  • the 3D processing device renders composite sub-pixels 510, 520, and 530 respectively corresponding to sub-pixels of viewpoints V2 and V4, V5 and V7.
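  • A minimal sketch of this multi-user pass, assuming one intensity value stands in for each per-user parallax image, might look like this:
```python
from dataclasses import dataclass

@dataclass
class UserView:
    left_vp: int
    right_vp: int
    left_img: int    # stand-ins for the per-user left/right parallax images
    right_img: int

def render_all_users(values: list[int], users: list[UserView]) -> None:
    """values: per-viewpoint intensities of one composite sub-pixel.
    Each user's parallax images go to that user's own eye viewpoints."""
    for u in users:
        values[u.left_vp] = u.left_img
        values[u.right_vp] = u.right_img

vals = [0] * 8
# first user at V2/V4 (indices 1, 3), second user at V5/V7 (indices 4, 6)
render_all_users(vals, [UserView(1, 3, 10, 20), UserView(4, 6, 30, 40)])
print(vals)   # [0, 10, 0, 20, 30, 0, 40, 0]
```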
  • there is a theoretical correspondence between the sub-pixels of the 3D display device and the viewpoints. This theoretical correspondence may be uniformly set or modulated when the 3D display device comes off the production line, and may also be stored in the 3D display device in the form of a correspondence table, for example in the processor or the 3D processing device. Due to the installation, material, or alignment of the grating, when the 3D display device is actually used, the sub-pixels viewed from a viewpoint position in space may not correspond to the theoretical sub-pixels, which affects the correct display of 3D images.
  • it is therefore advantageous for the 3D display device to calibrate or correct the correspondence between sub-pixels and viewpoints that exists during its actual use.
  • the correspondence between viewpoints and sub-pixels that exists during the actual use of the 3D display device is referred to as “corrected correspondence”.
  • the "corrected correspondence” may be deviated from the “theoretical correspondence", or it may be consistent.
  • the process of obtaining the "correction correspondence” is also the process of finding the correspondence between the viewpoint and the sub-pixels in the actual display process.
  • the multi-view 3D display screen or the display panel may be divided into a plurality of correction areas, the corrected correspondence between the sub-pixels and the viewpoints is determined separately for each correction area, and the corrected correspondence data of each area is then stored area by area, for example in the processor or the 3D processing device in the form of a correspondence table.
  • the corrected correspondence between at least one sub-pixel in each correction area and the viewpoints is obtained through detection, and the corrected correspondence between the other sub-pixels in each correction area and the viewpoints is calculated or estimated mathematically with reference to the detected corrected correspondence.
  • Mathematical calculation methods include: linear interpolation, linear extrapolation, nonlinear interpolation, nonlinear extrapolation, Taylor series approximation, linear transformation of the reference coordinate system, nonlinear transformation of the reference coordinate system, exponential models, and trigonometric transformations.
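  • As a sketch of the simplest of these methods, linear interpolation, the viewpoint offsets measured at a few sub-pixels of a correction area could be spread to the remaining sub-pixels as below (the data layout and names are assumptions, not the device's actual calibration procedure):
```python
import numpy as np

def interpolate_correction(detected: dict[int, float], num_subpixels: int) -> list[float]:
    """detected maps a sub-pixel index to its measured viewpoint offset
    (corrected minus theoretical viewpoint). The offsets of the remaining
    sub-pixels in the correction area are filled in by linear interpolation."""
    xs = np.array(sorted(detected))
    ys = np.array([detected[x] for x in xs])
    all_idx = np.arange(num_subpixels)
    return np.interp(all_idx, xs, ys).tolist()

# Offsets measured at sub-pixels 0 and 7 of one correction area; 8 sub-pixels total.
print(interpolate_correction({0: 0.0, 7: 0.35}, 8))
```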
  • the multi-viewpoint 3D display screen is defined with multiple correction areas, and the combined area of all the correction areas is 90% to 100% of the area of the multi-viewpoint 3D display screen.
  • the multiple correction areas are arranged in an array in the multi-view 3D display screen.
  • each correction area may be defined by one composite pixel including three composite sub-pixels.
  • each correction area may be defined by two or more composite pixels.
  • each correction area may be defined by two or more composite sub-pixels.
  • each correction area may be defined by two or more composite sub-pixels that do not belong to the same composite pixel.
  • the deviation of the corrected correspondence between sub-pixels and viewpoints in one correction area from the theoretical correspondence and the deviation of the corrected correspondence between sub-pixels and viewpoints in another correction area from the theoretical correspondence can be consistent or basically consistent, or inconsistent.
  • an embodiment according to the present disclosure provides a 3D image display method for the above-mentioned 3D display device.
  • the 3D image display method includes: S10, determining the user's perspective; and S20, based on the user's perspective, rendering the corresponding sub-pixels in the composite sub-pixels of the composite pixels in the multi-viewpoint 3D display screen according to the depth information of the 3D model.
  • the corresponding sub-pixels in the composite sub-pixels of the composite pixels in the multi-viewpoint 3D display screen can also be rendered according to the depth information of the 3D video.
  • the 3D image display method includes: S100, determining the user's perspective; S200, determining the viewpoints at which the user's eyes are located; S300, receiving a 3D model or a 3D video including depth information; S400, based on the determined user's perspective, generating a 3D image according to the 3D model or the 3D video including depth information; and S500, based on the determined viewpoints of the user's eyes, rendering the corresponding sub-pixels in the composite sub-pixels of the composite pixels in the multi-viewpoint 3D display screen according to the generated 3D image, wherein the corresponding sub-pixels are the sub-pixels in the composite sub-pixels that correspond to the determined viewpoints of the user.
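  • As a compact illustration of how steps S100 to S500 fit together, the following sketch wires placeholder callables into that pipeline (all names and signatures are assumptions, not the patent's API):
```python
def display_3d_image(model_or_video, detect_perspective, detect_eye_viewpoints,
                     generate_3d_image, render_subpixels):
    """End-to-end sketch of steps S100-S500 with the concrete operations
    passed in as callables (all of them placeholders)."""
    angle = detect_perspective()                   # S100: user's perspective
    left_vp, right_vp = detect_eye_viewpoints()    # S200: viewpoints of both eyes
    source = model_or_video                        # S300: 3D model / depth video received
    image_3d = generate_3d_image(source, angle)    # S400: perspective-dependent 3D image
    render_subpixels(image_3d, left_vp, right_vp)  # S500: render corresponding sub-pixels

# Example with trivial stand-ins:
display_3d_image(
    model_or_video="model",
    detect_perspective=lambda: 12.5,
    detect_eye_viewpoints=lambda: (1, 4),
    generate_3d_image=lambda src, a: f"{src}@{a}deg",
    render_subpixels=lambda img, l, r: print(f"render {img} to viewpoints {l},{r}"),
)
```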
  • determining the user's perspective includes: detecting the user's perspective in real time.
  • generating a 3D image according to the depth information of the 3D model or the 3D video includes: determining the change of the user's perspective detected in real time; and, when the change of the user's perspective is less than a predetermined threshold, generating the 3D image based on the user's perspective before the change.
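  • The threshold behaviour just described (small real-time changes of the user's perspective do not trigger regeneration of the 3D image) could be sketched as follows; the class name and the 3-degree threshold are assumptions:
```python
class PerspectiveFilter:
    """Keep the last accepted perspective unless the detected change
    exceeds a predetermined threshold (in degrees, assumed value)."""

    def __init__(self, threshold_deg: float = 3.0):
        self.threshold = threshold_deg
        self.current = None

    def update(self, detected_angle: float) -> float:
        if self.current is None or abs(detected_angle - self.current) >= self.threshold:
            self.current = detected_angle       # large change: follow the new perspective
        return self.current                     # small change: keep the previous perspective

f = PerspectiveFilter()
print(f.update(10.0))   # 10.0 -> first detection is accepted
print(f.update(11.0))   # 10.0 -> change of 1 degree is below threshold, perspective kept
print(f.update(20.0))   # 20.0 -> change above threshold, 3D image regenerated
```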
  • the 3D display device 300 includes a processor 320 and a memory 310.
  • the electronic device 300 may further include a communication interface 340 and a bus 330.
  • the processor 320, the communication interface 340, and the memory 310 communicate with each other through the bus 330.
  • the communication interface 340 may be configured to transmit information.
  • the processor 320 may call the logic instructions in the memory 310 to execute the method of the above embodiments for displaying a 3D picture in a 3D display device in a manner that follows the user's perspective.
  • the aforementioned logic instructions in the memory 310 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the memory 310 can be used to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 320 executes functional applications and data processing by running the program instructions/modules stored in the memory 310, that is, it implements the method of the foregoing method embodiments for switching the display between 3D images and 2D images in the electronic device.
  • the memory 310 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the terminal device and the like.
  • the memory 310 may include a high-speed random access memory, and may also include a non-volatile memory.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned 3D image display method.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the above-mentioned computer program includes program instructions.
  • when the program instructions are executed by a computer, the computer executes the above-mentioned 3D image display method.
  • the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present disclosure.
  • the aforementioned storage medium may be a non-transitory storage medium, including media that can store program code such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk, or it may be a transitory storage medium.
  • each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the various embodiments can be referred to one another.
  • for the products disclosed in the embodiments, where they correspond to the methods disclosed in the embodiments, the relevant parts can refer to the description of the method parts.
  • the disclosed methods and products can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units may be only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to implement this embodiment.
  • the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the above-mentioned module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions noted in the blocks may also occur in an order different from the order marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Liquid Crystal (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application relates to the field of 3D display technology and discloses a 3D display device, comprising: a multi-viewpoint 3D display screen comprising a plurality of composite pixels, wherein each of the plurality of composite pixels comprises a plurality of composite sub-pixels, and each of the plurality of composite sub-pixels comprises a plurality of sub-pixels corresponding to a plurality of viewpoints of the 3D display device; a perspective determination apparatus configured to determine a user's perspective; and a 3D processing apparatus configured to render, based on the user's perspective, corresponding sub-pixels among the plurality of composite sub-pixels according to depth information of a 3D model. The device can solve the problem of 3D display distortion. The present application further discloses a 3D image display method, a computer-readable storage medium, and a computer program product.

Description

3D显示设备、3D图像显示方法
本申请要求在2019年12月05日提交中国知识产权局、申请号为201911231149.X、发明名称为“3D显示设备、3D图像显示方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及3D显示技术领域,例如涉及3D显示设备、3D图像显示方法。
背景技术
3D显示技术因为能向用户呈现栩栩如生的视觉体验而成为影像技术中的研究热点。
在实现本公开实施例的过程中,发现相关技术中至少存在如下问题:各个位置的用户看到的都是相同的3D图像,只有一定范围内的用户会产生真实感受,范围外的其它用户会感觉到显示失真。
本背景技术仅为了便于了解本领域的相关技术,并不视作对现有技术的承认。
发明内容
为了对披露的实施例的一些方面有基本的理解,下面给出了简单的概括。该概括不是泛泛评述,也不是要确定关键/重要组成元素或描绘这些实施例的保护范围,而是作为后面的详细说明的序言。
本公开实施例提供了一种3D显示设备、3D图像显示方法、计算机可读存储介质、计算机程序产品,以解决3D显示失真的技术问题。
在一些实施例中,提供了一种3D显示设备,包括:多视点3D显示屏,包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像素包括对应于3D显示设备的多个视点的多个子像素;视角确定装置,被配置为确定用户的用户视角;3D处理装置,被配置为基于用户视角,依据3D模型的景深信息渲染多个复合子像素中的相应子像素。
在一些实施例中,3D处理装置被配置为基于用户视角,由景深信息生成3D图像,并依据3D图像渲染相应子像素。
在一些实施例中,3D显示设备还包括:眼部定位装置,被配置为确定用户的眼部空间位置;3D处理装置被配置为基于眼部空间位置确定用户的眼部所在视点,并基于3D图像 渲染与眼部所在视点相应的子像素。
在一些实施例中,眼部定位装置包括:眼部定位器,被配置为拍摄用户的用户图像;眼部定位图像处理器,被配置为基于用户图像确定眼部空间位置;和眼部定位数据接口,被配置为传输表明眼部空间位置的眼部空间位置信息。
在一些实施例中,眼部定位器包括:第一摄像头,被配置为拍摄第一图像;和第二摄像头,被配置为拍摄第二图像;其中,眼部定位图像处理器被配置为基于第一图像和第二图像中的至少一副图像识别眼部的存在且基于识别到的眼部确定眼部空间位置。
在一些实施例中,眼部定位器包括:摄像头,被配置为拍摄图像;和深度检测器,被配置为获取用户的眼部深度信息;其中,眼部定位图像处理器被配置为基于图像识别眼部的存在且基于识别到的眼部位置和眼部深度信息确定眼部空间位置。
在一些实施例中,用户视角为用户与多视点3D显示屏的显示平面之间的夹角。
在一些实施例中,用户视角为用户视线与多视点3D显示屏的显示平面之间的夹角,其中用户视线为用户双眼连线的中点与多视点3D显示屏的中心的连线。
在一些实施例中,用户视角为:用户视线与显示平面的横向、竖向和深度方向中至少之一的夹角;或用户视线与用户视线在显示平面内的投影之间的夹角。
在一些实施例中,3D显示设备还包括:3D信号接口,被配置为接收3D模型。
在一些实施例中,提供了一种3D图像显示方法,包括:确定用户的用户视角;和基于用户视角,依据3D模型的景深信息渲染多视点3D显示屏中的复合像素的复合子像素中的相应子像素。
在一些实施例中,基于用户视角,依据3D模型的景深信息渲染多视点3D显示屏中的复合像素的复合子像素中的相应子像素包括:基于用户视角,由景深信息生成3D图像,并依据3D图像渲染相应子像素。
在一些实施例中,3D图像显示方法还包括:确定用户的眼部空间位置;基于眼部空间位置确定用户的眼部所在视点;和基于3D图像渲染与眼部所在视点相应的子像素。
在一些实施例中,确定用户的眼部空间位置包括:拍摄用户的用户图像;基于用户图像确定眼部空间位置;和传输表明眼部空间位置的眼部空间位置信息。
在一些实施例中,拍摄用户的用户图像并基于用户图像确定眼部空间位置包括:拍摄第一图像;拍摄第二图像;基于第一图像和第二图像中的至少一幅图像识别眼部的存在;和基于识别到的眼部确定眼部空间位置。
在一些实施例中,拍摄用户的用户图像并基于用户图像确定眼部空间位置包括:拍摄图像;获取用户的眼部深度信息;基于图像识别眼部的存在;和基于识别到的眼部位置和 眼部深度信息共同确定眼部空间位置。
在一些实施例中,用户视角为用户与多视点3D显示屏的显示平面之间的夹角。
在一些实施例中,用户视角为用户视线与多视点3D显示屏的显示平面之间的夹角,其中用户视线为用户双眼连线的中点与多视点3D显示屏的中心的连线。
在一些实施例中,用户视角为:用户视线与显示平面的横向、竖向和深度方向中至少之一的夹角;或用户视线与用户视线在显示平面内的投影之间的夹角。
在一些实施例中,3D图像显示方法还包括:接收3D模型。
在一些实施例中,提供了一种3D显示设备,包括:处理器;和存储有程序指令的存储器;处理器被配置为在执行程序指令时,执行如上所述的方法。
本公开实施例提供的计算机可读存储介质,存储有计算机可执行指令,上述计算机可执行指令设置为执行上述的3D图像显示方法。
本公开实施例提供的计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,上述计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的3D图像显示方法。
本公开实施例提供的3D显示设备、3D图像显示方法,以及计算机可读存储介质、计算机程序产品,可以实现以下技术效果:
向用户提供基于视角的随动3D显示效果。不同角度的用户能观看到不同的3D显示画面,显示效果逼真。不同角度的显示效果还能随用户的视角变化而随动调整。以向用户呈现良好的视觉效果。
以上的总体描述和下文中的描述仅是示例性和解释性的,不用于限制本申请。
附图说明
一个或多个实施例通过与之对应的附图进行示例性说明,这些示例性说明和附图并不构成对实施例的限定,附图中具有相同参考数字标号的元件示为类似的元件,附图不构成比例限制,并且其中:
图1A至图1C是根据本公开实施例的3D显示设备的示意图;
图2是根据本公开实施例的眼部定位装置的示意图;
图3是根据本公开实施例的利用两个摄像头确定眼部空间位置的几何关系模型;
图4是根据本公开另一实施例的眼部定位装置的示意图;
图5是根据本公开实施例的利用摄像头和深度检测器确定眼部空间位置的几何关系模型;
图6是根据本公开实施例的用户视角的示意图;
图7是根据本公开另一实施例的用户视角的示意图;
图8是根据本公开实施例的生成对应于不同用户视角的3D图像的示意图;
图9A至图9E是根据本公开实施例的视点与子像素的对应关系示意图;
图10是根据本公开实施例的3D显示设备的显示方法流程图;和
图11是根据本公开实施例的3D显示设备的示意图。
附图标记:
100:3D显示设备;110:多视点3D显示屏;120:处理器;121:寄存器;130:3D处理装置;131:缓存器;140:3D信号接口;150:眼部定位装置;151:眼部定位器;151a:第一摄像头;151b:第二摄像头;152:眼部定位图像处理器;153:眼部定位数据接口;154:红外发射装置;155:摄像头;156:缓存器;157:比较器;158:深度检测器;160:视角确定装置;300:3D显示设备;310:存储器;320:处理器;330:总线;340:通信接口;400:复合像素;410:红色复合子像素;420:绿色复合子像素;430:蓝色复合子像素;500:复合像素;510:红色复合子像素;520:绿色复合子像素;530:蓝色复合子像素;f:焦距;Za:第一摄像头的光轴;Zb:第二摄像头的光轴;401a:第一摄像头的焦平面;401b:第二摄像头的焦平面;Oa:第一摄像头的镜头中心;Ob:第二摄像头的镜头中心;XRa:用户右眼在第一摄像头的焦平面内成像的X轴坐标;XRb:用户右眼在第二摄像头的焦平面内成像的X轴坐标;XLa:用户左眼在第一摄像头的焦平面内成像的X轴坐标;XLb:用户左眼在第二摄像头的焦平面内成像的X轴坐标;T;第一摄像头和第二摄像头的间距;DR:右眼与第一摄像头和第二摄像头所在平面的间距;DL:左眼与第一摄像头和第二摄像头所在平面的间距;α:用户双眼连线与第一摄像头和第二摄像头所在平面的倾斜角度;P:用户双眼间距或瞳距;Z;光轴;FP:焦平面;XR:用户右眼在摄像头的焦平面内成像的X轴坐标;XL:用户左眼在摄像头的焦平面内成像的X轴坐标;O:镜头中心;MCP:摄像头平面;βR:左眼与镜头中心的连线在XZ平面内的投影相对于X轴的倾斜角;βL:右眼与镜头中心的连线在XZ平面内的投影相对于X轴的倾斜角;α:用户双眼连线在XZ平面内的投影与X轴的夹角;P:用户双眼的瞳距。
具体实施方式
为了能够更加详尽地了解本公开实施例的特点与技术内容,下面结合附图对本公开实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本公开实施例。
根据本公开的实施例提供了一种3D显示设备,包括多视点3D显示屏(例如:多视点 裸眼3D显示屏)、配置为确定用户的用户视角的视角确定装置、配置为基于用户视角并依据3D模型或3D视频的景深信息渲染多视点3D显示屏所包含的复合像素中的复合子像素中的相应子像素的3D处理装置。
在一些实施例中,3D处理装置基于用户视角并依据3D模型或3D视频的景深信息生成3D图像,例如生成对应于用户视角的3D图像。用户视角与生成的3D图像的对应关系,类似于用户从不同角度看向真实存在的场景会看到对应于该角度的场景表现。对于不同的用户视角,由3D模型或3D视频的景深信息生成的3D图像有可能不同。由此,生成了基于用户视角而随动的3D图像,各个视角下的用户所看到的3D图像是不同的,从而借助多视点3D显示屏能够使用户感受到犹如观看真实物体般的感觉,改进显示效果并改善用户体验。
图1A示出了根据本公开实施例的3D显示设备100的示意图。如图1A所示,3D显示设备100包括多视点3D显示屏110、3D处理装置130、眼部定位装置150、视角确定装置160、3D信号接口140和处理器120。
在一些实施例中,多视点3D显示屏110可包括显示面板和覆盖显示面板的光栅(未示出)。显示面板可以包括m列n行(m×n)个复合像素400并因此限定出m×n的显示分辨率。m×n的显示分辨率例如可以为全高清(FHD)以上的分辨率,包括但不限于:1920×1080、1920×1200、2048×1280、2560×1440、3840×2160等。每个复合像素包括多个复合子像素,每个复合子像素包括对应于i个视点的i个同色子像素,其中i≥3。
图1A示意性地示出了m×n个复合像素中的一个复合像素400,包括由i=6个红色子像素R构成的红色复合子像素410、由i=6个绿色子像素G构成的绿色复合子像素420和由i=6个蓝色子像素B构成的蓝色复合子像素430。3D显示设备100相应具有i=6个视点(V1-V6)。在其他实施例中可以想到i为大于或小于6的其他值,如10、30、50、100等。
在一些实施例中,每个复合像素呈正方形。每个复合像素中的多个复合子像素可以彼此平行布置。每个复合子像素中的i个子像素可以成行布置。
在一些实施例中,3D处理装置为FPGA或ASIC芯片或FPGA或ASIC芯片组。在一些实施例中,3D显示设备100也可设置有一个以上3D处理装置130,它们并行、串行或串并行结合地处理对3D显示屏110的各复合像素的各复合子像素的子像素的渲染。本领域技术人员将明白,一个以上3D处理装置可以有其他的方式分配且并行处理3D显示屏110的多行多列复合像素或复合子像素,这落入本公开实施例的范围内。如图1A所示的实施例,3D处理装置130还可以选择性地包括缓存器131,以便缓存所接收到的3D视频的图像。
在一些实施例中,处理器被包括在计算机或智能终端中,这样的智能终端例如为移动终端。或者,处理器可以作为计算机或智能终端的处理器单元。但是可以想到,在一些实施例中,处理器120可以设置在3D显示设备100的外部,例如3D显示设备100可以为带有3D处理装置的多视点3D显示器,例如非智能的3D电视。
在一些实施例中,3D显示设备内部包括处理器。基于此,3D信号接口140为连接处理器120与3D处理装置130的内部接口。这样的3D显示设备100例如可以是移动终端,3D信号接口140可以为MIPI、mini-MIPI接口、LVDS接口、min-LVDS接口或Display Port接口。
如图1A所示,3D显示设备100的处理器120还可包括寄存器121。寄存器121可配置为暂存指令、数据和地址。在一些实施例中,寄存器121可被配置为接收有关多视点3D显示屏110的显示要求的信息。在一些实施例中,3D显示设备100还可以包括编解码器,配置为对压缩的3D视频信号解压缩和编解码并将解压缩的3D视频信号经3D信号接口140发送至3D处理装置130。
在一些实施例中,3D显示设备100可以包括配置为获取/确定眼部定位数据的眼部定位装置。例如图1B所示的实施例中,3D显示设备100包括通信连接至3D处理装置130的眼部定位装置150,由此3D处理装置130可以直接接收眼部定位数据。在一些实施例中,眼部定位装置150可同时连接处理器120和3D处理装置130,使得一方面3D处理装置130可以直接从眼部定位装置150获取眼部定位数据,另一方面眼部定位装置150从处理器120获取的其他信息可以被3D处理装置130处理。
在一些实施例中,眼部定位数据包括表明用户的眼部空间位置的眼部空间位置信息,眼部空间位置信息可以三维坐标形式表现,例如包括用户的眼部/脸部与多视点3D显示屏或眼部定位装置之间的间距信息(也就是用户的眼部/脸部的深度信息)、观看的眼部/脸部在多视点3D显示屏或眼部定位装置的横向上的位置信息、用户的眼部/脸部在多视点3D显示屏或眼部定位装置的竖向上的位置信息。眼部空间位置也可以用包含间距信息、横向位置信息和竖向位置信息中的任意两个信息的二维坐标形式表现。眼部定位数据还可以包括用户的眼部(例如双眼)所在的视点(视点位置)、用户视角等。
在一些实施例中,眼部定位装置包括配置为拍摄用户图像(例如用户脸部图像)的眼部定位器、配置为基于所拍摄的用户图像确定眼部空间位置的眼部定位图像处理器和配置为传输眼部空间位置信息的眼部定位数据接口,眼部空间位置信息表明眼部空间位置。
在一些实施例中,眼部定位器包括配置为拍摄第一图像的第一摄像头和配置为拍摄第二图像的第二摄像头,而眼部定位图像处理器配置为基于第一图像和第二图像中的至少一 副图像识别眼部的存在且基于识别到的眼部确定眼部空间位置。
图2示出了眼部定位装置中的眼部定位器配置有两个摄像头的示例。如图所示,眼部定位装置150包括眼部定位器151、眼部定位图像处理器152和眼部定位数据接口153。眼部定位器151包括例如为黑白摄像头的第一摄像头151a和例如为黑白摄像头的第二摄像头151b。第一摄像头151a配置为拍摄例如为黑白图像的第一图像,第二摄像头151b配置为拍摄例如为黑白图像的第二图像。眼部定位装置150可以前置于3D显示设备100中,例如前置于多视点3D显示屏110中。第一摄像头151a和第二摄像头151b的拍摄对象可以是用户脸部。在一些实施例中,第一摄像头和第二摄像头中的至少一个可以是彩色摄像头,并且配置为拍摄彩色图像。
在一些实施例中,眼部定位装置150的眼部定位数据接口153通信连接至3D显示设备100的3D处理装置130,由此3D处理装置130可以直接接收眼部定位数据。在另一些实施例中,眼部定位装置150的眼部定位图像处理器152可通信连接至或集成至处理器120,由此眼部定位数据可以从处理器120通过眼部定位数据接口153被传输至3D处理装置130。
可选地,眼部定位器151还设置有红外发射装置154。在第一摄像头或第二摄像头工作时,红外发射装置154配置为选择性地发射红外光,以在环境光线不足时、例如在夜间拍摄时起到补光作用,从而在环境光线弱的条件下也可以拍摄能识别出用户脸部及眼部的第一图像和第二图像。
在一些实施例中,显示设备可以配置为在第一摄像头或第二摄像头工作时,基于接收到的光线感应信号,例如检测到光线感应信号低于预定阈值时,控制红外发射装置开启或调节其大小。在一些实施例中,光线感应信号是由处理终端或显示设备集成的环境光传感器接收的。上述针对红外发射装置的操作也可以由眼部定位装置或集成有眼部定位装置的处理终端来完成。
可选地,红外发射装置154配置为发射波长大于或等于1.5微米的红外光,亦即长波红外光。与短波红外光相比,长波红外光穿透皮肤的能力较弱,因此对眼部的伤害较小。
拍摄到的第一图像和第二图像被传输至眼部定位图像处理器152。眼部定位图像处理器152可以配置为具有视觉识别功能(如脸部识别功能),并且可以配置为基于第一图像和第二图像中的至少一幅图像识别出眼部以及基于识别出的眼部确定眼部空间位置。识别眼部可以是先基于第一图像和第二图像中的至少一幅图像识别出脸部,再经由识别的脸部来识别眼部。
在一些实施例中,眼部定位图像处理器152可以基于眼部空间位置确定用户眼部所处 的视点。在另一些实施例中,由3D处理装置130基于获取的眼部空间位置来确定用户眼部所处的视点。
在一些实施例中,第一摄像头和第二摄像头可以是相同的摄像头,例如相同的黑白摄像头,或相同的彩色摄像头。在另一些实施例中,第一摄像头和第二摄像头可以是不同的摄像头,例如不同的黑白摄像头,或不同的彩色摄像头。在第一摄像头和第二摄像头是不同摄像头的情况下,为了确定眼部的空间位置,可以对第一图像和第二图像进行校准或矫正。
在一些实施例中,第一摄像头和第二摄像头中至少一个摄像头是广角的摄像头。
图3示意性地示出了利用两个摄像头确定眼部的空间位置的几何关系模型。在图3所示实施例中,第一摄像头和第二摄像头是相同的摄像头,因此具有相同的焦距f。第一摄像头151a的光轴Za与第二摄像头151b的光轴Zb平行,而第一摄像头151a的焦平面401a和第二摄像头151b的焦平面401b处于同一平面内并且垂直于两个摄像头的光轴。基于上述设置,两个摄像头的镜头中心Oa和Ob的连线平行于两个摄像头的焦平面。在图3所示实施例中,以两个摄像头的镜头中心Oa到Ob的连线方向作为X轴方向并且以两个摄像头的光轴方向为Z轴方向示出XZ平面的几何关系模型。在一些实施例中,X轴方向也是水平方向,Y轴方向也是竖直方向,Z轴方向是垂直于XY平面的方向(也可称为深度方向)。
在图3所示实施例中,以第一摄像头151a的镜头中心Oa为其原点,以第二摄像头151b的镜头中心Ob为其原点。R和L分别表示用户的右眼和左眼,XRa和XRb分别为用户右眼R在两个摄像头的焦平面401a和401b内成像的X轴坐标,XLa和XLb分别为用户左眼L在两个摄像头的焦平面401a和401b内成像的X轴坐标。此外,两个摄像头的间距T以及它们的焦距f也是已知的。根据相似三角形的几何关系可得出右眼R和左眼L与如上设置的两个摄像头所在平面的间距DR和DL分别为:
DR = (T × f) / (XRa − XRb)
DL = (T × f) / (XLa − XLb)
并且可得出用户双眼连线与如上设置的两个摄像头所在平面的倾斜角度α以及用户双眼间距或瞳距P分别为:
tan α = (DL − DR) / ((XLa × DL − XRa × DR) / f)
P = √( ((XLa × DL − XRa × DR) / f)² + (DL − DR)² )
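As a numerical illustration of the similar-triangle relations above (the reconstructed formulas and this sketch assume the standard parallel-camera stereo model; variable names follow the text, and the concrete values are made up):
```python
import math

def eye_positions_from_stereo(x_ra, x_rb, x_la, x_lb, T, f):
    """Depths DR, DL of the right/left eye, the tilt angle alpha of the line
    joining the eyes relative to the camera plane, and the pupil distance P,
    from the image x-coordinates in two parallel cameras (standard stereo model)."""
    DR = T * f / (x_ra - x_rb)                 # depth of right eye
    DL = T * f / (x_la - x_lb)                 # depth of left eye
    XR = x_ra * DR / f                         # eye X positions in camera-a coordinates
    XL = x_la * DL / f
    alpha = math.degrees(math.atan2(DL - DR, XL - XR))   # tilt of the eye line
    P = math.hypot(XL - XR, DL - DR)           # pupil distance
    return DR, DL, alpha, P

# Example: T = 60 mm, f = 4 mm, image coordinates in mm (made-up values).
print(eye_positions_from_stereo(x_ra=0.5, x_rb=0.1, x_la=0.95, x_lb=0.55, T=60, f=4))
```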
在图3所示实施例中,用户双眼连线(或用户脸部)与如上设置的两个摄像头所在平面相互倾斜并且倾斜角度为α。当用户脸部与如上设置的两个摄像头所在平面相互平行时(亦即当用户平视两个摄像头时),倾斜角度α为零。
在一些实施例中,3D显示设备100可以是计算机或智能终端、如移动终端。但是可以想到,在一些实施例中,3D图像显示设备100也可以是非智能的显示终端、如非智能的3D电视。在一些实施例中,包括两个摄像头151a、151b的眼部定位装置150前置于多视点3D显示屏中,或者说与多视点3D显示屏的显示平面基本位于在同一平面内。因此,在图3所示实施例中示例性得出的用户的右眼R和左眼L与如上设置的两个摄像头所在平面的间距DR和DL即为用户的右眼R和左眼L相对于多视点3D显示屏的间距(或者说是用户的右眼和左眼的深度),而用户脸部与如上设置的两个摄像头所在平面的倾斜角度α即为用户脸部相对于多视点3D显示屏的倾斜角度。
在一些实施例中,眼部定位数据接口153配置为传输用户双眼相对于眼部定位装置150或多视点3D显示屏110的倾斜角度或平行度。这可有利于更精确地呈现3D图像。
在一些实施例中,如上示例性得出的眼部空间位置信息DR、DL、α和P通过眼部定位数据接口153传输至3D处理装置130。3D处理装置130基于接收到的眼部空间位置信息确定用户眼部所在的视点。在一些实施例中,3D处理装置130可预先存储有眼部空间位置与3D显示设备的视点之间的对应关系表。在获得眼部空间位置信息后,基于对应关系表即可确定用户眼部所处的视点。或者,对应关系表也可以是3D处理装置从其他带有存储功能的元器件(例如处理器)接收/读取的。
在一些实施例中,如上示例性得出的眼部空间位置信息DR、DL、α和P也可被直接传输至3D显示设备100的处理器,而3D处理装置130通过眼部定位数据接口153从处理器接收/读取眼部空间位置信息。
在一些实施例中,第一摄像头151a配置为拍摄包括按照时间前后排列的多幅第一图像的第一图像序列,而第二摄像头151b配置为拍摄包括按照时间前后排列的多幅第二图像的第二图像序列。眼部定位图像处理器152可以包括同步器155。同步器155配置为确定第一图像序列和第二图像序列中时间同步的第一图像和第二图像。被确定为时间同步的第一图像和第二图像用于眼部的识别以及眼部空间位置的确定。
在一些实施例中,眼部定位图像处理器152包括缓存器156和比较器157。缓存器156 配置为缓存第一图像序列和第二图像序列。比较器157配置为比较第一图像序列和第二图像序列中的多幅第一图像和第二图像。通过比较可以判断眼部的空间位置是否变化,也可以判断眼部是否还处于观看范围内等。判断眼部是否还处于观看范围内也可以是由3D处理装置来执行的。
在一些实施例中,眼部定位图像处理器152配置为在第一图像序列和第二图像序列中的当前第一图像和第二图像中未识别到眼部的存在且在之前或之后的第一图像和第二图像中识别到眼部的存在时,基于之前或之后的第一图像和第二图像确定的眼部空间位置信息作为当前的眼部空间位置信息。这种情况可能出现在例如用户短暂转动头部时。在这种情况下,有可能短暂地无法识别到用户的脸部及眼部。
在一些实施例中,也可以对基于上述之前和之后的能识别出脸部及眼部的第一图像和第二图像所确定的眼部空间位置信息取平均值、进行数据拟合、进行插值或以其他方法处理,并且将得到的结果作为当前的眼部空间位置信息。
在一些实施例中,第一摄像头和第二摄像头配置为以24帧/秒或以上的频率拍摄第一图像序列和第二图像序列,例如以30帧/秒的频率拍摄,或者例如以60帧/秒的频率拍摄。
在一些实施例中,第一摄像头和第二摄像头配置为以与3D显示设备的多视点3D显示屏刷新频率相同的频率进行拍摄。
在一些实施例中,眼部定位器包括配置为拍摄至少一幅图像的至少一个摄像头和配置为获取用户的眼部深度信息的深度检测器,而眼部定位图像处理器配置为基于所拍摄的至少一幅图像识别眼部的存在,并基于识别到的眼部和眼部深度信息确定眼部空间位置。
图4示出了眼部定位装置中的眼部定位器配置有单个摄像头和深度检测器的示例。如图所示,眼部定位装置150包括眼部定位器151、眼部定位图像处理器152和眼部定位数据接口153。眼部定位器151包括例如为黑白摄像头的摄像头155和深度检测器158。摄像头155被配置为拍摄例如为黑白图像的至少一幅图像,而深度检测器158配置为获取用户的眼部深度信息。眼部定位装置150可以前置于3D显示设备100中,例如前置于多视点3D显示屏110中。摄像头155的拍摄对象是用户脸部,基于拍摄到的图像识别出脸部或眼部。深度检测器获取眼部深度信息,也可以获取脸部深度信息,并基于脸部深度信息获取眼部深度信息。在一些实施例中,摄像头155可以用彩色摄像头,并且配置为拍摄彩色图像。在一些实施例中,也可以采用两个或两个以上摄像头155与深度检测器158配合确定眼部空间位置。
在一些实施例中,眼部定位装置150的眼部定位数据接口153通信连接至3D显示设备100的3D处理装置130,由此3D处理装置130可以直接接收眼部定位数据。在另一些 实施例中,眼部定位图像处理器152可通信连接至或集成至3D显示设备100的处理器120,由此眼部定位数据可以从处理器120通过眼部定位数据接口153被传输至3D处理装置130。
可选地,眼部定位器151还设置有红外发射装置154。在摄像头155工作时,红外发射装置154配置为选择性地发射红外光,以在环境光线不足时、例如在夜间拍摄时起到补光作用,从而在环境光线弱的条件下也可以拍摄能识别出用户脸部及眼部的图像。
在一些实施例中,显示设备可以配置为在摄像头工作时,基于接收到的光线感应信号,例如检测到光线感应信号低于预定阈值时,控制红外发射装置开启或调节其大小。在一些实施例中,光线感应信号是由处理终端或显示设备集成的环境光传感器接收的。上述针对红外发射装置的操作也可以由眼部定位装置或集成有眼部定位装置的处理终端来完成。
可选地,红外发射装置154配置为发射波长大于或等于1.5微米的红外光,亦即长波红外光。与短波红外光相比,长波红外光穿透皮肤的能力较弱,因此对眼部的伤害较小。
拍摄到的图像被传输至眼部定位图像处理器152。眼部定位图像处理器可以配置为具有视觉识别功能(例如脸部识别功能),并且可以配置为基于所拍摄的图像识别出脸部以及基于识别出的眼部位置和用户的眼部深度信息确定眼部的空间位置,并基于眼部的空间位置确定用户眼部所处的视点。在另一些实施例中,由3D处理装置基于获取的眼部空间位置确定用户眼部所处的视点。在一些实施例中,摄像头是广角的摄像头。在一些实施例中,深度检测器158构造为结构光摄像头或TOF摄像头。
图5示意性地示出了利用摄像头和深度检测器确定眼部的空间位置的几何关系模型。在图5所示实施例中,摄像头具有焦距f、光轴Z和焦平面FP,R和L分别表示用户的右眼和左眼,XR和XL分别为用户右眼R和左眼L在摄像头155的焦平面FP内成像的X轴坐标。
作为解释而非限制性地,通过摄像头155拍摄的包含了用户左眼和右眼的图像,可得知左眼和右眼在摄像头155的焦平面FP内成像的X轴(水平方向)坐标和Y轴(竖直方向)坐标。如图5所示,以摄像头155的镜头中心O为原点,X轴和与X轴垂直的Y轴(未示出)形成摄像头平面MCP,其与焦平面FP平行。摄像头155的光轴方向Z也是深度方向。也就是说,在图5所示的XZ平面内,左眼和右眼在焦平面FP内成像的X轴坐标XR、XL是已知的。此外,摄像头155的焦距f是已知的。在这种情况下,可算出左眼和右眼与摄像头镜头中心O的连线在XZ平面内的投影相对于X轴的倾斜角βR和βL。同理,在(未示出的)YZ平面内,左眼和右眼在焦平面FP内成像的Y轴坐标是已知的,再结合已知的焦距f,可算出左眼和右眼与摄像头镜头中心O的连线在YZ平面内的投影相 对于摄像头平面MCP的Y轴的倾斜角。
作为解释而非限制性地,通过摄像头155拍摄的包含了用户左眼和右眼的图像以及深度检测器158获取的左眼和右眼的深度信息,可得知左眼和右眼在摄像头155的坐标系内的空间坐标(X,Y,Z),其中,Z轴坐标即为深度信息。据此,如图5所示,可算出左眼和右眼的连线在XZ平面内的投影与X轴的夹角α。同理,在(未示出的)YZ平面内,可算出左眼和右眼的连线在YZ平面内的投影与Y轴的夹角。
如图5所示,在已知摄像头155的焦距f、双眼在焦平面FP内的X轴坐标XR、XL的情况下,可以得出用户的右眼R和左眼L与镜头中心O的连线在XZ平面内的投影相对于X轴的倾斜角βR和βL分别为:
tan βR = f / XR
tan βL = f / XL
在此基础上,通过深度检测器158获得的右眼R和左眼L的深度信息,可得知用户右眼R和左眼L相对于摄像头平面MCP/多视点3D显示屏的显示平面的距离DR和DL。据此,可以得出用户双眼连线在XZ平面内的投影与X轴的夹角α以及瞳距P分别为:
tan α = (DL − DR) / ((DL × XL − DR × XR) / f)
P = √( ((DL × XL − DR × XR) / f)² + (DL − DR)² )
上述计算方法和数学表示仅是示意性的,本领域技术人员可以想到其他计算方法和数学表示,以得到所需的眼部的空间位置。本领域技术人员也可以想到,必要时将摄像头的坐标系与显示设备或多视点3D显示屏的坐标系进行变换。
在一些实施例中,当距离DR和DL不等并且夹角α不为零时,可认为用户以一定倾角面对多视点3D显示屏的显示平面。当距离DR和DL相等并且视角α为零时,可认为用户平视多视点3D显示屏的显示平面。在另一些实施例中,可以针对夹角α设定阈值,在夹角α不超过阈值的情况下,可以认为用户平视多视点3D显示屏的显示平面。
在一些实施例中,基于识别出的眼部或者说确定的眼部空间位置,能够得到用户视角,并基于用户视角来从3D模型或包括景深信息的3D视频生成与用户视角相对应的3D图像,从而依据3D图像所显示的3D效果对于用户来说是随动的,使用户获得彷如在对应角度观看真实物体或场景的感受。
在一些实施例中,用户视角是用户相对于摄像头的夹角。
在一些实施例中,用户视角可以是用户的眼部(单眼)与摄像头的镜头中心O的连线 相对于摄像头坐标系的夹角。在一些实施例中,夹角例如为连线与摄像头坐标系中的X轴(横向)的夹角θX、或者连线与摄像头坐标系中的Y轴(竖向)的夹角θY、或者以θ(X,Y)表示。在一些实施例中,夹角例如为连线在摄像头坐标系的XY平面内的投影与连线的夹角。在一些实施例中,夹角例如为连线在摄像头坐标系的XY平面内的投影与X轴的夹角θX、或者连线在摄像头坐标系的XY平面内的投影与Y轴的夹角θY、或者以θ(X,Y)表示。
在一些实施例中,用户视角可以是用户的双眼连线中点与摄像头的镜头中心O的连线(即用户视线)相对于摄像头坐标系的夹角。在一些实施例中,夹角例如为用户视线与摄像头坐标系中的X轴(横向)的夹角θX、或者用户视线与摄像头坐标系中的Y轴(竖向)的夹角θY、或者以θ(X,Y)表示。在一些实施例中,夹角例如为用户视线在摄像头坐标系的XY平面内的投影与连线的夹角。在一些实施例中,夹角例如为用户视线在摄像头坐标系的XY平面内的投影与X轴(横向)的夹角θX、或者用户视线在摄像头坐标系的XY平面内的投影与Y轴(竖向)的夹角θY、或者以θ(X,Y)表示。
在一些实施例中,用户视角可以是用户的双眼连线相对于摄像头坐标系的夹角。在一些实施例中,夹角例如为双眼连线与摄像头坐标系中的X轴的夹角θX、或者双眼连线与摄像头坐标系中的Y轴的夹角θY、或者以θ(X,Y)表示。在一些实施例中,夹角例如为双眼连线在摄像头坐标系的XY平面内的投影与连线的夹角。在一些实施例中,夹角例如为双眼连线在摄像头坐标系的XY平面内的投影与X轴的夹角θX、或者双眼连线在摄像头坐标系的XY平面内的投影与Y轴的夹角θY、或者以θ(X,Y)表示。
在一些实施例中,用户视角可以是用户的脸部所在平面相对于摄像头坐标系的夹角。在一些实施例中,夹角例如为脸部所在平面与摄像头坐标系中的XY平面的夹角。其中,脸部所在平面可通过提取多个脸部特征确定,脸部特征例如可以是前额、眼部、耳部、嘴角、下巴等。
在一些实施例中,用户视角可以是用户相对于多视点3D显示屏或多视点3D显示屏的显示平面的夹角。在本文中定义多视点3D显示屏或显示平面的坐标系,其中,以多视点3D显示屏的中心或显示平面的中心o为原点,以水平方向(横向)直线为x轴,以竖直方向直线为y轴,以垂直于xy平面的直线为z轴(深度方向)。
在一些实施例中,用户视角可以是用户的眼部(单眼)与多视点3D显示屏或显示平面的中心o的连线相对于多视点3D显示屏或显示平面的坐标系的夹角。在一些实施例中,夹角例如为连线与坐标系中的x轴的夹角θx、或者连线与坐标系中的y轴的夹角θy、或者以θ(x,y)表示。在一些实施例中,夹角例如为连线在坐标系的xy平面内的投影与连线 的夹角。在一些实施例中,夹角例如为连线在坐标系的xy平面内的投影与x轴的夹角θx、或者连线在坐标系的xy平面内的投影与y轴的夹角θy、或者以θ(x,y)表示。
在一些实施例中,用户视角可以是用户的双眼连线中点与多视点3D显示屏或显示平面的中心o的连线(即用户视线)相对于多视点3D显示屏或显示平面的坐标系的夹角。在一些实施例中,如图6所示,夹角例如为用户视线与坐标系中的x轴的夹角θx、或者用户视线与坐标系中的y轴的夹角θy、或者以θ(x,y)表示,图中R表示用户右眼,L表示用户左眼。在一些实施例中,如图7所示,夹角例如为用户视线在坐标系的xy平面内的投影k与用户视线的夹角θk。在一些实施例中,夹角例如为用户视线在坐标系的xy平面内的投影与X轴的夹角θx、或者用户视线在坐标系的xy平面内的投影与y轴的夹角θy、或者以θ(x,y)表示。
在一些实施例中,用户视角可以是用户的双眼连线相对于多视点3D显示屏或显示平面的坐标系的夹角。在一些实施例中,夹角例如为连线与坐标系中的x轴的夹角θx、或者连线与坐标系中的y轴的夹角θy、或者以θ(x,y)表示。在一些实施例中,夹角例如为连线在坐标系的xy平面内的投影与连线的夹角。在一些实施例中,夹角例如为连线在坐标系的xy平面内的投影与x轴的夹角θx、或者连线在摄像头坐标系的xy平面内的投影与y轴的夹角θy、或者以θ(x,y)表示。
在一些实施例中,用户视角可以是用户的脸部所在平面相对于多视点3D显示屏或显示平面的坐标系的夹角。在一些实施例中,夹角例如为脸部所在平面与坐标系中的xy平面的夹角。其中,脸部所在平面可通过提取多个脸部特征确定,脸部特征例如可以是前额、眼部、耳部、嘴角、下巴等。
在一些实施例中,摄像头前置于多视点3D显示屏上。在这种情况下,可以将摄像头坐标系视作多视点3D显示屏或显示平面的坐标系。
为确定用户视角,3D显示设备可设有视角确定装置。视角确定装置可以是软件,例如计算模块、程序指令等,也可以是硬件。视角确定装置可以集成在3D处理装置中,也可以集成在眼部定位装置中,也可以向3D处理装置发送用户视角数据。
在图1A所示出的实施例中,视角确定装置160与3D处理装置130通信连接。3D处理装置可以接收用户视角数据,并基于用户视角数据生成对应于用户视角的3D图像以及基于眼部定位数据所确定的用户眼部(例如双眼)所处视点根据生成的3D图像渲染复合子像素中与视点相关的子像素。在一些实施例中,如图1B所示,3D处理装置可接收由眼部定位装置150确定的眼部空间位置信息和由视角确定装置160确定的用户视角数据。在一些实施例中,如图1C所示,视角确定装置160可以集成在眼部定位装置150中,例如 集成在眼部定位图像处理器152中,眼部定位装置150与3D处理装置通信连接,向3D处理装置发送包括了用户视角数据、眼部空间位置信息的眼部定位数据。在另一些实施例中,视角确定装置可以集成在3D处理装置中,由3D处理装置接收眼部空间位置信息并基于眼部空间位置信息确定用户视角数据。在一些实施例中,眼部定位装置分别与3D处理装置和视角确定装置通信连接,并向两者发送眼部空间位置信息,由视角确定装置基于眼部空间位置信息确定用户视角数据并发送至3D处理装置。
3D处理装置接收或确定用户视角数据后,可以基于用户视角数据随动地从所接收的3D模型或包括景深信息的3D视频生成与该视角相符合的3D图像,从而能够向处于不同用户视角的用户呈现具有不同景深信息和渲染图像的3D图像,使用户获得与从不同角度观察真实物体相似的视觉感受。
图8示意性地示出了对于不同的用户视角基于同一3D模型生成的不同的3D图像。如图8所示,3D处理装置接收到具有景深信息的3D模型600,还接收到或确认了多个不同的用户视角。针对各个用户视角,3D处理装置由3D模型600生成了不同的3D图像601和602。图中R表示用户右眼,L表示用户左眼。依据由不同用户视角所对应的景深信息生成的不同的3D图像601、602来分别渲染相应视点所对应的子像素,其中相应视点是指由眼部定位数据确定的用户双眼所处的视点。对于用户来说,获得的3D显示效果是根据不同的用户视角随动的。依据用户视角的改变,这种随动效果例如可以是在水平方向上随动,或者是在竖直方向上随动,或者是在深度方向上随动,或者是在水平、竖直、深度方向的分量随动。
多个不同的用户视角可以是基于多个用户生成的,也可以是基于同一用户的运动或动作所生成的。
在一些实施例中,用户视角是实时检测并确定的。在一些实施例中,实时检测并确定用户视角的变化,在用户视角的变化小于预定阈值时,基于变化前的用户视角生成3D图像。这种情况可能发生在用户暂时小幅度或小范围内晃动头部或作出姿态调整时,例如在固定的座位上进行姿态调整。此时仍旧以发生变化前的用户视角作为当前用户视角并生成与当前用户视角所对应的景深信息相应的3D图像。
在一些实施例中,基于识别出的眼部或者确定的眼部空间位置可以确定用户眼部所在的视点。眼部空间位置信息与视点的对应关系可以对应关系表的形式存储在处理器中,并由3D处理装置接收。或者,眼部空间位置信息与视点的对应关系可以对应关系表的形式存储在3D处理装置中。
下面描述根据本公开的实施例的3D显示设备的显示。如上所述,3D显示设备可以具 有多个视点。用户的眼部在各视点位置(空间位置)处可看到多视点3D显示屏中各复合像素的复合子像素中相应的子像素的显示。用户的双眼在不同的视点位置看到的两个不同画面形成视差,在大脑中合成3D画面。
在一些实施例中,基于生成的3D图像和确定的用户眼部的视点,3D处理装置可以渲染各复合子像素中的相应子像素。视点与子像素的对应关系可以对应表的形式存储在处理器中,并由3D处理装置接收。或者,视点与子像素的对应关系可以对应表的形式存储在3D处理装置中。
在一些实施例中,基于生成的3D图像,由处理器或3D处理装置生成并列的两幅图像,例如左眼视差图像和右眼视差图像。在一些实施例中,将生成的3D图像作为并列的两幅图像中的一幅,例如作为左眼视差图像和右眼视差图像中的一幅,并基于3D图像生成并列的两幅图像中的另一幅,例如生成左眼视差图像和右眼视差图像中的另一幅。3D处理装置基于两幅图像中的一幅,依据确定的用户双眼的视点位置中的一只眼的视点位置,渲染各复合子像素中的至少一个子像素;并基于两幅图像中的另一幅,依据确定的用户双眼的视点位置中的另一眼的视点位置,渲染各复合子像素中的至少另一个子像素。
下面结合图9A至图9E所示实施例详细描述依据视点对子像素的渲染。在所示出的实施例中,3D显示设备具有8个视点V1-V8。3D显示设备的多视点3D显示屏中的每个复合像素500由三个复合子像素510、520和530构成。每个复合子像素由对应于8个视点的8个同色子像素构成。如图所示,复合子像素510是由8个红色子像素R构成的红色复合子像素,复合子像素520是由8个绿色子像素G构成的绿色复合子像素,复合子像素530是由8个蓝色子像素B构成的蓝色复合子像素。多个复合像素在多视点3D显示屏中以阵列形式布置。为清楚起见,图中仅示出了多视点3D显示屏中的一个复合像素500。其他复合像素的构造和子像素的渲染可以参照对所示出的复合像素的描述。
在一些实施例中,当基于眼部空间位置信息确定用户的双眼各对应一个视点时,依据由3D模型或3D视频的景深信息所生成的对应于用户视角的3D图像,3D处理装置可以渲染复合子像素中的相应子像素。
参考图9A,在所示实施例中,用户的左眼处于视点V2,右眼处于视点V5,基于3D图像生成对应于这两个视点V2和V5的左右眼视差图像,并渲染复合子像素510、520、530各自与这两个视点V2和V5相对应的子像素。
在一些实施例中,当基于眼部空间位置信息确定用户的双眼各对应一个视点时,依据由3D模型或3D视频的景深信息所生成的对应于用户视角的3D图像,3D处理装置可以渲染复合子像素中与这两个视点相对应的子像素,并渲染这两个视点各自的相邻视点所对 应的子像素。
参考图9B,在所示实施例中,用户的左眼处于视点V2,右眼处于视点V6,基于3D图像生成对应于这两个视点V2和V6的左右眼视差图像,并渲染复合子像素510、520、530各自与这两个视点V2和V6相对应的子像素,同时还渲染视点V2和V6各自两侧相邻的视点所对应的子像素。在一些实施例中,也可以同时渲染视点V2和V6各自单侧相邻的视点所对应的子像素。
在一些实施例中,当基于眼部空间位置信息确定用户的双眼各自位于两个视点之间时,依据由3D模型或3D视频的景深信息所生成的对应于用户视角的3D图像,3D处理装置可以渲染复合子像素中与这四个视点相对应的子像素。
参考图9C,在所示实施例中,用户的左眼处于视点V2和V3之间,右眼处于视点V5和V6之间,基于3D图像生成对应于视点V2、V3和V5、V6的左右眼视差图像,并渲染复合子像素510、520、530各自与视点V2、V3和V5、V6相对应的子像素。
在一些实施例中,当基于眼部空间位置信息确定用户双眼中至少一只眼部对应的视点位置发生了变化时,依据由3D模型或3D视频的景深信息所生成的对应于用户视角的3D图像,3D处理装置可以从渲染复合子像素中与变化前的视点位置对应的子像素切换为渲染复合子像素中与变化后的视点位置对应的子像素。
参考图9D,用户的左眼从视点V1移动至视点V3,右眼从视点V5移动至视点V7,复合子像素510、520、530各自被渲染的子像素相应进行调整,以适应变化的视点位置。
在一些实施例中,当基于眼部空间位置信息确定有一个以上用户时,依据由3D模型或3D视频的景深信息所生成的对应于每个用户视角的3D图像,3D处理装置可以渲染复合子像素中与每个用户的眼部所在视点对应的子像素。
参考图9E,面向3D显示设备有两个用户,第一个用户的双眼分别处于视点V2和V4,第二个用户的双眼分别处于视点V5和视点V7。依据3D模型或3D视频的景深信息生成分别对应于第一个用户视角的第一3D图像和对应于第二个用户视角的第二3D图像,并基于第一3D图像生成对应于视点V2和V4的左右眼视差图像,基于第二3D图像生成对应于视点V5和V7的左右眼视差图像。3D处理装置渲染复合子像素510、520、530各自对应于视点V2和V4、V5和V7的子像素。
在一些实施例中,3D显示设备的子像素与视点的对应关系存在理论对应关系。这种理论对应关系可以是在3D显示装置从流水线上生产出来时统一设定或调制的,还可以对应关系表的形式存储在3D显示设备中,例如存储在处理器中或3D处理装置中。由于光栅的安装、材质或对位等原因,在实际使用3D显示设备时,可能会出现在空间中的视点位置 所观看到的子像素与理论子像素不对应的问题。这对于3D图像的正确显示造成了影响。对3D显示设备实际使用过程中存在的子像素与视点的对应关系进行校准或校正,对于3D显示设备是有利的。在本公开所提供的实施例中,这种在3D显示设备的实际使用过程中存在的视点与子像素的对应关系被称为“校正对应关系”。“校正对应关系”相较于“理论对应关系”可能存在偏差,也有可能是一致的。
获得“校正对应关系”的过程也就是找到视点与子像素在实际显示过程中的对应关系的过程。在一些实施例中,为了确定多视点3D显示屏中各复合像素的复合子像素中的子像素与视点的校正对应关系,可将多视点3D显示屏或显示面板分为多个校正区域,分别对每个校正区域中的子像素与视点的校正对应关系进行确定,然后将各区域内的校正对应关系数据按区储存起来,例如以对应关系表的形式存储在处理器或3D处理装置中。
在一些实施例中,每个校正区域中的至少一个子像素与视点的校正对应关系是通过检测得出的,每个校正区域中其他子像素与视点的校正对应关系是参考被检测出来的校正对应关系通过数学计算推算或估算出的。数学计算方法包括:线性差值、线性外推、非线性差值、非线性外推、泰勒级数近似、参考坐标系线性变化、参考坐标系非线性变化、指数模型和三角变换等。
在一些实施例中,多视点3D显示屏定义有多个校正区域,所有校正区域联合起来的面积范围是多视点3D显示屏的面积的90%至100%。在一些实施例中,多个校正区域在多视点3D显示屏中呈阵列形式排布。在一些实施例中,每个校正区域可由包含三个复合子像素的一个复合像素来定义。在一些实施例中,每个校正区域可由两个或两个以上的复合像素来定义。在一些实施例中,每个校正区域可由两个或两个以上的复合子像素来定义。在一些实施例中,每个校正区域可由不属于同一个复合像素的两个或两个以上复合子像素来定义。
在一些实施例中,一个校正区域内的子像素与视点的校正对应关系相较于理论对应关系的偏差与另一个校正区域内的子像素与视点的校正对应关系相较于理论对应关系的偏差相比,可以是一致或基本一致的,也可以是不一致的。
根据本公开的实施例提供了3D图像显示的方法,用于上述的3D显示设备。如图10所示,3D图像的显示方法包括:
S10,确定用户的用户视角;和
S20,基于用户视角,依据3D模型的景深信息渲染多视点3D显示屏中的复合像素的复合子像素中的相应子像素。
在一些实施例中,也可以依据3D视频的景深信息渲染多视点3D显示屏中的复合像 素的复合子像素中的相应子像素。
在一些实施例中,3D图像的显示方法包括:
S100,确定用户的用户视角;
S200,确定用户眼部所处的视点;
S300,接收3D模型或包括景深信息的3D视频;
S400,基于确定的用户视角,依据3D模型或包括景深信息的3D视频生成3D图像;和
S500,基于确定的用户眼部所处视点,依据生成的3D图像渲染多视点3D显示屏中的复合像素的复合子像素中的相应子像素,其中相应子像素是指复合子像素中与所确定的用户所处视点相对应的子像素。
在一些实施例中,确定用户视角包括:实时检测用户视角。
在一些实施例中,基于确定的用户视角,依据3D模型或3D视频的景深信息生成3D图像包括:确定实时检测的用户视角的变化;和在用户视角的变化小于预定阈值时,基于变化前的用户视角生成3D图像。
本公开实施例提供了一种3D显示设备300,参考图11,3D显示设备300包括处理器320和存储器310。在一些实施例中,电子设备300还可以包括通信接口340和总线330。其中,处理器320、通信接口340和存储器310通过总线330完成相互间的通信。通信接口340可配置为传输信息。处理器320可以调用存储器310中的逻辑指令,以执行上述实施例的在3D显示设备中基于用户视角随动地显示3D画面的方法。
此外,上述的存储器310中的逻辑指令可以通过软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。
存储器310作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序,如本公开实施例中的方法对应的程序指令/模块。处理器320通过运行存储在存储器310中的程序指令/模块,从而执行功能应用以及数据处理,即实现上述方法实施例中的在电子设备中切换显示3D图像和2D图像的方法。
存储器310可包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端设备的使用所创建的数据等。此外,存储器310可以包括高速随机存取存储器,还可以包括非易失性存储器。
本公开实施例提供的计算机可读存储介质,存储有计算机可执行指令,上述计算机可执行指令设置为执行上述的3D图像显示方法。
本公开实施例提供的计算机程序产品,包括存储在计算机可读存储介质上的计算机程 序,上述计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的3D图像显示方法。
本公开实施例的技术方案可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括一个或多个指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开实施例的方法的全部或部分步骤。而前述的存储介质可以是非暂态存储介质,包括:U盘、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等多种可以存储程序代码的介质,也可以是暂态存储介质。
以上描述和附图充分地示出了本公开的实施例,以使本领域技术人员能够实践它们。其他实施例可以包括结构的、逻辑的、电气的、过程的以及其他的改变。实施例仅代表可能的变化。除非明确要求,否则单独的部件和功能是可选的,并且操作的顺序可以变化。一些实施例的部分和特征可以被包括在或替换其他实施例的部分和特征。本公开实施例的范围包括权利要求书的整个范围,以及权利要求书的所有可获得的等同物。而且,本申请中使用的用词仅用于描述实施例并且不用于限制权利要求。另外,当用于本申请中时,术语“包括”等指陈述的特征、整体、步骤、操作、元素或组件中至少一项的存在,但不排除一个或一个以上其它特征、整体、步骤、操作、元素、组件或这些的分组的存在或添加。本文中,每个实施例重点说明的可以是与其他实施例的不同之处,各个实施例之间相同相似部分可以互相参见。对于实施例公开的方法、产品等而言,如果其与实施例公开的方法部分相对应,那么相关之处可以参见方法部分的描述。
本领域技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,可以取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同方法以实现所描述的功能,但是这种实现不应认为超出本公开实施例的范围。本领域技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
本文所披露的实施例中,所揭露的方法、产品(包括但不限于装置、设备等),可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,可以仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。作为分离部件说明的单元可以是或者也 可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例。另外,在本公开实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
附图中的流程图和框图显示了根据本公开实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或代码的一部分,上述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这可以依所涉及的功能而定。在附图中的流程图和框图所对应的描述中,不同的方框所对应的操作或步骤也可以以不同于描述中所披露的顺序发生,有时不同的操作或步骤之间不存在特定的顺序。例如,两个连续的操作或步骤实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这可以依所涉及的功能而定。框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。

Claims (23)

  1. 一种3D显示设备,包括:
    多视点3D显示屏,包括多个复合像素,所述多个复合像素中的每个复合像素包括多个复合子像素,所述多个复合子像素中的每个复合子像素包括对应于所述3D显示设备的多个视点的多个子像素;
    视角确定装置,被配置为确定用户的用户视角;
    3D处理装置,被配置为基于所述用户视角,依据3D模型的景深信息渲染所述多个复合子像素中的相应子像素。
  2. 根据权利要求1所述的3D显示设备,其中,所述3D处理装置被配置为基于所述用户视角,由所述景深信息生成3D图像,并依据所述3D图像渲染所述相应子像素。
  3. 根据权利要求2所述的3D显示设备,还包括:
    眼部定位装置,被配置为确定用户的眼部空间位置;
    所述3D处理装置被配置为基于所述眼部空间位置确定所述用户的眼部所在视点,并基于所述3D图像渲染与所述眼部所在视点相应的子像素。
  4. 根据权利要求3所述的3D显示设备,其中,所述眼部定位装置包括:
    眼部定位器,被配置为拍摄所述用户的用户图像;
    眼部定位图像处理器,被配置为基于所述用户图像确定所述眼部空间位置;和
    眼部定位数据接口,被配置为传输表明所述眼部空间位置的眼部空间位置信息。
  5. 根据权利要求4所述的3D显示设备,其中,所述眼部定位器包括:
    第一摄像头,被配置为拍摄第一图像;和
    第二摄像头,被配置为拍摄第二图像;
    其中,所述眼部定位图像处理器被配置为基于所述第一图像和所述第二图像中的至少一副图像识别眼部的存在且基于识别到的眼部确定所述眼部空间位置。
  6. 根据权利要求4所述的3D显示设备,其中,所述眼部定位器包括:
    摄像头,被配置为拍摄图像;和
    深度检测器,被配置为获取用户的眼部深度信息;
    其中,所述眼部定位图像处理器被配置为基于所述图像识别眼部的存在且基于识别到的眼部位置和所述眼部深度信息确定所述眼部空间位置。
  7. 根据权利要求1至6任一项所述的3D显示设备,其中,所述用户视角为所述用户与所述多视点3D显示屏的显示平面之间的夹角。
  8. 根据权利要求7所述的3D显示设备,其中,所述用户视角为用户视线与所述多视 点3D显示屏的显示平面之间的夹角,其中所述用户视线为用户双眼连线的中点与所述多视点3D显示屏的中心的连线。
  9. 根据权利要求8所述的3D显示设备,其中,所述用户视角为:
    所述用户视线与所述显示平面的横向、竖向和深度方向中至少之一的夹角;或
    所述用户视线与所述用户视线在所述显示平面内的投影之间的夹角。
  10. 根据权利要求1至6任一项所述的3D显示设备,还包括:3D信号接口,被配置为接收所述3D模型。
  11. 一种3D图像显示方法,包括:
    确定用户的用户视角;和
    基于所述用户视角,依据3D模型的景深信息渲染多视点3D显示屏中的复合像素的复合子像素中的相应子像素。
  12. 根据权利要求11所述的3D图像显示方法,其中,基于所述用户视角,依据3D模型的景深信息渲染多视点3D显示屏中的复合像素的复合子像素中的相应子像素包括:
    基于所述用户视角,由所述景深信息生成3D图像,并依据所述3D图像渲染所述相应子像素。
  13. 根据权利要求12所述的3D图像显示方法,还包括:
    确定用户的眼部空间位置;
    基于所述眼部空间位置确定所述用户的眼部所在视点;和
    基于所述3D图像渲染与所述眼部所在视点相应的子像素。
  14. 根据权利要求13所述的3D图像显示方法,其中,确定用户的眼部空间位置包括:
    拍摄所述用户的用户图像;
    基于所述用户图像确定所述眼部空间位置;和
    传输表明所述眼部空间位置的眼部空间位置信息。
  15. 根据权利要求14所述的3D图像显示方法,其中,拍摄所述用户的用户图像并基于所述用户图像确定所述眼部空间位置包括:
    拍摄第一图像;
    拍摄第二图像;
    基于所述第一图像和所述第二图像中的至少一幅图像识别眼部的存在;和
    基于识别到的眼部确定所述眼部空间位置。
  16. 根据权利要求14所述的3D图像显示方法,其中,拍摄所述用户的用户图像并基于所述用户图像确定所述眼部空间位置包括:
    拍摄图像;
    获取用户的眼部深度信息;
    基于所述图像识别眼部的存在;和
    基于识别到的眼部位置和所述眼部深度信息共同确定所述眼部空间位置。
  17. 根据权利要求11至16任一项所述的3D图像显示方法,其中,所述用户视角为所述用户与所述多视点3D显示屏的显示平面之间的夹角。
  18. 根据权利要求17所述的3D图像显示方法,其中,所述用户视角为用户视线与所述多视点3D显示屏的显示平面之间的夹角,其中所述用户视线为用户双眼连线的中点与所述多视点3D显示屏的中心的连线。
  19. 根据权利要求18所述的3D图像显示方法,其中,所述用户视角为:
    所述用户视线与所述显示平面的横向、竖向和深度方向中至少之一的夹角;或
    所述用户视线与所述用户视线在所述显示平面内的投影之间的夹角。
  20. 根据权利要求11至16任一项所述的3D图像显示方法,还包括:
    接收3D模型。
  21. 一种3D显示设备,包括:
    处理器;和
    存储有程序指令的存储器;
    所述处理器被配置为在执行所述程序指令时,执行如权利要求11或20任一项所述的3D图像显示方法。
  22. 一种计算机可读存储介质,存储有计算机可执行指令,所述计算机可执行指令设置为执行如权利要求11至20任一项所述的方法。
  23. 一种计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当该程序指令被计算机执行时,使所述计算机执行如权利要求11至20任一项所述的方法。
PCT/CN2020/133332 2019-12-05 2020-12-02 3d显示设备、3d图像显示方法 WO2021110038A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/781,058 US20230007228A1 (en) 2019-12-05 2020-12-02 3d display device and 3d image display method
EP20895613.6A EP4068768A4 (en) 2019-12-05 2020-12-02 3-DISPLAY DEVICE AND 3D IMAGE DISPLAY METHOD

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911231149.XA CN112929636A (zh) 2019-12-05 2019-12-05 3d显示设备、3d图像显示方法
CN201911231149.X 2019-12-05

Publications (1)

Publication Number Publication Date
WO2021110038A1 true WO2021110038A1 (zh) 2021-06-10

Family

ID=76160804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133332 WO2021110038A1 (zh) 2019-12-05 2020-12-02 3d显示设备、3d图像显示方法

Country Status (5)

Country Link
US (1) US20230007228A1 (zh)
EP (1) EP4068768A4 (zh)
CN (1) CN112929636A (zh)
TW (1) TWI788739B (zh)
WO (1) WO2021110038A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079765B (zh) * 2021-11-17 2024-05-28 京东方科技集团股份有限公司 图像显示方法、装置及系统
CN115278200A (zh) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 处理装置及显示器件
CN115278201A (zh) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 处理装置及显示器件

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030063383A1 (en) * 2000-02-03 2003-04-03 Costales Bryan L. Software out-of-focus 3D method, system, and apparatus
CN102693065A (zh) * 2011-03-24 2012-09-26 介面光电股份有限公司 立体影像视觉效果处理方法
CN106454307A (zh) * 2015-08-07 2017-02-22 三星电子株式会社 针对多个用户的光场渲染的方法和设备
CN207320118U (zh) * 2017-08-31 2018-05-04 昆山国显光电有限公司 像素结构、掩膜版及显示装置
CN109993823A (zh) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 阴影渲染方法、装置、终端及存储介质
CN211128024U (zh) * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 3d显示设备

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050275914A1 (en) * 2004-06-01 2005-12-15 Vesely Michael A Binaural horizontal perspective hands-on simulator
CN101006492A (zh) * 2004-06-01 2007-07-25 迈克尔·A.·韦塞利 水平透视显示
KR101629479B1 (ko) * 2009-11-04 2016-06-10 삼성전자주식회사 능동 부화소 렌더링 방식 고밀도 다시점 영상 표시 시스템 및 방법
KR101694821B1 (ko) * 2010-01-28 2017-01-11 삼성전자주식회사 다시점 비디오스트림에 대한 링크 정보를 이용하는 디지털 데이터스트림 전송 방법와 그 장치, 및 링크 정보를 이용하는 디지털 데이터스트림 전송 방법과 그 장치
KR102192986B1 (ko) * 2014-05-23 2020-12-18 삼성전자주식회사 영상 디스플레이 장치 및 영상 디스플레이 방법
CN105323573B (zh) * 2014-07-16 2019-02-05 北京三星通信技术研究有限公司 三维图像显示装置和方法
KR101975246B1 (ko) * 2014-10-10 2019-05-07 삼성전자주식회사 다시점 영상 디스플레이 장치 및 그 제어 방법
EP3261328B1 (en) * 2016-06-03 2021-10-13 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable storage medium
KR20210030072A (ko) * 2019-09-09 2021-03-17 삼성전자주식회사 홀로그래픽 투사 방식을 이용한 다중 영상 디스플레이 장치

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030063383A1 (en) * 2000-02-03 2003-04-03 Costales Bryan L. Software out-of-focus 3D method, system, and apparatus
CN102693065A (zh) * 2011-03-24 2012-09-26 介面光电股份有限公司 立体影像视觉效果处理方法
CN106454307A (zh) * 2015-08-07 2017-02-22 三星电子株式会社 针对多个用户的光场渲染的方法和设备
CN207320118U (zh) * 2017-08-31 2018-05-04 昆山国显光电有限公司 像素结构、掩膜版及显示装置
CN109993823A (zh) * 2019-04-11 2019-07-09 腾讯科技(深圳)有限公司 阴影渲染方法、装置、终端及存储介质
CN211128024U (zh) * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 3d显示设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4068768A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040184A (zh) * 2021-11-26 2022-02-11 京东方科技集团股份有限公司 图像显示方法、系统、存储介质及计算机程序产品

Also Published As

Publication number Publication date
CN112929636A (zh) 2021-06-08
TWI788739B (zh) 2023-01-01
EP4068768A1 (en) 2022-10-05
TW202123694A (zh) 2021-06-16
US20230007228A1 (en) 2023-01-05
EP4068768A4 (en) 2023-08-02

Similar Documents

Publication Publication Date Title
WO2021110038A1 (zh) 3d显示设备、3d图像显示方法
CN211128024U (zh) 3d显示设备
WO2021110035A1 (zh) 眼部定位装置、方法及3d显示设备、方法和终端
US9848184B2 (en) Stereoscopic display system using light field type data
KR20160121798A (ko) 직접적인 기하학적 모델링이 행해지는 hmd 보정
CN108093244B (zh) 一种远程随动立体视觉系统
US9467685B2 (en) Enhancing the coupled zone of a stereoscopic display
CN101636747A (zh) 二维/三维数字信息获取和显示设备
JP2010250452A (ja) 任意視点画像合成装置
WO2018032841A1 (zh) 绘制三维图像的方法及其设备、系统
CN104599317A (zh) 一种实现3d扫描建模功能的移动终端及方法
JP2005142957A (ja) 撮像装置及び方法、撮像システム
CN107545537A (zh) 一种从稠密点云生成3d全景图片的方法
US20170257614A1 (en) Three-dimensional auto-focusing display method and system thereof
CN112929638B (zh) 眼部定位方法、装置及多视点裸眼3d显示方法、设备
JP2017046065A (ja) 情報処理装置
CN211531217U (zh) 3d终端
CN114898440A (zh) 液晶光栅的驱动方法及显示装置、其显示方法
US20230316810A1 (en) Three-dimensional (3d) facial feature tracking for autostereoscopic telepresence systems
CN214756700U (zh) 3d显示设备
JP2005174148A (ja) 撮像装置及び方法、撮像システム
KR20180092187A (ko) 증강 현실 제공 시스템
JP2024062935A (ja) 立体視表示コンテンツを生成する方法および装置
US20240137481A1 (en) Method And Apparatus For Generating Stereoscopic Display Contents
KR101668117B1 (ko) 스테레오 카메라의 주시각 제어 장치 및 그가 구비된 3차원 영상 처리 시스템

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20895613

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020895613

Country of ref document: EP

Effective date: 20220627

NENP Non-entry into the national phase

Ref country code: DE