WO2021110034A1 - Eye positioning device and method, and 3D display device and method - Google Patents

Eye positioning device and method, and 3D display device and method

Info

Publication number
WO2021110034A1
Authority
WO
WIPO (PCT)
Prior art keywords
black
white image
eye
white
eye positioning
Prior art date
Application number
PCT/CN2020/133328
Other languages
English (en)
French (fr)
Inventor
刁鸿浩
黄玲溪
Original Assignee
北京芯海视界三维科技有限公司
视觉技术创投私人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京芯海视界三维科技有限公司, 视觉技术创投私人有限公司
Priority to EP20895782.9A (published as EP4068769A4)
Priority to US17/780,504 (published as US20230007225A1)
Publication of WO2021110034A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/246 Calibration of cameras
    • H04N13/257 Colour aspects
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Definitions

  • This application relates to 3D display technology, for example, to eye positioning devices and methods, and 3D display devices and methods.
  • the embodiments of the present application intend to provide an eye positioning device and method, a 3D display device and method, a computer-readable storage medium, and a computer program product.
  • an eye positioning device is provided, including: an eye locator including a first black-and-white camera configured to capture a first black-and-white image and a second black-and-white camera configured to capture a second black-and-white image;
  • an eye positioning image processor configured to recognize the presence of eyes based on at least one of the first black-and-white image and the second black-and-white image, and to determine the spatial position of the eyes based on the eyes recognized in the first black-and-white image and the second black-and-white image.
  • the spatial position of the user's eyes can be determined with high precision, so as to improve the 3D display quality.
  • the eye positioning device further includes an eye positioning data interface configured to transmit eye spatial position information indicating the spatial position of the eye.
  • the eye locator further includes an infrared emitting device.
  • the infrared emitting device is configured to emit infrared light with a wavelength greater than or equal to 1.5 microns.
  • the first black and white camera and the second black and white camera are configured to respectively capture a first black and white image sequence including a first black and white image and a second black and white image sequence including a second black and white image.
  • the eye positioning image processor includes a synchronizer configured to determine the time-synchronized first black-and-white image and the second black-and-white image, so as to identify the eyes and determine the spatial position of the eyes.
  • the eye positioning image processor includes: a buffer configured to buffer a plurality of first and second black-and-white images in the first and second black-and-white image sequences; a comparator configured to compare earlier and later first and second black-and-white images in the first and second black-and-white image sequences; and an arbiter configured so that, when the comparator does not recognize the presence of eyes in the current first and second black-and-white images of the two sequences but does recognize the presence of eyes in a preceding or subsequent first and second black-and-white image, the eye spatial position determined based on that preceding or subsequent first and second black-and-white image is used as the current eye spatial position.
  • in this way, when the first or second black-and-white camera freezes or skips frames, the user can still be provided with a more consistent display image, and the viewing experience is ensured.
  • a 3D display device is provided, including: a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints; the eye positioning device described above, configured to obtain the eye spatial position; and a 3D processing device configured to determine the corresponding viewpoint according to the eye spatial position obtained by the eye positioning device, and to render, based on a 3D signal, the sub-pixels of the multi-viewpoint 3D display screen corresponding to that viewpoint.
  • the multi-view 3D display screen includes a plurality of composite pixels, each of the plurality of composite pixels includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of multiple sub-pixels corresponding to the multiple viewpoints.
  • the 3D processing device is communicatively connected with the eye positioning device.
  • it further includes: a 3D photographing device configured to capture 3D images; the 3D photographing device includes a depth-of-field camera and at least two color cameras.
  • the eye positioning device is integrated with the 3D camera.
  • the 3D camera is placed in front of the 3D display device.
  • an eye positioning method is provided, including: capturing a first black-and-white image and a second black-and-white image; recognizing the presence of eyes based on at least one of the first black-and-white image and the second black-and-white image; and determining the spatial position of the eyes based on the eyes recognized in the first black-and-white image and the second black-and-white image.
  • the eye positioning method further includes: transmitting eye space position information indicating the eye space position.
  • the eye positioning method further includes: using an infrared emitting device to emit infrared light when the first black-and-white camera or the second black-and-white camera is working.
  • the eye positioning method further includes: separately photographing a first black and white image sequence including the first black and white image and a second black and white image sequence including the second black and white image.
  • the eye positioning method further includes: determining the first black-and-white image and the second black-and-white image that are synchronized in time.
  • the eye positioning method further includes: buffering a plurality of first and second black-and-white images in the first and second black-and-white image sequences; comparing earlier and later first and second black-and-white images in the first and second black-and-white image sequences; and, when the presence of eyes is not recognized in the current first and second black-and-white images but is recognized in a preceding or subsequent first and second black-and-white image, using the eye spatial position determined based on that preceding or subsequent first and second black-and-white image as the current eye spatial position.
  • a 3D display method is provided, which includes: obtaining the spatial position of a user's eyes; determining the corresponding viewpoint according to the eye spatial position; and rendering, based on a 3D signal, the sub-pixels of a multi-viewpoint 3D display screen corresponding to that viewpoint.
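  • As an illustrative sketch only (not taken from the patent), the following Python fragment shows one plausible way to map an eye spatial position to a viewpoint index; the parameter names (NUM_VIEWPOINTS, VIEW_ANGLE_SPAN) and the linear angle-to-viewpoint mapping are assumptions for illustration, not the patent's method. The sub-pixels of the resulting viewpoints would then be rendered from the 3D signal.

```python
import math

# Hypothetical parameters; real values depend on the grating and screen design.
NUM_VIEWPOINTS = 6        # number of viewpoints supported by the multi-viewpoint 3D display screen
VIEW_ANGLE_SPAN = 30.0    # total horizontal viewing angle covered by all viewpoints, in degrees

def viewpoint_from_eye_position(eye_x_m: float, eye_z_m: float) -> int:
    """Map an eye position (X offset and Z distance from the screen centre, in metres)
    to the index of the viewpoint it falls into, using a simple linear angle mapping."""
    angle = math.degrees(math.atan2(eye_x_m, eye_z_m))
    t = (angle + VIEW_ANGLE_SPAN / 2.0) / VIEW_ANGLE_SPAN   # normalise to [0, 1]
    return max(0, min(NUM_VIEWPOINTS - 1, int(t * NUM_VIEWPOINTS)))

# Example: both eyes roughly centred, 0.5 m from the screen, about 62 mm apart.
vp_left = viewpoint_from_eye_position(-0.031, 0.5)
vp_right = viewpoint_from_eye_position(+0.031, 0.5)
print(vp_left, vp_right)   # the sub-pixels of these two viewpoints would then be rendered
```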
  • the 3D display method further includes: providing a multi-viewpoint 3D display screen including a plurality of composite pixels, each of the plurality of composite pixels includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of multiple sub-pixels corresponding to the multiple viewpoints.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned eye positioning method and 3D display method.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the above-mentioned computer program includes program instructions.
  • when the above-mentioned program instructions are executed by a computer, the computer executes the above-mentioned eye positioning method or 3D display method.
  • FIG. 1A and 1B are schematic structural diagrams of a 3D display device according to an embodiment of the present disclosure
  • Fig. 1C is a schematic structural diagram of an eye positioning device according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of the hardware structure of a 3D display device according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of the software structure of the 3D display device shown in FIG. 2;
  • FIG. 4 is a schematic diagram of determining the spatial position of the eye using the eye positioning device according to an embodiment of the present disclosure
  • 5A to 5C are schematic front views of a 3D display device according to an embodiment of the present disclosure.
  • 6A and 6B are schematic diagrams of the positional relationship between a user's face and a 3D display device according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of steps of an eye positioning method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of implementing the display of a multi-view 3D display screen of a 3D display device by using a 3D display method according to an embodiment of the present disclosure, wherein the eyes of the user each correspond to one viewpoint.
  • an eye positioning device is provided, including: an eye locator including a first black-and-white camera configured to capture a first black-and-white image and a second black-and-white camera configured to capture a second black-and-white image; an eye positioning image processor configured to recognize the presence of eyes based on at least one of the first black-and-white image and the second black-and-white image, and to determine the spatial position of the eyes based on the positions of the eyes in the first black-and-white image and the second black-and-white image; and an eye positioning data interface configured to transmit eye spatial position information indicating the spatial position of the eyes.
  • the spatial position of the user's eyes can be determined with high accuracy.
  • the eye locator further includes an infrared emitting device.
  • the infrared emitting device is configured to emit infrared light with a wavelength greater than or equal to 1.5 microns.
  • the first black-and-white camera and the second black-and-white camera are configured to capture the first black-and-white image sequence and the second black-and-white image sequence, respectively.
  • the eye positioning image processor includes a synchronizer configured to determine the first black and white image and the second black and white image that are synchronized in time.
  • the eye positioning image processor includes: a buffer configured to buffer a plurality of first black-and-white images and second black-and-white images in the first black-and-white image sequence and the second black-and-white image sequence; and a comparator configured to compare A plurality of first and second black and white images before and after in the first black and white image sequence and the second black and white image sequence.
  • the eye positioning image processor is configured so that, when the presence of eyes is not recognized in the current first and second black-and-white images in the first and second black-and-white image sequences but is recognized in a preceding or subsequent first and second black-and-white image, the eye spatial position information determined based on that preceding or subsequent first and second black-and-white image is used as the current eye spatial position information.
  • in this way, when the first or second black-and-white camera freezes or skips frames, the user can still be provided with a more consistent display image, and the viewing experience is ensured.
  • the first black and white camera and the second black and white camera are configured to capture the first black and white image sequence and the second black and white image sequence at a frequency of 24 frames per second or more.
  • a 3D display device is provided, including a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen), a video signal interface (signal interface) configured to receive video frames of a 3D video signal (3D signal), a 3D processing device communicatively connected with the video signal interface, and the eye positioning device described above; the multi-viewpoint 3D display screen includes multiple sub-pixels corresponding to multiple viewpoints, and the 3D processing device is configured to render, based on the video frames of the 3D video signal, the sub-pixels related to a predetermined viewpoint, the predetermined viewpoint being determined by the user's eye spatial position information.
  • the multi-view 3D display screen includes multiple composite pixels, each of the multiple composite pixels includes multiple composite sub-pixels, and each composite sub-pixel is composed of multiple same-color sub-pixels corresponding to multiple viewpoints.
  • the 3D processing device is communicatively connected with the eye positioning data interface of the eye positioning device.
  • the 3D display device further includes a 3D photographing device configured to capture 3D images.
  • the 3D photographing device includes a camera assembly and a 3D image processor, and the camera assembly includes a first color camera, a second color camera, and a depth camera.
  • the eye positioning device is integrated with the 3D camera.
  • the 3D camera is a front camera.
  • an eye positioning method is provided, including: capturing a first black-and-white image at a first position; capturing a second black-and-white image at a second position, wherein the second position is different from the first position; recognizing the presence of eyes based on at least one of the first black-and-white image and the second black-and-white image; determining the spatial position of the eyes based on the positions of the eyes present in the first black-and-white image and the second black-and-white image; and transmitting eye spatial position information indicating the spatial position of the eyes.
  • the eye positioning method further includes: using an infrared emitting device to emit infrared light when the first or second black-and-white camera is working.
  • the eye positioning method further includes: separately photographing the first black-and-white image sequence and the second black-and-white image sequence.
  • the eye positioning method further includes: determining the first black-and-white image and the second black-and-white image that are synchronized in time.
  • the eye positioning method further includes: buffering a plurality of first black and white images and second black and white images in the first black and white image sequence and the second black and white image sequence; comparing the first black and white image sequence with the second black and white image sequence Multiple first black and white images and second black and white images before and after in.
  • the eye positioning method further includes: when the presence of eyes is not recognized in the current first and second black-and-white images in the first and second black-and-white image sequences but is recognized in a preceding or subsequent first and second black-and-white image, using the eye spatial position information determined based on that preceding or subsequent first and second black-and-white image as the current eye spatial position information.
  • the eye positioning method further includes: shooting the first black-and-white image sequence and the second black-and-white image sequence at a frequency of 24 frames per second or more.
  • a 3D display method is provided, which is suitable for a 3D display device.
  • the 3D display device includes a multi-viewpoint 3D display screen including multiple sub-pixels corresponding to multiple viewpoints; the 3D display method includes: transmitting video frames of a 3D video signal; receiving or reading the user's eye spatial position information, the eye spatial position information being determined using the above-mentioned eye positioning method; determining the viewpoint at which the eyes are located based on the eye spatial position information; and rendering the related sub-pixels based on that viewpoint, according to the received video frames of the 3D video signal.
  • the 3D display method further includes: providing a multi-viewpoint 3D display screen including a plurality of composite pixels, each of the plurality of composite pixels includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of multiple sub-pixels of the same color corresponding to the multiple viewpoints.
  • a 3D display device which includes a processor and a memory storing program instructions, and also includes a multi-view 3D display screen.
  • the processor is configured to execute the above-mentioned 3D display method when the program instructions are run.
  • FIG. 1A shows a schematic structural diagram of a 3D display device 100 provided according to an embodiment of the present disclosure.
  • a 3D display device 100 is provided, which includes a multi-view 3D display screen 110, a signal interface 140 configured to receive video frames of a 3D video signal, a 3D processing device 130 communicatively connected to the signal interface 140, and an eye positioning device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
  • the multi-view 3D display screen 110 may include a display panel and a grating (not labeled) covering the display panel.
  • the multi-view 3D display screen 110 may include m ⁇ n composite pixels CP and thus define a display resolution of m ⁇ n.
  • the multi-view 3D display screen 110 includes m columns and n rows of composite pixels and thus defines a display resolution of m ⁇ n.
  • the resolution of m ⁇ n may be a resolution above Full High Definition (FHD), including but not limited to 1920 ⁇ 1080, 1920 ⁇ 1200, 2048 ⁇ 1280, 2560 ⁇ 1440, 3840 ⁇ 2160, etc.
  • the 3D processing device is communicatively connected with the multi-view 3D display screen.
  • the 3D processing device is communicatively connected with the driving device of the multi-view 3D display screen.
  • each composite pixel CP includes a plurality of composite sub-pixels CSP, and each composite sub-pixel is composed of i sub-pixels of the same color corresponding to i viewpoints, i ⁇ 3.
  • the three composite sub-pixels respectively correspond to three colors, namely red (R), green (G) and blue (B); that is, the three composite sub-pixels of each composite pixel have 6 red, 6 green, and 6 blue sub-pixels, respectively.
  • the composite sub-pixels in the composite pixel are arranged in parallel.
  • Each composite sub-pixel includes sub-pixels in a single row.
  • each composite sub-pixel includes sub-pixels in a single row or array form.
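  • For illustration only (this indexing scheme is an assumption, not taken from the patent), a composite-pixel layout of this kind could be addressed as in the following sketch, assuming i = 6 viewpoints and composite sub-pixels arranged as parallel single rows:

```python
NUM_VIEWPOINTS = 6            # i viewpoints (i >= 3); 6 is used here as an example
COLORS = ("R", "G", "B")      # one composite sub-pixel per color in each composite pixel

def subpixel_address(cp_row: int, cp_col: int, color: str, viewpoint: int):
    """Return a (panel_row, panel_col) address of the physical sub-pixel serving a viewpoint,
    assuming the three composite sub-pixels of a composite pixel are stacked as parallel rows
    and each holds NUM_VIEWPOINTS same-color sub-pixels in a single row."""
    assert 0 <= viewpoint < NUM_VIEWPOINTS
    panel_row = cp_row * len(COLORS) + COLORS.index(color)
    panel_col = cp_col * NUM_VIEWPOINTS + viewpoint
    return panel_row, panel_col

# Example: the red sub-pixel serving viewpoint 3 of the composite pixel at row 10, column 20.
print(subpixel_address(10, 20, "R", 3))
```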
  • the 3D display apparatus 100 may be provided with a single 3D processing device 130.
  • the single 3D processing device 130 simultaneously processes the rendering of the sub-pixels of each composite sub-pixel of each composite pixel of the 3D display screen 110.
  • the 3D display device 100 may also be provided with more than one 3D processing device 130, which process the rendering of the sub-pixels of each composite sub-pixel of each composite pixel of the 3D display screen 110 in parallel, in series, or in a combination of series and parallel.
  • more than one 3D processing device may be allocated in other ways and process multiple rows and multiple columns of composite pixels or composite sub-pixels of the 3D display screen 110 in parallel, which falls within the scope of the embodiments of the present disclosure.
  • the 3D processing device 130 may also optionally include a buffer 131 to buffer the received video frames.
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
  • the 3D display device 100 may further include a processor 101 communicatively connected to the 3D processing device 130 through the signal interface 140.
  • the processor 101 is included in a computer or a smart terminal, such as a mobile terminal, or as a processor unit thereof.
  • the processor 101 may be arranged outside the 3D display device.
  • the 3D display device may be a multi-view 3D display with a 3D processing device, such as a non-intelligent 3D TV.
  • the following exemplary embodiments of the 3D display device include a processor inside.
  • the signal interface 140 is configured as an internal interface connecting the processor 101 and the 3D processing device 130. This structure can be further clarified with reference to the 3D display device 200 implemented as a mobile terminal, shown in FIGS. 2 and 3.
  • the signal interface serving as the internal interface of the 3D display device may be a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface, or a DisplayPort interface.
  • the processor 101 of the 3D display device 100 may further include a register 122.
  • the register 122 can be configured to temporarily store instructions, data, and addresses.
  • the register 122 may be configured to receive information about the display requirements of the multi-view 3D display screen 110
  • the 3D display device 100 may further include a codec configured to decompress and decode the compressed 3D video signal and send the decompressed 3D video signal to the 3D processing device 130 via the signal interface 140.
  • the 3D display device 100 further includes a 3D photographing device 120 configured to capture 3D images, and the eye positioning device 150 is integrated in the 3D photographing device 120; it is also conceivable for it to be integrated into a conventional photographing device of a processing terminal or display device.
  • the 3D camera 120 is configured as a front camera.
  • the 3D photographing device 120 includes a camera assembly 121, a 3D image processor 126, and a 3D image output interface 125.
  • the 3D photographing device 120 is integrated with the eye positioning device 150.
  • the camera assembly 121 includes a first color camera 121a, a second color camera 121b, and a depth camera 121c.
  • the 3D image processor 126 may be integrated in the camera assembly 121.
  • the first color camera 121a is configured to obtain a first color image of the subject, and the second color camera 121b is configured to obtain a second color image of the subject; a composite color image of an intermediate point is obtained by combining the two color images; the depth camera 121c is configured to obtain a depth image of the subject.
  • the composite color image and the depth image form a video frame of the 3D video signal.
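  • Purely as an illustrative data-structure sketch (the names and the naive averaging are assumptions, not the patent's synthesis method), a video frame of the 3D video signal could be represented as the composite color image plus the depth image:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame3D:
    """One video frame of the 3D video signal: composite color image + depth image."""
    color: np.ndarray   # H x W x 3 composite color image derived from the two color cameras
    depth: np.ndarray   # H x W depth image from the depth camera (e.g. a TOF camera)

def make_frame(color_a: np.ndarray, color_b: np.ndarray, depth: np.ndarray) -> Frame3D:
    # Placeholder synthesis: a plain average of the two color images stands in for the
    # intermediate-point composite image produced by the 3D image processor.
    composite = ((color_a.astype(np.uint16) + color_b.astype(np.uint16)) // 2).astype(np.uint8)
    return Frame3D(color=composite, depth=depth)
```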
  • the first color camera and the second color camera are the same color camera. In other embodiments, the first color camera and the second color camera may also be different color cameras. In this case, in order to obtain a color composite image, the first and second color images can be calibrated or corrected.
  • the depth-of-field camera 121c may be a time-of-flight (TOF) camera or a structured light camera. The depth camera 121c may be arranged between the first color camera and the second color camera.
  • the 3D image processor 126 is configured to synthesize the first and second color images into a composite color image, and to form a 3D image from the synthesized composite color image and the depth image.
  • the formed 3D image is transmitted to the processor 101 of the 3D display device 100 through the 3D image output interface 125.
  • the first and second color images and the depth image are directly transmitted to the processor 101 of the 3D display device 100 via the 3D image output interface 125, and the processor 101 performs processing such as synthesizing the color image and forming the 3D image.
  • the 3D image output interface 125 may also be communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can perform processing such as synthesizing color images and forming 3D images.
  • At least one of the first color camera and the second color camera is a wide-angle color camera.
  • the eye positioning device 150 is integrated in the 3D photographing device 120 and includes an eye locator 151, an eye positioning image processor 152 and an eye positioning data interface 153.
  • the eye locator 151 includes a first black and white camera 151a and a second black and white camera 151b.
  • the first black and white camera 151a is configured to capture a first black and white image
  • the second black and white camera 151b is configured to capture a second black and white image.
  • the eye positioning device 150 is also front-facing, and the subject photographed by the first black-and-white camera and the second black-and-white camera is the user's face.
  • the eye positioning data interface 153 of the eye positioning device 150 is communicatively connected to the 3D processing device 130 of the 3D display device 100, so that the 3D processing device 130 can directly receive the eye positioning data.
  • the eye positioning image processor 152 of the eye positioning device 150 may be communicatively connected to the processor 101 of the 3D display device 100, so that the eye positioning data can be transmitted from the processor 101 to the 3D processing device 130 through the eye positioning data interface 153.
  • the eye positioning device 150 is communicatively connected with the camera assembly 221, so that the eye positioning data can be used when shooting 3D images.
  • the eye locator 151 is also provided with an infrared emitting device 154.
  • the infrared emitting device 154 is configured to selectively emit infrared light to supplement the illumination when the ambient light is insufficient, for example when shooting at night, so that a first or second black-and-white image in which the user's face and eyes can be recognized can still be captured even under weak ambient light.
  • the eye positioning device 150, or the processing terminal or display device in which it is integrated, may be configured to, when the first or second black-and-white camera is working, turn on the infrared emitting device or adjust its intensity based on the received light-sensing signal, for example when the detected light-sensing signal is lower than a given threshold.
  • the light sensing signal is received from an ambient light sensor integrated in the processing terminal or display device, such as the ambient light sensor 2702.
  • the infrared emitting device 154 is configured to emit infrared light with a wavelength greater than or equal to 1.5 microns, that is, long-wave infrared light. Compared with short-wave infrared light, long-wave infrared light has a weaker ability to penetrate the skin, so it is less harmful to the eyes.
  • the captured first black and white image and second black and white image are transmitted to the eye positioning image processor 152.
  • the eye positioning image processor is configured to have a visual recognition function, such as a face recognition function, and is configured to recognize the face and the eyes based on at least one of the two black-and-white images, and to determine the spatial position of the eyes based on the positions of the eyes in the two black-and-white images.
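  • As an illustrative sketch only (the patent does not prescribe a particular face or eye recognition algorithm), eye presence in a black-and-white image could, for example, be checked with a stock OpenCV Haar cascade:

```python
import cv2

# Generic pre-trained eye detector shipped with OpenCV; used here only as a stand-in
# for the visual recognition function described above.
_eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def find_eyes(bw_image):
    """Return detected eye bounding boxes (x, y, w, h) in one black-and-white image."""
    return list(_eye_cascade.detectMultiScale(bw_image, scaleFactor=1.1, minNeighbors=5))

def eyes_present(first_bw_image, second_bw_image) -> bool:
    """Recognize the presence of eyes based on at least one of the two black-and-white images."""
    return bool(find_eyes(first_bw_image)) or bool(find_eyes(second_bw_image))
```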
  • the first black and white camera and the second black and white camera are the same black and white camera.
  • the first black and white camera and the second black and white camera may also be different black and white cameras. In this case, in order to determine the spatial position of the eye, the first black-and-white image and the second black-and-white image can be calibrated or corrected.
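  • A minimal sketch of such a correction, assuming OpenCV and stereo calibration parameters (camera matrices K_a and K_b, distortion coefficients, and the relative pose R, T) obtained offline; these inputs are assumptions for illustration, not values from the patent:

```python
import cv2

def rectify_pair(img_a, img_b, K_a, dist_a, K_b, dist_b, R, T):
    """Rectify two black-and-white images from non-identical cameras onto a common image plane,
    so that the similar-triangles depth relation can be applied to the corrected images."""
    size = (img_a.shape[1], img_a.shape[0])  # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_a, dist_a, K_b, dist_b, size, R, T)
    map_ax, map_ay = cv2.initUndistortRectifyMap(K_a, dist_a, R1, P1, size, cv2.CV_32FC1)
    map_bx, map_by = cv2.initUndistortRectifyMap(K_b, dist_b, R2, P2, size, cv2.CV_32FC1)
    rect_a = cv2.remap(img_a, map_ax, map_ay, cv2.INTER_LINEAR)
    rect_b = cv2.remap(img_b, map_bx, map_by, cv2.INTER_LINEAR)
    return rect_a, rect_b
```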
  • At least one of the first black and white camera and the second black and white camera is a wide-angle black and white camera.
  • Fig. 4 schematically shows a top view of a geometric relationship model for determining the spatial position of the eye using two black and white cameras.
  • the first black and white camera and the second black and white camera are the same black and white camera, and therefore have the same focal length f;
  • the focal plane 401a of the first black and white camera 151a and the focal plane 401b of the second black and white camera 151b are in the same plane and perpendicular to the optical axes of the two black and white cameras.
  • the line connecting the lens centers Oa and Ob of the two black and white cameras is parallel to the focal planes of the two black and white cameras.
  • the direction of the line connecting the lens centers Oa and Ob of the two black-and-white cameras is taken as the X-axis direction, and the optical axis direction of the two black-and-white cameras is taken as the Z-axis direction, so as to show a top view of the geometric relationship model in the XZ plane.
  • the lens center Oa of the first black and white camera 151a is taken as the origin
  • the lens center Ob of the second black and white camera 151b is taken as the origin
  • R and L represent the user's right eye and left eye, respectively.
  • XRa and XRb are the X-axis coordinates of the user's right eye R in the focal planes 401a and 401b of the two black and white cameras.
  • XLa and XLb are the X-axis coordinates of the user's left eye L in the focal planes 401a and 401b of the two black-and-white cameras, respectively.
  • the distance T between the two black-and-white cameras and their focal length f are also known; according to the geometric relationship of similar triangles, it can be concluded that the distances DR and DL between the right eye R and the left eye L, respectively, and the plane in which the two black-and-white cameras described above are located are:
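  • The equations themselves are not reproduced in this text version of the publication. Under the geometry just described (camera spacing T, common focal length f, and image-plane X coordinates XRa, XRb, XLa, XLb), the similar-triangles relation gives the standard stereo-triangulation form, reconstructed here for readability rather than quoted from the original:

    DR = T · f / |XRa − XRb|
    DL = T · f / |XLa − XLb|

  • where XRa − XRb and XLa − XLb are the disparities of the right eye and the left eye between the two focal planes.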
  • when the line connecting the user's eyes, that is, the user's face, is inclined relative to the plane in which the two black-and-white cameras described above are located, the inclination angle between them is α; when the user's face is parallel to that plane, the inclination angle α is zero.
  • the 3D display device 100 may be a computer or a smart terminal, such as a mobile terminal. However, it is conceivable that, in some embodiments, the 3D display device 100 may also be a non-smart display terminal, such as a non-smart 3D TV.
  • FIGS. 5A, 5B, and 5C show schematic diagrams of a 3D display device 500 configured as a smart phone, a tablet computer, and a non-smart display, respectively, each having a multi-view 3D display screen 510 and a front-facing 3D photographing device in which an eye positioning device is integrated.
  • in the embodiments shown in FIGS. 5A to 5C, the 3D photographing device 120, which includes the two color cameras 121a, 121b and the depth camera 121c, together with the integrated eye positioning device 150, which includes the two black-and-white cameras 151a, 151b, is arranged in the same plane as the multi-view 3D display screen 510 of the 3D display device 500.
  • therefore, the distances DR and DL between the user's right eye R and left eye L and the plane in which the two black-and-white cameras are arranged, obtained exemplarily in the embodiment shown in FIG. 4, are the distances of the user's right eye R and left eye L from the multi-viewpoint 3D display screen, and the inclination angle α between the user's face and the plane in which the two black-and-white cameras are arranged is the inclination angle between the user's face and the multi-viewpoint 3D display screen.
  • referring to FIG. 6A, a schematic diagram is shown in which the user squarely faces the multi-view 3D display screen of the 3D display device 600, that is, the plane of the user's face and the plane of the display screen are parallel to each other, the distances between the user's two eyes and the display screen are the same, and the inclination angle α is zero.
  • referring to FIG. 6B, a schematic diagram of the user's face tilted relative to the multi-view 3D display screen of the 3D display device 600 is shown, that is, the plane of the user's face is not parallel to the plane of the display screen, the distances DR and DL between the user's eyes and the display screen are different, and the inclination angle α is not zero.
  • the eye positioning data interface 153 is configured to transmit the tilt angle or parallelism of the user's eyes relative to the eye positioning device 150 or the multi-view 3D display screen 110. This can facilitate more accurate rendering of 3D images, which will be described below.
  • the eye spatial position information DR, DL, ⁇ , and P obtained exemplarily above are transmitted to the 3D processing device 130 through the eye positioning data interface 153.
  • the 3D processing device 130 determines, based on the received eye spatial position information, the viewpoint provided by the multi-view 3D display screen 110 at which the user's eyes are located, that is, the predetermined viewpoint.
  • the eye spatial position information DR, DL, α, and P obtained exemplarily above can also be directly transmitted to the processor 101 of the 3D display device 100, and the 3D processing device 130 receives/reads the eye spatial position information from the processor 101 through the eye positioning data interface 153.
  • the first black-and-white camera 151a is configured to capture a first black-and-white image sequence, which includes a plurality of first black-and-white images arranged in time sequence;
  • the second black-and-white camera 151b is configured to capture a second black-and-white image sequence, which includes a plurality of second black-and-white images arranged in time sequence.
  • the eye positioning image processor 152 includes a synchronizer 155 configured to determine the first black and white image and the second black and white image that are time-synchronized in the first black and white image sequence and the second black and white image sequence.
  • the first black-and-white image and the second black-and-white image determined to be time synchronized are used for eye recognition and determination of the spatial position of the eye.
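  • By way of illustration only (the patent does not describe the synchronizer's internals; the timestamp representation and the tolerance value are assumptions), time-synchronized pairs could, for example, be formed by matching nearest capture timestamps:

```python
def pair_synchronized(first_seq, second_seq, tolerance_s=0.005):
    """Pair frames of the two black-and-white image sequences whose timestamps are closest.

    first_seq / second_seq: lists of (timestamp_seconds, image) tuples in capture order.
    Returns a list of (first_image, second_image) pairs treated as time-synchronized.
    """
    pairs = []
    for t_a, img_a in first_seq:
        if not second_seq:
            break
        # second-camera frame closest in time to this first-camera frame
        t_b, img_b = min(second_seq, key=lambda item: abs(item[0] - t_a))
        if abs(t_b - t_a) <= tolerance_s:
            pairs.append((img_a, img_b))
    return pairs
```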
  • the eye positioning image processor 152 includes a buffer 156 and a comparator 157.
  • the buffer 156 is configured to buffer a plurality of first black-and-white images and second black-and-white images that are arranged sequentially in time in the first black-and-white image sequence and the second black-and-white image sequence.
  • the comparator 157 is configured to compare a plurality of first and second black-and-white images taken at earlier and later times in the first and second black-and-white image sequences; by comparison it can be judged, for example, whether the spatial position of the eyes has changed or whether the eyes are still within the viewing range, and so on.
  • the eye positioning image processor 152 further includes a judge (not shown), configured so that, when the comparator does not recognize the presence of eyes in the current first and second black-and-white images of the first and second black-and-white image sequences but does recognize the presence of eyes in an earlier or later first and second black-and-white image, the eye spatial position determined based on that earlier or later first and second black-and-white image is used as the current eye spatial position.
  • such a situation may occur, for example, when the user briefly turns his head; in this case, the user's face and eyes may momentarily fail to be recognized.
  • in some embodiments, several first and second black-and-white images of the first and second black-and-white image sequences are stored in the buffer segment of the buffer 156.
  • in some cases, the face and eyes cannot be recognized from the currently buffered first and second black-and-white images, but the face and eyes can be recognized from the first and second black-and-white images buffered before or after them.
  • in this case, the eye spatial position information determined based on a first and second black-and-white image taken after the current first and second black-and-white images can be used as the current eye spatial position information; alternatively, the eye spatial position information determined based on a first and second black-and-white image taken before the current ones can be used as the current eye spatial position information.
  • it is also possible to process the eye spatial position information determined based on the earlier and later first and second black-and-white images in which the face and eyes can be recognized, for example by averaging, data fitting, interpolation, or other methods, and to use the result as the current eye spatial position information.
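  • A minimal sketch of this fallback behaviour (the buffer length, the detect() routine, and the simple "keep the latest known position" policy are assumptions; averaging or interpolation of neighbouring positions could be substituted, as described above):

```python
from collections import deque

class EyePositionTracker:
    """Keep a usable eye spatial position even when the current frame pair yields none."""

    def __init__(self, detect, buffer_len=8):
        self.detect = detect                      # returns an eye position, or None if not recognised
        self.history = deque(maxlen=buffer_len)   # buffered (frame_pair, position-or-None) entries
        self.current_position = None

    def update(self, frame_pair):
        position = self.detect(frame_pair)
        self.history.append((frame_pair, position))
        if position is not None:
            self.current_position = position
        else:
            # fall back to the most recent earlier pair in which eyes were recognised;
            # a later pair, an average, or an interpolated value could be used instead
            known = [p for _, p in self.history if p is not None]
            if known:
                self.current_position = known[-1]
        return self.current_position
```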
  • the first black and white camera and the second black and white camera are configured to capture the first black and white image sequence and the second black and white image sequence at a frequency of 24 frames per second or more.
  • the shooting is performed at a frequency of 30 frames per second.
  • shooting is performed at a frequency of 60 frames per second.
  • the first black-and-white camera and the second black-and-white camera are configured to shoot at the same frequency as the refresh frequency of the display screen of the 3D display device.
  • the 3D display device may be a 3D display device including a processor.
  • the 3D display device may be configured as a smart cell phone, tablet computer, smart TV, wearable device, in-vehicle device, notebook computer, ultra mobile personal computer (UMPC), netbook, personal digital assistant (PDA), etc.
  • FIG. 2 shows a schematic diagram of the hardware structure of a 3D display device 200 implemented as a mobile terminal, such as a tablet computer or a smart cellular phone.
  • the 3D display device 200 may include a processor 201, an external storage interface 202, an (internal) memory 203, a universal serial bus (USB) interface 204, a charging management module 205, a power management module 206, a battery 207, a mobile communication module 281, a wireless communication module 283, antennas 282 and 284, an audio module 212, a speaker 213, a receiver 214, a microphone 215, an earphone interface 216, a button 217, a motor 218, an indicator 219, a subscriber identity module (SIM) card interface 260, a multi-view 3D display screen 210, a signal interface 240, a 3D processing device 230, a 3D photographing device 220, a sensor module 270, etc.
  • the 3D photographing device 220 may include a camera assembly 221, a 3D image output interface 225, and an eye positioning device 250.
  • the sensor module 270 may include a proximity light sensor 2701, an ambient light sensor 2702, a pressure sensor 2703, an air pressure sensor 2704, a magnetic sensor 2705, a gravity sensor 2706, a gyroscope sensor 2707, an acceleration sensor 2708, a distance sensor 2709, a temperature sensor 2710, a fingerprint sensor 2711, a touch sensor 2712, a bone conduction sensor 2713, etc.
  • the structure illustrated in the embodiment of the present disclosure does not constitute a limitation on the 3D display device 200.
  • the 3D display device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 201 may include one or more processing units.
  • the processor 201 may include an application processor (AP), a modem processor, a baseband processor, a register 222, a graphics processing unit (GPU) 223, an image signal processor (ISP), a controller, a memory, a video codec 224, a digital signal processor (DSP), a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor 201 may also be provided with a cache, which is configured to store instructions or data that the processor 201 has just used or used cyclically. When the processor 201 wants to use the instruction or data again, it can be directly called from the memory.
  • the processor 201 may include one or more interfaces.
  • Interfaces can include an inter-integrated circuit (I2C) interface, an inter-IC sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 201 may include multiple sets of I2C buses.
  • the processor 201 can communicate with the touch sensor 2712, the charger, the flash, the 3D camera 220 or its camera assembly 221, the eye positioning device 250, etc., respectively, through different I2C bus interfaces.
  • Both I2S interface and PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is configured to connect the processor 201 and the wireless communication module 283.
  • the MIPI interface may be configured to connect the processor 201 and the multi-view 3D display screen 210.
  • the MIPI interface can also be configured to connect peripheral devices such as the camera assembly 221 and the eye positioning device 250.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be configured to connect the processor 201 with the 3D camera 220 or its camera assembly 221, the multi-view 3D display screen 110, the wireless communication module 283, the audio module 212, the sensor module 270, and so on.
  • the USB interface 204 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 204 can be configured to connect a charger to charge the 3D display device 200, and can also be used to transfer data between the 3D display device 200 and peripheral devices. It can also be configured to connect headphones and play audio through the headphones.
  • the wireless communication function of the 3D display device 200 can be implemented by the antennas 282 and 284, the mobile communication module 281, the wireless communication module 283, the modem processor or the baseband processor, etc.
  • the antennas 282, 284 are configured to transmit and receive electromagnetic wave signals.
  • Each antenna in the 3D display device 200 may be configured to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the mobile communication module 281 may provide a solution for wireless communication including 2G/3G/4G/5G and the like applied to the 3D display device 200.
  • the mobile communication module 281 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 281 can receive electromagnetic waves by the antenna 282, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 281 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 282.
  • at least part of the functional modules of the mobile communication module 281 may be provided in the processor 201.
  • at least part of the functional modules of the mobile communication module 282 and at least part of the modules of the processor 201 may be provided in the same device.
  • the wireless communication module 283 can provide applications on the 3D display device 200 including wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication technology (NFC), infrared technology (IR) and other wireless communication solutions.
  • the wireless communication module 283 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 283 receives electromagnetic waves via the antenna 284, modulates the frequency of the electromagnetic wave signals and filters them, and sends the processed signals to the processor 201.
  • the wireless communication module 283 may also receive the signal to be sent from the processor 201, perform frequency modulation, amplify it, and convert it to electromagnetic wave radiation via the antenna 284.
  • the antenna 282 of the 3D display device 200 is coupled with the mobile communication module 281, and the antenna 284 is coupled with the wireless communication module 283, so that the 3D display device 200 can communicate with the network and other devices through wireless communication technology.
  • Wireless communication technologies may include at least one of Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, or IR technologies.
  • GNSS may include at least one of Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), Beidou Satellite Navigation System (BDS), Quasi-Zenith Satellite System (QZSS) or Satellite-Based Augmentation System (SBAS).
  • the external interface configured to receive 3D video signals may include a USB interface 204, a mobile communication module 281, a wireless communication module 283, or a combination thereof.
  • other feasible interfaces configured to receive 3D video signals are also conceivable, such as the aforementioned interfaces.
  • the memory 203 may be configured to store computer executable program code, and the executable program code includes instructions.
  • the processor 201 executes various functional applications and data processing of the 3D display device 200 by running instructions stored in the memory 203.
  • the memory 203 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the 3D display device 200.
  • the memory 203 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the external memory interface 202 may be configured to connect to an external memory card, such as a Micro SD card, so as to expand the storage capacity of the 3D display device 200.
  • the external memory card communicates with the processor 201 through the external memory interface 202 to realize the data storage function.
  • the memory of the 3D display device may include (internal) memory 203, an external memory card connected to external memory interface 202, or a combination thereof.
  • the signal interface may also adopt different internal interface connection modes or combinations of the above-mentioned embodiments.
  • the camera assembly 221 can collect images or videos in 2D or 3D, and output the collected videos via the 3D image output interface 225.
  • the eye positioning device 250 can determine the spatial position of the user's eyes.
  • the camera assembly 221, the 3D image output interface 225 and the eye positioning device 250 jointly form a 3D photographing device 220.
  • the 3D display device 200 implements a display function through a signal interface 240, a 3D processing device 230, an eye positioning device 250, a multi-view 3D display screen 210, and an application processor.
  • the 3D display device 200 may include a GPU, for example, the processor 201 is configured to process 3D video images, and may also process 2D video images.
  • the 3D display device 200 further includes a video codec 224 configured to compress or decompress digital video.
  • the signal interface 240 is configured to output a 3D video signal processed by the GPU or the codec 224 or both, such as a video frame of a decompressed 3D video signal, to the 3D processing device 230.
  • the GPU or codec 224 is integrated with a format adjuster.
  • the multi-view 3D display screen 210 is configured to display 3D images or videos and the like.
  • the multi-view 3D display screen 210 includes a display panel.
  • the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum-dot light-emitting diode (QLED), etc.
  • the eye positioning device 250 is communicatively connected to the 3D processing device 230, so that the 3D processing device 230 can render the corresponding sub-pixels in the composite pixel (composite sub-pixel) based on the eye positioning data.
  • the eye positioning device 250 may also be connected to the processor 201, for example by a bypass connection.
  • the 3D image output interface 225 of the 3D photographing device 220 may be communicatively connected to the processor 201 or the 3D processing device 230.
  • the 3D display device 200 can implement audio functions through an audio module 212, a speaker 213, a receiver 214, a microphone 215, a headphone interface 216, an application processor, and the like. For example, music playback, recording, etc.
  • the audio module 212 is configured to convert digital audio information into an analog audio signal for output, and is also configured to convert an analog audio input into a digital audio signal.
  • the audio module 212 may also be configured to encode and decode audio signals.
  • the audio module 212 may be provided in the processor 201, or part of the functional modules of the audio module 212 may be provided in the processor 201.
  • the speaker 213 is configured to convert audio electrical signals into sound signals.
  • the 3D display device 200 can listen to music through the speaker 213, or listen to a hands-free call.
  • the receiver 214 also called “earpiece” is configured to convert audio electrical signals into sound signals. When the 3D display device 200 answers a call or voice message, the voice can be picked up by bringing the receiver 214 close to the ear.
  • the microphone 215 is configured to convert a sound signal into an electric signal.
  • the earphone interface 216 is configured to connect a wired earphone.
  • the earphone interface 216 may be the USB interface 204, or may be a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface or a Cellular Telecommunications Industry Association (CTIA) standard interface.
  • the button 217 includes a power button, a volume button, and so on.
  • the button 217 may be a mechanical button or a touch button.
  • the 3D display device 200 may receive key input, and generate key signal input related to user settings and function control of the 3D display device 200.
  • the motor 218 can generate vibration prompts.
  • the motor 218 may be configured as an incoming call vibration notification, or may be configured as a touch vibration feedback.
  • the SIM card interface 260 is configured to connect to a SIM card.
  • the 3D display device 200 adopts an eSIM, that is, an embedded SIM card.
  • the ambient light sensor 2702 is configured to sense ambient light brightness.
  • the 3D display device 200 can adjust the brightness of the multi-viewpoint 3D display screen 210 according to the perceived ambient light brightness, or use it to assist eye positioning; for example, when the ambient light is dim, the eye positioning device 250 activates the infrared emitting device.
  • the ambient light sensor 2702 can also be configured to adjust the white balance when shooting with a black and white camera.
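The ambient-light-assisted fill-light behavior described above can be summarized in a few lines. The sketch below is illustrative only: the lux threshold, the `ir_emitter` object, and the function name are assumptions, not values or APIs from this disclosure.

```python
AMBIENT_LUX_THRESHOLD = 10.0  # assumed threshold; the disclosure does not specify a value


def update_ir_emitter(ambient_lux, bw_camera_active, ir_emitter):
    """Turn the infrared emitter on only while a black-and-white camera is working
    and the sensed ambient brightness is below the threshold; otherwise keep it off."""
    if bw_camera_active and ambient_lux < AMBIENT_LUX_THRESHOLD:
        ir_emitter.on()   # hypothetical driver call
    else:
        ir_emitter.off()  # hypothetical driver call
```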
  • the pressure sensor 2703 is configured to sense a pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 2703 may be provided on the multi-view 3D display screen 210, which falls within the scope of the embodiments of the present disclosure.
  • the air pressure sensor 2704 is configured to measure air pressure. In some embodiments, the 3D display device 200 calculates the altitude based on the air pressure value measured by the air pressure sensor 2704 to assist positioning and navigation.
  • the magnetic sensor 2705 includes a Hall sensor.
  • the gravity sensor 2706 is a sensor that converts motion or gravity into electrical signals, and is mainly configured to measure parameters such as tilt angle, inertial force, impact, and vibration.
  • the gyro sensor 2707 may be configured to determine the movement posture of the 3D display device 200.
  • the acceleration sensor 2708 can detect the magnitude of the acceleration of the 3D display device 200 in various directions (generally three axes).
  • the distance sensor 2709 may be configured to measure distance.
  • the temperature sensor 2710 may be configured to detect temperature.
  • the fingerprint sensor 2711 is configured to collect fingerprints.
  • the 3D display device 200 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the touch sensor 2712 may be disposed in the multi-viewpoint 3D display screen 210, and the touch sensor 2712 and the multi-viewpoint 3D display screen 210 together form a touch screen, also called a "touch control screen".
  • the bone conduction sensor 2713 can acquire vibration signals.
  • the charging management module 205 is configured to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 205 may receive the charging input of the wired charger through the USB interface 204.
  • the charging management module 205 may receive the wireless charging input through the wireless charging coil of the 3D display device 200.
  • the power management module 206 is configured to connect the battery 207, the charging management module 205 and the processor 201.
  • the power management module 206 receives input from at least one of the battery 207 or the charge management module 205, and supplies power to the processor 201, the memory 203, the external memory, the multi-view 3D display 210, the camera assembly 221, and the wireless communication module 283.
  • the power management module 206 and the charging management module 205 may also be provided in the same device.
  • the software system of the 3D display device 200 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment shown in the present disclosure exemplifies the software structure of the 3D display device 200 by taking an Android system with a layered architecture as an example.
  • the embodiments of the present disclosure can be implemented in different software systems, such as operating systems.
  • FIG. 3 is a schematic diagram of the software structure of the 3D display device 200 shown in FIG. 2.
  • the layered architecture divides the software into several layers, and the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer 310, the framework layer 320, the core class library and runtime (Runtime) 330, and the kernel layer 340, respectively.
  • the application layer 310 may include a series of application packages. As shown in Figure 3, the application package can include applications such as Bluetooth, WLAN, navigation, music, camera, calendar, call, video, gallery, map, short message, etc.
  • the 3D video display method according to the embodiment of the present disclosure may be implemented in a video application program, for example.
  • the framework layer 320 provides an application programming interface (API) and a programming framework for applications in the application layer.
  • the framework layer includes some predefined functions. For example, in some embodiments of the present disclosure, the function or algorithm for recognizing the collected 3D video image and the algorithm for processing the image may be included in the framework layer.
  • the framework layer 320 may include a resource manager, a phone manager, a content manager, a notification manager, a window manager, a view system, an installation package manager, and the like.
  • Android Runtime includes core libraries and virtual machines. Android Runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functional functions to be called by the Java language, and the other part is the core library of Android.
  • the application layer and the framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the framework layer as binary files.
  • the virtual machine is configured to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the core class library can include multiple functional modules, for example: a 3D graphics processing library (for example, OpenGL ES), a surface manager, an image processing library, a media library, and a graphics engine (for example, SGL).
  • the kernel layer 340 is a layer between hardware and software.
  • the kernel layer includes at least a camera driver, audio and video interfaces, a call interface, a Wi-Fi interface, sensor drivers, power management, and a GPS interface.
  • a 3D display device as a mobile terminal with the structure shown in FIG. 2 and FIG. 3 is taken as an example to describe embodiments of 3D video transmission and display in the 3D display device; however, it is conceivable that other embodiments may include more or fewer features, or the features may be changed.
  • the 3D display device 200, such as a mobile terminal like a tablet computer or a smart cellular phone, receives, for example, a compressed 3D video signal from a network such as a cellular network, a WLAN network, or Bluetooth via the mobile communication module 281 and the antenna 282, or via the wireless communication module 283 and the antenna 284, which serve as external interfaces.
  • the compressed 3D video signal is image-processed by the GPU 223 and encoded, decoded, and decompressed by the codec 224, and the decompressed 3D video signal is then sent to the 3D processing device 230, for example through the signal interface 240 serving as an internal interface, such as a MIPI interface or a mini-MIPI interface.
  • the user's eye spatial position information is obtained through the eye positioning device 250.
  • a predetermined viewpoint is determined based on the spatial position information of the eyes.
  • the 3D processing device 230 renders the sub-pixels of the display screen corresponding to the predetermined viewpoint, thereby realizing 3D video playback.
  • the 3D display device 200 reads a compressed 3D image signal stored in the (internal) memory 203, or reads one stored in an external memory card through the external memory interface 202, and realizes 3D image playback through corresponding processing, transmission, and rendering.
  • the 3D display device 200 receives the 3D image captured by the camera assembly 221 and transmitted via the 3D image output interface 225, and performs 3D image playback through corresponding processing, transmission, and rendering.
  • the playback of the aforementioned 3D image is implemented in a video application in the Android system application layer 310.
  • the embodiments of the present disclosure may also provide an eye positioning method, which is implemented by using the eye positioning device in the above-mentioned embodiment.
  • the eye positioning method includes:
  • S701: taking a first black-and-white image and a second black-and-white image;
  • S702: recognizing the presence of eyes based on at least one of the first black-and-white image and the second black-and-white image;
  • S703: determining the spatial position of the eyes based on the eyes recognized in the first black-and-white image and the second black-and-white image.
  • a first black-and-white image is taken at a first position
  • a second black-and-white image is taken at a second position
  • the first position is different from the second position
  • the eye positioning method further includes: transmitting eye space position information indicating the eye space position.
  • the eye positioning method further includes: using an infrared emitting device to emit infrared light when the first black-and-white camera or the second black-and-white camera is working.
  • the eye positioning method further includes: separately photographing a first black and white image sequence including the first black and white image and a second black and white image sequence including the second black and white image.
  • the eye positioning method further includes: determining the first black-and-white image and the second black-and-white image that are synchronized in time.
  • the eye positioning method further includes: buffering a plurality of first black-and-white images and second black-and-white images in the first black-and-white image sequence and the second black-and-white image sequence; comparing earlier and later first black-and-white images and second black-and-white images in the two sequences; and, when the presence of eyes is not recognized in the current first black-and-white image and second black-and-white image but is recognized in an earlier or later first black-and-white image and second black-and-white image, taking the eye spatial position determined from that earlier or later pair as the current eye spatial position (a minimal sketch of this flow is given after this list).
  • the eye positioning method includes: shooting the first black-and-white image sequence and the second black-and-white image sequence at a frequency of 24 frames/sec or more.
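To make the flow of S701-S703 and the buffering fallback concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not the patented implementation: it assumes two identical, parallel black-and-white cameras with a known baseline T and focal length f (both values made up here), treats image x-coordinates as measured from each camera's principal point, uses an off-the-shelf OpenCV Haar cascade merely as a stand-in eye detector, and pairs detections left-to-right in the image, which a real system would replace with proper face-landmark matching.

```python
import math

import cv2  # assumed available; any eye/face detector could be substituted

T_MM = 60.0    # assumed camera baseline T, in millimetres
F_PX = 1200.0  # assumed focal length f, in pixels

_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")  # stand-in eye detector


def eye_x_coords(gray):
    """Return the x coordinates (pixels) of two detected eyes, or None (S702)."""
    boxes = _detector.detectMultiScale(gray)
    if len(boxes) < 2:
        return None
    boxes = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)[:2]
    return sorted(x + w / 2.0 for x, y, w, h in boxes)  # ordered left-to-right


def locate_eyes(img_a, img_b):
    """S701-S703: detect eyes in both black-and-white images and triangulate."""
    ca, cb = eye_x_coords(img_a), eye_x_coords(img_b)
    if ca is None or cb is None:
        return None  # eyes not recognized in at least one of the two images
    (x1a, x2a), (x1b, x2b) = ca, cb
    # Depth of each eye from its disparity between the two cameras.
    d1 = T_MM * F_PX / (x1a - x1b)
    d2 = T_MM * F_PX / (x2a - x2b)
    # Lateral positions in the camera plane, then tilt angle and pupil distance.
    lat1, lat2 = x1a * d1 / F_PX, x2a * d2 / F_PX
    alpha = math.atan2(d2 - d1, lat2 - lat1)
    p = math.hypot(lat2 - lat1, d2 - d1)
    return {"D1": d1, "D2": d2, "alpha": alpha, "P": p}


_last_known = None  # buffered fallback used by the decider


def current_eye_position(img_a, img_b):
    """If the current image pair yields no eyes, fall back to the last known position."""
    global _last_known
    pos = locate_eyes(img_a, img_b)
    if pos is not None:
        _last_known = pos
    return _last_known
```

In practice the two capture calls would be driven at 24 frames per second or more, so `current_eye_position` is simply called once per synchronized image pair.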
  • the embodiments of the present disclosure may also provide a 3D display method.
  • the 3D display method includes:
  • S801: obtaining the spatial position of the user's eyes;
  • S802: determining the corresponding viewpoint according to the spatial position of the eyes;
  • S803: rendering, based on the 3D signal, the sub-pixels of the multi-viewpoint 3D display screen corresponding to the viewpoint.
  • the 3D display method further includes: providing a multi-viewpoint 3D display screen that includes a plurality of composite pixels, wherein each of the plurality of composite pixels includes a plurality of composite sub-pixels, and each of the plurality of composite sub-pixels is composed of multiple sub-pixels corresponding to multiple viewpoints.
  • when it is determined, based on the spatial position of the eyes, that each of the user's eyes corresponds to one viewpoint, images of the two viewpoints occupied by the user's eyes are generated from a video frame of the 3D video signal, and the sub-pixels corresponding to these two viewpoints in the composite sub-pixels are rendered.
  • for example, when the user's right eye is at the second viewpoint V2 and the left eye is at the fifth viewpoint V5, the images of the two viewpoints V2 and V5 are generated, and the sub-pixels corresponding to these two viewpoints in the composite sub-pixels are rendered (a minimal rendering sketch follows this list).
  • when the inclination angle or parallelism of the user's eyes relative to the multi-viewpoint 3D display screen is determined based on the spatial position of the eyes, a targeted or customized display image can be provided to the user, improving the viewing experience.
  • the above-mentioned spatial position of the eye may be acquired or determined in real time, or may be acquired or determined periodically or randomly.
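As a companion to the display method above, the following is a minimal rendering sketch. It is a simplified illustration, not the disclosed implementation: the mapping from an eye's lateral position to a viewpoint index, the viewing-zone width, and the flat array layout of the composite sub-pixels are assumptions chosen only to show how two of the viewpoints are selected and rendered.

```python
import numpy as np

NUM_VIEWPOINTS = 6        # i = 6 viewpoints V1..V6, as in the example above
ZONE_WIDTH_MM = 32.0      # assumed width of one viewing zone at the nominal distance


def viewpoint_for(eye_lateral_mm):
    """Map an eye's lateral position (mm, 0 = screen centre) to a viewpoint index."""
    idx = int(round(eye_lateral_mm / ZONE_WIDTH_MM)) + NUM_VIEWPOINTS // 2
    return max(0, min(NUM_VIEWPOINTS - 1, idx))


def render_frame(view_images, right_eye_mm, left_eye_mm, screen):
    """S801-S803: pick the two viewpoints occupied by the eyes and light only the
    matching sub-pixel of every composite sub-pixel.

    view_images: (NUM_VIEWPOINTS, H, W, 3) per-viewpoint images from the 3D signal
    screen:      (H, W, 3, NUM_VIEWPOINTS) physical sub-pixels of the display
    """
    v_right = viewpoint_for(right_eye_mm)
    v_left = viewpoint_for(left_eye_mm)
    screen[...] = 0
    for v in {v_right, v_left}:              # e.g. V2 and V5 in the example above
        screen[:, :, :, v] = view_images[v]  # render only the sub-pixels of that viewpoint
    return screen


# Example usage with made-up sizes and positions:
views = np.zeros((NUM_VIEWPOINTS, 1080, 1920, 3), dtype=np.uint8)   # from the 3D signal
panel = np.zeros((1080, 1920, 3, NUM_VIEWPOINTS), dtype=np.uint8)   # physical sub-pixels
render_frame(views, -64.0, 32.0, panel)  # lands on V2 and V5 with the assumed zone width
```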
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned eye positioning method and 3D display method.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the above-mentioned computer program includes program instructions; when the program instructions are executed by a computer, the computer is caused to execute the above-mentioned eye positioning method and 3D display method.
  • the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present disclosure.
  • the aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, or other media that can store program code, or it may be a transitory storage medium.
  • a typical implementation entity is a computer or its processor or other components.
  • the computer can be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, a smart TV, an IoT system, a smart home, an industrial computer, a single-chip microcomputer system, or a combination of these devices.
  • the computer may include one or more processors (CPU), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory in computer readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM).
  • the methods, programs, systems, devices, etc. in the embodiments of the present application can be executed or implemented in a single or multiple networked computers, and can also be practiced in a distributed computing environment.
  • in such distributed computing environments, tasks are performed by remote processing devices connected through a communication network.
  • the components of the device are described in the form of functional modules/units. It is conceivable that multiple functional modules/units may be implemented in one or more "combined" functional modules/units and/or in one or more pieces of software and/or hardware, and that a single functional module/unit may be implemented by a combination of multiple sub-functional modules or sub-units and/or by multiple pieces of software and/or hardware. The division into functional modules/units may be merely a logical functional division; in an implementation, multiple modules/units may be combined or integrated into another system.
  • the connections of the modules, units, devices, and systems described herein, and of their components, include direct or indirect connections, covering feasible electrical, mechanical, and communication connections, in particular wired or wireless connections between various interfaces, including but not limited to HDMI, radar, USB, WiFi, and cellular networks.
  • the technical features, flowcharts and/or block diagrams of the methods and programs can be applied to corresponding devices, equipment, systems and their modules, units, and components.
  • the various embodiments and features of the device, equipment, system and its modules, units, and components can be applied to the methods and programs according to the embodiments of the present application.
  • computer program instructions can be loaded into the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine that implements the functions or features corresponding to one or more flows in a flowchart and/or one or more blocks in a block diagram.
  • the methods and programs according to the embodiments of the present application may be stored, in the form of computer program instructions or a program, in a computer-readable memory or medium that can direct a computer or other programmable data processing equipment to work in a specific manner.
  • the embodiments of the present application also relate to a readable memory or medium storing the methods, programs, and instructions that can implement the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Holo Graphy (AREA)

Abstract

本申请涉及3D显示技术,公开一种眼部定位装置,包括:眼部定位器,包括被配置为拍摄第一黑白图像的第一黑白摄像头和被配置为拍摄第二黑白图像的第二黑白摄像头;眼部定位图像处理器,被配置为基于第一黑白图像和第二黑白图像中至少一幅识别眼部的存在且基于在第一黑白图像和第二黑白图像中识别到的眼部确定眼部空间位置。上述装置能高精度地确定用户眼部的空间位置,进而可以提高3D显示质量。本申请还公开一种眼部定位方法以及3D显示设备、方法、计算机可读存储介质、计算机程序产品。

Description

眼部定位装置、方法及3D显示设备、方法
本申请要求在2019年12月05日提交中国知识产权局、申请号为201911231206.4、发明名称为“人眼追踪装置、方法及3D显示设备、方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及3D显示技术,例如涉及眼部定位装置和方法以及3D显示设备和方法。
背景技术
在一些常规的脸部或眼部定位装置中,仅检测脸部与屏幕的距离,并依靠预设的或默认的瞳距来确定眼部所在的视点位置。这样识别的精度不高,可能会造成视点计算错误,无法满足高质量的3D显示。
本背景技术仅为了便于了解本领域的相关技术,并不视作对现有技术的承认。
发明内容
为了对披露的实施例的一些方面有基本的理解,下面给出了实施例的概括,其不是要确定关键/重要组成元素或描绘发明的保护范围,而是作为后面的详细说明的序言。
本申请的实施例意图提供眼部定位装置和方法以及3D显示设备和方法、计算机可读存储介质、计算机程序产品。
在一个方案中,提供了一种眼部定位装置,包括:眼部定位器,包括被配置为拍摄第一黑白图像的第一黑白摄像头和被配置为拍摄第二黑白图像的第二黑白摄像头;眼部定位图像处理器,被配置为基于第一黑白图像和第二黑白图像中至少一幅识别眼部的存在且基于在第一黑白图像和第二黑白图像中识别到的眼部确定眼部空间位置。
通过这种眼部定位装置,能够高精度地确定用户眼部的空间位置,以提高3D显示质量。
在一些实施例中,眼部定位装置还包括眼部定位数据接口,被配置为传输表明眼部空间位置的眼部空间位置信息。
在一些实施例中,眼部定位器还包括红外发射装置。
在一些实施例中,红外发射装置被配置为发射波长大于或等于1.5微米的红外光。
在一些实施例中,第一黑白摄像头和第二黑白摄像头被配置为分别拍摄包括第一黑白 图像的第一黑白图像序列和包括第二黑白图像的第二黑白图像序列。
在一些实施例中,眼部定位图像处理器包括同步器,被配置为确定时间同步的第一黑白图像和第二黑白图像,以便进行眼部的识别以及眼部空间位置的确定。
在一些实施例中,眼部定位图像处理器包括:缓存器,被配置为缓存第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;比较器,被配置为比较第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像;判决器,被配置为,当比较器通过比较在第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,将基于之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置作为当前的眼部空间位置。
基于此,例如在第一或第二黑白摄像头出现卡顿或跳帧等情况时,能够为用户提供更为连贯的显示画面,确保观看体验。
在另一方案中,提供了一种3D显示设备,包括:多视点3D显示屏,包括对应多个视点的多个子像素;如上文描述的眼部定位装置,以获得眼部空间位置;3D处理装置,被配置为根据眼部定位装置获得的眼部空间位置确定所对应的视点,并基于3D信号渲染多视点3D显示屏的与视点对应的子像素。
在一些实施例中,多视点3D显示屏包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像素由对应于多个视点的多个子像素构成。
在一些实施例中,3D处理装置与眼部定位装置通信连接。
在一些实施例中,还包括:3D拍摄装置,被配置为采集3D图像;3D拍摄装置包括景深摄像头和至少两个彩色摄像头。
在一些实施例中,眼部定位装置与3D拍摄装置集成设置。
在一些实施例中,3D拍摄装置前置于3D显示设备。
在另一方案中,提供了一种眼部定位方法,包括:拍摄第一黑白图像和第二黑白图像;基于第一黑白图像和第二黑白图像中至少一幅识别眼部的存在;基于在第一黑白图像和第二黑白图像中识别到的眼部确定眼部空间位置。
在一些实施例中,眼部定位方法还包括:传输表明眼部空间位置的眼部空间位置信息。
在一些实施例中,眼部定位方法还包括:在第一黑白摄像头或第二黑白摄像头工作时,利用红外发射装置发射红外光。
在一些实施例中,眼部定位方法还包括:分别拍摄出包括第一黑白图像的第一黑白图 像序列和包括第二黑白图像的第二黑白图像序列。
在一些实施例中,眼部定位方法还包括:确定时间同步的第一黑白图像和第二黑白图像。
在一些实施例中,眼部定位方法还包括:缓存第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;比较第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像;当通过比较在第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,将基于之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置作为当前的眼部空间位置。
在另一方案中,提供了一种3D显示方法,包括:获得用户的眼部空间位置;根据眼部空间位置确定所对应的视点;基于3D信号渲染多视点3D显示屏的与视点对应的子像素。
在一些实施例中,3D显示方法还包括:提供多视点3D显示屏,包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像素由对应于多个视点的多个子像素构成。
本公开实施例提供的计算机可读存储介质,存储有计算机可执行指令,上述计算机可执行指令设置为执行上述的眼部定位方法、3D显示方法。
本公开实施例提供的计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,上述计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的眼部定位方法、3D显示方法。
以上的总体描述和下文中的描述仅是示例性和解释性的,不用于限制本申请。
附图说明
一个或多个实施例通过与之对应的附图进行示例性说明,这些示例性说明和附图并不构成对实施例的限定,附图中具有相同参考数字标号的元件示为类似的元件,附图不构成比例限制,并且其中:
图1A和图1B是根据本公开实施例的3D显示设备的结构示意图;
图1C是根据本公开实施例的眼部定位装置的结构示意图;
图2是根据本公开实施例的3D显示设备的硬件结构示意图;
图3是图2所示的3D显示设备的软件结构示意图;
图4是利用根据本公开实施例的眼部定位装置确定眼部空间位置的示意图;
图5A至图5C是根据本公开实施例的3D显示设备的正面示意图;
图6A和图6B是根据本公开实施例的用户脸部与3D显示设备的位置关系示意图;
图7是根据本公开实施例的眼部定位方法的步骤示意图;
图8是根据本公开实施例的3D显示方法的步骤示意图;
图9是用根据本公开实施例的3D显示方法实现3D显示设备的多视点3D显示屏的显示的示意图,其中用户的双眼各对应一个视点。
附图标记:
100:3D显示设备;101:处理器;122:寄存器;110:多视点3D显示屏;120:3D拍摄装置;121:摄像头组件;121a:第一彩色摄像头;121b:第二彩色摄像头;121c:景深摄像头;125:3D图像输出接口;126:3D图像处理器;130:3D处理装置;131:缓存器;140:信号接口;150:眼部定位装置;151:眼部定位器;151a:第一黑白摄像头;151b:第二黑白摄像头;154:红外发射装置;152:眼部定位图像处理器;155:同步器;156:缓存器;157:比较器;153:眼部定位数据接口;CP:复合像素;CSP:复合子像素;200:3D显示设备;201:处理器;202:外部存储器接口;203:存储器;204:USB接口;205:充电管理模块;206:电源管理模块;207:电池;210:多视点3D显示屏;212:音频模块;213:扬声器;214:受话器;215:麦克风;216:耳机接口;217:按键;218:马达;219:指示器;220:3D拍摄装置;221:摄像头组件;222:寄存器;223:GPU;224:编解码器;225:3D图像输出接口;226:3D图像处理器;230:3D处理装置;240:信号接口;250:眼部定位装置;260:SIM卡接口;270:传感器模块;2701:接近光传感器;2702:环境光传感器;2703:压力传感器;2704:气压传感器;2705:磁传感器;2706:重力传感器;2707:陀螺仪传感器;2708:加速度传感器;2709:距离传感器;2710:温度传感器;2711:指纹传感器;2712:触摸传感器;2713:骨传导传感器;281:移动通信模块;282:天线;283:无线通信模块;284:天线;310:应用程序层;320:框架层;330:核心类库和运行时(Runtime);340:内核层;T:两个黑白摄像头的间距;401a:第一黑白摄像头151a的焦平面;401b:第二黑白摄像头151b的焦平面;f:焦距;Oa:第一黑白摄像头151a的镜头中心;Ob:第二黑白摄像头151b的镜头中心;Za:第一黑白摄像头151a的光轴;Zb:第二黑白摄像头151b的光轴;R:用户的右眼;L:用户的左眼;P:用户的瞳距;α:用户脸部与多视点3D显示屏的倾斜角度;XRa:用户右眼R在第一黑白摄像头151a的焦平面401a内成像的X轴坐标;XRb:用户右眼R在第二黑白摄像头151b的焦平面401b内成像的X轴坐标;XLa:用户左眼L在第一黑白摄像头151a的焦平面401a内成像的X轴坐标;XLb:用户左眼L在第二黑白摄像头151b的焦平面401b内成像的X轴坐标;DR:用户的右眼R与多视点3D显 示屏的间距;DL:用户的左眼L与多视点3D显示屏的间距;500:3D显示设备;600:3D显示设备。
具体实施方式
为了能够更加详尽地了解本公开实施例的特点与技术内容,下面结合附图对本公开实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本公开实施例。
在一个方案中,提供了一种眼部定位装置,包括:眼部定位器,包括被配置为拍摄第一黑白图像的第一黑白摄像头和被配置为拍摄第二黑白图像的第二黑白摄像头;眼部定位图像处理器,配置为基于第一黑白图像和第二黑白图像中至少一幅识别眼部的存在且基于第一黑白图像和第二黑白图像中存在的眼部的所在位置确定眼部空间位置;眼部定位数据接口,配置为传输眼部空间位置的眼部空间位置信息。
通过这种眼部定位装置,能够高精度地确定用户眼部的空间位置。
在一些实施例中,眼部定位器还包括红外发射装置。
在一些实施例中,红外发射装置配置为发射波长大于或等于1.5微米的红外光。
在一些实施例中,第一黑白摄像头和第二黑白摄像头配置为分别拍摄第一黑白图像序列和第二黑白图像序列。
在一些实施例中,眼部定位图像处理器包括同步器,配置为确定时间同步的第一黑白图像和第二黑白图像。
在一些实施例中,眼部定位图像处理器包括:缓存器,配置为缓存第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;比较器,配置为比较第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像。
在一些实施例中,眼部定位图像处理器配置为,在第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,基于之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置信息作为当前的眼部空间位置信息。
基于此,例如在第一或第二黑白摄像头出现卡顿或跳帧等情况时,能够为用户提供更为连贯的显示画面,确保观看体验。
在一些实施例中,第一黑白摄像头和第二黑白摄像头配置为以24帧/秒或以上的频率拍摄第一黑白图像序列和第二黑白图像序列。
在另一方案中,提供了一种3D显示设备,包括多视点3D显示屏(例如:多视点裸眼3D显示屏)、被配置为接收3D视频信号(3D信号)的视频帧的视频信号接口(信号接口)、 与视频信号接口通信连接的3D处理装置和如上所述的眼部定位装置,多视点3D显示屏包括对应多个视点的多个子像素,3D处理装置配置为基于3D视频信号的视频帧渲染与预定的视点相关的子像素,预定的视点由用户的眼部空间位置信息确定。
在一些实施例中,多视点3D显示屏包括多个复合像素,多个复合像素中的每个包括多个复合子像素,各复合子像素由对应于多个视点的多个同色子像素构成。
在一些实施例中,3D处理装置与眼部定位装置的眼部定位数据接口通信连接。
在一些实施例中,3D显示设备还包括被配置为采集3D图像的3D拍摄装置,3D拍摄装置包括摄像头组件和3D图像处理器,摄像头组件包括第一彩色摄像头、第二彩色摄像头和景深摄像头。
在一些实施例中,眼部定位装置与所述3D拍摄装置相集成。
在一些实施例中,3D拍摄装置是前置摄像装置。
在另一方案中,提供了一种眼部定位方法,包括:在第一位置拍摄第一黑白图像;在第二位置拍摄第二黑白图像,其中第二位置不同于第一位置;基于第一黑白图像和第二黑白图像中至少一幅识别眼部的存在;基于第一黑白图像和第二黑白图像中存在的眼部的所在位置确定眼部空间位置;和传输眼部空间位置的眼部空间位置信息。
在一些实施例中,眼部定位方法还包括:在第一或第二黑白摄像头工作时,利用红外发射装置发射红外光。
在一些实施例中,眼部定位方法还包括:分别拍摄出第一黑白图像序列和第二黑白图像序列。
在一些实施例中,眼部定位方法还包括:确定时间同步的第一黑白图像和第二黑白图像。
在一些实施例中,眼部定位方法还包括:缓存第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;比较第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像。
在一些实施例中,眼部定位方法还包括:在第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,基于之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置信息作为当前的眼部空间位置信息。
在一些实施例中,眼部定位方法还包括:以24帧/秒或以上的频率拍摄第一黑白图像序列和第二黑白图像序列。
在另一方案中,提供了一种3D显示方法,适用于3D显示设备,3D显示设备包括多 视点3D显示屏,包括对应多个视点的多个子像素;3D显示方法包括:传输3D视频信号的视频帧;接收或读取用户的眼部空间位置信息,眼部空间位置信息利用如上所述的眼部定位方法来确定;基于眼部空间位置信息确定眼部所在的视点;基于视点,依据所接收的所述3D视频信号的视频帧渲染相关的子像素。
在一些实施例中,3D显示方法还包括:提供多视点3D显示屏,包括多个复合像素,多个复合像素中的每个包括多个复合子像素,各复合子像素由对应于多个视点的多个同色子像素构成。
在另一方案中,提供了一种3D显示设备,包括处理器和存储有程序指令的存储器,还包括多视点3D显示屏,处理器被配置为在执行程序指令时,执行如上所述的3D显示方法。
图1A示出了根据本公开实施例提供的3D显示设备100的结构示意图。参考图1A,在本公开实施例中提供了一种3D显示设备100,其包括多视点3D显示屏110、被配置为接收3D视频信号的视频帧的信号接口140、与信号接口140通信连接的3D处理装置130和眼部定位装置150。眼部定位装置150通信连接至3D处理装置130,由此3D处理装置130可以直接接收眼部定位数据。
多视点3D显示屏110可包括显示面板和覆盖在显示面板上的光栅(未标识)。在图1A所示的实施例中,多视点3D显示屏110可包括m×n个复合像素CP并因此限定出m×n的显示分辨率。如图1A所示,多视点3D显示屏110包括m列n行个复合像素并因此限定出m×n的显示分辨率。
在一些实施例中,m×n的分辨率可以为全高清(FHD)以上的分辨率,包括但不限于,1920×1080、1920×1200、2048×1280、2560×1440、3840×2160等。
在一些实施例中,3D处理装置与多视点3D显示屏通信连接。
在一些实施例中,3D处理装置与多视点3D显示屏的驱动装置通信连接。
作为解释而非限制地,每个复合像素CP包括多个复合子像素CSP,各复合子像素由对应于i个视点的i个同色子像素构成,i≥3。在图1所示的实施例中,i=6,但可以想到i为其他数值。在所示的实施例中,多视点3D显示屏可相应地具有i(i=6)个视点(V1-V6),但可以想到可以相应地具有更多或更少个视点。
作为解释而非限制地,在图1所示的实施例中,每个复合像素包括三个复合子像素,并且每个复合子像素由对应于6视点(i=6)的6个同色子像素构成。三个复合子像素分别对应于三种颜色,即红(R)、绿(G)和蓝(B)。也就是说,每个复合像素的三个复合子 像素分别具有6个红色、6个绿色或6个蓝色的子像素。在图1所示的实施例中,复合像素中的各复合子像素平行布置。各复合子像素包括呈单行形式的子像素。但可以想到,复合像素中的复合子像素的不同排布方式或复合子像素中的子像素的不同排布形式,例如各复合子像素包括呈单列或阵列形式的子像素。
作为解释而非限制性地,例如图1A所示,3D显示设备100可设置有单个3D处理装置130。单个3D处理装置130同时处理对3D显示屏110的各复合像素的各复合子像素的子像素的渲染。在另一些实施例中,3D显示设备100也可设置有一个以上3D处理装置130,它们并行、串行或串并行结合地处理对3D显示屏110的各复合像素的各复合子像素的子像素的渲染。本领域技术人员将明白,一个以上3D处理装置可以有其他的方式分配且并行处理3D显示屏110的多行多列复合像素或复合子像素,这落入本公开实施例的范围内。
在一些实施例中,3D处理装置130还可以选择性地包括缓存器131,以便缓存所接收到的视频帧。
在一些实施例中,3D处理装置为FPGA或ASIC芯片或FPGA或ASIC芯片组。
继续参考图1A,3D显示设备100还可包括通过信号接口140通信连接至3D处理装置130的处理器101。在本文所示的一些实施例中,处理器101被包括在计算机或智能终端、如移动终端中或作为其处理器单元。但是可以想到,在一些实施例中,处理器101可以设置在3D显示设备的外部,例如3D显示设备可以为带有3D处理装置的多视点3D显示器,例如非智能的3D电视。
为简单起见,下文中的3D显示设备的示例性实施例内部包括处理器。基于此,信号接口140构造为连接处理器101和3D处理装置130的内部接口,参考图2和图3所示的以移动终端方式实施的3D显示设备200可更明确该结构。在本文所示的一些实施例中,作为3D显示设备的内部接口的信号接口可以为MIPI、mini-MIPI接口、LVDS接口、min-LVDS接口或Display Port接口。在一些实施例中,如图1A所示,3D显示设备100的处理器101还可包括寄存器122。寄存器122可被配置为暂存指令、数据和地址。在一些实施例中,寄存器122可被配置为接收有关多视点3D显示屏110的显示要求的信息
在一些实施例中,3D显示设备100还可以包括编解码器,配置为对压缩的3D视频信号解压缩和编解码并将解压缩的3D视频信号经信号接口140发送至3D处理装置130。
参考图1B,图1B所示出的实施例与图1A所示出的实施例的区别在于,3D显示设备100还包括被配置为采集3D图像的3D拍摄装置120,眼部定位装置150集成在3D拍摄装置120中,也可以想到集成到处理终端或显示设备的常规摄像装置中。如图1B所示, 3D拍摄装置120构造为前置拍摄装置。3D拍摄装置120包括摄像头组件121、3D图像处理器126、3D图像输出接口125。3D拍摄装置120与眼部定位装置150集成。
如图1B所示,摄像头组件121包括第一彩色摄像头121a、第二彩色摄像头121b、景深摄像头121c。在另一些未示出的实施例中,3D图像处理器126可以集成在摄像头组件121内。在一些实施例中,第一彩色摄像头121a配置为获得拍摄对象的第一彩色图像,第二彩色摄像头121b配置为获得拍摄对象的第二彩色图像,通过合成这两幅彩色图像获得中间点的合成彩色图像;景深摄像头121c配置为获得拍摄对象的景深图像。合成彩色图像和景深图像形成3D视频信号的视频帧。在本公开实施例中,第一彩色摄像头和第二彩色摄像头是相同的彩色摄像头。在另一些实施例中,第一彩色摄像头和第二彩色摄像头也可以是不同的彩色摄像头。在这种情况下,为了获得彩色合成图像,可以对第一和第二彩色图像进行校准或矫正。景深摄像头121c可以是飞行时间(TOF)摄像头或结构光摄像头。景深摄像头121c可以设置在第一彩色摄像头和第二彩色摄像头之间。
在一些实施例中,3D图像处理器126配置为将第一和第二彩色图像合成为合成彩色图像,并将合成的合成彩色图像与景深图像形成3D图像。所形成的3D图像通过3D图像输出接口125传输至3D显示设备100的处理器101。
可选地,第一和第二彩色图像以及景深图像经由3D图像输出接口125直接传输至3D显示设备100的处理器101,并通过处理器101进行上述合成彩色图像以及形成3D图像等处理。
可选地,3D图像输出接口125还可通信连接到3D显示设备100的3D处理装置130,从而可通过3D处理装置130进行上述合成彩色图像以及形成3D图像等处理。
在一些实施例中,第一彩色摄像头和第二彩色摄像头中至少一个摄像头是广角的彩色摄像头。
继续参考图1B,眼部定位装置150集成在3D拍摄装置120内并且包括眼部定位器151、眼部定位图像处理器152和眼部定位数据接口153。
眼部定位器151包括第一黑白摄像头151a和第二黑白摄像头151b。第一黑白摄像头151a配置为拍摄第一黑白图像,第二黑白摄像头151b配置为拍摄第二黑白图像。在上述3D拍摄装置120是前置的并且眼部定位装置150集成在3D拍摄装置120内的情况下,眼部定位装置150也是前置的,第一黑白摄像头和第二黑白摄像头的拍摄对象是用户脸部。
在一些实施例中,眼部定位装置150的眼部定位数据接口153通信连接至3D显示设备100的3D处理装置130,由此3D处理装置130可以直接接收眼部定位数据。在另一些实施例中,眼部定位装置150的眼部定位图像处理器152可通信连接至3D显示设备100 的处理器101,由此眼部定位数据可以从处理器101通过眼部定位数据接口153被传输至3D处理装置130。
在一些实施例中,眼部定位装置150与摄像头组件221通信连接,由此可在拍摄3D图像时使用眼部定位数据。
可选地,眼部定位器151还设置有红外发射装置154。在第一或第二黑白摄像头工作时,红外发射装置154配置为选择性地发射红外光,以在环境光线不足时、例如在夜间拍摄时起到补光作用,从而在环境光线弱的条件下也能拍摄能识别出用户脸部及眼部的第一或第二黑白图像。
在一些实施例中,眼部定位装置150或集成有眼部定位装置的处理终端或显示设备可以配置为,在第一或第二黑白摄像头工作时,基于接收到的光线感应信号,例如检测到光线感应信号低于给定阈值时,控制红外发射装置的开启或调节其大小。在一些实施例中,光线感应信号是从处理终端或显示设备集成的环境光传感器,如环境光传感器2702接收的。
可选地,红外发射装置154配置为发射波长大于或等于1.5微米的红外光,亦即长波红外光。与短波红外光相比,长波红外光穿透皮肤的能力较弱,因此对眼部的伤害较小。
拍摄到的第一黑白图像和第二黑白图像被传输至眼部定位图像处理器152。示例性地,眼部定位图像处理器配置为具有视觉识别功能、例如脸部识别功能,并且配置为基于这两幅黑白图像中至少一幅识别出脸部并识别出眼部以及基于这两幅黑白图像中存在的眼部的所在位置确定眼部空间位置。在本公开实施例中,第一黑白摄像头和第二黑白摄像头是相同的黑白摄像头。在另一些实施例中,第一黑白摄像头和第二黑白摄像头也可以是不同的黑白摄像头。在这种情况下,为了确定眼部空间位置,可以对第一黑白图像和第二黑白图像进行校准或矫正。
在一些实施例中,第一黑白摄像头和第二黑白摄像头中至少一个摄像头是广角的黑白摄像头。
图4示意性地示出了利用两个黑白摄像头确定眼部空间位置的几何关系模型的俯视图。在图4所示实施例中,第一黑白摄像头和第二黑白摄像头是相同的黑白摄像头,因此具有相同的焦距f;第一黑白摄像头151a的光轴Za与第二黑白摄像头151b的光轴Zb平行,第一黑白摄像头151a的焦平面401a和第二黑白摄像头151b的焦平面401b处于同一平面内并且垂直于两个黑白摄像头的光轴。基于上述设置,两个黑白摄像头的镜头中心Oa和Ob的连线平行于两个黑白摄像头的焦平面。在图4所示实施例中,以两个黑白摄像头的镜头中心Oa到Ob的连线方向作为X轴方向并且以两个黑白摄像头的光轴方向为Z轴方向 示出XZ平面的几何关系模型的俯视图。
在图4所示实施例中,以第一黑白摄像头151a的镜头中心Oa为其原点,以第二黑白摄像头151b的镜头中心Ob为其原点。R和L分别表示用户的右眼和左眼,XRa和XRb分别为用户右眼R在两个黑白摄像头的焦平面401a和401b内成像的X轴坐标,XLa和XLb分别为用户左眼L在两个黑白摄像头的焦平面401a和401b内成像的X轴坐标。此外,两个黑白摄像头的间距T以及它们的焦距f也是已知的。根据相似三角形的几何关系可得出右眼R和左眼L与如上设置的两个黑白摄像头所在平面的间距DR和DL分别为:
D_R = \frac{T \cdot f}{X_{Ra} - X_{Rb}}
D_L = \frac{T \cdot f}{X_{La} - X_{Lb}}
并且可得出用户双眼连线与如上设置的两个黑白摄像头所在平面的倾斜角度α以及用户双眼间距或瞳距P分别为:
\alpha = \arctan\left(\frac{D_L - D_R}{X_L - X_R}\right),其中 X_R = \frac{X_{Ra} \cdot D_R}{f}、X_L = \frac{X_{La} \cdot D_L}{f}
P = \sqrt{(X_L - X_R)^2 + (D_L - D_R)^2}
在图4所示实施例中,用户双眼连线、亦即用户脸部与如上设置的两个黑白摄像头所在平面相互倾斜并且倾斜角度为α;当用户脸部与如上设置的两个黑白摄像头所在平面相互平行时、亦即当用户平视两个黑白摄像头时,倾斜角度α为零。
如上所述,在本文的一些实施例中,3D显示设备100可以是计算机或智能终端、如移动终端。但是可以想到,在一些实施例中,3D显示设备100也可以是非智能的显示终端、如非智能的3D电视。在图5A、图5B和图5C中示出分别构造为智能手机、平板电脑和非智能显示器的3D显示设备500的示意图,其具有多视点3D显示屏510、前置的3D拍摄装置并且3D拍摄装置集成有眼部定位装置。在图5A至图5C所示实施例中,包括两个彩色摄像头121a、121b和景深摄像头121c的3D拍摄装置120及其所集成的包括两个黑白摄像头151a、151b的眼部定位装置150与3D显示设备500的多视点3D显示屏510设置在同一平面内。因此,在图4所示实施例中示例性得出的用户的右眼R和左眼L与如上设置的两个黑白摄像头所在平面的间距DR和DL即为用户的右眼R和左眼L与多视点3D显示屏的间距,用户脸部与如上设置的两个黑白摄像头所在平面的倾斜角度α即为用户脸部与多视点3D显示屏的倾斜角度。
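A short numeric illustration of the triangulation above: the baseline, focal length and image coordinates below are made-up values chosen only to show how DR, DL, α and P follow from the similar-triangle relations; the left/right assignment and the coordinate convention are assumptions for the example.

```python
import math

T, f = 60.0, 1200.0        # assumed baseline (mm) and focal length (px)
XRa, XRb = 150.0, 30.0     # assumed image x-coordinates of the right eye R
XLa, XLb = 274.0, 154.0    # assumed image x-coordinates of the left eye L

DR = T * f / (XRa - XRb)   # 600.0 mm: distance of the right eye from the camera plane
DL = T * f / (XLa - XLb)   # 600.0 mm: distance of the left eye from the camera plane

XR = XRa * DR / f          # 75.0 mm: lateral position of the right eye
XL = XLa * DL / f          # 137.0 mm: lateral position of the left eye

alpha = math.atan2(DL - DR, XL - XR)  # 0.0 rad: the face is parallel to the screen
P = math.hypot(XL - XR, DL - DR)      # 62.0 mm: pupil distance
```

Since the cameras are in the same plane as the multi-viewpoint 3D display screen, DR and DL computed this way are directly the distances of the two eyes from the screen.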
参考图6A,示出了用户正视或平视3D显示设备600的多视点3D显示屏的示意图,即用户脸部所在平面与显示屏所在平面相互平行,用户双眼与显示屏的间距DR、DL相同、倾斜角度α为零。
参考图6B,示出了用户脸部相对于3D显示设备600的多视点3D显示屏倾斜的示意图,即用户脸部所在平面与显示屏所在平面不相互平行,用户双眼与显示屏的间距DR、DL不同、倾斜角度α不为零。
在一些实施例中,眼部定位数据接口153配置为传输用户双眼相对于眼部定位装置150或多视点3D显示屏110的倾斜角度或平行度。这可有利于更精确地呈现3D图像,对此将在下文中描述。
例如,如上示例性得出的眼部空间位置信息DR、DL、α和P通过眼部定位数据接口153传输至3D处理装置130。3D处理装置130基于接收到的眼部空间位置信息确定用户双眼所在的且由多视点3D显示屏110提供的视点、即预定的视点。
例如,如上示例性得出的眼部空间位置信息DR、DL、α和P也可被直接传输至3D显示设备100的处理器101,3D处理装置130通过眼部定位数据接口153从处理器101接收/读取眼部空间位置信息。
在一些实施例中,第一黑白摄像头151a配置为拍摄出第一黑白图像序列,其包括按照时间前后排列的多幅第一黑白图像,第二黑白摄像头151b配置为拍摄出第二黑白图像序列,其包括按照时间前后排列的多幅第二黑白图像。
在一些实施例中,眼部定位图像处理器152包括同步器155,其配置为在第一黑白图像序列和第二黑白图像序列中确定时间同步的第一黑白图像和第二黑白图像。被确定为时间同步的第一黑白图像和第二黑白图像用于眼部的识别以及眼部空间位置的确定。
在一些实施例中,眼部定位图像处理器152包括缓存器156和比较器157。缓存器156配置为缓存第一黑白图像序列和第二黑白图像序列中分别按照时间前后排列的多幅第一黑白图像和第二黑白图像。比较器157配置为比较第一黑白图像序列和第二黑白图像序列中按照时间前后拍摄的多幅第一黑白图像和第二黑白图像。通过比较,例如可以判断眼部空间位置是否变化或者判断眼部是否还处于观看范围内等。
在一些实施例中,眼部定位图像处理器152还包括判决器(未示出),被配置为,当比较器通过比较在第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,将基于之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置作为当前的眼部空间位置。这种情况例如为用户短暂转动头部。在这种情况下,有可能短暂 地无法识别到用户的脸部及其眼部。
示例性地,在缓存器156的缓存段内存有第一黑白图像序列和第二黑白图像序列中的若干第一黑白图像和第二黑白图像。在某些情况下,无法从所缓存的当前第一黑白图像和第二黑白图像中识别出脸部及眼部,然而可以从所缓存的之前或之后的第一黑白图像和第二黑白图像中识别出脸部及眼部。在这种情况下,可以将基于在当前第一黑白图像和第二黑白图像之后的、也就是更晚拍摄的第一黑白图像和第二黑白图像确定的眼部空间位置信息作为当前的眼部空间位置信息;也可以将基于在当前第一黑白图像和第二黑白图像之前的、也就是更早拍摄的第一黑白图像和第二黑白图像确定的眼部空间位置信息作为当前的眼部空间位置信息。此外,也可以对基于上述之前和之后的能识别出脸部及眼部的第一黑白图像和第二黑白图像确定的眼部空间位置信息取平均值、进行数据拟合、进行插值或以其他方法处理,并且将得到的结果作为当前的眼部空间位置信息。
在一些实施例中,第一黑白摄像头和第二黑白摄像头配置为以24帧/秒或以上的频率拍摄第一黑白图像序列和第二黑白图像序列。示例性地,以30帧/秒的频率拍摄。示例性地,以60帧/秒的频率拍摄。
在一些实施例中,第一黑白摄像头和第二黑白摄像头配置为以与3D显示设备的显示屏刷新频率相同的频率进行拍摄。
如前所述,本公开实施例提供的3D显示设备可以是包含处理器的3D显示设备。在一些实施例中,3D显示设备可构造为智能蜂窝电话、平板电脑、智能电视、可穿戴设备、车载设备、笔记本电脑、超级移动个人计算机(UMPC)、上网本、个人数字助理(PDA)等。
示例性的,图2示出了实施为移动终端、如平板电脑或智能蜂窝电话的3D显示设备200的硬件结构示意图。3D显示设备200可以包括处理器201,外部存储接口202,(内部)存储器203,通用串行总线(USB)接口204,充电管理模块205,电源管理模块206,电池207,移动通信模块281,无线通信模块283,天线282、284,音频模块212,扬声器213,受话器214,麦克风215,耳机接口216,按键217,马达218,指示器219,用户标识模块(SIM)卡接口260,多视点3D显示屏210,3D处理装置230,信号接口240,3D拍摄装置220以及传感器模块230等。其中3D拍摄装置220可以包括摄像头组件221、3D图像输出接口225和眼部定位装置250。其中传感器模块270可以包括接近光传感器2701,环境光传感器2702,压力传感器2703,气压传感器2704,磁传感器2705,重力传感器2706,陀螺仪传感器2707,加速度传感器2708,距离传感器2709,温度传感器2710,指纹传感器2711,触摸传感器2712,骨传导传感器2713等。
可以理解的是,本公开实施例示意的结构并不构成对3D显示设备200的限定。在本 公开另一些实施例中,3D显示设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器201可以包括一个或一个以上处理单元,例如:处理器201可以包括应用处理器(AP),调制解调处理器,基带处理器,寄存器222、图形处理器(GPU)223,图像信号处理器(ISP),控制器,存储器,视频编解码器224,数字信号处理器(DSP),基带处理器、神经网络处理器(NPU)等或它们的组合。其中,不同的处理单元可以是独立的器件,也可以集成在一个或一个以上处理器中。
处理器201中还可以设置有高速缓存器,被配置为保存处理器201刚用过或循环使用的指令或数据。在处理器201要再次使用指令或数据时,可从存储器中直接调用。
在一些实施例中,处理器201可以包括一个或一个以上接口。接口可以包括集成电路(I2C)接口、集成电路内置音频(I2S)接口、脉冲编码调制(PCM)接口、通用异步收发传输器(UART)接口、移动产业处理器接口(MIPI)、通用输入输出(GPIO)接口、用户标识模块(SIM)接口、通用串行总线(USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(SDA)和一根串行时钟线(SCL)。在一些实施例中,处理器201可以包含多组I2C总线。处理器201可以通过不同的I2C总线接口分别通信连接触摸传感器2712,充电器,闪光灯,3D拍摄装置220或其摄像头组件221、眼部定位装置250等。
I2S接口和PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口被配置为连接处理器201与无线通信模块283。
在图2所示的实施例中,MIPI接口可以被配置为连接处理器201与多视点3D显示屏210。此外,MIPI接口还可被配置为连接如摄像头组件221、眼部定位装置250等外围器件。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以配置为连接处理器201与3D拍摄装置220或其摄像头组件221,多视点3D显示屏110,无线通信模块283,音频模块212,传感器模块270等。
USB接口204是符合USB标准规范的接口,可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口204可以被配置为连接充电器为3D显示设备200充 电,也可以用于3D显示设备200与外围设备之间传输数据。也可以被配置为连接耳机,通过耳机播放音频。
可以理解的是,本公开实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对3D显示设备200的结构限定。
3D显示设备200的无线通信功能可以通过天线282、284,移动通信模块281,无线通信模块283,调制解调处理器或基带处理器等实现。
天线282、284被配置为发射和接收电磁波信号。3D显示设备200中的每个天线可被配置为覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。
移动通信模块281可以提供应用在3D显示设备200上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块281可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(LNA)等。移动通信模块281可以由天线282接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块281还可以对经调制解调处理器调制后的信号放大,经天线282转为电磁波辐射出去。在一些实施例中,移动通信模块281的至少部分功能模块可以被设置于处理器201中。在一些实施例中,移动通信模块282的至少部分功能模块可以与处理器201的至少部分模块被设置在同一个器件中。
无线通信模块283可以提供应用在3D显示设备200上的包括无线局域网(WLAN),蓝牙(BT),全球导航卫星系统(GNSS),调频(FM),近距离无线通信技术(NFC),红外技术(IR)等无线通信的解决方案。无线通信模块283可以是集成至少一个通信处理模块的一个或一个以上器件。无线通信模块283经由天线284接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器201。无线通信模块283还可以从处理器201接收待发送的信号,对其进行调频,放大,经天线284转为电磁波辐射出去。
在一些实施例中,3D显示设备200的天线282和移动通信模块281耦合,天线284和无线通信模块283耦合,使得3D显示设备200可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(GSM),通用分组无线服务(GPRS),码分多址接入(CDMA),宽带码分多址(WCDMA),时分码分多址(TD-SCDMA),长期演进(LTE),BT,GNSS,WLAN,NFC,FM,或IR技术等中至少一项。GNSS可以包括全球卫星定位系统(GPS),全球导航卫星系统(GLONASS),北斗卫星导航系统(BDS),准天顶卫星系统(QZSS)或星基增强系统(SBAS)中至少一项。
在一些实施例中,被配置为接收3D视频信号的外部接口可以包括USB接口204、移动通信模块281、无线通信模块283或其组合。此外,还可以想到其他可行的被配置为接 收3D视频信号的接口,例如上述的接口。
存储器203可以被配置为存储计算机可执行程序代码,可执行程序代码包括指令。处理器201通过运行存储在存储器203的指令,从而执行3D显示设备200的各种功能应用以及数据处理。存储器203可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储3D显示设备200使用过程中所创建的数据(比如音频数据,电话本等)等。此外,存储器203可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(UFS)等。
外部存储器接口202可以被配置为连接外部存储卡,例如Micro SD卡,实现扩展3D显示设备200的存储能力。外部存储卡通过外部存储器接口202与处理器201通信,实现数据存储功能。
在一些实施例中,3D显示设备的存储器可以包括(内部)存储器203、外部存储器接口202连接的外部存储卡或其组合。在本公开另一些实施例中,信号接口也可以采用上述实施例中不同的内部接口连接方式或其组合。
在本公开实施例中,摄像头组件221可以2D或3D采集图像或视频,并经由3D图像输出接口225输出采集到的视频。眼部定位装置250可以确定用户的眼部的空间位置。摄像头组件221、3D图像输出接口225和眼部定位装置250共同形成3D拍摄装置220。
在一些实施例中,3D显示设备200通过信号接口240、3D处理装置230、眼部定位装置250、多视点3D显示屏210,以及应用处理器等实现显示功能。
在一些实施例中,3D显示设备200可包括GPU,例如在处理器201内被配置为对3D视频图像进行处理,也可以对2D视频图像进行处理。
在一些实施例中,3D显示设备200还包括视频编解码器224,被配置为对数字视频压缩或解压缩。
在一些实施例中,信号接口240被配置为将经GPU或编解码器224或两者处理的3D视频信号、例如解压缩的3D视频信号的视频帧输出至3D处理装置230。
在一些实施例中,GPU或编解码器224集成有格式调整器。
多视点3D显示屏210被配置为显示3D图像或视频等。多视点3D显示屏210包括显示面板。显示面板可以采用液晶显示屏(LCD),有机发光二极管(OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(AMOLED),柔性发光二极管(FLED),Mini-LED,Micro-LED,Micro-OLED,量子点发光二极管(QLED)等。
在一些实施例中,眼部定位装置250通信连接至3D处理装置230,从而3D处理装置 230可以基于眼部定位数据渲染复合像素(复合子像素)中的相应子像素。在一些实施例中,眼部定位装置250还可连接处理器201,例如旁路连接处理器201。
在一些实施例中,3D拍摄装置220的3D图像输出接口225可通信连接至处理器201或3D处理装置230。
3D显示设备200可以通过音频模块212,扬声器213,受话器214,麦克风215,耳机接口216,以及应用处理器等实现音频功能。例如音乐播放,录音等。音频模块212被配置为将数字音频信息转换成模拟音频信号输出,也被配置为将模拟音频输入转换为数字音频信号。音频模块212还可以被配置为对音频信号编码和解码。在一些实施例中,音频模块212可以设置于处理器201中,或将音频模块212的部分功能模块设置于处理器201中。扬声器213被配置为将音频电信号转换为声音信号。3D显示设备200可以通过扬声器213收听音乐,或收听免提通话。受话器214,也称“听筒”,被配置为将音频电信号转换成声音信号。当3D显示设备200接听电话或语音信息时,可以通过将受话器214靠近耳部接听语音。麦克风215被配置为将声音信号转换为电信号。耳机接口216被配置为连接有线耳机。耳机接口216可以是USB接口204,也可以是3.5mm的开放移动3D显示设备平台(OMTP)标准接口,美国蜂窝电信工业协会(CTIA)标准接口。
按键217包括开机键,音量键等。按键217可以是机械按键。也可以是触摸式按键。3D显示设备200可以接收按键输入,产生与3D显示设备200的用户设置以及功能控制有关的键信号输入。
马达218可以产生振动提示。马达218可以被配置为来电振动提示,也可以被配置为触摸振动反馈。
SIM卡接口260被配置为连接SIM卡。在一些实施例中,3D显示设备200采用eSIM,即:嵌入式SIM卡。
环境光传感器2702被配置为感知环境光亮度。3D显示设备200可以根据感知的环境光亮度调节多视点3D显示屏210的亮度或辅助眼部定位,例如在环境光亮度较暗时,眼部定位装置250启动红外发射装置。环境光传感器2702也可以被配置为在黑白摄像头拍摄时调节白平衡。
压力传感器2703被配置为感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器2703可以设置于多视点3D显示屏210,这落入本公开实施例的范围内。
气压传感器2704被配置为测量气压。在一些实施例中,3D显示设备200通过气压传感器2704测得的气压值计算海拔高度,辅助定位和导航。
磁传感器2705包括霍尔传感器。
重力传感器2706是将运动或重力转换为电信号的传感器,主要被配置为倾斜角、惯性力、冲击及震动等参数的测量。
陀螺仪传感器2707可以被配置为确定3D显示设备200的运动姿态。
加速度传感器2708可检测3D显示设备200在各个方向上(一般为三轴)加速度的大小。
距离传感器2709可被配置为测量距离
温度传感器2710可被配置为检测温度。
指纹传感器2711被配置为采集指纹。3D显示设备200可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器2712可以设置于多视点3D显示屏210中,由触摸传感器2712与多视点3D显示屏210组成触摸屏,也称“触控屏”。
骨传导传感器2713可以获取振动信号。
充电管理模块205被配置为从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块205可以通过USB接口204接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块205可以通过3D显示设备200的无线充电线圈接收无线充电输入。
电源管理模块206被配置为连接电池207,充电管理模块205与处理器201。电源管理模块206接收电池207或充电管理模块205中至少一项的输入,为处理器201,存储器203,外部存储器,多视点3D显示屏210,摄像头组件221,和无线通信模块283等供电。在另一些实施例中,电源管理模块206和充电管理模块205也可以设置于同一个器件中。
3D显示设备200的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本公开所示的实施例以分层架构的安卓系统为例,示例性说明3D显示设备200的软件结构。但可以想到,本公开的实施例可以在不同的软件系统、如操作系统中实施。
图3是图2所示的3D显示设备200的软件结构示意图。分层架构将软件分成若干个层。层与层之间通过软件接口通信。在一些实施例中,将安卓系统分为四层,从上至下分别为应用程序层310,框架层320,核心类库和运行时(Runtime)330,以及内核层340。
应用程序层310可以包括一系列应用程序包。如图3所示,应用程序包可以包括蓝牙,WLAN,导航,音乐,相机,日历,通话,视频,图库,地图,短信息等应用程序。根据本公开实施例的3D视频显示方法,例如可以在视频应用程序中实施。
框架层320为应用程序层的应用程序提供应用编程接口(API)和编程框架。框架层包括一些预先定义的函数。例如,在本公开的一些实施例中,对所采集的3D视频图像进行识 别的函数或者算法以及处理图像的算法等可以包括在框架层。
如图3所示,框架层320可以包括资源管理器、电话管理器、内容管理器、通知管理器、窗口管理器,视图系统,安装包管理器等。
安卓Runtime(运行时)包括核心库和虚拟机。安卓Runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言要调用的功能函数,另一部分是安卓的核心库。
应用程序层和框架层运行在虚拟机中。虚拟机将应用程序层和框架层的java文件执行为二进制文件。虚拟机被配置为执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
核心类库可以包括多个功能模块。例如:3D图形处理库(例如:OpenGL ES),表面管理器,图像处理库,媒体库,图形引擎(例如:SGL)等。
内核层340是硬件和软件之间的层。内核层至少包含摄像头驱动,音视频接口,通话接口,Wifi接口,传感器驱动,电源管理,GPS接口。
在此,以具有图2和图3所示结构的作为移动终端的3D显示设备为例,描述3D显示设备中的3D视频传输和显示的实施例;但是,可以想到,在另一些实施例中可以包括更多或更少的特征或对其中的特征进行改变。
在一些实施例中,例如为移动终端、如平板电脑或智能蜂窝电话的3D显示设备200例如借助作为外部接口的移动通信模块281及天线282或者无线通信模块283及天线284从网络、如蜂窝网络、WLAN网络、蓝牙接收例如压缩的3D视频信号,压缩的3D视频信号例如经GPU 223进行图像处理、编解码器224编解码和解压缩,然后例如经作为内部接口的信号接口240、如MIPI接口或mini-MIPI接口将解压缩的3D视频信号发送至3D处理装置230。并且,通过眼部定位装置250获得用户的眼部空间位置信息。基于眼部空间位置信息确定预定的视点。3D处理装置230针对预定的视点相应地渲染显示屏的子像素,由此实现3D视频播放。
在另一些实施例中,3D显示设备200读取(内部)存储器203或通过外部存储器接口202读取外部存储卡中存储的压缩的3D图像信号,并经相应的处理、传输和渲染来实现3D图像播放。
在另一些实施例中,3D显示设备200接收摄像头组件221拍摄的且经由3D图像输出接口225传输的3D图像,并经相应的处理、传输和渲染来实现3D图像播放。
在一些实施例中,上述3D图像的播放是在安卓系统应用程序层310中的视频应用程序中实施的。
本公开实施例还可以提供一种眼部定位方法,其利用上述实施例中的眼部定位装置来实现。
参考图7,在一些实施例中,眼部定位方法包括:
S701:拍摄第一黑白图像和第二黑白图像;
S702:基于第一黑白图像和第二黑白图像中至少一幅识别眼部的存在;
S703:基于在第一黑白图像和第二黑白图像中识别到的眼部确定眼部空间位置。
示例性地,在第一位置拍摄第一黑白图像,在第二位置拍摄第二黑白图像,第一位置不同于第二位置。
在一些实施例中,眼部定位方法还包括:传输表明眼部空间位置的眼部空间位置信息。
在一些实施例中,眼部定位方法还包括:在第一黑白摄像头或第二黑白摄像头工作时,利用红外发射装置发射红外光。
在一些实施例中,眼部定位方法还包括:分别拍摄出包括第一黑白图像的第一黑白图像序列和包括第二黑白图像的第二黑白图像序列。
在一些实施例中,眼部定位方法还包括:确定时间同步的第一黑白图像和第二黑白图像。
在一些实施例中,眼部定位方法还包括:缓存第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;比较第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像;当通过比较在第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,将基于之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置作为当前的眼部空间位置。
在一些实施例中,眼部定位方法包括:以24帧/秒或以上的频率拍摄第一黑白图像序列和第二黑白图像序列。
本公开实施例还可以提供一种3D显示方法。
参考图8,在一些实施例中,3D显示方法包括:
S801:获得用户的眼部空间位置;
S802:根据眼部空间位置确定所对应的视点;
S803:基于3D信号渲染多视点3D显示屏的与视点对应的子像素。
在一些实施例中,3D显示方法还包括:提供多视点3D显示屏,包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像 素由对应于多个视点的多个子像素构成。
示例性地,当基于眼部空间位置确定用户的双眼各对应一个视点时,基于3D视频信号的视频帧生成用户双眼所处的两个视点的图像,并渲染复合子像素中与这两个视点相对应的子像素。
参考图9,在所示实施例中,用户的右眼处于第2视点V2,左眼处于第5视点V5,基于3D视频信号的视频帧生成这两个视点V2和V5的图像,并渲染复合子像素中与这两个视点相对应的子像素。
在一些实施例中,在基于眼部空间位置确定用户双眼相对于多视点3D显示屏的倾斜角度或平行度的情况下,可为用户提供有针对性的或定制化的显示图像,提升用户的观看体验。
上述的眼部空间位置可以是实时获取或确定的,也可以是周期性或随机获取或确定的。
本公开实施例提供的计算机可读存储介质,存储有计算机可执行指令,上述计算机可执行指令设置为执行上述的眼部定位方法、3D显示方法。
本公开实施例提供的计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,上述计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的眼部定位方法、3D显示方法。
本公开实施例的技术方案可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括一个或多个指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开实施例的方法的全部或部分步骤。而前述的存储介质可以是非暂态存储介质,包括:U盘、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等多种可以存储程序代码的介质,也可以是暂态存储介质。
上述实施例阐明的系统、装置、模块或单元,可以由各种可能的实体来来实现。一种典型的实现实体为计算机或其处理器或其他部件。计算机例如可以为个人计算机、膝上型计算机、车载人机交互设备、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板电脑、可穿戴设备、智能电视、物联网系统、智能家居、工业计算机、单片机系统或者这些设备中的组合。在一个典型的配置中,计算机可包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。
在本申请的实施例的方法、程序、系统、装置等,可以在单个或多个连网的计算机中执行或实现,也可以在分布式计算环境中实践。在本说明书实施例中,在这些分布式计算 环境中,由通过通信网络而被连接的远程处理设备来执行任务。
本领域技术人员应明白,本说明书的实施例可提供为方法、系统或计算机程序产品。因此,本说明书实施例可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。
本领域技术人员可想到,上述实施例阐明的功能模块/单元或控制器以及相关方法步骤的实现,可以用软件、硬件和软/硬件结合的方式实现。例如,可以以纯计算机可读程序代码方式实现,也可以部分或全部通过将方法步骤进行逻辑编程来使得控制器以硬件来实现相同功能,包括但不限于逻辑门、开关、专用集成电路、可编程逻辑控制器(如FPGA)和嵌入微控制器。
在本申请的一些实施例中,以功能模块/单元的形式来描述装置的部件。可以想到,多个功能模块/单元一个或多个“组合”功能模块/单元和/或一个或多个软件和/或硬件中实现。也可以想到,单个功能模块/单元由多个子功能模块或子单元的组合和/或多个软件和/或硬件实现。功能模块/单元的划分,可以仅为一种逻辑功能划分,在实现方式中,多个模块/单元可以结合或者可以集成到另一个系统。此外,本文所述的模块、单元、装置、系统及其部件的连接包括直接或间接的连接,涵盖可行的电的、机械的、通信的连接,尤其包括各种接口间的有线或无线连接,包括但不限于HDMI、雷达、USB、WiFi、蜂窝网络。
在本申请的实施例中,方法、程序的技术特征、流程图和/或方框图可以应用到相应的装置、设备、系统及其模块、单元、部件中。反过来,装置、设备、系统及其模块、单元、部件的各实施例和特征可以应用至根据本申请实施例的方法、程序中。例如,计算机程序指令可装载到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,其具有实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中相应的功能或特征。
根据本申请实施例的方法、程序可以以计算机程序指令或程序的方式存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读的存储器或介质中。本申请实施例也涉及存储有可实施本申请实施例的方法、程序、指令的可读存储器或介质。
除非明确指出,根据本申请实施例记载的方法、程序的动作或步骤并不必须按照特定的顺序来执行并且仍然可以实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
已参考上述实施例示出并描述了本申请的示例性系统及方法,其仅为实施本系统及方法的示例。本领域的技术人员可以理解的是可以在实施本系统及/或方法时对这里描述的系统及方法的实施例做各种改变而不脱离界定在所附权利要求中的本申请的精神及范围。

Claims (23)

  1. 一种眼部定位装置,包括:
    眼部定位器,包括被配置为拍摄第一黑白图像的第一黑白摄像头和被配置为拍摄第二黑白图像的第二黑白摄像头;
    眼部定位图像处理器,被配置为基于所述第一黑白图像和第二黑白图像中至少一幅识别眼部的存在且基于在所述第一黑白图像和第二黑白图像中识别到的所述眼部确定眼部空间位置。
  2. 根据权利要求1所述的眼部定位装置,还包括眼部定位数据接口,被配置为传输表明所述眼部空间位置的眼部空间位置信息。
  3. 根据权利要求1所述的眼部定位装置,其中,所述眼部定位器还包括红外发射装置。
  4. 根据权利要求3所述的眼部定位装置,其中,所述红外发射装置被配置为发射波长大于或等于1.5微米的红外光。
  5. 根据权利要求1至4任一项所述的眼部定位装置,其中,所述第一黑白摄像头和第二黑白摄像头被配置为分别拍摄包括所述第一黑白图像的第一黑白图像序列和包括所述第二黑白图像的第二黑白图像序列。
  6. 根据权利要求5所述的眼部定位装置,其中,所述眼部定位图像处理器包括同步器,被配置为确定时间同步的第一黑白图像和第二黑白图像,以便进行眼部的识别以及眼部空间位置的确定。
  7. 根据权利要求6所述的眼部定位装置,其中,所述眼部定位图像处理器包括:
    缓存器,被配置为缓存所述第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;
    比较器,被配置为比较所述第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像;
    判决器,被配置为,当所述比较器通过比较在所述第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,将基于所述之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置作为当前的眼部空间位置。
  8. 一种3D显示设备,包括:
    多视点3D显示屏,包括对应多个视点的多个子像素;
    根据权利要求1至7任一项所述的眼部定位装置,以获得眼部空间位置;
    3D处理装置,被配置为根据所述眼部定位装置获得的眼部空间位置确定所对应的视点,并基于3D信号渲染所述多视点3D显示屏的与所述视点对应的子像素。
  9. 根据权利要求8所述的3D显示设备,其中,所述多视点3D显示屏包括多个复合像素,所述多个复合像素中的每个复合像素包括多个复合子像素,所述多个复合子像素中的每个复合子像素由对应于多个视点的多个子像素构成。
  10. 根据权利要求8所述的3D显示设备,其中,所述3D处理装置与所述眼部定位装置通信连接。
  11. 根据权利要求8至10任一项所述的3D显示设备,还包括:
    3D拍摄装置,被配置为采集3D图像;
    所述3D拍摄装置包括景深摄像头和至少两个彩色摄像头。
  12. 根据权利要求11所述的3D显示设备,其中,所述眼部定位装置与所述3D拍摄装置集成设置。
  13. 根据权利要求12所述的3D显示设备,其中,所述3D拍摄装置前置于所述3D显示设备。
  14. 一种眼部定位方法,包括:
    拍摄第一黑白图像和第二黑白图像;
    基于所述第一黑白图像和第二黑白图像中至少一幅识别眼部的存在;
    基于在所述第一黑白图像和第二黑白图像中识别到的所述眼部确定眼部空间位置。
  15. 根据权利要求14所述的眼部定位方法,还包括:传输表明所述眼部空间位置的眼部空间位置信息。
  16. 根据权利要求14所述的眼部定位方法,还包括:在所述第一黑白摄像头或第二黑白摄像头工作时,利用红外发射装置发射红外光。
  17. 根据权利要求14至16任一项所述的眼部定位方法,还包括:分别拍摄出包括所述第一黑白图像的第一黑白图像序列和包括所述第二黑白图像的第二黑白图像序列。
  18. 根据权利要求17所述的眼部定位方法,还包括:确定时间同步的第一黑白图像和第二黑白图像。
  19. 根据权利要求18所述的眼部定位方法,还包括:
    缓存所述第一黑白图像序列和第二黑白图像序列中多幅第一黑白图像和第二黑白图像;
    比较所述第一黑白图像序列和第二黑白图像序列中的前后多幅第一黑白图像和第二黑白图像;
    当通过比较在所述第一黑白图像序列和第二黑白图像序列中的当前第一黑白图像和第二黑白图像中未识别到眼部的存在且在之前或之后的第一黑白图像和第二黑白图像中识别到眼部的存在时,将基于所述之前或之后的第一黑白图像和第二黑白图像确定的眼部空间位置作为当前的眼部空间位置。
  20. 一种3D显示方法,包括:
    获得用户的眼部空间位置;
    根据所述眼部空间位置确定所对应的视点;
    基于3D信号渲染多视点3D显示屏的与所述视点对应的子像素。
  21. 根据权利要求20所述的3D显示方法,还包括:提供所述多视点3D显示屏,包括多个复合像素,所述多个复合像素中的每个复合像素包括多个复合子像素,所述多个复合子像素中的每个复合子像素由对应于多个视点的多个子像素构成。
  22. 一种计算机可读存储介质,存储有计算机可执行指令,所述计算机可执行指令设置为执行如权利要求14至21任一项所述的方法。
  23. 一种计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当该程序指令被计算机执行时,使所述计算机执行如权利要求14至21任一项所述的方法。
PCT/CN2020/133328 2019-12-05 2020-12-02 眼部定位装置、方法及3d显示设备、方法 WO2021110034A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20895782.9A EP4068769A4 (en) 2019-12-05 2020-12-02 METHOD AND DEVICE FOR POSITIONING THE EYE, AND DEVICE AND METHOD FOR 3D DISPLAY
US17/780,504 US20230007225A1 (en) 2019-12-05 2020-12-02 Eye positioning apparatus and method, and 3d display device and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911231206.4A CN112929642A (zh) 2019-12-05 2019-12-05 人眼追踪装置、方法及3d显示设备、方法
CN201911231206.4 2019-12-05

Publications (1)

Publication Number Publication Date
WO2021110034A1 true WO2021110034A1 (zh) 2021-06-10

Family

ID=76160744

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133328 WO2021110034A1 (zh) 2019-12-05 2020-12-02 眼部定位装置、方法及3d显示设备、方法

Country Status (5)

Country Link
US (1) US20230007225A1 (zh)
EP (1) EP4068769A4 (zh)
CN (1) CN112929642A (zh)
TW (1) TWI818211B (zh)
WO (1) WO2021110034A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660480A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 一种环视功能实现方法、装置、电子设备及存储介质
CN115439477A (zh) * 2022-11-07 2022-12-06 广东欧谱曼迪科技有限公司 24色卡定位方法、装置、电子设备及存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021112447A1 (de) * 2021-05-12 2022-11-17 Infineon Technologies Ag Chipkarten-Biometrie-Sensor-Bauteil, Chipkarte, Verfahren zum Bilden eines Chipkarten-Biometrie-Sensor-Bauteils und Verfahren zum Bilden einer Chipkarte
CN114079765A (zh) * 2021-11-17 2022-02-22 京东方科技集团股份有限公司 图像显示方法、装置及系统
TWI825526B (zh) * 2021-12-14 2023-12-11 宏碁股份有限公司 具有裸視3d功能的按鍵及鍵盤
CN115278197A (zh) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 处理装置及显示器件

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072366A (zh) * 2007-05-24 2007-11-14 上海大学 基于光场和双目视觉技术的自由立体显示系统和方法
CN101866215A (zh) * 2010-04-20 2010-10-20 复旦大学 在视频监控中采用视线跟踪的人机交互装置和方法
CN102045577A (zh) * 2010-09-27 2011-05-04 昆山龙腾光电有限公司 用于三维立体显示的观察者跟踪系统及三维立体显示系统
US20120218258A1 (en) * 2011-02-28 2012-08-30 Sanyo Electric Co., Ltd Display apparatus
KR20130027410A (ko) * 2012-05-07 2013-03-15 이영우 양안 식의 안위감지기, 부채꼴의 변형 렌티큘러와 액정식 배리어를 사용한 입체영상표시장치의 활용방법
JP5167439B1 (ja) * 2012-02-15 2013-03-21 パナソニック株式会社 立体画像表示装置及び立体画像表示方法
CN103248905A (zh) * 2013-03-22 2013-08-14 深圳市云立方信息科技有限公司 一种模仿全息3d场景的显示装置和视觉显示方法
CN103760980A (zh) * 2014-01-21 2014-04-30 Tcl集团股份有限公司 根据双眼位置进行动态调整的显示方法、系统及显示设备
CN106218409A (zh) * 2016-07-20 2016-12-14 长安大学 一种可人眼跟踪的裸眼3d汽车仪表显示方法及装置
CN211531217U (zh) * 2019-12-05 2020-09-18 北京芯海视界三维科技有限公司 3d终端
CN211791829U (zh) * 2019-12-05 2020-10-27 北京芯海视界三维科技有限公司 3d显示设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101398804B1 (ko) * 2012-05-07 2014-05-22 이영우 양안 식의 안위감지기, 부채꼴의 변형 렌티큘러와 액정식 배리어를 사용한 입체영상표시장치에서의 안위인식방법
TW201919391A (zh) * 2017-11-09 2019-05-16 英屬開曼群島商麥迪創科技股份有限公司 顯示系統及顯示影像的顯示方法
CN108881893A (zh) * 2018-07-23 2018-11-23 上海玮舟微电子科技有限公司 基于人眼跟踪的裸眼3d显示方法、装置、设备和介质

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072366A (zh) * 2007-05-24 2007-11-14 上海大学 基于光场和双目视觉技术的自由立体显示系统和方法
CN101866215A (zh) * 2010-04-20 2010-10-20 复旦大学 在视频监控中采用视线跟踪的人机交互装置和方法
CN102045577A (zh) * 2010-09-27 2011-05-04 昆山龙腾光电有限公司 用于三维立体显示的观察者跟踪系统及三维立体显示系统
US20120218258A1 (en) * 2011-02-28 2012-08-30 Sanyo Electric Co., Ltd Display apparatus
JP5167439B1 (ja) * 2012-02-15 2013-03-21 パナソニック株式会社 立体画像表示装置及び立体画像表示方法
KR20130027410A (ko) * 2012-05-07 2013-03-15 이영우 양안 식의 안위감지기, 부채꼴의 변형 렌티큘러와 액정식 배리어를 사용한 입체영상표시장치의 활용방법
CN103248905A (zh) * 2013-03-22 2013-08-14 深圳市云立方信息科技有限公司 一种模仿全息3d场景的显示装置和视觉显示方法
CN103760980A (zh) * 2014-01-21 2014-04-30 Tcl集团股份有限公司 根据双眼位置进行动态调整的显示方法、系统及显示设备
CN106218409A (zh) * 2016-07-20 2016-12-14 长安大学 一种可人眼跟踪的裸眼3d汽车仪表显示方法及装置
CN211531217U (zh) * 2019-12-05 2020-09-18 北京芯海视界三维科技有限公司 3d终端
CN211791829U (zh) * 2019-12-05 2020-10-27 北京芯海视界三维科技有限公司 3d显示设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4068769A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660480A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 一种环视功能实现方法、装置、电子设备及存储介质
CN113660480B (zh) * 2021-08-16 2023-10-31 纵深视觉科技(南京)有限责任公司 一种环视功能实现方法、装置、电子设备及存储介质
CN115439477A (zh) * 2022-11-07 2022-12-06 广东欧谱曼迪科技有限公司 24色卡定位方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
TW202123692A (zh) 2021-06-16
CN112929642A (zh) 2021-06-08
EP4068769A1 (en) 2022-10-05
TWI818211B (zh) 2023-10-11
EP4068769A4 (en) 2023-12-20
US20230007225A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
WO2021110034A1 (zh) 眼部定位装置、方法及3d显示设备、方法
WO2020238741A1 (zh) 图像处理方法、相关设备及计算机存储介质
AU2020250124B2 (en) Image processing method and head mounted display device
US20210406350A1 (en) Facial Recognition Method and Electronic Device
CN114119758B (zh) 获取车辆位姿的方法、电子设备和计算机可读存储介质
TWI746302B (zh) 多視點3d顯示屏、多視點3d顯示終端
CN211791829U (zh) 3d显示设备
CN112351194A (zh) 一种业务处理方法及设备
CN112584125A (zh) 三维图像显示设备及其显示方法
US20240119566A1 (en) Image processing method and apparatus, and electronic device
US20230005277A1 (en) Pose determining method and related device
WO2021110033A1 (zh) 3d显示设备、方法及终端
US20240013432A1 (en) Image processing method and related device
WO2021057626A1 (zh) 图像处理方法、装置、设备及计算机存储介质
CN211128026U (zh) 多视点裸眼3d显示屏、多视点裸眼3d显示终端
WO2021110026A1 (zh) 实现3d图像显示的方法、3d显示设备
CN115631250B (zh) 图像处理方法与电子设备
CN211528831U (zh) 多视点裸眼3d显示屏、裸眼3d显示终端
CN115150542B (zh) 一种视频防抖方法及相关设备
WO2021164387A1 (zh) 目标物体的预警方法、装置和电子设备
WO2021110040A1 (zh) 多视点3d显示屏、3d显示终端
CN113573045A (zh) 杂光检测方法及杂光检测装置
CN116703741B (zh) 一种图像对比度的生成方法、装置和电子设备
CN116703742B (zh) 识别模糊图像的方法和电子设备
CN117812523A (zh) 一种录音信号的生成方法、装置、系统和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20895782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020895782

Country of ref document: EP

Effective date: 20220629