CN117130162A - Image display method and related equipment - Google Patents

Image display method and related equipment

Info

Publication number
CN117130162A
CN117130162A
Authority
CN
China
Prior art keywords
area
light splitting
image
eye box
display screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311065834.6A
Other languages
Chinese (zh)
Inventor
金康
王云帆
管晋
Current Assignee
Zhejiang Chiyun Technology Co ltd
Original Assignee
Zhejiang Chiyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Chiyun Technology Co ltd filed Critical Zhejiang Chiyun Technology Co ltd
Priority to CN202311065834.6A
Publication of CN117130162A
Legal status: Pending

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/30Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving parallax barriers
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/50Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0112Head-up displays characterised by optical features comprising device for genereting colour display
    • G02B2027/0114Head-up displays characterised by optical features comprising device for genereting colour display comprising dichroic elements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The application relates to the technical field of automotive electronics, and in particular to an image display method and related equipment. The image display method comprises: obtaining an eye box partitioning result for an image to be displayed and the number of light splitting units of a light splitting element; dividing the display screen into a plurality of preset areas, including a central area and an edge area, based on the pixels in the image to be displayed and the number of light splitting units of the light splitting element; and adjusting the radii of the light splitting units corresponding to the central area and the edge area, respectively, based on the eye box partitioning result, so that the pixels covered by the central area and the edge area reach a target light-emitting state; and displaying the image to be displayed based on the display screen and the adjusted light splitting element. The application makes the pixel light energy distribution in the edge area of the display screen of the head-up display device consistent with that in the central area, so that through the imaging structure the user's left and right eyes are shown two images that have a certain parallax and similar brightness, providing the user with a better three-dimensional stereoscopic visual experience.

Description

Image display method and related equipment
Technical Field
The application relates to the technical field of automobile electronics, in particular to an image display method and related equipment.
Background
A head-up display (HUD) device can accurately project dashboard information, as well as navigation elements obtained by analyzing the vehicle and road conditions through intelligent-driving sensors (such as cameras and radar), onto the front windshield of the vehicle or onto a display. On this basis, a 3D HUD can be realized: using naked-eye 3D display technology, the user can view a three-dimensional image directly with the naked eye, without wearing special 3D glasses. There are various ways to realize naked-eye 3D; lenticular-lens 3D display is a common one. Its principle is to attach a layer of special lenticular lenses in front of a conventional display screen to split the light, so that a person's left and right eyes see different pictures, which are fused into a picture with a 3D effect in the brain.
Fusing a 3D-effect picture requires that the left and right eyes see two images with a certain parallax and similar brightness; the brain then produces the 3D stereoscopic impression by processing and comparing them. However, a HUD system is non-uniform, so the two images seen by the user's left and right eyes may differ noticeably in brightness and therefore fail to fuse into a 3D effect.
Disclosure of Invention
In order to solve the technical problem, a first aspect of the present application discloses an image display method, which is suitable for a head-up display device, wherein the head-up display device comprises a display screen and a light splitting element attached to the display screen, and the method comprises:
acquiring an eye box partitioning result of an image to be displayed and the number of light splitting units of a light splitting element; the image to be displayed comprises a plurality of pixels distributed in an array; the eye box partition result represents the corresponding relation between a preset eye box partition and each pixel;
dividing the display screen into a plurality of preset areas based on the pixels in the image to be displayed and the number of the light splitting units of the light splitting element; each preset area corresponds to a preset number of light splitting units in the light splitting element; the preset area comprises a central area and an edge area;
adjusting the radius of the light splitting unit corresponding to the central area based on the eye box partitioning result so as to enable the pixels covered by the central area to reach a target luminous state;
based on the eye box partitioning result and the adjusted light splitting unit radius corresponding to the central area, adjusting the light splitting unit radius corresponding to the edge area so that the pixels covered by the edge area reach the target luminous state;
and displaying the image to be displayed based on the display screen and the adjusted light splitting element.
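Read as an algorithm, the five claimed steps can be sketched in Python. Every name, the thirds-based region split, and the numeric radii below are illustrative assumptions; the patent discloses no concrete values.

```python
def render_3d_frame(pixels, eyebox_partition, num_units,
                    center_radius_mm=0.50, edge_radius_mm=0.55):
    """Hypothetical end-to-end sketch of the claimed method.

    The region split (thirds) and the radii are illustrative assumptions;
    the patent discloses no concrete values.
    """
    # Step 1: the eye box partition result maps every pixel to a viewing
    # zone; the light splitting element has `num_units` lenticules.
    assert len(eyebox_partition) == len(pixels)

    # Step 2: divide the screen into a central area and two edge areas,
    # each backed by a preset number of light splitting units.
    third = num_units // 3
    regions = {
        "left_edge":  range(0, third),
        "center":     range(third, 2 * third),
        "right_edge": range(2 * third, num_units),
    }

    # Steps 3-4: assign radii per region; the edge radius is tuned after
    # (and relative to) the center radius and must end up larger than it.
    radii = {name: (center_radius_mm if name == "center" else edge_radius_mm)
             for name in regions}
    assert radii["left_edge"] > radii["center"]

    # Step 5: display via the screen and the adjusted element (stubbed here).
    return regions, radii
```

The two assertions encode the structural constraints the claims impose (pixel-to-zone mapping exists; adjusted edge radius exceeds the adjusted center radius); a real implementation would replace the stubbed display step with driving the panel.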
Optionally, the dividing the display screen into a plurality of preset areas based on the pixels in the image to be displayed and the number of the light splitting units of the light splitting element includes:
determining the central pixel, wherein the central pixel is at least one pixel corresponding to the geometric central position of the display screen;
extending to cover a preset number of pixels around the central pixel based on the central pixel, and taking the central pixel and the area where the preset number of pixels are located as the central area;
and determining the edge area based on the boundary of the central area, the boundary of the display screen and the number of light splitting units of the light splitting element.
Optionally, the determining the edge area based on the boundary of the central area, the boundary of the display screen, and the number of light splitting units of the light splitting element includes:
determining a first edge area and a second edge area corresponding to the first edge area based on the boundary of the central area and the boundary of the display screen; the first edge region is located to the left of the central region; the second edge region is positioned on the right side of the central region;
dividing the first edge area into at least one first sub-edge area and dividing the second edge area into at least one second sub-edge area according to the number of light splitting units of the light splitting element; each first sub-edge area and each corresponding second sub-edge area are correspondingly provided with the same number of light splitting units;
the edge region is determined based on the first sub-edge region and the second sub-edge region.
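The sub-edge pairing described above can be sketched as follows; the helper name and the chunking scheme are assumptions — the claim only requires that each first sub-edge region and its corresponding second sub-edge region contain the same number of light splitting units.

```python
def pair_sub_edge_regions(edge_units_left, edge_units_right, units_per_sub):
    """Split each edge region into sub-edge regions of `units_per_sub`
    light splitting units, pairing the i-th left sub-region with the
    i-th right one.

    Hypothetical helper; only the equal-count pairing is from the claim.
    """
    def chunk(units):
        return [units[i:i + units_per_sub]
                for i in range(0, len(units), units_per_sub)]

    left_subs, right_subs = chunk(edge_units_left), chunk(edge_units_right)
    assert len(left_subs) == len(right_subs)
    return list(zip(left_subs, right_subs))
```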
Optionally, the adjusting the radius of the light splitting unit corresponding to the central area based on the result of the eye box partition of the image to be displayed, so that the pixel covered by the central area reaches the target light emitting state includes:
based on the eye box partitioning result, simulating to obtain an eye box test result corresponding to the pixels covered by the central area; the eye box test result comprises an initial light-emitting state of the pixels covered by the central area;
and adjusting the radius of the light splitting unit corresponding to the central area based on the eye box test result corresponding to the pixel covered by the central area, so that the pixel covered by the central area reaches the target luminous state.
Optionally, the adjusting the radius of the light splitting unit corresponding to the edge area based on the result of the eye box partitioning and the adjusted radius of the light splitting unit corresponding to the center area, so that the pixel covered by the edge area reaches the target light emitting state includes:
Based on the eye box partitioning result, determining an eye box test result corresponding to the pixels covered by the edge area;
based on the eye box test result corresponding to the pixels covered by the edge area and the adjusted light splitting unit radius corresponding to the central area, adjusting the light splitting unit radius corresponding to the edge area so that the pixels covered by the edge area reach the target luminous state; the radius of the light splitting unit corresponding to the adjusted edge area is larger than that of the light splitting unit corresponding to the adjusted central area.
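One plausible reading of these adjust-until-target steps is a feedback loop over a brightness simulation. The simulator, step size, and monotone brightness model below are placeholders, not part of the disclosure; the `lower_bound` argument is one way to honor the requirement that the adjusted edge radius exceed the adjusted center radius.

```python
def tune_radius(simulate_brightness, target, r0, lower_bound=0.0,
                step=0.01, tol=0.02, max_iter=200):
    """Adjust a light splitting unit radius until the simulated brightness
    of the covered pixels is within `tol` of `target`.

    `simulate_brightness(r)` stands in for the optical eye box simulation
    the method relies on; everything else is an assumed implementation.
    """
    r = max(r0, lower_bound)
    for _ in range(max_iter):
        b = simulate_brightness(r)
        if abs(b - target) <= tol:
            return r
        # Assumed monotone model: a larger radius spreads light and dims it.
        r += step if b > target else -step
        r = max(r, lower_bound)
    return r
```

The center region would be tuned first with `lower_bound=0.0`; the edge regions would then be tuned with `lower_bound` set to the tuned center radius.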
Optionally, before the obtaining the result of the eye box partition of the image to be displayed and the number of the light splitting units of the light splitting element, the method further includes:
acquiring the preset eye box partition; the preset eye box partition comprises a left visual area and a right visual area which are arranged according to a preset sequence;
determining partition information of each pixel of an image to be displayed based on the arrangement mode of a left visual area and a right visual area in the preset eye box partition and the number of pixels covered by each light splitting unit;
and determining an eye box partition result of the image to be displayed based on the partition information.
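The partition step above — assigning pixels to left and right viewing zones according to the zone arrangement and the number of pixels covered by each light splitting unit — can be sketched as follows; the two-pixels-per-unit default and the "L"/"R" labels are assumptions.

```python
def eyebox_partition(num_pixels, pixels_per_unit=2, order=("L", "R")):
    """Assign each pixel column a viewing zone ("L" = left eye,
    "R" = right eye), repeating the zone order under every light
    splitting unit.

    Assumes the common case of two covered pixels per unit; the exact
    count and zone order are design parameters left open by the method.
    """
    zones_per_unit = len(order)
    pixels_per_zone = pixels_per_unit // zones_per_unit
    return [order[(i // pixels_per_zone) % zones_per_unit]
            for i in range(num_pixels)]
```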
In another aspect, the present application also provides an image display apparatus, including:
the acquisition module is used for acquiring an eye box partitioning result of the image to be displayed and the number of light splitting units of the light splitting element; the image to be displayed comprises a plurality of pixels distributed in an array; the eye box partition result represents the corresponding relation between a preset eye box partition and each pixel;
the area dividing module is used for dividing the display screen into a plurality of preset areas based on the pixels in the image to be displayed and the number of the light splitting units of the light splitting elements; each preset area corresponds to a preset number of light splitting units in the light splitting element; the preset area comprises a central area and an edge area;
the first adjusting module is used for adjusting the radius of the light splitting unit corresponding to the central area based on the eye box partitioning result so as to enable the pixels covered by the central area to reach a target luminous state;
the second adjusting module is used for adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box partitioning result and the adjusted radius of the light splitting unit corresponding to the central area so as to enable the pixels covered by the edge area to reach the target luminous state;
and the display module is used for displaying the image to be displayed based on the display screen and the adjusted light splitting element.
Optionally, the area dividing module includes:
the central pixel determining module is used for determining a central pixel, and the central pixel is at least one pixel corresponding to the geometric central position of the display screen;
the central region determining module is used for extending and covering a preset number of pixels to the periphery of the central pixel based on the central pixel, and taking the region where the preset number of pixels are located as a central region;
the edge area determining module is used for determining an edge area based on the boundary of the central area, the boundary of the display screen and the number of the light splitting units of the light splitting element.
Optionally, the edge region determining module includes:
an edge region determining subunit configured to determine a first edge region and a second edge region corresponding to the first edge region based on a boundary of the center region and a boundary of the display screen; the first edge region is positioned on the left side of the central region; the second edge area is positioned on the right side of the central area;
an edge region dividing unit, configured to divide the first edge region into at least one first sub-edge region and the second edge region into at least one second sub-edge region according to the number of light splitting units of the light splitting element; each first sub-edge region and its corresponding second sub-edge region are provided with the same number of light splitting units;
and an edge region determination unit configured to determine an edge region based on the first sub-edge region and the second sub-edge region.
Optionally, the first adjustment module includes:
the first eye box test result generating unit is used for obtaining an eye box test result corresponding to the pixels covered by the central area in a simulation mode based on the eye box partition result; the eye box test result comprises an initial light emitting state of the pixels covered by the central area;
the first adjusting unit is used for adjusting the radius of the light splitting unit corresponding to the central area based on the eye box test result corresponding to the pixel covered by the central area so that the pixel covered by the central area reaches the target luminous state.
Optionally, the second adjustment module includes:
the second eye box test result generating unit is used for determining an eye box test result corresponding to the pixels covered by the edge area based on the eye box partition result;
the second adjusting unit is used for adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box test result corresponding to the pixel covered by the edge area and the radius of the light splitting unit corresponding to the adjusted central area, so that the pixel covered by the edge area reaches the target luminous state; the radius of the light splitting unit corresponding to the adjusted edge area is larger than that of the light splitting unit corresponding to the adjusted central area.
Optionally, the image display device further includes:
the preset eye box partition acquisition module is used for acquiring preset eye box partitions; the preset eye box partition comprises a left visual area and a right visual area which are arranged according to a preset sequence;
the partition information determining module is used for determining partition information of each pixel of the image to be displayed based on an arrangement mode of a left visual area and a right visual area in a preset eye box partition and the number of pixels covered by each light splitting unit;
and the eye box partition result determining module is used for determining the eye box partition result of the image to be displayed based on the partition information.
On the other hand, the application also provides a head-up display device, comprising: a display screen, a light splitting element, a memory, and a processor;
the display screen is used for displaying an image to be displayed;
the light splitting element is arranged on the display screen and is used for refracting imaging light rays of the image to be displayed, which are emitted by the display screen, so that the imaging light rays are incident to different areas of the eye box in different directions;
the memory is used for storing the processor executable instructions;
the processor is configured to execute the instructions to implement the image display method as described above.
On the other hand, the application also provides a carrier, which comprises the head-up display device.
In another aspect, the present application also provides a computer readable storage medium having stored therein at least one instruction or at least one program that is loaded and executed by a processor to implement the image display method as described above.
By adopting the technical scheme, the image display method provided by the application has the following beneficial effects:
according to the application, the display screen and the light splitting elements corresponding to the display screen are partitioned, different radiuses are respectively adjusted for the light splitting units in different areas, the radius of the light splitting unit corresponding to the central area in the ideal state is designed to enable the pixel light energy distribution in the central area of the display screen to be achieved, the radius of the light splitting unit corresponding to the edge area of the display screen is optimally adjusted on the basis, and finally the pixel light energy distribution in the edge area of the display screen is consistent with the pixel light energy distribution in the central area of the display screen, so that the left eye and the right eye of a user can display two 3D images which have certain parallax and have close brightness through an imaging structure, and the ideal 3D fusion effect is achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an implementation environment of an image display method based on a head-up display device according to an embodiment of the present application;
fig. 2 is a schematic diagram of a 3D HUD effect according to an embodiment of the present application;
fig. 3 is a schematic diagram of an optical principle of a 3D HUD according to an embodiment of the present application;
fig. 4 is a schematic diagram of a light splitting principle of a light splitting element according to an embodiment of the present application;
fig. 5 is a schematic flow chart of an image display method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a head-up display device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a left eye image and a right eye image generated by a head-up display device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a display screen partition according to an embodiment of the present application;
FIG. 9 is a schematic diagram of light energy distribution of pixels in different areas of a display screen according to an embodiment of the present application;
FIG. 10 is a schematic diagram of pixel light energy distribution at different eye box positions according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of an alternative method of displaying images according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an alternative display screen partitioning process of an image display method according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a display screen and a light splitting element according to an embodiment of the present application;
FIG. 14 is a schematic diagram of an adjusted display and a light splitting element according to an embodiment of the present application;
FIG. 15 is a schematic view of light energy distribution of pixels in different areas of a display screen according to an embodiment of the present application;
fig. 16 is a schematic block diagram of an image display apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the application. In the description of the present application, it should be understood that the directions or positional relationships indicated by the terms "upper", "lower", "top", "bottom", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of description and simplification of the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be constructed and operated in a specific orientation, and therefore should not be construed as limiting the application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may include one or more of the feature, either explicitly or implicitly. Moreover, the terms "first," "second," and the like, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein.
Fig. 1 is a schematic view of an application scenario of an image display method based on a head-up display device according to a possible embodiment of the present application. As shown in fig. 1, the application scene is a vehicle in which a head-up display (HUD) is provided. The HUD is an integrated electronic display device composed of electronic components, display components, a controller, and the like, capable of projecting information such as vehicle speed, navigation, and warnings, in the form of images and text, in front of the user through optical components, for example onto the windshield; the user views the virtual image formed by the image source after reflection by mirror 1 and mirror 2 onto the windshield.
It should be noted that fig. 1 is merely an exemplary scenario. In other scenarios, other implementation environments may also be included, such as aircraft, high-speed rail, and the like.
A 3D HUD uses naked-eye 3D display technology, so the user can view a three-dimensional image directly with the naked eye without wearing special 3D glasses. Naked-eye 3D can be realized in various ways, for example with lenticular-lens 3D display technology, whose principle is to attach a layer of special lenticular lenses on a conventional display screen to split the light, so that a person's left and right eyes see different pictures, which the brain fuses into a picture with a 3D effect.
Fig. 2 is a schematic view of a 3D HUD effect according to a possible embodiment of the present application. In fig. 2, through the optical design of the HUD system, the user's left eye sees image I1 and the right eye sees image I2, and images I1 and I2 compose a stereoscopic picture with a sense of depth in the user's brain. By changing the position between the two images and thereby adjusting the binocular parallax, the distance at which the user perceives the fused image can be changed (the actual virtual image distance remains unchanged): the closer the two images are, the nearer the image appears to the user; conversely, the farther apart the two images are, the farther away it appears.
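The closer-means-nearer relationship stated here follows from similar triangles. The patent gives no formula, so the standard stereoscopy relation below, with an assumed 65 mm interpupillary distance, is offered only as an illustration.

```python
def perceived_distance(d_image_m, separation_m, ipd_m=0.065):
    """Perceived distance of the fused image from binocular disparity.

    d_image_m: distance of the HUD virtual image plane from the eyes;
    separation_m: horizontal offset between the right- and left-eye images
    (positive = uncrossed disparity, perceived behind the image plane);
    ipd_m: interpupillary distance, here an assumed 65 mm average.
    """
    if separation_m >= ipd_m:
        raise ValueError("separation must be smaller than the IPD")
    # Similar triangles: the two viewing rays intersect at e*D / (e - s).
    return ipd_m * d_image_m / (ipd_m - separation_m)
```

Zero separation yields the virtual image plane itself; increasing uncrossed separation pushes the fused image farther away, matching the behavior described above.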
How to optically design the HUD system is described below. Fig. 3 is a schematic diagram of the optical principle of a 3D HUD according to a possible embodiment of the present application, where the 3D HUD shown in fig. 3 includes an image source (liquid crystal display), a light splitting element (lenticular lens) attached to the image source, and a mirror assembly (mirror 1, mirror 2), etc. In some possible embodiments, the reflecting mirror 1 may be a plane mirror or a free-surface curved mirror, and the reflecting mirror 2 may be a free-surface curved mirror.
The lenticular lens attached to the image source is a light splitting element composed of a series of cylindrical lenses. Referring to fig. 4, after the image light from the display screen of the HUD is refracted by the lenticular lens, it is split into rays with different exit angles; after being reflected in turn by the mirror assembly and an imaging structure (such as the windshield of a vehicle), the rays enter the user's left and right eyes respectively, so that the two eyes see two images with parallax. The condition for a three-dimensional stereoscopic display effect is that the left and right eyes see two images with a certain parallax and similar brightness, from which the brain produces a 3D stereoscopic impression. However, a HUD system is non-uniform: when the HUD optical system is designed, it is generally only ensured that light from pixels in the central area of the display screen, after being split by the lenticular lens, enters the left and right viewing zones of the eye box area, so that the image of the central area is displayed normally in the eye box with uniform brightness and the images observed by the user's left and right eyes are ideal.
However, after the light from pixels in the edge area of the display screen is split by the lenticular lens, compared with the central area, the user's left and right eyes may observe uneven brightness in the edge area of the image, or the light emitted by the edge-area pixels may be dimmed. As a result, the brightness of the images seen by the user's left and right eyes differs greatly, and the user cannot be given a three-dimensional viewing experience.
In order to solve the above problems in the prior art, an aspect of the present application provides an image display method suitable for a head-up display device, so as to solve the problem of degraded light quality in the edge area of a HUD system and thereby provide a high-quality 3D visual effect for the user.
Referring to fig. 5, a flow chart of an image display method of the present application is shown, comprising the following steps:
s501, acquiring an eye box partitioning result of an image to be displayed and the number of light splitting units of a light splitting element; the image to be displayed comprises a plurality of pixels distributed in an array; the eye box partition result characterizes the corresponding relation between the preset eye box partition and each pixel in the image to be displayed.
In the embodiment of the application, the head-up display device at least comprises a display screen and a light splitting element, and the display screen can be a liquid crystal display screen (Liquid Crystal Display, LCD). In the embodiment of the application, the light emitting units can be pixels, and a plurality of pixels are distributed in an array in the display screen and are used for displaying images to be displayed. In practical applications, for a color liquid crystal screen, the minimum light-emitting unit is each pixel of R/G/B, the R pixel is used for displaying red light, the G pixel is used for displaying green light, and the B pixel is used for displaying blue light, so that images with different colors and brightness can be generated by controlling the brightness and combination mode of each pixel of R/G/B. Thus, when the display screen is a color liquid crystal screen, each pixel of the present application can be understood as each R/G/B pixel.
The light splitting element is used for refracting the light rays emitted by the display screen into light rays with different exit angles; these rays pass in sequence through the reflecting mirror assembly and the imaging structure (such as a windshield of a vehicle) and then enter the left eye and the right eye of the user respectively, so that the images seen by the two eyes differ. The light splitting element realizing this function can be a slit grating or a lenticular grating. The embodiment of the present application adopts a lenticular grating as the light splitting element; the light splitting units are the sub-gratings formed by the cylindrical lenses arranged in parallel in the lenticular grating, and each sub-grating refracts the light rays emitted by the pixels of the display screen. For ease of understanding, the light splitting elements in the embodiments of the present application are all illustrated using a lenticular grating, and the light splitting units using sub-gratings, as examples. In the embodiment of the present application, each sub-grating may cover a preset number of pixels, for example, 2 pixels.
For the eye box partition results, in one possible embodiment, the eye box partition results may be determined by: acquiring a preset eye box partition; the preset eye box partition comprises a left visual area and a right visual area which are arranged according to a preset sequence; determining partition information of each pixel of an image to be displayed based on an arrangement mode of a left visual area and a right visual area in a preset eye box partition and the number of pixels covered by each light splitting unit; and determining an eye box partition result of the image to be displayed based on the partition information.
The present application is illustrated with an eye box comprising two subareas, namely an e1 area and an e2 area, arranged in a preset manner; for example, the e1 area serves as the left visual area and the e2 area as the right visual area, which can be understood as the positions of the left eye and the right eye of the user falling within the e1 area and the e2 area respectively. In practical applications, the partition information of each pixel of the image to be displayed can be determined from the number of pixels covered by the width of each sub-grating in the lenticular grating, and the eye box partition result determined from it.
Referring to fig. 6, in one possible embodiment, the width of one sub-grating covers two adjacent pixels, one of the two adjacent pixels is used for displaying a left eye image, the other pixel is used for displaying a right eye image, the pixels distributed by the plurality of arrays are divided into a plurality of groups, each group is a period, and each pixel on the display screen periodically corresponds to an e1 region and an e2 region of the eye box to form a left eye image and a right eye image to be displayed.
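The periodic pixel-to-zone correspondence described above can be sketched in a few lines of Python (a hypothetical illustration: the zone names e1/e2 and the two-pixels-per-lenticule coverage follow the example in the text, while the function name and return format are assumptions):

```python
# Hypothetical sketch: derive the eye box partition result for an image whose
# pixel columns are covered two-per-lenticule, alternating left (e1) / right (e2).
def eye_box_partition(num_pixels, pixels_per_lenticule=2, zones=("e1", "e2")):
    """Return, per pixel column, the eye box zone its light is steered into.

    Assumes the zones repeat once per lenticule in the given order, so with
    2 pixels per lenticule even columns feed the left-eye zone e1 and odd
    columns feed the right-eye zone e2.
    """
    if pixels_per_lenticule != len(zones):
        raise ValueError("one zone per covered pixel is assumed here")
    return [zones[i % pixels_per_lenticule] for i in range(num_pixels)]

partition = eye_box_partition(8)
# → ['e1', 'e2', 'e1', 'e2', 'e1', 'e2', 'e1', 'e2']
```

The alternating list is exactly the "one period per group" correspondence of the preceding paragraph: the e1-tagged columns together form the left-eye image and the e2-tagged columns the right-eye image.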
For convenience of explanation and understanding, it is assumed that the left eye image to be displayed is a pure white image L1, the right eye image is a pure black image R1, the pure white image L1 and the pure black image R1 are displayed through a display screen according to a preset display method, and after the light is split by a lenticular lens, when the left eye of the user is located in an e1 region of the eye box, the right eye is located in an e2 region of the eye box, an ideal state image viewed by the left eye and the right eye of the user through an imaging structure should be as shown in fig. 7, that is, the left eye can only see the pure white image L1, and the right eye can only see the pure black image R1.
However, due to the non-uniformity of the HUD system, as described above, after the pixels in the edge area of the display screen are split by the lenticular grating, their brightness is uneven compared with that of the pixels in the central area: when the eyes view from some areas of the eye box, the light emitted by the edge-area pixels is either not seen at all or appears darkened, so the 3D imaging effect is poor.
For ease of understanding, assume the display screen is divided into three regions Q1 (left edge region), Q2 (center region) and Q3 (right edge region), with 3 pixels a, b and c respectively corresponding to the three regions, as shown in fig. 8. The illuminance distribution of the 3 pixels a, b and c in the display screen is simulated and analyzed with optical system modeling software (LightTools); the distribution is reflected by the pixel waveforms of the 3 pixels and the illuminance intensity distribution diagram of the light emitted by the pixels in the eye box. From this distribution it can be analyzed whether the light emitted by the pixels is distorted when, modulated by the lenticular grating and the optical system, it reaches the eyes of the user.
In fig. 9, (a), (b) and (c) are the pixel waveforms in the optical system modeling software (LightTools) after the b pixel, the a pixel and the c pixel are turned on, respectively, and (d), (e) and (f) are the illuminance intensity distribution diagrams in the eye box of the light emitted by the b pixel, the a pixel and the c pixel, respectively.
In fig. 9, the pixel waveforms (a), (b), and (c) represent the energy distribution areas of the light emitted from the pixels, the vertical axis y represents the illuminance of the light, and the horizontal axis x represents the observation position of the user.
In the illuminance intensity distribution diagrams of the eye box shown in fig. 9 (d), (e) and (f), the middle shaded area S1 has the highest illuminance intensity, the shaded areas S2 on both sides have medium illuminance intensity, and the blank areas S3 on both sides have the lowest illuminance intensity, close to zero, corresponding to no light emission. As can be seen from fig. 9, the waveform of the b pixel and its illuminance intensity distribution in the eye box represent an ideal image spot energy distribution, while the waveforms of the a pixel and the c pixel are narrowed relative to that of the b pixel; that is, the energy distribution areas of the a pixel and the c pixel in the eye box are narrowed relative to that of the b pixel. The illuminance intensity distribution of the light emitted by the b pixel in the eye box is ideal, whereas both the highest-illuminance region (middle shaded area) and the medium-illuminance region (shaded areas on both sides) of the a pixel and the c pixel are narrowed compared with those of the b pixel.
Continuing to analyze the problem caused by this narrowing of the emitted light's energy distribution area, referring to fig. 10, assume the eye is at point P2 in the eye box: it can see the image formed by the light emitted by all 3 pixels a, b and c, that is, a whole complete image. When the eye is at point P1 in the eye box, the light energy of the a pixel and the c pixel is lower than that of the b pixel, and their energy distribution areas are narrowed and distorted relative to that of the b pixel, so the brightness of the a pixel and the c pixel seen at P1 is lower than that of the b pixel. Thus, when the eye is at P1, the light emitted by the a pixel and the c pixel of the display screen is weaker, a partial area of the image is deformed (the image is incomplete) or shows a brightness difference, and the displayed image does not conform to the ideal image. In other words, because the light energy distribution of the pixels in the edge area of the display screen becomes narrower and their brightness lower, when viewing within a fixed eye box (for example, an eye box of 130×50 mm with an adjustment range of ±50 mm in the vertical direction), the light emitted by the edge-area pixels may appear darkened or not be seen at all.
With continued reference to fig. 10, assume that the image viewed at point P1 is the left-view image and the image viewed at point P2 is the right-view image. Comparing the light energy distributions at P1 and P2 in the figure, a situation arises in which the left-view and right-view images cannot be fused into a 3D effect: at P2 the illuminance of pixels a, b and c is identical and in the highest region, so the right-view image is a complete image; at P1 the illuminance of the three pixels differs, with the a pixel weak and the c pixel close to zero, so the left-view image shows brightness differences between areas and is dimmer overall than the right-view image. The precondition for fusing a 3D effect is that the left eye and the right eye see two images with a certain parallax and similar brightness, from which the brain produces a stereoscopic impression through processing and comparison; therefore, if the brightness of the left-view and right-view images differs greatly, the brain has difficulty fusing them, and the generation of the 3D effect is impaired.
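The brightness mismatch argued above can be reproduced with a toy numerical model (an illustrative assumption, not the patent's LightTools simulation): each pixel's illuminance across the eye box is modeled as a Gaussian, with the edge pixels a and c given narrower, shifted profiles than the center pixel b.

```python
import math

def illuminance(x, center, width):
    # toy Gaussian profile of one pixel's illuminance across the eye box
    return math.exp(-((x - center) / width) ** 2)

# (profile center, profile width) per pixel, in arbitrary eye box units;
# a and c are edge pixels with narrower, shifted energy distributions
pixels = {"a": (-2.0, 4.0), "b": (0.0, 8.0), "c": (2.0, 4.0)}

def brightness_at(x):
    return {name: illuminance(x, c, w) for name, (c, w) in pixels.items()}

p2 = brightness_at(0.0)   # sweet spot: all three pixels comparably bright
p1 = brightness_at(-6.0)  # off to one side: a is weak, c is close to zero
```

Evaluating this model, at P2 all three pixels stay above 0.7 of peak illuminance, while at P1 the c pixel drops below 0.02 and the a pixel is dimmer than b, mirroring the complete right-view image versus the dim, uneven left-view image described above.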
In order that the user can view a 3D image in the ideal state even where edge-area pixels are concerned, the display screen can be divided into a central area and an edge area, and the radii of the sub-gratings of the lenticular grating corresponding to the different areas adaptively adjusted to change the light splitting paths of the pixels, so that the pixels in both the central area and the edge area of the display screen reach ideal brightness after being split by the lenticular grating, and the user can view an image in the ideal state in every area of the eye box.
Because the lenticular grating adopted in the embodiment of the present application is composed of a plurality of tiny cylindrical lenses (sub-gratings), when light passes through the grating, a different surface radius of each sub-grating changes the refraction angle and propagation direction of the light. When the surface radius of a sub-grating is larger, the exit angle of the refracted light is larger, so the light is more widely dispersed in space and the energy distribution is relatively wider; when the surface radius is smaller, the exit angle is smaller, so the light is more concentrated and the energy distribution relatively narrower.
For dividing the display screen into a center area and an edge area, this can be achieved by the following step S502.
S502, dividing a display screen into a plurality of preset areas based on the number of pixels in an image to be displayed and light splitting units of a light splitting element; each preset area corresponds to a preset number of light splitting units in the light splitting element; the preset area includes a center area and an edge area.
In the embodiment of the application, only one central area is provided, namely, a part of area taking a central pixel of the display screen as a center, the edge areas are areas except the central area and distributed on two sides of the central area, one or more edge areas on each side can be provided, and if the number of the light splitting units of the light splitting element is more, the division of a plurality of edge areas can be performed according to the number of the light splitting units. The display screen is partitioned, each preset area corresponds to a preset number of light splitting units in the light splitting element, and the display screen can also be regarded as the light splitting element being partitioned, for example, the central area corresponds to 3 light splitting units, and the edge areas at two sides respectively correspond to 2 light splitting units.
Referring to fig. 11, in one possible implementation, step S502 may include the steps of:
s1101, determining a central pixel, wherein the central pixel is at least one pixel corresponding to the geometric central position of the display screen;
s1102, extending and covering preset number of pixels to the periphery of the central pixel based on the central pixel, and taking the central pixel and the area where the preset number of pixels are located as a central area;
s1103, determining an edge area based on the boundary of the central area, the boundary of the display screen and the number of sub-gratings of the light splitting element.
In practice, the display screen is generally rectangular, and there may be one or more central pixels; for example, the central pixel may be the single pixel at the geometric center of the display screen, or may include that pixel together with several adjacent pixels. In one possible embodiment, the above steps may be implemented as in fig. 12. As shown in fig. 12 (a), the two darkened pixels referenced to the geometric center position of the display screen are taken as the central pixels. As shown in fig. 12 (b), the region extends outward from the central pixels: vertically to the upper and lower edges of the display screen, and horizontally by a certain width on both sides of the central pixels, covering a preset number of pixels; the area where these pixels are located, shown with a light gray background, is the central area. It can be understood that the vertical direction is the direction of the short axis of the rectangle and the horizontal direction the direction of its long axis. As shown in fig. 12 (c), the dark gray background area is the edge area, whose boundary is composed of the boundary of the central area and the boundary of the display screen. The widths of the central area and the edge area may be set as appropriate and are not limited here, provided the pixels covered by the central area form an ideal image in the eye box.
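The partition of step S502 into Q1/Q2/Q3 can be sketched as follows (a hypothetical Python illustration; the column-based split, the function name and the widths are assumptions, with the center region sized in whole lenticules as in the text):

```python
# Hypothetical sketch of step S502: split the pixel columns of a screen into
# a center region Q2 and left/right edge regions Q1/Q3. The center region is
# anchored on the geometric-center column(s) and sized to cover a preset
# number of lenticules; all widths here are illustrative assumptions.
def partition_screen(num_cols, center_lenticules, pixels_per_lenticule=2):
    center_width = center_lenticules * pixels_per_lenticule
    start = (num_cols - center_width) // 2
    q1 = range(0, start)                         # left edge region
    q2 = range(start, start + center_width)      # center region
    q3 = range(start + center_width, num_cols)   # right edge region
    return q1, q2, q3

q1, q2, q3 = partition_screen(num_cols=6, center_lenticules=1)
# q1 covers columns 0-1, q2 covers 2-3, q3 covers 4-5 — one lenticule per
# region, as in the three-lenticule example of fig. 13
```

Because each region spans whole lenticules, the per-region sub-grating radius adjustment of the later steps maps one-to-one onto these column ranges.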
Regarding the determination of the edge area, which is an area other than the central area of the display screen, in a possible implementation manner, the step S1103 may include the following steps:
(1) Determining a first edge area and a second edge area corresponding to the first edge area based on the boundary of the central area and the boundary of the display screen; the first edge area is located on the left side of the central area and the second edge area on its right side;
(2) Dividing the first edge area into at least one first sub-edge area and the second edge area into at least one second sub-edge area;
(3) Determining the edge area based on the first sub-edge areas and the second sub-edge areas.
In the embodiment of the present application, the first edge area and the second edge area are distributed opposite each other on the two sides of the central area and are equal in size. In practical applications, if the number of light splitting units of the light splitting element is small, the first edge area can directly serve as a single first sub-edge area and the second edge area as a single second sub-edge area; that is, the display screen is divided into one central area and two edge areas, the central area being larger than the two edge areas, which are smaller and equal in size. If the number of light splitting units is large, the first edge area may be divided into more than one first sub-edge area and the second edge area into more than one second sub-edge area; for example, each may be divided into two sub-edge areas, so that the display screen is divided into one central area and four edge areas, the four edge areas being equal in size.
For a given display screen size, the number of light splitting units of the light splitting element is influenced by the radius of each light splitting unit: the larger the radius, the more pixels each unit covers, and the smaller the radius, the fewer. However, the more pixels a light splitting unit covers, the lower the resolution, so this point needs to be taken into account in the subsequent adjustment of the light splitting unit radius.
S503, adjusting the radius of the light splitting unit corresponding to the central area based on the eye box partitioning result so as to enable the pixels covered by the central area to reach the target light emitting state.
In one possible implementation, step S503 may include the steps of:
based on the eye box partitioning result, simulating to obtain an eye box test result corresponding to the pixels covered by the central area; the eye box test result comprises an initial light emitting state of the pixels covered by the central area; based on the eye box test result corresponding to the pixel covered by the central area, the radius of the light splitting unit corresponding to the central area is adjusted so that the pixel covered by the central area reaches the target luminous state.
In the embodiment of the present application, a simplified diagram of the partitioned display screen and lenticular grating may be shown in fig. 13. Taking three sub-gratings as an example, each sub-grating covers two pixels of the display screen, and the two central pixels B1 and R2 serve as the center pixels. The display screen is divided into three areas: Q1 (left edge area), Q2 (center area) and Q3 (right edge area); the center area Q2 and the edge areas Q1/Q3 on both sides each correspond to one sub-grating, so the lenticular grating can likewise be regarded as divided into 3 areas corresponding to Q1, Q2 and Q3. The sub-grating radius corresponding to the center area Q2 is Rb, that corresponding to the edge area Q1 is Ra, and that corresponding to the edge area Q3 is Rc.
As can be seen from the foregoing, only the pixels corresponding to the central area of the display screen reliably enter the left and right eyes of the user after being split by the lenticular grating, allowing the user to view an image in the ideal state. Therefore, when designing the sub-gratings, the central area of the display screen is generally used as the design standard: the sub-grating radius Rb of the central area Q2 is designed and adjusted based on the eye box test result, so that the pixels covered by the central area reach the target light emitting state after being split by the lenticular grating and enter the left and right eyes of the user, who can then view a 3D image in the ideal state.
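The design loop of step S503 can be sketched as below. This is an illustrative assumption: `simulate_uniformity` stands in for the real eye box test (the patent performs it in optical modeling software such as LightTools), and the toy model simply peaks at an assumed ideal radius.

```python
# Illustrative sketch of S503: tune the center-region sub-grating radius Rb
# until a stand-in "eye box test" reports the target light emitting state.
def simulate_uniformity(radius, ideal_radius=3.0):
    # placeholder for the optical simulation: score is 1.0 at the assumed
    # ideal radius and falls off as the radius moves away from it
    return 1.0 / (1.0 + abs(radius - ideal_radius))

def tune_center_radius(r0, step=0.1, target=0.99, max_iters=1000):
    r = r0
    for _ in range(max_iters):
        if simulate_uniformity(r) >= target:
            return r  # target light emitting state reached
        # climb toward whichever neighbouring radius scores better
        if simulate_uniformity(r + step) > simulate_uniformity(r - step):
            r += step
        else:
            r -= step
    return r

rb = tune_center_radius(2.0)  # climbs to the assumed ideal radius of 3.0
```

In the real workflow each candidate radius would be re-simulated against the eye box partition result; the hill-climbing structure is the part this sketch is meant to convey.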
It should be noted that the above embodiment takes three sub-gratings, and hence a single central surface radius Rb, as an example only for illustration; in practice there may be many sub-gratings, and the display screen partitioning method and radius adjustment should be set according to the actual situation.
Next, the radius of the spectroscopic unit corresponding to the edge region is adjusted in step S504:
s504, adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box partitioning result and the adjusted radius of the light splitting unit corresponding to the central area, so that the pixels covered by the edge area reach the target light emitting state.
In one possible implementation, step S504 may include the steps of:
based on the eye box partitioning result, determining an eye box test result corresponding to the pixels covered by the edge area; based on an eye box test result corresponding to the pixel covered by the edge area and the adjusted light splitting unit radius corresponding to the central area, adjusting the light splitting unit radius corresponding to the edge area so that the pixel covered by the edge area reaches the target luminous state; the radius of the light splitting unit corresponding to the adjusted edge area is larger than that of the light splitting unit corresponding to the adjusted central area.
In the embodiment of the present application, in order to keep the light energy distribution of the pixels of the whole display screen consistent, that is, to keep the brightness of the formed image consistent, the sub-grating radius Ra of the edge area Q1 and the sub-grating radius Rc of the edge area Q3 are optimized on the basis of the adjusted sub-grating radius Rb of the central area Q2. Accordingly, based on the eye box test result and the adjusted radius Rb, the radii Ra and Rc are adjusted upward so that both become larger than Rb. After Ra and Rc are increased, the exit angle of the light refracted by the sub-gratings of the edge areas Q1 and Q3 increases, the light is more widely dispersed in space and its energy distribution relatively wider, so that the light energy distribution of the pixels of the edge areas Q1 and Q3 becomes consistent with that of the pixels of the central area Q2. That is, the brightness of the pixels of Q1 and Q3 is no longer darkened after being split by the lenticular grating; the target light emitting state is reached, and the left and right eyes of the user can view a 3D image in the ideal state.
In the embodiment of the application, the regulation rule for regulating the radius of the sub-grating is as follows: the larger the sub-grating radius is, the wider the light energy distribution area of the pixel after the sub-grating is split. The sub-grating radius adjustment process needs to be adjusted according to design requirements and practical application, and ensures that the pixel light energy distribution of three areas Q1, Q2 and Q3 reaches an ideal state after adjustment, so that a clear and complete 3D image effect is realized.
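The rule of step S504 — enlarge Ra and Rc relative to the tuned Rb until edge-pixel energy spreads as widely as center-pixel energy — can be sketched as follows (a hypothetical illustration; `edge_energy_width` is a placeholder for the real eye box test and simply assumes the spread grows with radius, per the adjustment rule stated above):

```python
def edge_energy_width(radius, k=1.0):
    # placeholder for the eye box test: assume the spread of a pixel's
    # energy distribution in the eye box grows with the sub-grating radius
    return k * radius

def tune_edge_radius(rb, center_width, step=0.05):
    # start from the tuned center radius and only ever increase it, since
    # the adjusted edge radii must end up larger than Rb
    r = rb
    while edge_energy_width(r) < center_width:
        r += step
    return r

rb = 3.0                                       # tuned center radius from S503
ra = rc = tune_edge_radius(rb, center_width=3.4)
# ra and rc end up larger than rb, matching the rule Ra, Rc > Rb
```

The numbers are arbitrary; the point is the stopping condition, which enlarges the edge radii exactly until the edge-pixel energy distribution matches the width achieved by the central area.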
Referring to fig. 14, a schematic diagram of the display screen and lenticular grating after the sub-grating radii are adjusted, it can be seen that the sub-grating radius Ra of the edge area Q1 and the sub-grating radius Rc of the edge area Q3 are larger than the sub-grating radius Rb of the central area Q2. Fig. 15 shows the light energy distribution achieved by the adjusted sub-gratings for a central-area pixel B1 and edge-area pixels R1 and G2 of the display screen: when the left and right eyes of the user are at points P1 and P2 respectively, the light energy distributions of pixels B1, R1 and G2 coincide in position, that is, wherever the user's eyes are within the eye box area, 3D images in the ideal state can be viewed. The above embodiments thus solve the problem that the light quality of the pixels at the edge of the display screen is reduced after passing through the lenticular grating, and improve the user's visual experience.
S505, displaying the image to be displayed based on the display screen and the adjusted light splitting element.
In the embodiment of the present application, the display screen and its corresponding lenticular grating are partitioned and the sub-grating surface profiles of the different areas adjusted: the sub-grating radius corresponding to the central area is designed so that the light energy distribution of the central-area pixels reaches the ideal state, and the sub-grating surface profiles of the edge areas are then optimized on that basis, until finally the light energy distribution of the edge-area pixels is consistent with that of the central-area pixels. As a result, the imaging structure presents to the user's left and right eyes two 3D images with a certain parallax and similar brightness, achieving the ideal 3D fusion effect.
In another aspect, referring to fig. 16, the present application provides an image display apparatus, the apparatus comprising:
an acquisition module 1601, configured to acquire an eye box partition result of an image to be displayed and the number of light splitting units of the light splitting element; the image to be displayed comprises a plurality of pixels distributed in an array; the eye box partition result represents the corresponding relation between the preset eye box partition and each pixel;
The area dividing module 1602 is configured to divide the display screen into a plurality of preset areas based on the number of the pixels in the image to be displayed and the light splitting units of the light splitting element; each preset area corresponds to a preset number of light splitting units in the light splitting element; the preset area comprises a central area and an edge area;
a first adjusting module 1603, configured to adjust a radius of the light splitting unit corresponding to the central area based on the result of the eye box division, so that the pixel covered by the central area reaches a target light emitting state;
the second adjusting module 1604 is configured to adjust a radius of the light splitting unit corresponding to the edge area based on the eye box partitioning result and the adjusted radius of the light splitting unit corresponding to the central area, so that the pixel covered by the edge area reaches a target light emitting state;
the display module 1605 is configured to display an image to be displayed based on the display screen and the adjusted spectroscopic element.
In one possible implementation, the region division module 1602 includes:
the central pixel determining module is used for determining a central pixel, and the central pixel is at least one pixel corresponding to the geometric central position of the display screen;
the central region determining module is used for extending and covering a preset number of pixels to the periphery of the central pixel based on the central pixel, and taking the region where the preset number of pixels are located as a central region;
The edge area determining module is used for determining an edge area based on the boundary of the central area, the boundary of the display screen and the number of the light splitting units of the light splitting element.
In one possible implementation, the edge region determination module includes:
an edge region determining subunit configured to determine a first edge region and a second edge region corresponding to the first edge region based on the boundary of the central region and the boundary of the display screen; the first edge region is located on the left side of the central region and the second edge region on its right side;
an edge region dividing unit configured to divide the first edge region into at least one first sub-edge region and the second edge region into at least one second sub-edge region;
and an edge region determination unit configured to determine an edge region based on the first sub-edge region and the second sub-edge region.
In one possible implementation, the first adjustment module 1603 includes:
the first eye box test result generating unit is used for obtaining an eye box test result corresponding to the pixels covered by the central area in a simulation mode based on the eye box partition result; the eye box test result comprises an initial light emitting state of the pixels covered by the central area;
The first adjusting unit is used for adjusting the radius of the light splitting unit corresponding to the central area based on the eye box test result corresponding to the pixel covered by the central area so that the pixel covered by the central area reaches the target luminous state.
In one possible implementation, the second adjustment module 1604 includes:
the second eye box test result generating unit is used for determining an eye box test result corresponding to the pixels covered by the edge area based on the eye box partition result;
the second adjusting unit is used for adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box test result corresponding to the pixel covered by the edge area and the radius of the light splitting unit corresponding to the adjusted central area, so that the pixel covered by the edge area reaches the target luminous state; the radius of the light splitting unit corresponding to the adjusted edge area is larger than that of the light splitting unit corresponding to the adjusted central area.
In one possible embodiment, the image display apparatus further includes:
the preset eye box partition acquisition module is used for acquiring the preset eye box partition; the preset eye box partition comprises a left visual area and a right visual area arranged in a preset sequence;
the partition information determining module is used for determining partition information of each pixel of the image to be displayed based on the arrangement of the left visual area and the right visual area in the preset eye box partition and the number of pixels covered by each light splitting unit;
and the eye box partition result determining module is used for determining the eye box partition result of the image to be displayed based on the partition information.
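A minimal sketch of how the partition information module might label pixels, assuming a lenticular-style element in which each light splitting unit covers a fixed number of pixel columns and the preset partition alternates left ("L") and right ("R") visual areas. Both assumptions are illustrative; the patent leaves the concrete arrangement to the preset eye box partition.

```python
def eye_box_partition(num_columns: int, pixels_per_unit: int,
                      pattern=("L", "R")) -> list:
    """Label each pixel column with the eye-box visual area it projects to."""
    labels = []
    for col in range(num_columns):
        # Position of this column within its light splitting unit.
        offset = col % pixels_per_unit
        # Map the offset onto the preset left/right arrangement.
        zone = pattern[offset * len(pattern) // pixels_per_unit]
        labels.append(zone)
    return labels
```

For two pixel columns per unit this alternates L/R column by column; for three, the first two columns of each unit feed the left visual area and the last feeds the right.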
In another aspect, the application further provides a head-up display device, comprising: a display screen, a light splitting element, a memory, and a processor;
the display screen is used for displaying an image to be displayed;
the light splitting element is arranged on the display screen and is used for refracting the imaging light of the image to be displayed emitted by the display screen, so that the imaging light is incident on different areas of the eye box in different directions;
a memory for storing the processor-executable instructions;
the processor is configured to execute the instructions to implement the image display method as described above.
In another aspect, the application further provides a vehicle comprising the head-up display device. The vehicle provided in this embodiment may include, but is not limited to, a land vehicle such as an automobile, an air vehicle such as an aircraft, or a water or underwater vehicle.
In another aspect, the present application also provides a computer readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement the image display method as described above.
Optionally, in the embodiments of this specification, the storage medium may be located on at least one of a plurality of network servers in a computer network. Optionally, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
The memory in the embodiments of this specification may be used to store software programs and modules; the processor performs various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by functions, and the like, and the data storage area may store data created according to the use of the device. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory may further include a memory controller to provide the processor with access to the memory.
In another aspect, the application further provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the image display method provided by the above method embodiments.
The foregoing describes preferred embodiments of the application and is not intended to limit the application to the precise forms disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within its scope.

Claims (10)

1. An image display method, applied to a head-up display device, the head-up display device comprising a display screen and a light splitting element attached to the display screen, wherein the method comprises:
acquiring an eye box partitioning result of an image to be displayed and the number of light splitting units of a light splitting element; the image to be displayed comprises a plurality of pixels distributed in an array; the eye box partition result represents the corresponding relation between a preset eye box partition and each pixel;
dividing the display screen into a plurality of preset areas based on the pixels in the image to be displayed and the number of light splitting units of the light splitting element; each preset area corresponds to a preset number of light splitting units in the light splitting element; the preset areas comprise a central area and an edge area;
adjusting the radius of the light splitting unit corresponding to the central area based on the eye box partitioning result so as to enable the pixels covered by the central area to reach a target luminous state;
based on the eye box partitioning result and the adjusted light splitting unit radius corresponding to the central area, adjusting the light splitting unit radius corresponding to the edge area so that the pixels covered by the edge area reach the target luminous state;
and displaying the image to be displayed based on the display screen and the adjusted light splitting element.
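The steps of claim 1 admit a compact sketch. The simulator, the search strategy, and the target test below are stand-in assumptions (the claim does not specify them); only the ordering — tune the central area first, then the edge areas, keeping edge radii strictly above the adjusted central radius — comes from claims 1 and 5.

```python
def tune_display(radii: dict, center: str, edges: list,
                 luminance_of, target: float, step: float = 0.01):
    """Two-stage radius adjustment following the ordering of claim 1.

    `luminance_of(region, radius)` stands in for the eye-box simulation
    of the luminous state; a monotonic response to radius is assumed.
    """
    # Stage 1: adjust the central area's radius until its pixels reach
    # the target luminous state.
    while luminance_of(center, radii[center]) < target:
        radii[center] += step
    # Stage 2: adjust each edge area, keeping its radius strictly larger
    # than the adjusted central radius (the claim 5 constraint).
    for region in edges:
        radii[region] = max(radii[region], radii[center] + step)
        while luminance_of(region, radii[region]) < target:
            radii[region] += step
    return radii
```

With a toy simulator where luminance simply equals the radius, the central radius is stepped up to the target and each edge radius ends strictly above it.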
2. The image display method according to claim 1, wherein dividing the display screen into a plurality of preset areas based on the pixels in the image to be displayed and the number of light splitting units of the light splitting element comprises:
determining a central pixel, wherein the central pixel is at least one pixel corresponding to the geometric central position of the display screen;
extending outward from the central pixel to cover a preset number of pixels around it, and taking the area where the central pixel and the preset number of pixels are located as the central area;
and determining the edge area based on the boundary of the central area, the boundary of the display screen and the number of light splitting units of the light splitting element.
3. The image display method according to claim 2, wherein determining the edge area based on the boundary of the central area, the boundary of the display screen, and the number of light splitting units of the light splitting element comprises:
determining a first edge area and a second edge area corresponding to the first edge area based on the boundary of the central area and the boundary of the display screen; the first edge area is located on the left side of the central area; the second edge area is located on the right side of the central area;
dividing the first edge area into at least one first sub-edge area and dividing the second edge area into at least one second sub-edge area according to the number of light splitting units of the light splitting element; each first sub-edge area and each corresponding second sub-edge area are correspondingly provided with the same number of light splitting units;
determining the edge area based on the first sub-edge area and the second sub-edge area.
4. The image display method according to claim 1, wherein adjusting the radius of the light splitting unit corresponding to the central area based on the eye box partitioning result of the image to be displayed so that the pixels covered by the central area reach the target luminous state comprises:
based on the eye box partitioning result, simulating to obtain an eye box test result corresponding to the pixels covered by the central area; the eye box test result comprises an initial light-emitting state of the pixels covered by the central area;
and adjusting the radius of the light splitting unit corresponding to the central area based on the eye box test result corresponding to the pixel covered by the central area, so that the pixel covered by the central area reaches the target luminous state.
5. The image display method according to claim 4, wherein adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box partitioning result and the adjusted radius of the light splitting unit corresponding to the central area so that the pixels covered by the edge area reach the target luminous state comprises:
determining, based on the eye box partitioning result, an eye box test result corresponding to the pixels covered by the edge area;
adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box test result corresponding to the pixels covered by the edge area and the adjusted radius of the light splitting unit corresponding to the central area, so that the pixels covered by the edge area reach the target luminous state; the adjusted radius of the light splitting unit corresponding to the edge area is larger than the adjusted radius of the light splitting unit corresponding to the central area.
6. The image display method according to claim 1, wherein before acquiring the eye box partitioning result of the image to be displayed and the number of light splitting units of the light splitting element, the method further comprises:
acquiring the preset eye box partition; the preset eye box partition comprises a left visual area and a right visual area which are arranged according to a preset sequence;
determining partition information of each pixel of an image to be displayed based on the arrangement mode of a left visual area and a right visual area in the preset eye box partition and the number of pixels covered by each light splitting unit;
and determining an eye box partition result of the image to be displayed based on the partition information.
7. An image display device, the device comprising:
the acquisition module is used for acquiring an eye box partitioning result of the image to be displayed and the number of light splitting units of the light splitting element; the image to be displayed comprises a plurality of pixels distributed in an array; the eye box partition result represents the corresponding relation between a preset eye box partition and each pixel;
the area dividing module is used for dividing the display screen into a plurality of preset areas based on the pixels in the image to be displayed and the number of light splitting units of the light splitting element; each preset area corresponds to a preset number of light splitting units in the light splitting element; the preset areas comprise a central area and an edge area;
the first adjusting module is used for adjusting the radius of the light splitting unit corresponding to the central area based on the eye box partitioning result so as to enable the pixels covered by the central area to reach a target luminous state;
the second adjusting module is used for adjusting the radius of the light splitting unit corresponding to the edge area based on the eye box partitioning result and the adjusted radius of the light splitting unit corresponding to the central area so as to enable the pixels covered by the edge area to reach the target luminous state;
and the display module is used for displaying the image to be displayed based on the display screen and the adjusted light splitting element.
8. A head-up display device, comprising: the device comprises a display screen, a light splitting element, a memory and a processor;
the display screen is used for displaying an image to be displayed;
the light splitting element is arranged on the display screen and is used for refracting imaging light rays of the image to be displayed, which are emitted by the display screen, so that the imaging light rays are incident to different areas of the eye box in different directions;
the memory is used for storing the processor executable instructions;
the processor is configured to execute the instructions to implement the image display method of any one of claims 1-6.
9. A vehicle comprising the heads-up display device of claim 8.
10. A computer-readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement the image display method of any one of claims 1-6.
CN202311065834.6A 2023-08-22 2023-08-22 Image display method and related equipment Pending CN117130162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311065834.6A CN117130162A (en) 2023-08-22 2023-08-22 Image display method and related equipment

Publications (1)

Publication Number Publication Date
CN117130162A 2023-11-28

Family

ID=88852168


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination