CN116456068A - Three-dimensional display method and device for image, display module and readable storage medium - Google Patents


Info

Publication number
CN116456068A
Authority
CN
China
Prior art keywords
eye position
eye
image
target image
current observation
Prior art date
Legal status
Pending
Application number
CN202310487410.2A
Other languages
Chinese (zh)
Inventor
朱丹枫
张哲
陈乃川
姜苏珈
陈明武
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202310487410.2A
Publication of CN116456068A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a three-dimensional display method and device for images, a display module, and a readable storage medium. The method includes the following steps: acquiring the eye position of a user at the current moment as a first eye position, and acquiring the eye position of the user at the previous moment as a second eye position; determining the positional deviation amount between the first eye position and the second eye position; correcting the first eye position by the positional deviation amount to obtain a current observation position; and using the current observation position to correspondingly update the target image displayed in the left half area and the target image displayed in the right half area of the display area, so that the user can observe the three-dimensional target image from the content displayed in the two half areas at the current observation position. Thus, after the user's eye position changes, the display module can adjust the image in the display area according to the changed current observation position, ensuring that the user can continue to observe the three-dimensional image with the naked eye.

Description

Three-dimensional display method and device for image, display module and readable storage medium
Technical Field
The present disclosure relates to the field of three-dimensional technologies, and in particular, to a three-dimensional image display method, device, display module, and readable storage medium.
Background
In the related art, naked-eye 3D (glasses-free 3D) requires the user to keep the eye position fixed to ensure continuous viewing of a three-dimensional image. If the user moves the head and changes the eye position, the user must return to the previous position to continue watching the three-dimensional image, resulting in a poor viewing experience.
Disclosure of Invention
The application provides a three-dimensional display method and device of an image, a display module and a readable storage medium.
The application provides a three-dimensional display method of an image, which comprises the following steps:
acquiring an eye position of a user at the current moment to serve as a first eye position, and acquiring an eye position of the user at the previous moment to serve as a second eye position;
determining a positional deviation amount based on the first eye position and the second eye position;
correcting the first eye position based on the position deviation amount to obtain a current observation position;
and based on the current observation position, correspondingly updating the target image in the left half area and the target image in the right half area of the display area at the same time so that a three-dimensional image can be observed at the current observation position.
In this three-dimensional display method, after receiving/acquiring the user's eye position at the current moment, the display module takes that position as the first eye position and the user's eye position at the previous moment as the second eye position. It then determines the deviation between the first eye position and the second eye position, i.e., the positional deviation amount. Next, it corrects the first eye position by the positional deviation amount so that the corrected current observation position accurately reflects the position of the user's eyes at the current moment. Finally, it uses the current observation position to correspondingly update the target images in the left and right halves of the display area, so that the user can observe a three-dimensional image at the current observation position.
Therefore, by determining the current observation position after the user's eye position changes, the display module can adjust the image in the display area accordingly, ensuring that the user can continue to observe the 3D image with the naked eye and preserving the viewing experience. Moreover, because the current observation position is obtained by correcting the first eye position, errors that would arise from using the first eye position directly are avoided, reducing hardware cost and the negative effects of hardware limitations.
The present application also provides a three-dimensional display device of an image, the device including:
an acquisition module, used for acquiring the eye position of the user at the current moment as a first eye position, and acquiring the eye position of the user at the previous moment as a second eye position;
a determination module for determining a positional deviation amount based on the first eye position and the second eye position;
the correction module is used for correcting the first eye position based on the position deviation amount to obtain a current observation position;
and the updating module is used for correspondingly updating the target image in the left half area and the target image in the right half area of the display area based on the current observation position so that the three-dimensional image can be observed at the current observation position.
The application provides a display module, which comprises a memory and a processor, wherein a computer program is stored in the memory, and the three-dimensional display method of the image is realized when the computer program is executed by the processor.
The computer readable storage medium of the present application stores a computer program which, when executed by one or more processors, implements the three-dimensional display method of an image described above.
In the three-dimensional display device, the display module and the computer readable storage medium of the image, the display module can correspondingly adjust the image in the display area after the eye position of the user is changed through the determination of the current observation position so as to ensure that the user can observe the 3D image continuously through naked eyes, thereby ensuring the observation experience of the user; and the current observation position is obtained by correcting the first eye position, so that errors caused by directly adopting the first eye position are avoided, and negative effects caused by hardware cost and hardware limitation are reduced.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for three-dimensional display of images according to certain embodiments of the present application;
fig. 2 is a schematic view of an application scenario according to some embodiments of the present application;
Fig. 3 is a schematic view of an application scenario according to some embodiments of the present application;
fig. 4 is a schematic view of an application scenario according to some embodiments of the present application;
fig. 5 is a schematic view of an application scenario of some embodiments of the present application;
fig. 6 is a schematic view of an application scenario of some embodiments of the present application;
fig. 7 is a schematic view of an application scenario of some embodiments of the present application;
fig. 8 is a schematic view of an application scenario of some embodiments of the present application;
fig. 9 is a schematic view of an application scenario of some embodiments of the present application;
fig. 10 is a schematic view of an application scenario of some embodiments of the present application;
fig. 11 is a schematic view of an application scenario of some embodiments of the present application;
fig. 12 is a schematic view of an application scenario of some embodiments of the present application;
fig. 13 is a schematic view of an application scenario of some embodiments of the present application;
fig. 14 is a schematic view of an application scenario of some embodiments of the present application;
fig. 15 is a schematic view of an application scenario of some embodiments of the present application;
fig. 16 is a schematic diagram of a three-dimensional display device of images according to certain embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, the present application provides a three-dimensional display method of an image, including:
0110, the eye position of the user at the current moment is obtained as a first eye position, and the eye position of the user at the previous moment is obtained as a second eye position.
It can be understood that the manner of acquiring the eye position can be set according to the actual situation. In some embodiments, the display module used to display the picture/image integrates a camera module and a target detection module: after the camera module captures a picture, the target detection module identifies the user's eyes in the picture to obtain the eye position.
It is also understood that the time difference between the first eye position and the second eye position, i.e., the frequency at which the eye position is acquired, is set according to the actual situation; for example, in some embodiments, the acquisition frequency is 60 Hz.
It is conceivable that each acquired eye position may be stored in a predetermined storage space (e.g., a buffer) once acquisition completes; when the second eye position is needed, it can then be obtained by reading the most recently stored data.
0120, determining a positional deviation amount based on the first eye position and the second eye position.
It can be understood that, due to hardware limitations of the acquisition devices (such as the camera module and the target detection module), the change in the user's eye position cannot always be determined accurately. For example, when the user rotates the head, a low shooting frequency of the camera module may produce a picture with eye ghosting; the target detection module then identifies the position of the ghosted eyes, making the acquired eye position inaccurate.
Accordingly, to correct the eye-position deviation caused by hardware limitations, after any eye position is acquired, the present application uses the eye position acquired at the previous moment to calculate the deviation amount (i.e., the positional deviation amount) between the previous moment and the current moment relative to the currently acquired eye position, and then corrects the current eye position by this positional deviation amount in the subsequent process.
It is further understood that, because the eye position obtained this time is corrected using the previously obtained eye position, to improve accuracy, in some embodiments the second eye position used in calculating the positional deviation amount may itself be a corrected value; that is, after the second eye position is corrected using the eye position obtained at the moment before it, the corrected value is used to calculate the positional deviation amount corresponding to the first eye position.
In addition, it should be understood that if the positional deviation amount is 0, or smaller than a predetermined threshold, the display module of the present application determines that the user's eye position has not changed, and the subsequent steps 0130, 0140, etc. are not performed.
In addition, it should be further understood that the specific manner of determining/calculating the positional deviation amount may be set according to the actual situation. In some embodiments, step 0120 specifically includes:
obtaining the positional deviation amount based on the difference between the first eye position and the second eye position.
That is, when the first eye position is ΔNew, the second eye position is ΔLast, and the positional deviation amount is ΔVector, then:
ΔVector = ΔNew - ΔLast
0130, correcting the first eye position based on the position deviation amount to obtain the current observation position.
That is, the present application corrects the first eye position acquired at the current moment based on the deviation between the first and second eye positions acquired at two successive moments, thereby eliminating errors/deviations due to hardware limitations.
It will be appreciated that the specific procedure for correcting/revising the first eye position by the positional deviation amount may be set according to the actual situation. In some embodiments, step 0130 specifically includes:
Obtaining a target correction amount based on a difference between the position deviation amount and a preset deviation correction amount;
and obtaining the current observation position based on the sum of the first eye position and the target correction amount.
For clearer explanation of the embodiments provided herein, assume the positional deviation amount is ΔVector, the deviation correction amount is ΔValue, the target correction amount is ΔVectorFinal, the current observation position/current viewpoint is P', and the first eye position is P(x, y, z); then:
ΔVectorFinal = ΔVector - ΔValue
P' = P(x, y, z) + ΔVectorFinal
It should be noted that the deviation correction amount ΔValue in this embodiment may be fitted in advance from multiple samples: first, multiple picture samples containing eyes are obtained by the camera module; then, the eye positions in each picture sample are calibrated to obtain the real eye positions; next, the eye position in each picture sample is identified to determine the detected eye position; finally, the real and detected eye positions of the picture samples are fitted to obtain the deviation correction amount ΔValue.
Therefore, in this embodiment, after the deviation (i.e., the positional deviation amount) between the first eye position at the current moment and the second eye position at the previous moment is determined, it can be corrected by the deviation correction amount to obtain the deviation between the first eye position and the real eye position (i.e., the target correction amount). After the first eye position is corrected by the target correction amount, a relatively accurate eye position (i.e., the current observation position) is obtained, reducing hardware cost and the negative effects of hardware limitations.
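To make the relationship between these quantities concrete, the following is a minimal sketch of steps 0120 and 0130, assuming the eye positions are three-dimensional vectors; the function name and the numeric values, including the fitted deviation correction amount ΔValue, are hypothetical illustrations rather than the patent's implementation.

```python
import numpy as np

# Minimal sketch of steps 0120-0130 under stated assumptions: eye positions
# are 3D numpy vectors, and delta_value is a deviation correction amount
# fitted in advance (the values below are hypothetical).
def correct_eye_position(p_new, p_last, delta_value):
    """Return the corrected current observation position P'."""
    delta_vector = p_new - p_last             # positional deviation amount
    delta_final = delta_vector - delta_value  # target correction amount
    return p_new + delta_final                # P' = P(x, y, z) + dVectorFinal

p_last = np.array([0.00, 2.88, 2.56])         # second eye position (previous frame)
p_new = np.array([0.20, 2.26, 3.45])          # first eye position (current frame)
delta_value = np.array([0.01, 0.01, 0.01])    # hypothetical fitted correction
print(correct_eye_position(p_new, p_last, delta_value))
```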
It should be further understood that, in the above embodiment, the positional deviation amount ΔVector and the deviation correction amount ΔValue can be understood as describing the deviation between the first eye position and the second eye position in terms of direction. Therefore, if the user changes the eye position with only a small movement (e.g., tilting the head), the target correction amount ΔVectorFinal determined by the above embodiment is quite accurate. However, if the user changes the eye position with a large movement (such as suddenly squatting or running), then, due to hardware limitations and the scale setting of the camera module, ΔVector and ΔValue can hardly describe the true deviation between the first eye position and the real eye position in terms of distance; in other words, the accuracy/reliability of ΔVectorFinal decreases. Therefore, in some embodiments of the present application, obtaining the target correction amount based on the difference between the positional deviation amount and the preset deviation correction amount includes:
obtaining a candidate correction amount based on a difference between the position deviation amount and the deviation correction amount;
the target correction amount is obtained based on the product of the candidate correction amount and the deviation correction amount.
For a clearer description of this embodiment, refer to the following formula:
ΔVectorFinal = (ΔVector - ΔValue) * ΔValue
It will be appreciated that (ΔVector - ΔValue) in the formula is the candidate correction amount.
It can be understood that the scale refers to the actual distance in real space corresponding to a unit distance in the picture shot by the camera module, for example, how many meters one pixel unit in the picture corresponds to.
It is further understood that the product of the candidate correction amount and the deviation correction amount can be interpreted as the user's eye moving a distance of (ΔVector - ΔValue) in the ΔValue direction; in other words, the target correction amount ΔVectorFinal is obtained after scaling (ΔVector - ΔValue) by ΔValue, so that ΔVectorFinal can describe the change in the user's eye position under large movements.
Therefore, with this embodiment, even after the user's eye position changes greatly, the display module can still determine a relatively accurate deviation between the first eye position and the real eye position (i.e., the target correction amount), further reducing hardware cost and the negative effects of hardware limitations.
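Under the same assumptions as the sketch above, the large-motion variant changes only the final correction step, scaling the candidate correction element-wise by ΔValue:

```python
import numpy as np

# Large-motion variant of the sketch above (an illustration, not the
# patent's implementation): the candidate correction is scaled element-wise
# by delta_value, per deltaVectorFinal = (dVector - dValue) * dValue.
def correct_eye_position_large_motion(p_new, p_last, delta_value):
    delta_vector = p_new - p_last                             # positional deviation
    delta_final = (delta_vector - delta_value) * delta_value  # scaled correction
    return p_new + delta_final
```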
0140, based on the current observation position, updating the target image in the left half area and the target image in the right half area of the display area correspondingly at the same time, so that the three-dimensional image can be observed at the current observation position.
That is, after the user's eye position changes, the application readjusts the images displayed/presented in the left and right halves of the display area using the changed position (i.e., the current observation position), ensuring that the user, viewing with the naked eye at the current moment, observes a three-dimensional image after receiving the target image in the left half area and the target image in the right half area.
It is to be understood that the target images displayed in the left and right halves depict the same object; that is, the present application displays, in the left and right halves respectively, two images of one object generated at different observation angles.
Therefore, by determining the current observation position after the user's eye position changes, the display module can adjust the image in the display area accordingly, ensuring that the user can continue to observe the 3D image with the naked eye and preserving the viewing experience. Moreover, because the current observation position is obtained by correcting the first eye position, errors that would arise from using the first eye position directly are avoided, reducing hardware cost and the negative effects of hardware limitations.
In addition, the manner in which the left and right halves of the display area display the target image can be set according to the actual situation. For example, in some embodiments, the resolution of the target image is the same as the resolution of the left half area, the resolution heights of the left half area and the right half area are both the same as the resolution height of the display area, and the resolution widths of the left half area and the right half area are both half the resolution width of the display area.
For example, if the resolution of the display area of the display module is 8K (7680×4320), the resolution heights of the left half area and the right half area are 4320, and the resolution widths of the left half area and the right half area are half of 7680, i.e., 3840; therefore, the resolution of the target image is 3840×4320.
It will be appreciated that in the embodiments provided herein, the display area is evenly divided into two halves, each of which displays the complete target image.
Therefore, the target image can be displayed simply in the two halves of the display area, ensuring the display efficiency of the target image.
Further, it is understood that the display area in the present application may be understood as a virtual window for displaying a target image, and may also be understood as a display/display panel.
Further, in some embodiments where the display area is the virtual window described above, the resolution of the display area is always the same size as the resolution of the display. For example, let the resolution of the display module of the present application be 8K (7680×4320), the resolution of the display area is also 8K, and the resolution of the left half area, the right half area and the target image may be 3840×4320.
If the resolution of the display area does not match the resolution of the display, the display effect of the target image may be poor, and the user may not observe the 3D image. For example, please refer to fig. 2, 3 and 4, and fig. 2, 3 and 4 are schematic application scenarios of some embodiments of the present application.
Fig. 2 shows the case where the resolution of the display area is smaller than that of the display: because the resolution of the display area is smaller than that of the display, part of the display does not show any content and black edges appear, so it is difficult for the user to observe a three-dimensional image.
Fig. 3 shows a case where the resolution of the display area is equal to that of the display, that is, since the resolution of the display area is equal to that of the display, both the left and right half portions of the display area can completely display the target image, and thus the user can observe the three-dimensional image.
Fig. 4 shows a case where the resolution of the display area is greater than that of the display, that is, since the resolution of the display area is greater than that of the display, the right half area cannot completely display the target image, and thus the user cannot observe the image of the right half area through the right eye, and thus cannot observe the three-dimensional image.
Optionally, in order to make the image displayed in the display area more natural/smooth, in some embodiments, the three-dimensional display method for an image provided in the present application further includes:
acquiring image samples containing eye positions, which are shot at a preset frequency in preset time, wherein the eye positions in different image samples are different;
based on all the image samples, fitting the position change relation of the eye position in the preset time.
Furthermore, in some embodiments, step 0110 specifically includes:
acquiring the eye position of the user at the current moment to obtain a first position, and acquiring the eye position of the user at the previous moment to obtain a second position;
Based on the position change relation, determining a plurality of intermediate positions from the last time to the current time;
combining the second position, the plurality of intermediate positions and the first position according to time sequence to obtain a position set;
extracting two adjacent elements in the position set according to time sequence, and taking the former one of the two elements as a second eye position and the latter one as a first eye position;
performing step 0120, and deleting the second eye position from the position set.
Further, after 0140, the method further includes:
returning to the step of extracting two adjacent elements from the position set in time order, taking the former of the two elements as the second eye position and the latter as the first eye position, until the position set no longer contains two elements.
That is, in this embodiment, the image samples collected by the camera module within the preset time are used to fit the continuous change of the user's eyes over that time, i.e., the positional change relation. After the first position and the second position are obtained, the second position can be regarded as the eye start position and the first position as the eye end position. Several intermediate positions from the second position to the first position (i.e., from the previous moment to the current moment) are then determined using the positional change relation. Next, the second position, the intermediate positions, and the first position are sorted and merged in shooting order to obtain a position set. The 1st and 2nd elements of the position set are then extracted in time order, the 1st element serving as the second eye position and the 2nd as the first eye position. After the positional deviation amount is determined from these two positions, the second eye position is deleted from the position set so that a used element is not extracted again in the next round. After the image in the display area is updated with the current observation position, the extraction step is repeated until the position set contains only one element, i.e., only the first position (the eye end position), at which point no further extraction is performed.
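A sketch of this flow is given below, assuming the fitted positional change relation is linear in time (per-axis slope k and intercept b) and that four intermediate positions are interpolated; all names and parameters are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Illustrative sketch of the position-set flow described above. Assumes the
# fitted change relation is linear in time: position(t) = k * t + b per axis.
def build_position_set(p_second, p_first, k, b, t_last, t_now, n_mid=4):
    """Interpolate n_mid intermediate positions between two captured samples."""
    ts = np.linspace(t_last, t_now, n_mid + 2)[1:-1]  # interior timestamps
    mids = [k * t + b for t in ts]                    # fitted linear relation
    return [p_second, *mids, p_first]                 # time-ordered position set

def process_position_set(positions, update_display):
    # Take adjacent pairs in time order: the former element is the second
    # eye position, the latter the first; after use, drop the former element.
    while len(positions) >= 2:
        second_eye, first_eye = positions[0], positions[1]
        update_display(first_eye, second_eye)  # steps 0120-0140
        positions.pop(0)
```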
It will be appreciated that when the display module controls the displayed image using the first and second eye positions determined in the above embodiment, it updates the image in the display area at a higher frequency than when the current and previous eye positions are used directly as the first and second eye positions.
Therefore, even if, due to hardware limitations, the camera module in the display module does not capture the change in the user's eye position between the previous moment and the current moment, the display module of this embodiment can simulate that change through prior knowledge (i.e., the continuous change relation), so that the target image in the display area is updated as the eye position changes continuously.
Therefore, the display module of this embodiment can adjust the target image in the display area in a smoother, more natural form, avoiding the overly abrupt updates that occur when the target image is updated directly with the eye positions at the current and previous moments, thereby further ensuring the user's viewing experience.
In addition, it should be noted that, in the embodiment of the present application, the preset time is a content that can be set according to an actual situation, for example, in some embodiments, the preset time is a time from the "previous time" to the "current time" described above; in other embodiments, the preset time is any 1 second when the shooting module is in normal operation.
It should be noted that the preset frequency in the embodiment of the present application is also a content that can be set according to practical situations, and in some embodiments, the preset frequency is 60Hz.
And, in some embodiments, the image sample contains only one user's eye position; in yet other embodiments, the image sample contains a plurality of user's eye positions, each eye position corresponding/associated/bound to the user to which it belongs.
In addition, the acquiring/determining/calculating manner of the position change relationship in the embodiment of the present application is also a content that can be set according to practical situations, for example, in some embodiments, the fitting the position change relationship of the eye position in the preset time based on all the image samples includes:
calculating a first slope and a first intercept based on the lateral axis coordinates of the eye position in each image sample, and determining a lateral axis position change relationship based on the first slope and the first intercept;
Calculating a second slope and a second intercept based on the vertical axis coordinates of the eye position in each image sample, and determining a vertical axis position change relationship based on the second slope and the second intercept;
the positional change relation is obtained based on the vertical axis change relation and the horizontal axis change relation.
That is, this embodiment uses the coordinates of the eye position in the image samples to fit a linear equation describing the change in the eye position.
For clearer explanation, assume the number of image samples is 60 and the 60 samples are (x1, y1, z1), (x2, y2, z2), ..., (x60, y60, z60); the first slope and first intercept of the horizontal axis are Kx and bx, respectively, and the second slope and second intercept of the vertical axis are Ky and by, respectively; the linear relations are then fitted from these samples by least squares.
Therefore, the positional change relation of the eye position can be determined in a simple manner, reducing the load on the display module.
In one example, each acquired image sample contains the X-axis, Y-axis, and Z-axis coordinates of the eye position, as shown in Table 1.
TABLE 1
X Y Z
0 2.88 2.56
0.2 2.2576 3.445
0.7 1.9258 4.623
0.9 2.0862 5.253
0.92 2.109 5.663
0.99 2.1979 6.4786
1.2 2.5409 4.2596
1.4 2.9627 7.4458
1.48 3.155 7.9985
1.5 3.2052 8.2568
Fitting the data in Table 1 accordingly gives Kx = 1.0628, bx = 0.082, Ky = 1.2026, and by = 0.852.
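As a hedged illustration, such a per-axis linear fit can be reproduced with an ordinary least-squares fit. The timestamps below are an assumption (the patent states only that samples are taken at a preset frequency, with 60 Hz as one example), so the resulting coefficients will not necessarily match the Kx, bx, Ky, by values quoted above.

```python
import numpy as np

# Hedged sketch: fit each eye coordinate as a linear function of the sample
# time by least squares. Timestamps are assumed (60 Hz capture), so the
# coefficients need not match the values quoted in the text.
xs = np.array([0, 0.2, 0.7, 0.9, 0.92, 0.99, 1.2, 1.4, 1.48, 1.5])
ys = np.array([2.88, 2.2576, 1.9258, 2.0862, 2.109, 2.1979,
               2.5409, 2.9627, 3.155, 3.2052])
t = np.arange(len(xs)) / 60.0        # assumed capture times at 60 Hz
kx, bx = np.polyfit(t, xs, deg=1)    # horizontal-axis slope and intercept
ky, by = np.polyfit(t, ys, deg=1)    # vertical-axis slope and intercept
print(kx, bx, ky, by)
```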
Optionally, to adapt to different scenarios and users, in some embodiments of the present application step 0140 specifically includes:
mapping the target three-dimensional model to a two-dimensional space based on the current observation position and the received view angle information, pupil distance information, near section position information and far section position information to obtain an updated target image;
and displaying the updated target image in the left half area and the right half area correspondingly, so that a three-dimensional image can be observed at the current observation position.
That is, this embodiment provides a parameter customization interface, allowing the user to adjust the display module according to the current scene (i.e., near and far cross-section information) or the user's physiological characteristics (i.e., field angle and pupil distance), so that the display module controls the display of the target image based on user-defined parameters. Specifically, after the display module receives user-defined field of view (FOV) information, pupil distance information, near cross-section position information, and far cross-section position information, it controls the display of the target image according to these 4 kinds of information.
It is to be understood that the field angle information indicates the field range of the human eye. When a human eye is simulated by a specific device (e.g., a robot with a rotatable camera), the field angle represents the field of view of the specific device.
It is also understood that the interpupillary distance represents the distance between the left and right eyes of a person.
It will be further understood that the near and far cross-sections represent the cross-sections closest to and farthest from the human eye/specific device with respect to the observed object. To illustrate the meaning of the near-section and far-section position information more clearly, please refer to fig. 5, which is an application scenario illustration of some embodiments of the present application. In fig. 5, eye represents the current observation position; np represents the near cross-section, and N the distance from the near cross-section np to the current observation position eye; fp represents the far cross-section, and F the distance from the far cross-section fp to the current observation position eye; p is the object to be observed, and p' is the position of the object p projected (in perspective) onto the near cross-section np.
It will be appreciated that in the scenario shown in fig. 5, the user at the current observation position eye observes the object at p' on the near cross-section np.
It will be further understood that in the scenario shown in fig. 5, after the object to be observed p(x, y, z) is projected onto p' on the near cross-section np, its coordinates change from (x, y, z) to (-N*x/z, -N*y/z, -N).
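A small worked example of this projection, using an arbitrary sample point:

```python
import numpy as np

# Perspective projection of a point p = (x, y, z) onto the near
# cross-section np, per the mapping (x, y, z) -> (-N*x/z, -N*y/z, -N).
def project_to_near_plane(p, n_dist):
    x, y, z = p
    return np.array([-n_dist * x / z, -n_dist * y / z, -n_dist])

print(project_to_near_plane(np.array([1.0, 0.5, -2.0]), 0.03))
# -> [0.015, 0.0075, -0.03]
```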
It is conceivable that the process of generating, from the above 4 kinds of information, the 2D images that can be observed as a 3D image can be set according to the actual situation. In some embodiments of the present application, mapping the target three-dimensional model to a two-dimensional space based on the current observation position and the received field angle information, pupil distance information, near-section position information, and far-section position information to obtain an updated target image includes:
Performing model transformation on a first pixel matrix corresponding to the target three-dimensional model to map the first pixel matrix to a world coordinate system to obtain a second pixel matrix;
performing equivalence treatment on the second pixel matrix to obtain a third pixel matrix;
and mapping the third pixel matrix to a two-dimensional space based on the current observation position and the field angle information, the pupil distance information, the near section position information and the far section position information to obtain an updated target image.
It should be noted that, in the embodiment of the present application, the target image is an image obtained by mapping a three-dimensional model (i.e., the target three-dimensional model) to a two-dimensional space. Accordingly, this embodiment performs a model transformation, an observation transformation (view transformation), and a projection transformation (Projection Transform) on the target three-dimensional model, so that the first pixel matrix corresponding to the model in its local three-dimensional space is mapped to a second pixel matrix in world space, and the second pixel matrix is then mapped sequentially from world space to the observation space and the clipping space.
It should be noted that, before the second pixel matrix is mapped into the target image, this embodiment further performs an equivalence process on the second pixel matrix to avoid the field-angle distortion that may occur when elements of the second pixel matrix are projected into two-dimensional space, thereby ensuring that the shape of the target three-dimensional model after projection is correct.
In some embodiments, the present application implements the equivalence process by orthographically projecting the second pixel matrix.
Therefore, the method and the device enable the model shape of the target three-dimensional model to be unchanged after the target three-dimensional model is mapped into the target image through the equivalence treatment, so that the authenticity of the target image is guaranteed.
Optionally, in some embodiments of the present application, mapping the third pixel matrix to the two-dimensional space based on the current observation position and the received view angle information, the pupil distance information, the near-section position information, and the far-section position information to obtain the updated target image includes:
mapping the third pixel matrix to a clipping space based on the current observation position, the field angle information, the near-section position information and the far-section position information to obtain a fourth pixel matrix;
determining a monocular viewing angle deviation value based on the pupil distance information;
determining projection matrices of the fourth pixel matrix corresponding to the left half area and the right half area based on the field angle information and the monocular viewing angle deviation value;
an updated first target image is obtained based on the product of the fourth pixel matrix and the projection matrix of the left half area, and an updated second target image is obtained based on the product of the fourth pixel matrix and the projection matrix of the right half area.
Further, the above-mentioned method of displaying the updated target image in the left half area and the right half area simultaneously so that the three-dimensional image can be observed at the current observation position includes:
and displaying the updated first target image in the left half area and simultaneously displaying the updated second target image in the right half area so that a three-dimensional image can be observed at the current observation position.
That is, after the target three-dimensional model is mapped to the clipping space to obtain the corresponding fourth pixel matrix, in order to display the fourth pixel matrix reasonably in the left and right halves of the display area, this embodiment determines, from the pupil distance and the field angle, the difference between what the user's left eye alone and right eye alone observe in the left and right halves, i.e., the monocular viewing angle deviation value.
In some embodiments, the monocular viewing angle deviation value is half the pupil distance. For example, assuming the pupil distance is 60 mm and the midpoint between the left and right eyes is the origin, the left-eye viewing angle deviation value ΔLOffset is -30 mm and the right-eye viewing angle deviation value ΔROffset is +30 mm.
After the monocular viewing angle deviation value is determined, the embodiment of the application determines the display form of the fourth pixel matrix in the left half area and the right half area by using the monocular viewing angle deviation value and the human eye visible range (viewing angle), so as to ensure that the fourth pixel matrix can be completely and reasonably displayed when being displayed/projected in the left half area and the right half area.
In some embodiments, a projection matrix TranMatri is constructed in which X, Y, and NearZ respectively represent the three-axis visible range of the monocular field angle (NearZ also being the distance N from the near cross-section np to the current observation position eye in fig. 5), and ΔOffset represents the monocular viewing angle deviation value. It will be appreciated that the f suffix in the matrix entries denotes single-precision floating-point (float) values.
Therefore, to display the fourth pixel matrix in the display area, ΔLOffset and ΔROffset are respectively assigned to ΔOffset to obtain the projection matrix TranMatri1 corresponding to the left half area and the projection matrix TranMatri2 corresponding to the right half area; the product of the fourth pixel matrix and TranMatri1 (i.e., the updated first target image) is then displayed in the left half area, and the product of the fourth pixel matrix and TranMatri2 (i.e., the updated second target image) is displayed in the right half area, completing the display of the target image.
Therefore, the single-eye visual angle deviation value and the projection matrix are introduced, so that the left and right eyes of the user can correctly observe the target images in the left half area and the right half area, and the user can observe the three-dimensional target image at the current observation position.
In addition, it can be understood that if the matrix size of the product of the fourth pixel matrix and TranMatri1 (or TranMatri2) is smaller than the resolution of the left half area (or the right half area), the product can be enlarged by interpolation to ensure that the target image is completely displayed in the left half area (or the right half area).
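The exact entries of TranMatri appear only in the patent's figures, so the sketch below assumes a common off-axis form in which NearZ scales the X and Y visible ranges and ΔOffset shifts the projection horizontally for each eye; it illustrates the per-eye assignment of ΔLOffset and ΔROffset, not the patent's actual matrix.

```python
import numpy as np

# Assumed off-axis projection matrix (the patent's actual TranMatri entries
# are given in its figures): NearZ/X and NearZ/Y scale the visible range,
# and delta_offset applies the per-eye horizontal shift.
def tran_matrix(x_range, y_range, near_z, delta_offset):
    m = np.identity(4, dtype=np.float32)
    m[0, 0] = near_z / x_range   # horizontal scale of the visible range
    m[1, 1] = near_z / y_range   # vertical scale of the visible range
    m[0, 3] = delta_offset       # monocular viewing angle deviation value
    return m

pd = 0.060                                            # 60 mm pupil distance
tran_matri_1 = tran_matrix(1.0, 1.0, 0.03, -pd / 2)   # left half area
tran_matri_2 = tran_matrix(1.0, 1.0, 0.03, +pd / 2)   # right half area
# first_target = fourth_pixel_matrix @ tran_matri_1
# second_target = fourth_pixel_matrix @ tran_matri_2
```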
In addition, the manner of determining X and Y in the projection matrix TranMatri can be set according to the actual situation; in some embodiments, X and Y are calculated from the field angle and the display resolution, wherein FOV represents the field angle, clamp() limits its argument to a preset range, and Width and Height represent the resolution width and height of the display area, respectively.
In some embodiments, to ensure that different display modules can use the embodiments provided herein, the clamp value range of clamp() is 100 to 10000, so that display modules with resolutions from 1K to 8K can calculate X and Y by the above formulas.
Optionally, if the embodiments provided herein are applied to a robot simulation test including an eye-simulating device (such as a camera), then after the current observation position is calculated, the deflection angle sigma of the eye-simulating device can be calculated by the following formula, so that the device keeps the observed object in the center of its field of view:
sigma = acos(dot(PF, P(x,y,z) + ΔVectorFinal) / (norm(PF) * norm(P(x,y,z) + ΔVectorFinal)))
where PF may be understood as the vector pointing from the first eye position to the far cross-section, and P as the first eye position.
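A direct transcription of this formula as a sketch (PF and the position vectors are placeholders):

```python
import numpy as np

# Deflection angle sigma of the eye-simulating device, per the formula
# above; the cosine is clipped to [-1, 1] to guard against rounding error.
def deflection_angle(pf, p, delta_vector_final):
    v = p + delta_vector_final              # corrected observation position
    cos_sigma = np.dot(pf, v) / (np.linalg.norm(pf) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_sigma, -1.0, 1.0))
```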
Optionally, in some embodiments, if one or more of the angle of view information, the pupil distance information, the near section position information, and the far section position information is not received, the present application will complete the display of the target image using the default value.
In some embodiments, experiments by the inventor prove that the visible range of the human eye is 60 to 90 degrees, and the shooting angle deviation caused when the field angle FOV is 60 to 90 degrees is shown in table 2, namely:
TABLE 2
FOV Deviation of shooting angle
90 1.852
85 1.961
80 2.083
75 2.225
70 2.381
65 2.564
60 2.778
45 3.704
Thus, since the field angle is inversely related to the shooting angle deviation, in some embodiments the field angle is assigned 90 degrees when the display module does not receive field angle information; in other embodiments, when the display module does not receive field angle information, the field angle is assigned any value from 60 to 90 degrees.
In some embodiments, the images displayed by the display area are shown in fig. 6, 7, 8, 9 and 10 when the interpupillary distance is about 60mm, as tested by the inventors. Fig. 6, 7, 8, 9 and 10 show the display conditions of the display areas when the interpupillary distances are (60-5.25000005) mm, (60-2.25000005) mm, 60mm, (60+2.25000005) mm and (60+5.25000005) mm, respectively.
That is, it is verified through experiments that the object image of the display area can be ensured to have a remarkable three-dimensional effect when the determined pupil distance is 60mm, so that in some embodiments, when the display module does not receive the pupil distance information, the pupil distance can be assigned to 60mm. In other embodiments, when the display module does not receive the interpupillary distance information, the interpupillary distance is assigned to any one of (60-5.25000005) mm, (60-2.25000005) mm, 60mm, (60+2.25000005) mm, and (60+5.25000005) mm.
In some embodiments, experiments by the inventor show that when the distance N between the near cross-section and the observation point is about -0.03 (i.e., about 0.03 units of distance in the negative Z-axis direction in fig. 5), the display effect of the display area is as shown in figs. 11, 12, 13, 14 and 15. Figs. 11, 12, 13, 14 and 15 show the display areas when the distance N between the near cross-section and the observation point is -1.0, -0.05, -0.03, -0.01, and 0.01, respectively.
That is, through experimental verification, when the distance N between the near-section and the observation point is determined to be-0.03, the user is not easy to generate 3D dizzy and can observe the three-dimensional target image more clearly, so in some embodiments, when the display module does not receive the near-section information, the distance N between the near-section and the observation point is assigned to be-0.03. In other embodiments, the distance N between the near cross-section and the observation point is set to be any one of-1.0, -0.05, -0.03, -0.01, and 0.01.
Referring to fig. 16, the present application also provides a three-dimensional display device 200 of an image, including:
an obtaining module 210, configured to obtain an eye position of a user at a current moment as a first eye position, and obtain an eye position of the user at a previous moment as a second eye position;
a determination module 220 for determining a positional deviation amount based on the first eye position and the second eye position;
a correction module 230, configured to correct the first eye position based on the position deviation amount, so as to obtain a current observation position;
the updating module 240 is configured to update the target image in the left half area and the target image in the right half area of the display area at the same time based on the current observation position, so that a three-dimensional image can be observed at the current observation position.
In some embodiments, the resolution size of the target image is the same as the resolution size of the left half area; the resolution heights of the left half area and the right half area are both the same as the resolution height of the display area; and the resolution widths of the left half area and the right half area are each half the resolution width of the display area.
In some embodiments, the determining module 220 is further configured to obtain the positional deviation amount based on a difference between the first eye position and the second eye position.
In some embodiments, correction module 230 includes:
the first calculation sub-module is used for obtaining a target correction amount based on the difference value of the position deviation amount and a preset deviation correction amount;
and the second calculation sub-module is used for obtaining the current observation position based on the sum value of the first eye position and the target correction amount.
In some embodiments, the first computing sub-module comprises:
a difference calculation unit configured to obtain a candidate correction amount based on a difference between the position deviation amount and the deviation correction amount;
and a product calculation unit for obtaining a target correction amount based on the product of the candidate correction amount and the deviation correction amount.
In some embodiments, the three-dimensional display device 200 of the image of the present application further includes:
the sample acquisition module is used for acquiring image samples containing eye positions, which are shot at a preset frequency in preset time, wherein the eye positions in different image samples are different;
the fitting module is used for fitting the position change relation of the eye position in the preset time based on all the image samples.
Further, the acquisition module 210 includes:
the position acquisition sub-module is used for acquiring the eye position of the user at the current moment to obtain a first position, and acquiring the eye position of the user at the previous moment as a second position;
The middle position obtaining sub-module is used for determining a plurality of middle positions from the last moment to the current moment based on the position change relation;
a merging sub-module, used for merging the second position, the plurality of intermediate positions, and the first position in time order to obtain a position set;
the extraction submodule is used for extracting two adjacent elements in the position set according to time sequence, wherein the former one of the two elements is used as a second eye position, and the latter one is used as a first eye position;
and a deletion sub-module for performing the step of determining a positional deviation amount based on the first eye position and the second eye position, and deleting the second eye position from the position set.
Further, the update module 240 further includes:
and the returning sub-module is used for returning to the step of extracting two adjacent elements in the position set in time sequence, taking the former one of the two elements as the second eye position and the latter one as the first eye position until the position set does not contain the two elements.
In some embodiments, the fitting sub-module comprises:
the first relation calculating unit is used for calculating a first slope and a first intercept based on the horizontal axis coordinate of the eye position in each image sample, and determining a horizontal axis position change relation based on the first slope and the first intercept;
A second relation calculating unit for calculating a second slope and a second intercept based on the vertical axis coordinates of the eye position in each image sample, and determining a vertical axis position change relation based on the second slope and the second intercept;
and the change relation obtaining unit is used for obtaining the position change relation based on the vertical axis change relation and the horizontal axis change relation.
In some implementations, the update module 240 includes:
the mapping sub-module is used for mapping the target three-dimensional model to a two-dimensional space based on the current observation position and the received view angle information, pupil distance information, near section position information and far section position information to obtain an updated target image;
and the display sub-module is used for displaying the updated target image in the left half area and the right half area correspondingly at the same time so that the three-dimensional image can be observed at the current observation position.
In some embodiments, the mapping sub-module comprises:
the model transformation unit is used for carrying out model transformation on the first pixel matrix corresponding to the target three-dimensional model so as to map the first pixel matrix to a world coordinate system and obtain a second pixel matrix;
the equivalence processing unit is used for carrying out equivalence processing on the second pixel matrix to obtain a third pixel matrix;
And the conversion unit is used for mapping the third pixel matrix to the two-dimensional space based on the current observation position and the field angle information, the pupil distance information, the near section position information and the far section position information to obtain an updated target image.
In some embodiments, the conversion unit comprises:
the space mapping subunit is used for mapping the third pixel matrix to a clipping space based on the current observation position, the field angle information, the near-section position information and the far-section position information to obtain a fourth pixel matrix;
the deviation value determination subunit is used for determining a monocular viewing angle deviation value based on the pupil distance information;
the matrix determining subunit is used for determining, based on the field angle information and the monocular viewing angle deviation value, projection matrices of the fourth pixel matrix corresponding to the left half area and the right half area;
and the image obtaining subunit is used for obtaining an updated first target image based on the product of the fourth pixel matrix and the projection matrix of the left half area, and obtaining an updated second target image based on the product of the fourth pixel matrix and the projection matrix of the right half area.
The display sub-module is further used for displaying the updated first target image in the left half area while simultaneously displaying the updated second target image in the right half area, so that a three-dimensional image can be observed at the current observation position.
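One plausible realization of the conversion unit is standard off-axis stereo projection, sketched below. The patent does not state how the monocular viewing angle deviation value is derived from the pupil distance information, so the `skew` formula and the `view_dist` parameter are assumptions; `fourth_matrix` is treated as a 4×N matrix of homogeneous points.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far, x_skew=0.0):
    # Perspective matrix built from the field angle and the near/far
    # section positions; x_skew stands in for the per-eye asymmetry.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[0, 2] = x_skew
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def stereo_targets(fourth_matrix, fov_y_deg, aspect, near, far, ipd, view_dist):
    # Assumed relation: half the pupil distance over the viewing distance
    # approximates the monocular viewing angle deviation value.
    skew = (ipd / 2.0) / view_dist
    proj_left = perspective(fov_y_deg, aspect, near, far, +skew)
    proj_right = perspective(fov_y_deg, aspect, near, far, -skew)

    # Updated first/second target images as matrix products, one per half area.
    first_target = proj_left @ fourth_matrix
    second_target = proj_right @ fourth_matrix
    return first_target, second_target
```

Mirroring the skew sign between the two projection matrices is what gives the left and right half areas their slightly different viewpoints.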
The three-dimensional display device 200 for images provided in the embodiments of the present application can implement each process of the above three-dimensional display method for images and achieve the same technical effects; to avoid repetition, the details are not described again here.
The present application also provides a display module. The display module comprises a memory and a processor; the memory stores a computer program which, when executed by the processor, implements the above three-dimensional display method for images.
The present application also provides a computer-readable storage medium storing a computer program. The computer program, when executed by one or more processors, causes the one or more processors to perform the three-dimensional display method for images of the present application.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those embodiments or examples, provided they do not contradict one another.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Alternative implementations, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, are also included within the scope of the embodiments of the present application, as would be understood by those skilled in the art.
While embodiments of the present application have been shown and described above, it should be understood that the above embodiments are illustrative and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (13)

1. A method of three-dimensional display of an image, the method comprising:
acquiring an eye position of a user at the current moment to serve as a first eye position, and acquiring an eye position of the user at the last moment to serve as a second eye position;
determining a positional deviation amount based on the first eye position and the second eye position;
correcting the first eye position based on the position deviation amount to obtain a current observation position;
and based on the current observation position, correspondingly updating the target image in the left half area and the target image in the right half area of the display area at the same time so that a three-dimensional image can be observed at the current observation position.
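For orientation, a minimal driver sketch of this four-step loop; `get_eye_position`, `correct`, and `update_halves` are hypothetical stand-ins for the eye tracker, the correction of claims 4-5, and the display update of claims 8-10.

```python
import numpy as np

def display_loop(get_eye_position, correct, update_halves, frames):
    # Eye position at the last moment (second eye position).
    second_eye = np.asarray(get_eye_position(), dtype=float)
    for _ in range(frames):
        # Eye position at the current moment (first eye position).
        first_eye = np.asarray(get_eye_position(), dtype=float)
        # Correct the first eye position into the current observation position.
        observation = correct(first_eye, second_eye)
        # Update the left and right half areas together from that position.
        update_halves(observation)
        second_eye = first_eye
```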
2. The method according to claim 1, wherein the resolution size of the target image is the same as the resolution size of the left half area, the resolution height of the left half area and the resolution height of the right half area are both the same as the resolution height of the display area, and the resolution width of the left half area and the resolution width of the right half area are both half of the resolution width of the display area.
3. The method of three-dimensional display of an image according to claim 1, wherein the determining a positional deviation amount based on the first eye position and the second eye position comprises:
and obtaining the position deviation amount based on a difference between the first eye position and the second eye position.
4. The method according to claim 1, wherein correcting the first eye position based on the positional deviation amount to obtain a current observation position includes:
obtaining a target correction amount based on the difference between the position deviation amount and a preset deviation correction amount;
and obtaining the current observation position based on the sum of the first eye position and the target correction amount.
5. The method according to claim 4, wherein the obtaining the target correction amount based on the difference between the positional deviation amount and a preset deviation correction amount includes:
obtaining a candidate correction amount based on a difference between the positional deviation amount and the deviation correction amount;
and obtaining the target correction amount based on the product of the candidate correction amount and the deviation correction amount.
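Read together, claims 3 to 5 reduce to a few lines of arithmetic. The sketch below assumes two-dimensional positions and a per-axis preset deviation correction amount; both layouts are illustrative, since the claims do not fix them.

```python
import numpy as np

def current_observation_position(first_eye, second_eye, preset_correction):
    first_eye = np.asarray(first_eye, dtype=float)
    second_eye = np.asarray(second_eye, dtype=float)

    deviation = first_eye - second_eye         # claim 3: positional deviation amount
    candidate = deviation - preset_correction  # claim 5: candidate correction amount
    target = candidate * preset_correction     # claim 5: target correction amount
    return first_eye + target                  # claim 4: current observation position
```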
6. The method of three-dimensional display of an image according to claim 1, further comprising:
acquiring image samples containing eye positions, captured at a preset frequency within a preset time, wherein the eye positions in different image samples are different;
fitting a position change relation of the eye position in the preset time based on all the image samples;
the acquiring the eye position of the user at the current moment as the first eye position, and acquiring the eye position of the user at the previous moment as the second eye position, includes:
acquiring the eye position of the user at the current moment as a first position, and acquiring the eye position of the user at the previous moment as a second position;
determining a plurality of intermediate positions between the last time and the current time based on the position change relation;
combining the second position, the plurality of intermediate positions and the first position in time order to obtain a position set;
extracting two adjacent elements from the position set in time order, and taking the earlier of the two elements as the second eye position and the later as the first eye position;
performing the step of determining a positional deviation amount based on the first eye position and the second eye position, and deleting the second eye position from the set of positions;
after the correspondingly updating, based on the current observation position, the target image in the left half area and the target image in the right half area of the display area so that the three-dimensional target image can be observed at the current observation position, the method further comprises:
returning to the step of extracting two adjacent elements from the position set in time order, taking the earlier of the two elements as the second eye position and the later as the first eye position, until the position set no longer contains two adjacent elements.
7. The method according to claim 6, wherein the fitting the positional change relation of the eye position within the preset time based on all the image samples includes:
calculating a first slope and a first intercept based on the horizontal axis coordinates of the eye positions in each of the image samples, and determining a horizontal axis position change relation based on the first slope and the first intercept;
calculating a second slope and a second intercept based on the vertical axis coordinates of the eye positions in each of the image samples, and determining a vertical axis position change relation based on the second slope and the second intercept;
and obtaining the position change relation based on the vertical axis position change relation and the horizontal axis position change relation.
8. The method according to claim 1, wherein the step of simultaneously updating the target image in the left half area and the target image in the right half area of the display area based on the current observation position so that the three-dimensional image can be observed at the current observation position includes:
mapping a target three-dimensional model to a two-dimensional space based on the current observation position and the received field angle information, pupil distance information, near-section position information and far-section position information to obtain an updated target image;
and displaying the updated target image in the left half area and the right half area correspondingly, so that a three-dimensional image can be observed at the current observation position.
9. The method according to claim 8, wherein the mapping a target three-dimensional model to a two-dimensional space based on the current observation position and the received field angle information, pupil distance information, near-section position information and far-section position information to obtain the updated target image comprises:
performing model transformation on a first pixel matrix corresponding to the target three-dimensional model to map the first pixel matrix to a world coordinate system so as to obtain a second pixel matrix;
performing equivalence treatment on the second pixel matrix to obtain a third pixel matrix;
and mapping the third pixel matrix to the two-dimensional space based on the current observation position and the field angle information, the pupil distance information, the near-section position information and the far-section position information to obtain the updated target image.
10. The method according to claim 9, wherein the mapping the third pixel matrix to the two-dimensional space based on the current observation position and the field angle information, the pupil distance information, the near-section position information and the far-section position information to obtain the updated target image comprises:
mapping the third pixel matrix to a clipping space based on the current observation position, the field angle information, the near-section position information and the far-section position information to obtain a fourth pixel matrix;
determining a monocular viewing angle deviation value based on the pupil distance information;
determining, based on the field angle information and the monocular viewing angle deviation value, projection matrices of the fourth pixel matrix corresponding to the left half area and the right half area;
obtaining an updated first target image based on the product of the fourth pixel matrix and the projection matrix of the left half area, and obtaining an updated second target image based on the product of the fourth pixel matrix and the projection matrix of the right half area;
and the displaying the updated target image in the left half area and the right half area correspondingly, so that a three-dimensional image can be observed at the current observation position, comprises:
displaying the updated first target image in the left half area while simultaneously displaying the updated second target image in the right half area, so that a three-dimensional image can be observed at the current observation position.
11. A three-dimensional display device for an image, the device comprising:
an acquisition module, used for acquiring the eye position of a user at the current moment as a first eye position, and acquiring the eye position of the user at the previous moment as a second eye position;
a determination module for determining a positional deviation amount based on the first eye position and the second eye position;
the correction module is used for correcting the first eye position based on the position deviation amount to obtain a current observation position;
and the updating module is used for correspondingly updating the target image in the left half area and the target image in the right half area of the display area based on the current observation position so that the three-dimensional image can be observed at the current observation position.
12. A display module comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, implements the method of any of claims 1-10.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by one or more processors, implements the method of any of claims 1-10.
CN202310487410.2A 2023-04-28 2023-04-28 Three-dimensional display method and device for image, display module and readable storage medium Pending CN116456068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310487410.2A CN116456068A (en) 2023-04-28 2023-04-28 Three-dimensional display method and device for image, display module and readable storage medium

Publications (1)

Publication Number Publication Date
CN116456068A 2023-07-18

Family

ID=87130172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310487410.2A Pending CN116456068A (en) 2023-04-28 2023-04-28 Three-dimensional display method and device for image, display module and readable storage medium

Country Status (1)

Country Link
CN (1) CN116456068A (en)

Similar Documents

Publication Publication Date Title
US11928838B2 (en) Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display
US10198865B2 (en) HMD calibration with direct geometric modeling
EP3163535A1 (en) Wide-area image acquisition method and device
JP6008397B2 (en) AR system using optical see-through HMD
US20070248260A1 (en) Supporting a 3D presentation
CN105404393A (en) Low-latency virtual reality display system
CN108596854B (en) Image distortion correction method and device, computer readable medium, electronic device
KR102066058B1 (en) Method and device for correcting distortion errors due to accommodation effect in stereoscopic display
US20130135310A1 (en) Method and device for representing synthetic environments
US20210185293A1 (en) Depth data adjustment based on non-visual pose data
KR20190120492A (en) Method And Apparatus for Registering Virtual Object to Real Space Non-rigid Object
CN111324200B (en) Virtual reality display method and device and computer storage medium
US9918066B2 (en) Methods and systems for producing a magnified 3D image
JP6061334B2 (en) AR system using optical see-through HMD
CN107864372B (en) Stereo photographing method and device and terminal
US10901213B2 (en) Image display apparatus and image display method
US20100149319A1 (en) System for projecting three-dimensional images onto a two-dimensional screen and corresponding method
BR112021008558A2 (en) apparatus, disparity estimation method, and computer program product
CN107483915B (en) Three-dimensional image control method and device
KR101888837B1 (en) Preprocessing apparatus in stereo matching system
CN112017242A (en) Display method and device, equipment and storage medium
JP6168597B2 (en) Information terminal equipment
US11050993B2 (en) Image generating apparatus and image generating method
JP2018078496A (en) Three-dimensional moving picture display processing device, moving picture information recording medium, moving picture information provision server, and program
WO2020084312A1 (en) Method and system for providing at least a portion of content having six degrees of freedom motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination