CN116405650A - Image correction method, image correction device, storage medium, and display apparatus - Google Patents
- Publication number
- CN116405650A (application number CN202310228163.4A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N13/106 — Stereoscopic/multi-view video systems: processing image signals
- H04N13/128 — Adjusting depth or disparity
- H04N13/344 — Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N13/383 — Image reproducers using viewer tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Abstract
The invention discloses an image correction method, an image correction device, a storage medium, and a display apparatus. The image correction method comprises the following steps: determining each new pixel point coordinate of a monocular image to be displayed; determining, according to a preset coordinate transformation relation, the original pixel point coordinate in the original monocular image that corresponds to each new pixel point coordinate; and extracting the pixel value at each original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate to form a monocular display image, the monocular display image being used for binocular image combination. By correcting the monocular virtual image, the spatial positions of the image points formed during binocular imaging are adjusted so that a flat virtual image is obtained. This avoids curvature of the image, prevents obvious depth differences between different image regions, eliminates misaligned double images between the left-eye and right-eye images, improves image display quality, and improves the user experience.
Description
Technical Field
The invention belongs to the technical field of augmented reality (AR) near-eye display, and particularly relates to an image correction method, an image correction device, a computer-readable storage medium, and a display apparatus.
Background
Augmented Reality (AR) near-eye display technology superimposes a virtual image onto a real scene. In the prior art, the fusion of the virtual image and the real scene is poor: it is difficult for an observer to see both the AR virtual image and the real scene clearly at the same time. When the virtual image is seen clearly the real scene is not, and when the real scene is seen clearly the virtual image is not, which easily causes discomfort such as visual fatigue. As shown in fig. 1, O1 is the virtual image to be displayed, O2 is a real-scene object, E1 and E2 are the observer's left and right eyes, s1 and s2 are the viewing-axis directions of the left and right eyes when the observer gazes at an object, and R1 is the indoor real-scene range of interest to the observer (typically within 10 m). In part a of fig. 1, the observer gazes at the real-scene object O2 and cannot simultaneously see the virtual image O1 clearly, because the binocular visual-axis angle when looking at O2 does not match the binocular visual-axis angle when looking at O1; the converse case, gazing at O1, is shown in part b of fig. 1. This mismatch results in poor visual fusion of the virtual image and the real scene. Consequently, whenever the observer's gaze point switches between the AR virtual image and the real scene, the vergence of the eyes must be readjusted repeatedly and by a large amount, which easily causes discomfort such as visual fatigue, dizziness, and nausea.
To address this problem, a common approach is to set an included angle between the principal optical axes of the left and right lenses. As shown in fig. 2, w1 and w2 are the left and right lenses of the AR near-eye display device, c1 and c2 are their principal optical axes, and a1 is the included angle between c1 and c2. This arrangement shortens the convergence distance at which the human eyes observe the virtual image to be displayed, so that a virtual image originally imaged at infinity is imaged within a range of roughly 25 cm to 10 m. This range substantially covers the distance of most objects of visual interest during indoor work and daily life. Because the difference between the imaging distances of the virtual image and the real scene is then small, the vergence change is small, as shown in part c of fig. 1: switching gaze between the virtual image and the real scene requires only a fine adjustment of the eye muscles, and the visual experience is more comfortable.
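The geometry behind this toed-in arrangement can be illustrated with a short sketch. This is not taken from the patent; it only applies the standard relation that two principal axes converged by a total angle θ intersect where tan(θ/2) = (d/2)/h, with d the interpupillary distance. The 64 mm value for d is an illustrative assumption.

```python
import math

def convergence_distance(ipd_m: float, axis_angle_rad: float) -> float:
    """Distance at which the two toed-in principal optical axes intersect.

    Each axis deviates by half the total included angle, so the axes meet
    where tan(angle/2) = (ipd/2) / distance.
    """
    return (ipd_m / 2.0) / math.tan(axis_angle_rad / 2.0)

# Example: with a 64 mm interpupillary distance, an included angle of
# about 3.7 degrees moves the virtual image from infinity to about 1 m.
angle = 2.0 * math.atan(0.032 / 1.0)  # angle that yields a 1 m convergence
print(math.degrees(angle), convergence_distance(0.064, angle))
```

A larger included angle shortens the convergence distance further, which is why the achievable range quoted in the text (about 25 cm to 10 m) is controlled by this single design parameter.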
However, the included angle between the principal optical axes of the left and right lenses causes the image depth perceived after binocular imaging of the left-eye and right-eye virtual images to be curved, and the image edges become blurred or even ghosted. Fig. 3 illustrates the virtual images seen by each eye alone when such an included angle exists: L is the full-frame virtual image seen by the left eye and R is the full-frame virtual image seen by the right eye, and the two virtual image planes L and R intersect in space. As a result, except for the image points along the central horizontal and vertical directions of the virtual image, the two lines of sight formed by connecting any other image point with the left and right eyes theoretically do not intersect in space. When the left-eye and right-eye images are binocular-combined, the same image point seen by the two eyes therefore cannot coincide, i.e., double images appear; the closer an image point is to the virtual image edge, the larger the spatial separation of the two lines of sight and the more severe the ghosting. In addition, the virtual image is curved in depth after binocular imaging. Fig. 4 is a top view of the transformation relationship between the virtual image seen by a single eye and the virtual image perceived after binocular imaging; for ease of analysis, the line-of-sight non-intersection problem is ignored here. The curved surface S1 is the full-frame virtual image perceived by the human eyes after binocular imaging. Consider, in fig. 4, an image point L11 on the left-eye virtual image L1 to be displayed, whose homonymous point on the corresponding right-eye virtual image R1 to be displayed is R11. After binocular imaging (taking the intersection of each image point's connecting line with the corresponding eye), the image point and its homonymous point superimpose to form the image point S11 on the curved surface S1; that is, when viewing, the human eyes perceive this image point to be located at S11. Similarly, an image point L12 on L1 and its homonymous point R12 on R1 superimpose after binocular imaging to form the image point S12 on S1. When the images to be displayed by the left and right eyes are identical, the same construction shows that the full-frame virtual image perceived after binocular image combination is the curved surface S1.
Therefore, image curvature after binocular imaging, virtual-image edge blurring, and ghosting in AR glasses are technical problems that need to be solved in the art.
Disclosure of Invention
The invention addresses the following technical problem: how to prevent the virtual image produced by binocular imaging in AR glasses from exhibiting curvature, edge blurring, and ghosting.
The invention discloses an image correction method, which comprises the following steps:
determining the coordinates of each new pixel point of the monocular image to be displayed;
determining original pixel point coordinates corresponding to the new pixel point coordinates in the original monocular image according to a preset coordinate transformation relation;
and extracting the pixel value of the original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate to form a monocular display image, wherein the monocular display image is used for binocular image combination.
Preferably, the method for determining the coordinates of each new pixel point of the monocular image to be displayed is as follows:
acquiring coordinates of each image point of a virtual image to be displayed and coordinates of each homonymous pixel point of a monocular reference image;
and obtaining each new pixel point coordinate according to the image point coordinates and the homonymous pixel point coordinates.
Preferably, the expression of the preset coordinate transformation relation is:
wherein, (u ', v') represents original pixel point coordinates, (u, v) represents new pixel point coordinates, θ represents a main optical axis included angle of left and right eye images, h represents binocular imaging distance, d represents average interpupillary distance of human eyes, (uc, vc) represents central pixel point coordinates of an image to be displayed by a single eye, and M represents a main optical axis rotation matrix of the image to be displayed by the single eye.
Preferably, the image correction method further includes: and calculating a main optical axis rotation matrix M according to the rotation angle of the main optical axis of the monocular image to be displayed relative to each coordinate axis in the world coordinate system.
The application also discloses an image display method, which comprises the following steps:
tracking and acquiring the gaze direction of the user in real time;
when the gaze direction reaches a predetermined condition, the above-described image correction method is performed.
The application also discloses an image correction device, the image correction device includes:
the new coordinate determining module is used for determining the coordinates of each new pixel point of the monocular image to be displayed;
the coordinate transformation module is used for determining original pixel point coordinates corresponding to the new pixel point coordinates in the original monocular image according to a preset coordinate transformation relation;
and the pixel value extraction module is used for extracting the pixel value of the original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate to form a monocular display image, and the monocular display image is used for binocular image combination.
Preferably, the image correction apparatus further includes:
and the pixel storage module is used for storing the original monocular image.
The application also discloses a computer-readable storage medium storing an image correction program which, when executed by a processor, implements the image correction method described above.
The application also discloses a display device comprising a computer readable storage medium, a processor and an image correction program stored in the computer readable storage medium, which when executed by the processor, implements the image correction method described above.
The invention discloses an image correction method and an image correction device, which have the following technical effects:
through correcting the monocular virtual image, the spatial position of an imaging point in the binocular imaging process is adjusted, a flat virtual image is obtained, bending of the image is avoided, obvious difference of image depths of different areas is prevented, dislocation double images of left and right eye images are avoided, image display quality is improved, and user experience is improved.
Drawings
FIG. 1 is a prior art fusion diagram of a virtual image and a real scene;
FIG. 2 is a schematic view of a prior art left and right eyeglass lens with an included angle between the main optical axes;
FIG. 3 is a schematic diagram of a left-right eye display image when the main optical axes of the left-right spectacle lenses have an included angle in the prior art;
FIG. 4 is a schematic diagram showing the transformation relationship between a virtual image seen by a single eye and a virtual image seen after binocular imaging in the prior art;
FIG. 5 is a flowchart of an image correction method according to a first embodiment of the present invention;
FIG. 6 is a schematic diagram showing a transformation relationship between a virtual image seen by a single eye and a virtual image seen after binocular imaging according to the first embodiment of the present invention;
fig. 7 is a schematic block diagram of an image correction apparatus according to a third embodiment of the present invention;
fig. 8 is a schematic diagram of a display device according to a fourth embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments, in order to make its objects, technical solutions, and advantages more apparent. It should be understood that the specific embodiments described herein are for illustration only and are not intended to limit the scope of the invention.
Before describing the embodiments in detail, the technical concept of the application is briefly summarized. As described in the Background section, when the principal optical axes of the left-eye and right-eye images form an included angle, the virtual image obtained after binocular imaging of those images is curved, and its edges are blurred and ghosted. The image correction method provided by the invention therefore adaptively adjusts the coordinates of each pixel point of the original monocular image while retaining its pixel values, so that when the corrected monocular display image is used for binocular image combination, the combined image lies on a single plane: the resulting virtual image is flat, image curvature is avoided, depth differences between regions are prevented, and misaligned double images between the left-eye and right-eye images are eliminated.
In the first embodiment, the AR device is the execution subject of the image correction method; AR devices include, but are not limited to, AR glasses and AR head-mounted displays.
Specifically, as shown in fig. 5, the image correction method of the first embodiment includes the steps of:
step S10, determining the coordinates of each new pixel point of the monocular image to be displayed;
step S20, determining original pixel point coordinates corresponding to the new pixel point coordinates in the original monocular image according to a preset coordinate transformation relation;
and step S30, extracting the pixel value of the original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate to form a monocular display image, wherein the monocular display image is used for binocular image combination.
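Steps S10 to S30 together describe an inverse-mapping resampling pass. A minimal sketch of that pass is given below; it is an illustration, not the patent's implementation. The `inverse_map` callback stands in for the preset coordinate transformation relation (whose exact expression is not reproduced in this text), and the nearest-neighbour rounding is an assumption for simplicity.

```python
import numpy as np

def correct_monocular_image(original: np.ndarray, inverse_map) -> np.ndarray:
    """For every new pixel coordinate (u, v) of the image to be displayed
    (step S10), look up the corresponding original pixel coordinate
    (u', v') via `inverse_map` (step S20) and copy its pixel value
    (step S30). Coordinates mapped outside the original image are left
    as zero-valued pixels.
    """
    h, w = original.shape[:2]
    corrected = np.zeros_like(original)
    for v in range(h):
        for u in range(w):
            up, vp = inverse_map(u, v)
            up, vp = int(round(up)), int(round(vp))   # nearest-neighbour sampling
            if 0 <= vp < h and 0 <= up < w:
                corrected[v, u] = original[vp, up]
    return corrected

# Sanity check: the identity mapping reproduces the input image.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
out = correct_monocular_image(img, lambda u, v: (u, v))
```

Working backwards from new coordinates to original coordinates (rather than forwards) guarantees every output pixel receives exactly one value, which is why the method determines original coordinates from new coordinates and not the reverse.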
Specifically, this embodiment defines the pixel coordinates of a given point of physical space in the left-eye image and in the right-eye image, expressed in the world coordinate system, as a pair of homonymous pixel point coordinates; a virtual image is obtained after binocular imaging of the left-eye and right-eye images. For example, as shown in fig. 6, for a certain point in space, the corresponding pixel point in the left-eye image L2 is L21, the corresponding pixel point in the right-eye image R2 is R21, and the combined image point in the resulting virtual image S2 is S21. The image point S21 is the intersection of the line connecting pixel point L21 with the left eye and the line connecting pixel point R21 with the right eye. Since the relative position of the left and right eyes is fixed in the world coordinate system, the coordinates of S21 can be regarded as depending only on the coordinates of L21 and R21.
Further, the basic idea of step S10 is to keep one monocular image unchanged, require the binocular imaging virtual image to lie on a chosen plane, and solve for the corrected position of the other monocular image from these two. Specifically, the coordinates of each image point of the binocular imaging virtual image to be displayed and of each homonymous pixel point of the monocular reference image are first obtained; the new pixel point coordinates of the other monocular image then follow from these. For example, once it is determined that a certain point in space has image point S21 in the binocular imaging virtual image to be displayed and pixel point L21 in the left-eye reference image, the corresponding pixel point R21 in the right-eye image to be displayed follows from the principle of ray intersection, and, proceeding point by point, all new pixel point coordinates of the right-eye image to be displayed are obtained.
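The ray-intersection principle invoked above can be made concrete in the top-view (2-D) setting of fig. 4 and fig. 6. The sketch below is an illustration under that simplification, not the patent's formula: it finds the point where the left-eye line of sight through one pixel meets the right-eye line of sight through its homonymous pixel, by solving the two-line intersection as a small linear system.

```python
import numpy as np

def fused_image_point(eye_l, p_l, eye_r, p_r) -> np.ndarray:
    """Top-view intersection of two lines of sight.

    Solves eye_l + t*(p_l - eye_l) == eye_r + s*(p_r - eye_r) for t, s.
    Raises numpy.linalg.LinAlgError if the lines of sight are parallel.
    """
    d1 = np.asarray(p_l, float) - np.asarray(eye_l, float)
    d2 = np.asarray(p_r, float) - np.asarray(eye_r, float)
    a = np.column_stack([d1, -d2])                     # 2x2 system matrix
    b = np.asarray(eye_r, float) - np.asarray(eye_l, float)
    t, _ = np.linalg.solve(a, b)
    return np.asarray(eye_l, float) + t * d1

# Eyes 64 mm apart on the x-axis; both sight lines pass through (0, 1),
# so the fused image point is recovered there.
s = fused_image_point((-0.032, 0.0), (0.0, 1.0), (0.032, 0.0), (0.0, 1.0))
```

In three dimensions the two lines generally do not intersect at all, which is exactly the ghosting problem described in the Background; the correction method works because it chooses the new pixel coordinates so that an intersection on the target plane exists.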
Further, completing the image transformation requires determining not only the transformed pixel coordinates but also the pixel value at each of them. To do so, the original pixel point coordinate in the original monocular image that corresponds to each transformed new pixel point coordinate is determined first, and the pixel value at that original coordinate is then used as the pixel value of the new coordinate.
Specifically, in step S20, the original pixel coordinates corresponding to each new pixel coordinate may be determined according to a preset coordinate transformation relationship, where the preset coordinate transformation relationship has the following expression:
as shown in fig. 6, (u ', v') represents the original pixel point coordinates, (u, v) represents the new pixel point coordinates, θ represents the main optical axis included angle of the left and right eye images, h represents the binocular imaging distance, d represents the human eye average pupil distance, (uc, vc) represents the center pixel point coordinates of the monocular image to be displayed, and M represents the main optical axis rotation matrix of the monocular image to be displayed.
The main optical axis rotation matrix M is calculated according to the rotation angle of the main optical axis of the monocular image to be displayed relative to each coordinate axis in the world coordinate system. Specifically, in the world coordinate system, assuming that rotation angles of the main optical axis relative to three coordinate axes of XYZ are α, β, γ, respectively, the expression of the main optical axis rotation matrix M is:
M = Rz(γ) * Ry(β) * Rx(α)
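The Z-Y-X composition above can be written out directly. The sketch below uses the standard elemental rotation matrices about the world axes; it assumes (as is conventional) counter-clockwise positive angles, which the text does not specify.

```python
import numpy as np

def principal_axis_rotation(alpha: float, beta: float, gamma: float) -> np.ndarray:
    """M = Rz(gamma) @ Ry(beta) @ Rx(alpha): rotation of the monocular
    image's principal optical axis, given its rotation angles about the
    world X, Y, and Z axes (radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # about X
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # about Y
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # about Z
    return rz @ ry @ rx

M = principal_axis_rotation(0.0, 0.0, 0.0)  # zero angles give the identity
```

Because each factor is orthogonal, M is itself a rotation (orthogonal with unit determinant), which is what allows it to appear inside an invertible coordinate transformation relation.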
according to the calculation process, the original pixel point coordinates (u, v) corresponding to the new pixel point coordinates (u, v) in the original monocular image can be obtained ′ For example, coordinates of a pixel point R21 in the image to be displayed for the right eye are calculated to obtain coordinates of a corresponding original pixel point R11 in the original image for the right eye.
Further, an original pixel point coordinate (u) corresponding to each new pixel point coordinate (u, v) is determined ′ After v'), each original pixel point coordinate (u) is extracted ′ The pixel value of v') is used as the pixel value of the corresponding new pixel point coordinate (u, v), so that each new pixel point coordinate of the monocular image to be displayed and the corresponding pixel value are determined, and the required monocular image to be displayed can be formed. When the monocular display image and the monocular reference image are subjected to binocular imaging, a flat virtual image can be obtained, for example, after the corrected right eye image R2 and the left eye image L2 as a reference are subjected to binocular imaging, a flat virtual image S2 is obtained.
In the first embodiment, the left-eye image L2 serves as the reference image and the right-eye image R2 is adjusted. In other embodiments, the right-eye image may serve as the reference and the left-eye image be adjusted: for example, the original pixel point L12 of the left-eye image L1 is adjusted to the new pixel point L22, and binocular combination of L22 with its homonymous pixel point R22 in the right-eye image R2 yields the image point S22. The left-eye and right-eye images may also both be adjusted: for example, part of the right-eye image R2 is adjusted with the left-eye image L2 as reference, and part of the left-eye image L2 is then adjusted with the unadjusted part of the right-eye image R2 as reference, again yielding the flat virtual image S2.
According to the image correction method provided by the embodiment, the monocular virtual image is corrected, so that the spatial position of an imaging point in the binocular imaging process is adjusted, a flat virtual image is obtained, bending of the image is avoided, obvious difference of image depths of different areas is prevented, dislocation double images of left and right eye images are avoided, the image display quality is improved, and the user experience is improved.
Further, the second embodiment discloses an image display method, which comprises tracking and acquiring the gaze direction of the user in real time and executing the image correction method of the first embodiment when the gaze direction satisfies a predetermined condition. Illustratively, in combination with eye-tracking technology, the AR display device by default does not adjust the monocular virtual image when displaying an image, i.e., it does not correct the binocular imaging virtual image. Once eye tracking detects that the viewer's gaze falls at the edge of the virtual image, correction of the binocular imaging virtual image is immediately enabled. This reduces the demands on computing power and power consumption.
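The gaze-gating condition can be sketched as follows. This is an illustration only: the patent does not define the "edge" region quantitatively, so the 20% border band and the pixel-coordinate gaze representation are assumptions introduced here.

```python
def should_correct(gaze_uv, image_size, edge_fraction=0.2):
    """Return True when the tracked gaze point (in pixel coordinates)
    falls within the outer `edge_fraction` band of the virtual image,
    where edge blurring and ghosting are most severe.

    `edge_fraction` is a hypothetical tuning parameter, not a value
    taken from the patent."""
    u, v = gaze_uv
    w, h = image_size
    return (u < edge_fraction * w or u > (1 - edge_fraction) * w
            or v < edge_fraction * h or v > (1 - edge_fraction) * h)
```

In a display loop, the correction pass would only run on frames for which `should_correct` returns True, leaving central-gaze frames uncorrected to save computation and power, as the embodiment describes.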
As shown in fig. 7, the third embodiment discloses an image correction apparatus comprising a new coordinate determination module 100, a coordinate transformation module 200, and a pixel value extraction module 300. The new coordinate determination module 100 is configured to determine each new pixel point coordinate of the monocular image to be displayed; the coordinate transformation module 200 is configured to determine, according to a preset coordinate transformation relation, the original pixel point coordinate in the original monocular image corresponding to each new pixel point coordinate; and the pixel value extraction module 300 is configured to extract the pixel value at the original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate, forming a monocular display image used for binocular image combination.
Further, the image correction apparatus further includes a pixel storage module 400, and the pixel storage module 400 is configured to store the original monocular image.
The image correction device may be an AR device such as AR glasses or an AR head-mounted display; for the detailed working process of each module, refer to the description of the first embodiment, which is not repeated here.
A further embodiment discloses a computer-readable storage medium storing an image correction program which, when executed by a processor, implements the image correction method described above.
The fourth embodiment discloses a display device which, at the hardware level, as shown in fig. 8, comprises a processor 12, an internal bus 13, a network interface 14, and a computer-readable storage medium 11. The processor 12 reads the corresponding program from the computer-readable storage medium and runs it, forming the processing means at the logic level. Of course, besides a software implementation, the embodiments of the present disclosure do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or logic devices. The computer-readable storage medium 11 stores an image correction program which, when executed by the processor, implements the image correction method described above.
Computer-readable storage media include volatile and non-volatile, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer-readable storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
While certain embodiments have been shown and described, it would be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims (9)
1. An image correction method, characterized in that the image correction method comprises:
determining the coordinates of each new pixel point of the monocular image to be displayed;
determining original pixel point coordinates corresponding to the new pixel point coordinates in the original monocular image according to a preset coordinate transformation relation;
and extracting the pixel value of the original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate to form a monocular display image, wherein the monocular display image is used for binocular image combination.
2. The image correction method according to claim 1, wherein the method for determining the coordinates of each new pixel point of the monocular image to be displayed is:
acquiring coordinates of each image point of a virtual image to be displayed and coordinates of each homonymous pixel point of a monocular reference image;
and obtaining each new pixel point coordinate according to the image point coordinates and the homonymous pixel point coordinates.
3. The image correction method according to claim 2, wherein the expression of the preset coordinate transformation relationship is:
wherein, (u ', v') represents original pixel point coordinates, (u, v) represents new pixel point coordinates, θ represents a main optical axis included angle of left and right eye images, h represents binocular imaging distance, d represents average interpupillary distance of human eyes, (uc, vc) represents central pixel point coordinates of an image to be displayed by a single eye, and M represents a main optical axis rotation matrix of the image to be displayed by the single eye.
4. The image correction method according to claim 3, characterized in that the image correction method further comprises: calculating the main optical axis rotation matrix M according to the rotation angles of the main optical axis of the monocular image to be displayed about each coordinate axis of the world coordinate system.
5. An image display method, characterized in that the image display method comprises:
tracking and acquiring a user's gaze direction in real time;
and performing the image correction method according to any one of claims 1 to 4 when the gaze direction satisfies a predetermined condition.
6. An image correction apparatus, characterized in that the image correction apparatus comprises:
the new coordinate determining module is used for determining the coordinates of each new pixel point of the monocular image to be displayed;
the coordinate transformation module is used for determining original pixel point coordinates corresponding to the new pixel point coordinates in the original monocular image according to a preset coordinate transformation relation;
and the pixel value extraction module is used for extracting the pixel value of the original pixel point coordinate as the pixel value of the corresponding new pixel point coordinate to form a monocular display image, and the monocular display image is used for binocular image combination.
7. The image correction device according to claim 6, characterized in that the image correction device further comprises:
and the pixel storage module is used for storing the original monocular image.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores an image correction program which, when executed by a processor, implements the image correction method of any one of claims 1 to 4.
9. A display device comprising a computer-readable storage medium, a processor, and an image correction program stored in the computer-readable storage medium, which when executed by the processor, implements the image correction method of any one of claims 1 to 4.
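The steps of claims 1–4 amount to an inverse-mapping image warp: iterate over the new pixel grid, map each new pixel coordinate back to an original pixel coordinate through the preset transformation, and copy that pixel's value. The exact transformation expression of claim 3 is not reproduced in this text, so the sketch below takes the mapping as a caller-supplied function; the function names and the Z·Y·X Euler rotation convention are illustrative assumptions, not the patent's definitive implementation:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Per claim 4: build a main-optical-axis rotation matrix M from the
    rotation angles about the world-frame x, y, z axes.
    A Z @ Y @ X composition order is assumed here for illustration."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def correct_monocular_image(original, new_to_original):
    """Claims 1's three steps as an inverse-mapping warp:
    for every new pixel coordinate (u, v), look up the original pixel
    coordinate (u', v') via the preset transformation and copy its value
    (nearest-neighbour sampling; out-of-bounds pixels stay zero)."""
    h, w = original.shape[:2]
    out = np.zeros_like(original)
    for v in range(h):
        for u in range(w):
            u_o, v_o = new_to_original(u, v)          # preset transform
            u_i, v_i = int(round(u_o)), int(round(v_o))
            if 0 <= u_i < w and 0 <= v_i < h:
                out[v, u] = original[v_i, u_i]        # pixel value extraction
    return out
```

With the identity mapping the output equals the input; a real mapping would encode θ, h, d, (uc, vc) and M as in claim 3. Iterating over the *new* grid rather than the original one guarantees every output pixel is assigned exactly once, which is why inverse mapping is the usual choice for image warps.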
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310228163.4A CN116405650A (en) | 2023-03-10 | 2023-03-10 | Image correction method, image correction device, storage medium, and display apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116405650A (en) | 2023-07-07 |
Family
ID=87013301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310228163.4A Pending CN116405650A (en) | 2023-03-10 | 2023-03-10 | Image correction method, image correction device, storage medium, and display apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116405650A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001128195A (en) * | 1999-10-29 | 2001-05-11 | Atr Ningen Joho Tsushin Kenkyusho:Kk | Stereoscopic image correcting device, stereoscopic image display device, and recording medium with stereoscopic image correcting program recorded thereon |
CN106791773A (en) * | 2016-12-30 | 2017-05-31 | 浙江工业大学 | A kind of novel view synthesis method based on depth image |
CN110149511A (en) * | 2019-05-13 | 2019-08-20 | 北京理工大学 | A kind of distortion correction method, device and display system |
CN110874135A (en) * | 2018-09-03 | 2020-03-10 | 广东虚拟现实科技有限公司 | Optical distortion correction method and device, terminal equipment and storage medium |
US20200342652A1 (en) * | 2019-04-25 | 2020-10-29 | Lucid VR, Inc. | Generating Synthetic Image Data for Machine Learning |
CN113485013A (en) * | 2021-07-07 | 2021-10-08 | 深圳市安之眼科技有限公司 | Adjusting method for binocular AR display device image combination dislocation |
US20220092754A1 (en) * | 2018-12-30 | 2022-03-24 | Elbit Systems Ltd. | Systems and methods for reducing image artefacts in binocular displays |
US20220392109A1 (en) * | 2021-06-07 | 2022-12-08 | Qualcomm Incorporated | Methods and apparatus for dynamic distortion correction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7094266B2 (en) | Single-depth tracking-accommodation-binocular accommodation solution | |
CN107071382B (en) | Stereoscopic display device | |
CN108663799B (en) | Display control system and display control method of VR image | |
US10397539B2 (en) | Compensating 3D stereoscopic imagery | |
JP6276691B2 (en) | Simulation device, simulation system, simulation method, and simulation program | |
JPH0676073A (en) | Method and apparatus for generating solid three-dimensional picture | |
CN108632599B (en) | Display control system and display control method of VR image | |
EP3001681B1 (en) | Device, method and computer program for 3d rendering | |
CN112929636B (en) | 3D display device and 3D image display method | |
Hwang et al. | Instability of the perceived world while watching 3D stereoscopic imagery: a likely source of motion sickness symptoms | |
CN111880654A (en) | Image display method and device, wearable device and storage medium | |
CN111757090A (en) | Real-time VR image filtering method, system and storage medium based on fixation point information | |
WO2021169853A1 (en) | Display method and apparatus, and terminal device and storage medium | |
JP2012066002A (en) | Visual field image display device of eyeglasses | |
CN112929638B (en) | Eye positioning method and device and multi-view naked eye 3D display method and device | |
CN111915739A (en) | Real-time three-dimensional panoramic information interactive information system | |
CN116405650A (en) | Image correction method, image correction device, storage medium, and display apparatus | |
CN115202475A (en) | Display method, display device, electronic equipment and computer-readable storage medium | |
US11934571B2 (en) | Methods and systems for a head-mounted device for updating an eye tracking model | |
TW202225783A (en) | Naked eye stereoscopic display and control method thereof | |
CN114020150A (en) | Image display method, image display device, electronic apparatus, and medium | |
CN111915740A (en) | Rapid three-dimensional image acquisition method | |
CN108881892A (en) | Anti-dazzle method, system for Table top type virtual reality system | |
KR20050100095A (en) | The apparatus and method for vergence control of a parallel-axis stereo camera system using compensated image signal processing | |
CN115334296B (en) | Stereoscopic image display method and display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||