CN114449237B - Method for anti-distortion and anti-dispersion and related equipment


Info

Publication number
CN114449237B
CN114449237B
Authority
CN
China
Prior art keywords
image
display area
display
distortion
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011197969.4A
Other languages
Chinese (zh)
Other versions
CN114449237A (en)
Inventor
陈启超
赖武军
沈钢
付钟奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011197969.4A
Publication of CN114449237A
Application granted
Publication of CN114449237B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141 Constructional details thereof
    • H04N 9/3173 Constructional details thereof wherein the projection device is specially adapted for enhanced portability
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3182 Colour adjustment, e.g. white balance, shading or gamut

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a method for anti-distortion and anti-dispersion and related equipment. In the method, an adjusting device sends image data corresponding to a first image, together with a first anti-distortion and anti-dispersion model, to a VR device, where the first image includes a plurality of display areas. The adjusting device obtains, according to the first anti-distortion and anti-dispersion model, a processing function corresponding to each of the plurality of display areas, and then adjusts the coefficients of the processing function corresponding to a first display area, which is one of the plurality of display areas. Finally, the adjusting device sends a second anti-distortion and anti-dispersion model to the VR device, where the second anti-distortion and anti-dispersion model includes the adjusted processing function corresponding to the first display area. In this way, the visual effect of the picture displayed by the VR device can be improved.

Description

Method for anti-distortion and anti-dispersion and related equipment
Technical Field
The present application relates to the field of virtual reality (VR) technology, and in particular to a method for anti-distortion and anti-dispersion and related equipment.
Background
VR is a computer simulation technology that uses a computer to create a simulated environment into which the user is immersed. People experience VR by wearing a VR head-mounted display device (a VR helmet or VR glasses). A VR head-mounted display device includes a lens and a display screen, and the human eye views the picture on the display screen through the lens. Because the lens refracts light, the image seen through it exhibits pincushion distortion. In addition, because light of different wavelengths is refracted to different degrees, chromatic dispersion occurs at the same time, so the image seen by the eye through the lens is often distorted and colour-fringed, which seriously degrades the VR experience. In VR technology, therefore, research on anti-distortion and anti-dispersion algorithms is of great importance.
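As an illustrative sketch only (not part of the patent), the pincushion distortion described above is commonly modelled as a radial polynomial in the distance from the lens centre; the coefficients `k1` and `k2` below are hypothetical stand-ins for vendor-supplied lens data.

```python
# Illustrative sketch only: a common radial (Brown-Conrady style) model for
# lens distortion. The coefficients k1, k2 are hypothetical; real VR lenses
# use correction coefficients supplied by the lens manufacturer.

def radial_distort(x, y, k1, k2):
    """Map an undistorted point (x, y) to its distorted position."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# With positive coefficients, points far from the lens centre are pushed
# outward more than points near it -- the pincushion effect described above.
near = radial_distort(0.1, 0.0, k1=0.2, k2=0.05)
far = radial_distort(0.9, 0.0, k1=0.2, k2=0.05)
print(near[0] / 0.1, far[0] / 0.9)  # magnification grows with radius
```

The anti-distortion discussed in this patent works in the opposite direction: it pre-warps the displayed image so that this lens-induced magnification is cancelled.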
Anti-distortion and anti-dispersion processing mainly consists of applying an inverse (anti-) distortion and dispersion to the displayed image; the added anti-distortion and anti-dispersion cancel the distortion and dispersion introduced by the lens, so that the human eye sees a normal image in the VR helmet. In the prior art, an anti-distortion and anti-dispersion algorithm typically processes the lens correction coefficients provided by the manufacturer to obtain an anti-distortion and anti-dispersion model, calculates through this model the scale factors corresponding to the three primary colours, and then computes the corrected display coordinates from the original display coordinates and the scale factors.
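The final prior-art step described above, scaling the original display coordinates by per-channel scale factors, can be sketched as follows; the scale-factor values are invented for illustration and would in practice come from the anti-distortion and anti-dispersion model.

```python
# Hypothetical sketch of the prior-art step: because dispersion is
# wavelength-dependent, each primary colour gets its own scale factor, and
# the corrected display coordinate for a channel is the original display
# coordinate multiplied by that channel's factor.

def corrected_coordinates(x, y, scale_r, scale_g, scale_b):
    """Return per-channel (x, y) sampling coordinates for one pixel."""
    return {
        "R": (x * scale_r, y * scale_r),
        "G": (x * scale_g, y * scale_g),
        "B": (x * scale_b, y * scale_b),
    }

# Slightly different factors per channel (values invented) pre-separate the
# colours so the lens's dispersion brings them back into alignment.
coords = corrected_coordinates(100.0, 50.0, 0.98, 0.97, 0.96)
print(coords["R"], coords["B"])
```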
However, because the lens surface is curved, the most suitable anti-distortion and anti-dispersion model differs from area to area, and when the effect in one area is optimal, the effect in other areas suffers.
Disclosure of Invention
The present application provides a method for anti-distortion and anti-dispersion and related equipment, which can improve the visual effect of the picture displayed by a VR head-mounted display device.
In a first aspect, an embodiment of the present application provides a method for anti-distortion and anti-dispersion, where the method includes: an adjusting device sends image data corresponding to a first image, together with a first anti-distortion and anti-dispersion model, to a virtual reality (VR) device, where the first image includes a plurality of display areas; the adjusting device obtains, according to the first anti-distortion and anti-dispersion model, a processing function corresponding to each of the plurality of display areas; the adjusting device adjusts the coefficients of the processing function corresponding to a first display area, where the first display area is one of the plurality of display areas; and the adjusting device sends a second anti-distortion and anti-dispersion model to the VR device, where the second anti-distortion and anti-dispersion model includes the adjusted processing function corresponding to the first display area. In this way, the anti-distortion and anti-dispersion effect of a local display area can be adjusted individually, the flexibility of anti-distortion and anti-dispersion processing is enhanced, and the visual effect of the picture displayed by the VR device is improved.
With reference to the first aspect, in some embodiments, the obtaining, by the adjusting device, the processing function corresponding to each of the plurality of display areas according to the first anti-distortion and anti-dispersion model includes: the adjusting device determines a segmentation distance according to a preset number of segments and a maximum value of a first distance, where the first distance is the distance between a light ray refracted by a lens of the VR device and the center of the lens; the adjusting device uniformly divides a scale factor curve in the first anti-distortion and anti-dispersion model into the preset number of segments based on the segmentation distance, where the scale factor curve represents the correspondence between the first distance and the scale factor, and the scale factor is used by the VR device to process the image data; and the adjusting device performs a linear fit on each of the segments to obtain the processing functions corresponding to the plurality of display areas in the first image.
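The segmentation-and-fitting procedure described in this embodiment can be sketched as follows. The quadratic scale factor curve and all numeric values are invented stand-ins for a real lens model; only the mechanics (equal-width segments, one linear fit per segment) follow the text above.

```python
# Sketch of the segmentation step: divide a scale factor curve, sampled over
# the first distance r in [0, r_max], into a preset number of equal-width
# segments (the "segmentation distance"), and fit each segment with a linear
# function k*r + b -- one processing function per display area.

def linear_fit(points):
    """Ordinary least-squares fit of y = k*x + b to a list of (x, y)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    return k, b

def segment_and_fit(curve, r_max, num_segments, samples_per_segment=10):
    """Split [0, r_max] into equal segments and fit each one linearly."""
    seg_dist = r_max / num_segments  # the segmentation distance
    functions = []
    for i in range(num_segments):
        lo = i * seg_dist
        step = seg_dist / samples_per_segment
        pts = [(lo + j * step, curve(lo + j * step))
               for j in range(samples_per_segment + 1)]
        functions.append(linear_fit(pts))
    return seg_dist, functions

def scale_curve(r):
    # Toy scale factor curve (invented): scale grows with distance from centre.
    return 1.0 + 0.15 * r * r

seg_dist, funcs = segment_and_fit(scale_curve, r_max=1.0, num_segments=4)
print(seg_dist, len(funcs))  # one (k, b) pair per display area
```

Because the toy curve is convex, the fitted slope grows from the innermost segment to the outermost one, which is why a single global linear function cannot serve every display area equally well.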
With reference to the first aspect, in some embodiments, the first image includes a plurality of circles, a radius of the circles being a multiple of the segmentation distance, and the plurality of circles are used to indicate positions of the plurality of display areas in the first image.
With reference to the first aspect, in some embodiments, the similarity between the first display area and a second display area is lower than a preset value, where the second display area is a display area in a third image and the position of the second display area in the third image is the same as the position of the first display area in the first image; the third image is the image formed when the second image displayed by the VR device is observed through a lens of the VR device, and the second image is the image displayed by the VR device based on the image data and the first anti-distortion and anti-dispersion model.
With reference to the first aspect, in some embodiments, the third image is an image, acquired by the adjusting device, that a photographing device obtains by photographing the second image.
With reference to the first aspect, in some embodiments, the coefficients of the processing function corresponding to the first display area are adjusted by using adjustment parameters in a preset adjustment parameter set.
With reference to the first aspect, in some embodiments, the processing function corresponding to the first display area is a linear function, and the coefficients of the processing function corresponding to the first display area include a first parameter and a second parameter. Because a linear function contains few variable parameters, this facilitates adjustment of the processing function.
With reference to the first aspect, in some embodiments, the method further includes: stopping the adjustment of the coefficients of the processing function corresponding to the first display area when traversal of the adjustment parameters in the adjustment parameter set is complete.
With reference to the first aspect, in some embodiments, the method further includes: the adjusting device generates a plurality of second anti-distortion and anti-dispersion models, where each second anti-distortion and anti-dispersion model includes a processing function corresponding to the first display area after one adjustment; the adjusting device determines a third anti-distortion and anti-dispersion model from the plurality of second anti-distortion and anti-dispersion models, where the third anti-distortion and anti-dispersion model is the one whose processing function corresponding to the first display area yields a third display area with a higher similarity to the first display area than the corresponding processing functions contained in the other second anti-distortion and anti-dispersion models; the third display area is a display area in a fifth image, the position of the third display area in the fifth image is the same as the position of the first display area in the first image, the fifth image is the image formed when the fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is the image displayed by the VR device based on the image data and one of the second anti-distortion and anti-dispersion models; and the adjusting device sends the third anti-distortion and anti-dispersion model to the VR device, so that the VR device processes image data according to the third anti-distortion and anti-dispersion model.
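The traverse-and-select procedure in this embodiment can be sketched as a small grid search. Everything concrete below is a hypothetical stand-in: "rendering" a display area through the lens is simulated by a toy function, and the similarity measure (the patent does not specify one) is taken here as `1 / (1 + mean absolute pixel difference)`.

```python
# Hypothetical sketch of the selection step: traverse a preset set of
# (delta_k, delta_b) adjustment parameters, apply each to the first display
# area's linear processing function k*r + b, and keep the candidate whose
# resulting display area is most similar to the reference area.

def similarity(area_a, area_b):
    # Invented similarity measure: 1 when identical, smaller as areas differ.
    diffs = [abs(a - b) for a, b in zip(area_a, area_b)]
    return 1.0 / (1.0 + sum(diffs) / len(diffs))

def render_area(k, b):
    # Toy stand-in for "display the area through the lens using k*r + b".
    return [k * r + b for r in (0.2, 0.4, 0.6, 0.8)]

reference = render_area(0.30, 1.00)  # target appearance of the display area
base_k, base_b = 0.20, 0.90          # current (poorly fitting) function
param_set = [(dk, db) for dk in (0.0, 0.05, 0.10)
             for db in (0.0, 0.05, 0.10)]  # preset adjustment parameter set

best = max(param_set,
           key=lambda p: similarity(render_area(base_k + p[0],
                                                base_b + p[1]),
                                    reference))
print(best)  # the adjustment yielding the highest-similarity display area
```

In the patent's terms, each `(dk, db)` candidate corresponds to one second anti-distortion and anti-dispersion model, and `best` identifies the third model that is sent back to the VR device.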
With reference to the first aspect, in some embodiments, the method further includes: stopping the adjustment of the coefficients of the processing function corresponding to the first display area when the similarity between a third display area and the first display area is higher than or equal to the preset value, where the third display area is a display area in a fifth image and the position of the third display area in the fifth image is the same as the position of the first display area in the first image; the fifth image is the image formed when the fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is the image displayed by the VR device based on the image data and one of the second anti-distortion and anti-dispersion models.
With reference to the first aspect, in some embodiments, the method further includes: the adjusting device determines a fourth anti-distortion and anti-dispersion model, where the fourth anti-distortion and anti-dispersion model includes the processing function corresponding to the first display area at the moment the adjustment is stopped; and the adjusting device sends the fourth anti-distortion and anti-dispersion model to the VR device, so that the VR device processes image data according to the fourth anti-distortion and anti-dispersion model.
With reference to the first aspect, in some embodiments, the fifth image is an image, acquired by the adjusting device, that a photographing device obtains by photographing the fourth image.
In a second aspect, embodiments of the present application provide a method for anti-distortion and anti-dispersion, the method including: a VR device receives image data corresponding to a first image and a first anti-distortion and anti-dispersion model, both sent by an adjusting device, where the first image includes a plurality of display areas; the VR device processes the image data according to the first anti-distortion and anti-dispersion model and displays a second image according to the processed image data; the VR device receives a second anti-distortion and anti-dispersion model sent by the adjusting device, where the second anti-distortion and anti-dispersion model includes a processing function, corresponding to a first display area, that has been adjusted by the adjusting device, and the first display area is one of the plurality of display areas; and the VR device processes the image data according to the second anti-distortion and anti-dispersion model and displays a fourth image according to the processed image data. In this way, the anti-distortion and anti-dispersion effect of a local display area can be adjusted individually, the flexibility of anti-distortion and anti-dispersion processing is enhanced, and the visual effect of the picture displayed by the VR device is improved.
With reference to the second aspect, in some embodiments, the first image includes a plurality of circles whose radii are multiples of a segmentation distance, where the segmentation distance is determined by a preset number of segments and a maximum value of a first distance, the first distance is the distance between a light ray refracted by a lens of the VR device and the center of the lens, and the plurality of circles are used to indicate the positions of the plurality of display areas in the first image.
With reference to the second aspect, in some embodiments, the lens of the VR device includes a first lens and a second lens, the display screen of the VR device includes a first display screen and a second display screen, the first lens corresponds to the first display screen, and the second lens corresponds to the second display screen; the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the first display screen, and the second sub-model is used for correcting the display of the second display screen; the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the first display screen, and the fourth model is used for correcting the display of the second display screen.
With reference to the second aspect, in some embodiments, the lens of the VR device includes a first lens and a second lens, the display screen includes a left display area and a right display area, the first lens corresponds to the left display area of the display screen, and the second lens corresponds to the right display area of the display screen; the first anti-distortion and anti-dispersion model includes a first sub-model and a second sub-model, where the first sub-model is used to correct the display of the left display area and the second sub-model is used to correct the display of the right display area; the second anti-distortion and anti-dispersion model includes a third model for correcting the display of the left display area and a fourth model for correcting the display of the right display area.
In a third aspect, an embodiment of the present application provides an adjusting device including one or more processors and a memory coupled to the one or more processors, the memory storing program code that the one or more processors invoke to cause the adjusting device to: send image data corresponding to a first image and a first anti-distortion and anti-dispersion model to a VR device, where the first image includes a plurality of display areas; obtain, according to the first anti-distortion and anti-dispersion model, a processing function corresponding to each of the plurality of display areas; adjust the coefficients of the processing function corresponding to a first display area, where the first display area is one of the plurality of display areas; and send a second anti-distortion and anti-dispersion model to the VR device, where the second anti-distortion and anti-dispersion model includes the adjusted processing function corresponding to the first display area.
With reference to the third aspect, in some embodiments, the one or more processors invoke the program code to cause the adjusting device specifically to: determine a segmentation distance according to a preset number of segments and a maximum value of a first distance, where the first distance is the distance between a light ray refracted by a lens of the VR device and the center of the lens; uniformly divide a scale factor curve in the first anti-distortion and anti-dispersion model into the preset number of segments based on the segmentation distance, where the scale factor curve represents the correspondence between the first distance and the scale factor, and the scale factor is used by the VR device to process the image data; and perform a linear fit on each of the segments to obtain the processing functions corresponding to the plurality of display areas in the first image.
With reference to the third aspect, in some embodiments, the first image includes a plurality of circles, a radius of the circles being a multiple of the segmentation distance, and the plurality of circles are used to indicate positions of the plurality of display areas in the first image.
With reference to the third aspect, in some embodiments, the similarity between the first display area and a second display area is lower than a preset value, where the second display area is a display area in a third image and the position of the second display area in the third image is the same as the position of the first display area in the first image; the third image is the image formed when the second image displayed by the VR device is observed through a lens of the VR device, and the second image is the image displayed by the VR device based on the image data and the first anti-distortion and anti-dispersion model.
With reference to the third aspect, in some embodiments, the third image is an image, acquired by the adjusting device, that a photographing device obtains by photographing the second image.
With reference to the third aspect, in some embodiments, the coefficients of the processing function corresponding to the first display area are adjusted by using adjustment parameters in a preset adjustment parameter set.
With reference to the third aspect, in some embodiments, the processing function corresponding to the first display area is a linear function, and the coefficient of the processing function corresponding to the first display area includes a first parameter and a second parameter.
With reference to the third aspect, in some embodiments, the one or more processors invoke the program code to further cause the adjusting device to: stop adjusting the coefficients of the processing function corresponding to the first display area when traversal of the adjustment parameters in the adjustment parameter set is complete.
With reference to the third aspect, in some embodiments, the one or more processors invoke the program code to further cause the adjusting device to: generate a plurality of second anti-distortion and anti-dispersion models, where each second anti-distortion and anti-dispersion model includes a processing function corresponding to the first display area after one adjustment; determine a third anti-distortion and anti-dispersion model from the plurality of second anti-distortion and anti-dispersion models, where the third anti-distortion and anti-dispersion model is the one whose processing function corresponding to the first display area yields a third display area with a higher similarity to the first display area than the corresponding processing functions contained in the other second anti-distortion and anti-dispersion models; the third display area is a display area in a fifth image, the position of the third display area in the fifth image is the same as the position of the first display area in the first image, the fifth image is the image formed when the fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is the image displayed by the VR device based on the image data and one of the second anti-distortion and anti-dispersion models; and send the third anti-distortion and anti-dispersion model to the VR device, so that the VR device processes image data according to the third anti-distortion and anti-dispersion model.
With reference to the third aspect, in some embodiments, the one or more processors invoke the program code to further cause the adjusting device to: stop adjusting the coefficients of the processing function corresponding to the first display area when the similarity between a third display area and the first display area is higher than or equal to the preset value, where the third display area is a display area in a fifth image and the position of the third display area in the fifth image is the same as the position of the first display area in the first image; the fifth image is the image formed when the fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is the image displayed by the VR device based on the image data and one of the second anti-distortion and anti-dispersion models.
With reference to the third aspect, in some embodiments, the one or more processors invoke the program code to further cause the adjusting device to: determine a fourth anti-distortion and anti-dispersion model, where the fourth anti-distortion and anti-dispersion model includes the processing function corresponding to the first display area at the moment the adjustment is stopped; and send the fourth anti-distortion and anti-dispersion model to the VR device, so that the VR device processes image data according to the fourth anti-distortion and anti-dispersion model.
With reference to the third aspect, in some embodiments, the fifth image is an image, acquired by the adjusting device, that a photographing device obtains by photographing the fourth image.
In a fourth aspect, an embodiment of the present application provides a VR device, including: one or more processors, memory, lenses, and a display screen; the memory is coupled to the one or more processors, the memory is for storing program code that the one or more processors call to cause the VR device to: receiving image data corresponding to a first image and a first anti-distortion and anti-dispersion model sent by an adjusting device, wherein the first image comprises a plurality of display areas; processing the image data according to the first anti-distortion and anti-dispersion model, and displaying a second image on the display screen according to the processed image data; receiving a second anti-distortion anti-dispersion model sent by the adjusting device, wherein the second anti-distortion anti-dispersion model comprises a processing function corresponding to a first display area adjusted by the adjusting device, and the first display area is a display area in the plurality of display areas; and processing the image data according to the second anti-distortion and anti-dispersion model, and displaying a fourth image on the display screen according to the processed image data.
With reference to the fourth aspect, in some embodiments, the first image includes a plurality of circles whose radii are multiples of a segmentation distance, where the segmentation distance is determined by a preset number of segments and a maximum value of a first distance, the first distance is the distance between a light ray refracted by a lens of the VR device and the center of the lens, and the plurality of circles are used to indicate the positions of the plurality of display areas in the first image.
With reference to the fourth aspect, in some embodiments, the lens of the VR device includes a first lens and a second lens, the display screen of the VR device includes a first display screen and a second display screen, the first lens corresponds to the first display screen, and the second lens corresponds to the second display screen; the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the first display screen, and the second sub-model is used for correcting the display of the second display screen; the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the first display screen, and the fourth model is used for correcting the display of the second display screen.
With reference to the fourth aspect, in some embodiments, the lens of the VR device includes a first lens and a second lens, the display screen includes a left display area and a right display area, the first lens corresponds to the left display area of the display screen, and the second lens corresponds to the right display area of the display screen; the first anti-distortion and anti-dispersion model includes a first sub-model and a second sub-model, where the first sub-model is used to correct the display of the left display area and the second sub-model is used to correct the display of the right display area; the second anti-distortion and anti-dispersion model includes a third model for correcting the display of the left display area and a fourth model for correcting the display of the right display area.
In a fifth aspect, embodiments of the present application provide a computer program product including instructions that, when run on an electronic device, cause the electronic device to perform the method of any one of the possible implementations of the first aspect, or cause the electronic device to perform the method of any one of the possible implementations of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium including instructions that, when executed on an electronic device, cause the electronic device to perform the method of any one of the possible implementations of the first aspect, or cause the electronic device to perform the method of any one of the possible implementations of the second aspect.
In the method for anti-distortion and anti-dispersion provided by the embodiments of the present application, the VR device processes the image data of the first image according to the first anti-distortion and anti-dispersion model sent by the adjusting device and displays an image according to the processed image data. The adjusting device obtains, according to the first anti-distortion and anti-dispersion model, a processing function corresponding to each of the plurality of display areas. The adjusting device then adjusts the coefficients of the processing function corresponding to a first display area whose display effect is poor, where the first display area is one of the plurality of display areas, and sends a second anti-distortion and anti-dispersion model to the VR device, the second anti-distortion and anti-dispersion model including the adjusted processing function corresponding to the first display area. The VR device can then process the image data according to the second anti-distortion and anti-dispersion model and display an image according to the processed image data. In this way, the anti-distortion and anti-dispersion effect of a local display area can be adjusted individually, the flexibility of anti-distortion and anti-dispersion processing is enhanced, and the visual effect of the picture displayed by the VR device is improved.
Drawings
FIG. 1 is a schematic illustration of a pincushion distortion provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of barrel distortion provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an anti-distortion provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a display image after an anti-distortion and anti-dispersion process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a processing system 10 provided in accordance with an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device 10 according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present application;
FIG. 8 is a flow chart of a method for determining an anti-distortion and anti-dispersion model provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of xy coordinates and RGB coordinates of a pixel according to an embodiment of the present application;
FIG. 10 is a schematic diagram showing a comparison between the front and rear of an image anti-distortion and anti-dispersion process according to an embodiment of the present application;
FIG. 11 is a schematic illustration of a scale factor curve provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a scale factor curve versus lens area according to an embodiment of the present application;
FIG. 13 is a schematic illustration of an original image provided by an embodiment of the present application;
FIG. 14 is a schematic illustration of a fitting result of a linear fit provided by an embodiment of the present application;
FIG. 15 is a schematic representation of the fit results of yet another linear fit provided by an embodiment of the present application;
fig. 16 is a schematic diagram of a VR headset 100 worn by a user in accordance with an embodiment of the application;
fig. 17 is a flowchart of a method for displaying an image by a VR device according to an embodiment of the present application;
FIG. 18 is a schematic illustration of an original image provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of a display image on a display 180A according to an embodiment of the present application;
FIG. 20 is a schematic diagram of a rectangular coordinate system on a display screen according to an embodiment of the present application;
fig. 21 is a flow chart of a method for anti-distortion and anti-dispersion according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The text "and/or" merely describes an association relation between the associated objects and indicates that three relations may exist; for example, "A and/or B" may indicate three cases: A exists alone, A and B exist together, and B exists alone. Furthermore, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first," "second," and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features.
Some concepts related to the embodiments of the present application are described below.
Distortion (display)
Distortion is a condition in which the imaged picture differs from the actual picture due to the physical properties of the lens elements and the structure of the lens group. Distortion mainly includes pincushion distortion and barrel distortion.
Pincushion distortion is a distortion phenomenon in which the imaged picture "shrinks" toward the middle. Referring to fig. 1, a schematic diagram of pincushion distortion is provided in an embodiment of the present application. The left side of fig. 1 is the actual picture displayed on the display screen, and the right side of fig. 1 is the imaged picture observed by human eyes.
Barrel distortion is a distortion phenomenon in which the imaged picture expands into a barrel shape. Referring to fig. 2, a schematic diagram of barrel distortion is provided in an embodiment of the present application. The left side of fig. 2 is the actual picture displayed on the display screen, and the right side of fig. 2 is the imaged picture observed by human eyes.
Dispersion (dispersion)
The property of a material whereby its refractive index changes with the frequency of incident light is called "dispersion". For example, sunlight passing through a triangular prism produces a continuous spectrum of colors arranged sequentially from red to violet. In a broad sense, dispersion refers not only to the decomposition of a light wave into its spectrum, but to any physical quantity that changes with frequency (or wavelength). In the embodiments of the present application, after polychromatic light enters the lens, the lens has a different refractive index for light of each frequency, so the propagation directions of the differently colored lights are deflected to different degrees and the lights are separated when they leave the lens; this phenomenon is referred to as chromatic dispersion.
Anti-distortion and anti-dispersion
The principle of anti-distortion and anti-dispersion is to apply an inverse distortion and inverse dispersion to the displayed image. This can be understood as giving each pixel point in the image displayed on the display screen a certain offset, so that the added inverse distortion and inverse dispersion cancel out the distortion and dispersion produced by the lens, and the human eye sees a normal image through the lens. Referring to fig. 3, an inverse distortion is schematically shown according to an embodiment of the present application. The left side of fig. 3 is the actual picture displayed on the display screen, and the right side of fig. 3 is the imaged picture viewed by human eyes. Unlike a normally displayed image, each pixel point in the displayed image in fig. 3 has a certain offset. Referring to fig. 4, fig. 4 is a schematic diagram of a display image after anti-distortion and anti-dispersion processing according to an embodiment of the present application.
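As a rough illustration of this cancellation (a sketch, not the patent's actual model), the per-pixel offset can be modeled with a simple radial polynomial: pixels are pulled toward the lens center so that the lens's pincushion distortion stretches them back to their intended positions. The coefficients `k1` and `k2` below are hypothetical placeholders for lens correction coefficients.

```python
import math

def predistort(x, y, k1=-0.22, k2=0.05):
    """Offset one pixel, given in normalized coordinates with the origin
    at the lens center, using a simple radial polynomial model.
    k1 and k2 stand in for (hypothetical) lens correction coefficients."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # polynomial scale factor in r^2
    return x * scale, y * scale

# A pixel near the edge is pulled inward (barrel-shaped pre-distortion),
# which the pincushion distortion of the lens then stretches back out.
x2, y2 = predistort(0.8, 0.6)
```

With a negative `k1`, the pre-distorted image takes the barrel shape shown on the left of fig. 3.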
Anti-distortion anti-dispersion algorithm
Because every produced lens has fixed optical coefficients, the lens manufacturer can perform a spectral analysis when producing the lens and perform polynomial fitting on the optical characteristics of the lens to obtain lens correction coefficients. In the prior art, an anti-distortion and anti-dispersion algorithm generally performs polynomial processing on the lens correction coefficients provided by the manufacturer to obtain an anti-distortion and anti-dispersion model, calculates the scale factors corresponding to the three primary colors through the model, and calculates the display coordinates after anti-distortion and anti-dispersion processing from the original display coordinates and the scale factors.
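A hedged sketch of such an algorithm follows; the coefficient values are invented for illustration, not a manufacturer's actual correction data. One scale-factor polynomial is evaluated per primary color, so each source pixel maps to three slightly different display coordinates, which compensates the chromatic dispersion as well as the distortion.

```python
# Hypothetical per-channel polynomial coefficients (constant term first).
# Red, green and blue get slightly different curves because the lens
# refracts each wavelength differently.
COEFFS = {
    "r": (1.0, -0.230, 0.052),
    "g": (1.0, -0.220, 0.050),
    "b": (1.0, -0.210, 0.048),
}

def scale_factor(r2, channel):
    """Evaluate the scale-factor polynomial in r^2 for one color channel."""
    c0, c1, c2 = COEFFS[channel]
    return c0 + c1 * r2 + c2 * r2 * r2

def remap(x, y):
    """Per-channel display coordinates for one source pixel (x, y),
    normalized so that the lens center is the origin."""
    r2 = x * x + y * y
    return {ch: (x * scale_factor(r2, ch), y * scale_factor(r2, ch))
            for ch in COEFFS}
```

At the lens center the three channels coincide; toward the edge they separate slightly, producing the colored fringes of the pre-dispersed image.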
At present, one anti-distortion and anti-dispersion model is applied to one entire lens. However, because the lens is curved, the most suitable anti-distortion and anti-dispersion model differs from region to region, and when the effect in one region is optimal, the effect in other regions is degraded.
In view of this, a method of anti-distortion and anti-dispersion according to the embodiments of the present application is presented. In the embodiments of the present application, piecewise function fitting can be performed on the scale factor curve calculated from the lens correction coefficients provided by the lens manufacturer. It will be appreciated that each piecewise function corresponds to one region of the lens. The coefficients of the piecewise functions corresponding to display areas with a poor display effect are then adjusted, so that the adjusted piecewise functions are closer to the optical characteristics of the lens. In this way, the local anti-distortion and anti-dispersion effect can be adjusted, the flexibility of the anti-distortion and anti-dispersion processing is enhanced, and the visual effect of the display picture of the VR head-mounted display device is improved.
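A minimal sketch of such piecewise fitting, assuming a piecewise-linear form (the breakpoints and sample data below are illustrative, not taken from the patent):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def piecewise_fit(radii, scales, breakpoints):
    """Fit one linear segment of the scale-factor curve per lens region.
    Each returned segment (lo, hi, a, b) can later be adjusted on its own
    without disturbing the corrections for the other regions."""
    segments = []
    bounds = [0.0] + list(breakpoints) + [float("inf")]
    for lo, hi in zip(bounds, bounds[1:]):
        pts = [(r, s) for r, s in zip(radii, scales) if lo <= r < hi]
        xs, ys = zip(*pts)
        segments.append((lo, hi) + linear_fit(xs, ys))
    return segments
```

Each segment covers one radial region of the lens; nudging one segment's `(a, b)` changes the anti-distortion only in the corresponding display area.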
Referring to fig. 5, fig. 5 is a schematic diagram of a processing system 10 according to an embodiment of the application. The processing system 10 may include a VR head-mounted display device 100 and an adjusting device 300. In the processing system 10, the adjusting device 300 first calculates a reference anti-distortion and anti-dispersion model from the lens correction coefficients provided by the lens manufacturer, and transmits the calculated reference model and the original image data to the VR head-mounted display device 100. The VR head-mounted display device 100 processes the original image data according to the reference anti-distortion and anti-dispersion model and displays an image according to the processed image data. Then, the adjusting device 300 performs piecewise fitting on the scale factor curve in the reference anti-distortion and anti-dispersion model to obtain linear functions corresponding to different display areas in the original image; it then adjusts the fitting coefficients of the linear function corresponding to the display area with a poor display effect, and sends the adjusted anti-distortion and anti-dispersion model to the VR head-mounted display device 100. Next, the VR head-mounted display device 100 processes the original image data according to the adjusted model and displays an image according to the processed image data. The adjusting device 300 may continuously adjust the anti-distortion and anti-dispersion model according to the image display effect fed back by the VR head-mounted display device 100, optimizing the display effect until a final anti-distortion and anti-dispersion model is obtained.
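The per-region tuning step in this feedback loop can be sketched as follows, with the piecewise model held as plain data. The segment values are hypothetical, chosen only to show the mechanism.

```python
def adjust_region(model, index, da=0.0, db=0.0):
    """model: a list of (lo, hi, a, b) linear segments, one per display
    area.  Return a copy in which segment `index` is nudged by (da, db)
    while the corrections for the other areas stay untouched."""
    adjusted = list(model)
    lo, hi, a, b = adjusted[index]
    adjusted[index] = (lo, hi, a + da, b + db)
    return adjusted

# Nudge only the outer region's slope, as the adjusting device might do
# after the VR device reports a poor display effect near the lens edge.
model = [(0.0, 0.5, -0.10, 1.00), (0.5, 1.0, -0.12, 1.01)]
tuned = adjust_region(model, 1, da=0.01)
```

The whole tuned model would then be resent to the VR head-mounted display device 100 and the loop repeated until the display effect is acceptable.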
Through the adjusting device 300, the anti-distortion and anti-dispersion effect of a local display area can be adjusted, the flexibility of the anti-distortion and anti-dispersion processing is enhanced, and the visual effect of the display picture of the VR head-mounted display device 100 is improved.
Next, the VR head mounted display device 100 and the adjustment device 300 will be described. Wherein:
VR head-mounted display device 100 is an electronic device that provides a virtual environment using VR technology, rendering and displaying one or more virtual objects. The VR head-mounted display device comprises lenses and a display screen, and human eyes view the display picture on the display screen through the lenses. The left-eye and right-eye screens of the display respectively show left-eye and right-eye images; these images are imaged on the retinas of the left and right eyes respectively and are superimposed in the visual center of the brain, thereby constructing a three-dimensional virtual environment.
In some embodiments, the image data used to generate the display interface on the display screen of VR head mounted display device 100 may be received from other electronic devices. Other electronic devices include the adjustment device 300. The adjustment device 300 may be a server, or may include a smart phone, a computer, etc. connected to or paired with the VR head mounted display device 100.
In some embodiments, VR head-mounted display device 100 includes lenses 182 and a display screen 180. The display screen 180 may include a display screen 180A and a display screen 180B, and the lenses 182 may include a lens 182A and a lens 182B. The lenses 182 and the display screens 180 correspond to each other. Illustratively, the display screen 180A corresponds to the lens 182A and the display screen 180B corresponds to the lens 182B.
In other embodiments, VR headset 100 includes one display screen that includes two display areas, e.g., a first display area and a second display area. Illustratively, the first display area may be a display area where the display screen 180A is located, and the second display area may be a display area where the display screen 180B is located. One display area corresponds to one lens. Illustratively, the lens 182A corresponds to a first display area and the lens 182B corresponds to a second display area.
In other embodiments, VR head-mounted display device 100 may include more than two lenses. For example, the display screen 180 may include a display screen 180A and a display screen 180B, where one display screen corresponds to two or more lenses. The lenses corresponding to display screen 180A may be, for example, two lenses stacked one above the other.
The adjusting device 300 may be an electronic device having an image processing function, such as a notebook computer or a desktop computer; fig. 5 illustrates a desktop computer as an example. The adjusting device 300 may send image data and the anti-distortion and anti-dispersion model to the VR head-mounted display device 100. The VR head-mounted display device 100 may process the image data according to the received anti-distortion and anti-dispersion model, and then display an image on the display screen according to the processed image data.
In the embodiment of the present application, the VR head-mounted display device 100 may be an electronic device, and the electronic device 10 according to the embodiment of the present application is described below. Referring to fig. 6, a schematic structural diagram of an electronic device 10 according to an embodiment of the present application is provided.
As shown in fig. 6, the electronic device 10 may include a processor 110, a memory 120, a sensor module 130, an audio module 140, keys 150, an input-output interface 160, a communication module 170, a display 180, a battery 190, and the like. Wherein the sensor module 130 may include a sound detector 132, a proximity light sensor 131, and the like. The sensor module 130 may also contain other sensors such as distance sensors, gyroscopic sensors, ambient light sensors, acceleration sensors, and the like.
It should be understood that the illustrated construction of the embodiments of the present application does not constitute a particular limitation of the electronic device 10. In other embodiments of the application, the electronic device 10 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
In some embodiments, processor 110 invokes the anti-distortion anti-dispersion model stored in memory 120 to process the image data before controlling display 180 (which may refer to display 180A and display 180B in FIG. 5) to display the image. Alternatively, the electronic device 10 implements display functions via a GPU, a display screen 180, or the like. A GPU is a microprocessor for image processing for performing mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
In some embodiments, a memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. Alternatively, the correction parameters may be stored in the memory of the processor 110. The correction parameters may be directly recalled from this memory when needed by the processor 110. In this way, repeated accesses may be avoided, reducing the latency of the processor 110, and thus improving the efficiency of the system.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 10 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 10 may support one or more video codecs. In this way, the electronic device 10 may play or record video in a variety of encoding formats, such as: MPEG (moving picture experts group) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 10 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
Wherein the controller may be a neural hub and a command center of the electronic device 10. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
In some embodiments, the processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, a Serial Peripheral Interface (SPI) interface, etc.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be separately coupled to the battery 190, the display 180, and the like through different I2C bus interfaces.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the communication module 170. For example: the processor 110 communicates with a bluetooth module in the communication module 170 through a UART interface to implement a bluetooth function.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display 180. The MIPI interface includes a Display Serial Interface (DSI) and the like. In some embodiments, processor 110 and display 180 communicate via a DSI interface to implement the display functionality of electronic device 10.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the display 180, the communication module 170, the sensor module 130, the audio module 140, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge the electronic device 10, or to transfer data between the electronic device 10 and a peripheral device. It can also be used to connect a headset and play audio through the headset, or to connect other electronic devices such as smartphones. The USB interface may be USB 3.0, compatible with high-speed DisplayPort (DP) signaling. In some embodiments, the electronic device 10 may receive high-speed audio and video data transmitted by other devices (e.g., smartphones, computers) through the USB interface.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative and not limiting to the structure of the electronic device 10. In other embodiments of the present application, the electronic device 10 may also employ different interfaces in the above embodiments, or a combination of interfaces.
In addition, the electronic device 10 may include wireless communication functionality. The communication module 170 may include a wireless communication module and a mobile communication module. The wireless communication function can be realized by an antenna, a mobile communication module, a modem processor, a baseband processor and the like.
The antenna is used for transmitting and receiving electromagnetic wave signals. Multiple antennas may be included in the electronic device 10, each of which may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied on the electronic device 10. The mobile communication module may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), etc. The mobile communication module can receive electromagnetic waves by the antenna, filter, amplify and the like the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module can amplify the signal modulated by the modulation and demodulation processor and convert the signal into electromagnetic waves to radiate through the antenna. In some embodiments, at least some of the functional modules of the mobile communication module may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speakers, etc.), or displays images or video through the display 180. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module or other functional module, independent of the processor 110.
The wireless communication module may provide solutions for wireless communication applied to the electronic device 10, including wireless local area network (WLAN) such as wireless fidelity (Wi-Fi) networks, Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module may be one or more devices integrating at least one communication processing module. The wireless communication module receives electromagnetic waves via the antenna, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into electromagnetic waves for radiation through the antenna.
In some embodiments, the antenna and mobile communication module of the electronic device 10 are coupled such that the electronic device 10 may communicate with a network and other devices through wireless communication techniques. For example, the electronic device 10 may receive image data to be displayed in a display screen transmitted by other devices (e.g., smart phone, computer) through wireless communication technology.
Memory 120 may be used to store computer-executable program code, which includes instructions. The processor 110 executes the instructions stored in the memory 120 to perform the various functional applications and data processing of the electronic device 10. The memory 120 may include a program storage area and a data storage area. The program storage area may store the operating system and an application program required for at least one function (such as a sound playing function or an image playing function), etc. The data storage area may store data created during use of the electronic device 10 (e.g., audio data), and so forth. In addition, the memory 120 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, universal flash storage (UFS), and the like. In an embodiment of the present application, the data of the anti-distortion and anti-dispersion model transmitted by the adjusting device 300 may be stored in the memory 120.
The electronic device 10 may implement audio functionality through an audio module 140, an application processor, and the like. Such as music playing, recording, etc. The audio module may also include a speaker, microphone, headphone interface, etc. The audio module 140 is used for converting digital audio information into an analog audio signal output and also used for converting an analog audio input into a digital audio signal. The audio module 140 may also be used to encode and decode audio signals. In some embodiments, the audio module 140 may be disposed in the processor 110, or some functional modules of the audio module 140 may be disposed in the processor 110.
A speaker, also known as a "horn," is used to convert an audio electrical signal into a sound signal. The electronic device 10 may play music or hands-free calls through the speaker. A microphone, also known as a "mic" or "mike," is used to convert a sound signal into an electrical signal. The electronic device 10 may be provided with at least one microphone. In other embodiments, the electronic device 10 may be provided with two microphones, which may perform a noise reduction function in addition to collecting sound signals. The earphone interface is used for connecting a wired earphone.
In some embodiments, the electronic device 10 may include one or more keys 150 that may control the electronic device 10 to provide a user with access to functions on the electronic device 10. The keys 150 may be in the form of buttons, switches, dials, and touch or near touch sensing devices (e.g., touch sensors). For example, the user may turn on the display 180 of the electronic device 10 by pressing a button. The keys 150 may include a power on key, a volume key, etc.
In some embodiments, electronic device 10 may include an input-output interface 160, and input-output interface 160 may connect other apparatus to electronic device 10 through suitable components. The components may include, for example, audio/video jacks, data connectors, and the like.
In some embodiments, the electronic device 10 may include a sound detector 132, which may detect and process voice signals for controlling the portable electronic device. For example, the electronic device 10 may use a microphone to convert sound into an electrical signal. The sound detector 132 may then process the electrical signal and identify the signal as a system command. The processor 110 may be configured to receive the voice signal from the microphone; after receiving the voice signal, the processor 110 may run the sound detector 132 to recognize the voice command.
In some embodiments, the electronic device 10 may implement eye tracking. In particular, an infrared device (e.g., an infrared emitter) and an image acquisition device (e.g., a camera) may be used to detect the gaze direction of the eyes.
The proximity light sensor may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 10 may detect a gesture operation at a particular location of the electronic device 10 using a proximity light sensor for purposes of associating the gesture operation with an operation command.
And a distance sensor for measuring the distance. The electronic device 10 may measure the distance by infrared or laser light. In some embodiments, the electronic device 10 may utilize distance sensor ranging to achieve quick focus.
The gyroscopic sensor may be used to determine a motion pose of the electronic device 10. In some embodiments, the angular velocity of electronic device 10 about three axes (i.e., the x, y, and z axes) may be determined by a gyroscopic sensor. The gyroscopic sensor may also be used to navigate, somatosensory a game scene.
The ambient light sensor is used for sensing ambient light brightness. The electronic device 10 may adaptively adjust the brightness of the display 180 based on the perceived ambient light level. The ambient light sensor may also be used to automatically adjust white balance when taking a photograph.
The acceleration sensor may detect the magnitude of the acceleration of the electronic device 10 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 10 is stationary. It can also be used to recognize the posture of the head-mounted electronic device, and is applied in pedometers and the like.
The display 180 is used to display images, videos, and the like. The display 180 includes a display panel. The display panel may employ a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
In the embodiment of the present application, the adjusting device 300 may be an electronic device, and the electronic device 20 according to the embodiment of the present application is described below. Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device 20 according to an embodiment of the application. For example, the electronic device 20 may be a desktop computer, a notebook computer, or the like. As shown in fig. 7, the electronic device 20 may include a processor 102, a memory 103, a wireless communication processing module 104, a power switch 105, an input module 106, an output module 107, and a USB interface 108. These components may be connected by a bus. Wherein:
the processor 102 may be used to read and execute computer-readable instructions. In a specific implementation, the processor 102 may mainly include a graphics processing unit (GPU), a controller, an arithmetic unit, and registers. The controller is mainly responsible for instruction decoding and for sending out control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, logic operations, and the like, and may also perform address operations and conversions. The registers are mainly responsible for temporarily storing register operands, intermediate operation results, and the like during instruction execution. In a specific implementation, the hardware architecture of the processor 102 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like. The different processing units may be separate devices or may be integrated in one or more processors.
In some embodiments, the graphics processing unit may be used to adjust the image data according to the fitted piecewise functions. The memory 103 is coupled to the processor 102 for storing various software programs and/or sets of instructions. In particular implementations, the memory 103 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 103 may store an operating system, for example an embedded operating system such as Windows or Android. The memory 103 may also store communication programs that may be used to communicate with the VR head-mounted display device 100 or additional devices.
The wireless communication processing module 104 may provide solutions for wireless communication applied to the electronic device 20, including wireless local area networks (WLANs) (e.g., Wi-Fi networks), Bluetooth (BT), BLE radio, global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and the like. The wireless communication processing module 104 may be one or more devices that integrate at least one communication processing module. The electronic device 20 may establish a wireless communication connection with other devices through the wireless communication processing module 104, and communicate with other devices through one or more wireless communication technologies such as Bluetooth or WLAN. In some embodiments, the electronic device 20 may send image data to the VR head-mounted display device 100 via the wireless communication processing module 104. In some embodiments, the electronic device 20 may send data of the calculated anti-distortion and anti-dispersion model to the electronic device 10 via the wireless communication processing module 104.
The wireless communication processing module 104 may also include a cellular mobile communication processing module (not shown). The cellular mobile communications processing module may communicate with other devices (e.g., servers) via cellular mobile communications technology.
The power switch 105 may be used to control the power supplied by the power source to the electronic device 20.
The input module 106 may be configured to receive instructions entered by a user, and the input module 106 may include, for example, one or more of a mouse, a keyboard, a touchpad, a touch screen, a microphone, and the like.
The output module 107 may be used to output information. For example, the electronic device 20 may include one or more display screens, which may be used to display images, videos, and the like. The display screen includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In addition, the output module 107 may also include one or more speakers, sound boxes, and the like.
The USB interface 108 is an interface conforming to the USB standard, and may specifically be a Mini-USB interface, a Micro-USB interface, a USB Type-C interface, or the like. The USB interface 108 may be used to connect a charger to charge the electronic device 20, or may be used to transfer data between the electronic device 20 and a peripheral device. For example, the electronic device 20 may receive a captured original image transmitted by the camera 200 through the USB interface 108. In some embodiments, the interface may also be used to connect other electronic devices, such as the VR head-mounted display device 100. The electronic device 20 may send the calculated correction parameters to the VR head-mounted display device 100 via the USB interface 108.
It will be appreciated that the configuration illustrated in fig. 7 does not constitute a particular limitation of the electronic device 20. In other embodiments of the application, electronic device 20 may include more or fewer components than shown, or certain components may be combined, or certain components may be separated, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The following describes the anti-distortion and anti-dispersion method according to the embodiments of the present application in detail, based on the processing system and the electronic device described above and with reference to the other drawings. In the following, the VR head-mounted display device 100 is taken as an example of the VR device, and the adjusting device 300 is taken as an example of the adjusting device. The VR head-mounted display device 100 includes a lens 182A, a lens 182B, a display screen 180A, and a display screen 180B. The lens 182A corresponds to the display screen 180A, and the lens 182B corresponds to the display screen 180B. The following description takes as an example the manner of determining the anti-distortion and anti-dispersion model of the display screen 180A corresponding to the lens 182A. It should be noted that the anti-distortion and anti-dispersion model of the display screen 180B corresponding to the lens 182B may be determined in a similar manner, and will not be described again later.
Referring to fig. 8, a flowchart of a method for determining an inverse distortion inverse dispersion model is provided in an embodiment of the present application. The method at least comprises the following steps:
s101, the adjusting apparatus 300 calculates a reference anti-distortion anti-dispersion model of the lens 182A.
Since the lens shape is centrally symmetric, the distortion is mainly radial and its magnitude is related to the distance to the center of the lens. The anti-distortion and anti-dispersion model can therefore be expressed as formula 1-1:

P'(R, G, B) = f(P(R, G, B), r; k0, k1, k2, k3, k4, k5, k6, k7)    (formula 1-1)

where P(R, G, B) is the position of the three primary-color components of one pixel point in the original image, and P'(R, G, B) is the position of those three components after the anti-distortion and anti-dispersion processing; r is the distance between the light refracted by the lens of the VR head-mounted display device 100 and the center of the lens (also referred to as the distance of the refracted ray from the center of the lens); k0, k1, k2, k3 are distortion correction coefficients, and k4, k5, k6, k7 are dispersion correction coefficients.
The distortion correction coefficients and the dispersion correction coefficients are lens correction coefficients supplied by the lens manufacturer. Because each produced lens has fixed optical coefficients, the lens manufacturer can perform spectral analysis when producing the lens and perform polynomial fitting on the optical characteristics of the lens to obtain the lens correction coefficients. In some embodiments, the adjusting device 300 may receive the user-entered lens correction coefficients provided by the lens manufacturer and substitute them into the anti-distortion and anti-dispersion model to obtain the reference anti-distortion and anti-dispersion model.
Illustratively, a possible anti-distortion and anti-dispersion model (f) is presented below. The anti-distortion and anti-dispersion model may include calculation formulas for the scale factors corresponding to the three primary colors, and a relational expression between the display coordinates in the original image and the processed display coordinates.
The calculation modes of the scale factors corresponding to the three primary colors can be shown as formula 1-2, formula 1-3 and formula 1-4.
scale_g = k0 + k1*r^2 + k2*r^4 + k3*r^6    (formula 1-2)

scale_r = scale_g * (1 + k4 + k5*r^2)    (formula 1-3)

scale_b = scale_g * (1 + k6 + k7*r^2)    (formula 1-4)

where scale_g is the scale factor on the green (G) component, scale_r is the scale factor on the red (R) component, scale_b is the scale factor on the blue (B) component, and r is the distance of the refracted ray from the center of the lens.
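As a concrete illustration, the scale-factor computation of formulas 1-2 to 1-4 can be sketched as follows. The function name and the coefficient values are illustrative only; real coefficients are the lens correction coefficients supplied by the lens manufacturer.

```python
# Sketch of formulas 1-2 to 1-4: per-channel scale factors for a given
# distance r of the refracted ray from the lens center.

def scale_factors(r, k):
    """Return (scale_g, scale_r, scale_b) for a refracted-ray distance r.

    k is the coefficient list [k0, ..., k7]."""
    k0, k1, k2, k3, k4, k5, k6, k7 = k
    scale_g = k0 + k1 * r**2 + k2 * r**4 + k3 * r**6   # formula 1-2
    scale_r = scale_g * (1 + k4 + k5 * r**2)           # formula 1-3
    scale_b = scale_g * (1 + k6 + k7 * r**2)           # formula 1-4
    return scale_g, scale_r, scale_b
```

A convenient sanity check: with k0 = 1 and all other coefficients zero, all three scale factors are 1 and the image is unchanged.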
The relationship between the display coordinates after the anti-distortion and anti-dispersion model processing and the original display coordinates can be shown as formulas 1-5:

(r_x0, r_y0) = scale_r * (x, y)
(g_x0, g_y0) = scale_g * (x, y)    (formulas 1-5)
(b_x0, b_y0) = scale_b * (x, y)

where (x, y) is the original display coordinate of a pixel point in the original image, and the origin of the two-dimensional coordinate system in which the coordinate is located is on the axis of the lens; (r_x0, r_y0), (g_x0, g_y0) and (b_x0, b_y0) are the display coordinates obtained after the anti-distortion and anti-dispersion processing of the R, G, B color components of that pixel point.
According to the principle of optical path reversibility, as long as the R, G, B color component values of the original pixel point are moved to the positions obtained after the anti-distortion and anti-dispersion processing, the three rays will converge after passing through the lens to produce a normal image.
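A minimal sketch of this per-channel repositioning, under the assumption (consistent with the scale-factor definitions and formulas 1-5) that each color component's processed coordinate is the original (x, y) multiplied by that channel's scale factor; since the coordinate origin lies on the lens axis, pure scaling suffices:

```python
# Hedged sketch of applying the anti-distortion anti-dispersion mapping to
# one pixel: each of the R, G, B components of the pixel at (x, y) is moved
# to a channel-specific scaled position.

def antidistort_pixel(x, y, scale_g, scale_r, scale_b):
    """Return the positions the R, G, B components of the pixel at (x, y)
    must be moved to so the lens refracts them back into one point."""
    return {
        "R": (scale_r * x, scale_r * y),   # (r_x0, r_y0)
        "G": (scale_g * x, scale_g * y),   # (g_x0, g_y0)
        "B": (scale_b * x, scale_b * y),   # (b_x0, b_y0)
    }
```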
Referring to fig. 9, fig. 9 is a schematic diagram of the xy coordinates and the RGB coordinates of a pixel according to an embodiment of the present application. Here o is the center of the lens, the dashed line is the axis of the lens, and r is the distance of the refracted ray from the center of the lens. The coordinate of one pixel point in the original display image is (x, y); after refraction by the lens, the coordinates of that pixel point in the RGB coordinate system are (r_x, r_y), (g_x, g_y) and (b_x, b_y). The left (0, 0) is the origin of the two-dimensional coordinate system (xoy coordinate system) in which the pixel point is located, and the right (0, 0) is the origin of the RGB coordinate system in which the refracted pixel point is located. The two origins lie on the same straight line.
Referring to fig. 10, fig. 10 is a schematic diagram showing a comparison between an image before and after the anti-distortion and anti-dispersion processing according to an embodiment of the present application. In fig. 10, the left image is the original display image, and the right image is the display image after the anti-distortion and anti-dispersion processing. The coordinate of one pixel point in the original display image is (x, y), and the coordinates after the anti-distortion and anti-dispersion processing corresponding to (x, y) are (r_x0, r_y0), (g_x0, g_y0) and (b_x0, b_y0). Optionally, the adjusting device 300 may generate the scale factor curve of scale_g according to formula 1-2. Referring to fig. 11, fig. 11 is a schematic diagram of a scale factor curve according to an embodiment of the present application. In fig. 11, the abscissa indicates the distance of the refracted ray from the center of the lens, and the ordinate indicates the scale factor. The scale factor curve is used to represent the correspondence between the distance of the refracted ray from the center of the lens and the scale factor. The scale factors are used by the VR head-mounted display device 100 to process the image data of the original image (see formulas 1-5). It can be seen that the scale factor varies continuously and smoothly with the distance of the refracted ray from the center of the lens (i.e., r).
Thereafter, the scale factor curves of scale_r and scale_b can be generated from the relationship between scale_g and scale_r, scale_b (see formulas 1-3 and 1-4 above).

The reference anti-distortion and anti-dispersion model comprises the calculation methods of scale_g, scale_r and scale_b (see formulas 1-2, 1-3 and 1-4) and the correspondence between the coordinates before and after processing (see formulas 1-5).
S102, the adjustment device 300 transmits the original image data and the reference anti-distortion and anti-dispersion model to the VR head-mounted display device 100.
The original image represented by the original image data may include a plurality of squares and circles. The square and circular shapes make it easy for the user to observe the display effect of the image. Optionally, the radii of the circles in the original image are multiples of the segment distance selected for the piecewise fitting, and are used to calibrate the image display ranges corresponding to different lens areas. The segment distance is determined by the preset number of segments and the maximum value of the distance of the refracted ray from the center of the lens. Optionally, the segment distance is the quotient of the maximum value of the distance of the refracted ray from the center of the lens and the preset number of segments. The maximum value of the distance of the refracted ray from the center of the lens is the radius of the lens.
Illustratively, in the segment fitting of the subsequent scale factor curve, the preset number of segments of the divided scale factor curve may be 5 segments, 8 segments, 10 segments, and so on. It will be appreciated that the segment of the curve that is divided corresponds to an area of the lens. In order to better observe the display effect of each segmented region, circles with different radiuses are used for calibrating the image display ranges corresponding to different lens regions. In addition, the more the number of segments is divided, the closer each segment curve is fitted to the actual curve. In practical application, the preset number of segments can be set as required.
For example, the radius of the lens is 2 cm, the preset number of segments of the scale factor curve is 5 segments, the segmentation distance may be 2/5=0.4 cm, and the radius of the circle in the original image may be 0.4 cm, 0.8 cm, 1.2 cm, 1.6 cm, and 2.0 cm. The original image included 5 concentric circles with radii of 0.4 cm, 0.8 cm, 1.2 cm, 1.6 cm, and 2.0 cm. Referring to fig. 12, a schematic diagram of a correspondence relationship between a scale factor curve and a lens area is provided in an embodiment of the present application. Where 0 is the center of the lens and curve 4 corresponds to region 4 of the lens.
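The segment-distance rule above can be sketched as follows; the function name is illustrative:

```python
# Sketch of the calibration-circle construction: segment distance =
# lens radius / preset number of segments, and the concentric circles in
# the original image sit at successive multiples of it.

def calibration_radii(lens_radius_cm, num_segments):
    """Radii of the concentric calibration circles in the original image."""
    seg = lens_radius_cm / num_segments
    return [round(seg * i, 6) for i in range(1, num_segments + 1)]

# With a 2 cm lens and 5 segments, as in the example above:
# calibration_radii(2.0, 5) -> [0.4, 0.8, 1.2, 1.6, 2.0]
```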
Exemplary, referring to fig. 13, a schematic diagram of an original image according to an embodiment of the present application is provided. In the original image, squares and circles are included. The original image included 5 concentric circles with radii of 0.4 cm, 0.8 cm, 1.2 cm, 1.6 cm, and 2.0 cm. The 5 concentric circles can demarcate the image display ranges corresponding to the different lens areas. For example, a circle having a radius of 1.2 cm and a circle having a radius of 1.6 cm form a circular ring region which is an image display region corresponding to the region 4 of the lens. In addition, in the original image, the center of the concentric circle coincides with the origin of the two-dimensional coordinate system (xoy coordinate system) in which each pixel point in the original image is located.
Alternatively, the image areas corresponding to different lens areas may be displayed in different colors.
S103, the VR head-mounted display device 100 processes the original image data according to the reference anti-distortion anti-dispersion model, and displays an image on the display screen 180A according to the processed image data.
The manner in which VR head mounted display device 100 processes raw image data according to the anti-distortion inverse dispersion model and displays an image according to the processed image data will be described in detail later, and will not be described here.
S104, the adjusting device 300 performs piecewise fitting on the scale factor curve of the lens 182A to obtain linear functions corresponding to different display areas in the original image.
It will be appreciated that each segment of the divided curve corresponds to one area of the lens 182A and one display area in the original image, and after fitting, each segment of the curve corresponds to one linear function; therefore, each fitted linear function corresponds to one area of the lens 182A and one display area in the original image.
In one possible implementation, the implementation of the piecewise fitting of the scale factor curve of lens 182A by adjustment device 300 may refer to the following steps:
s11, the adjusting device 300 uniformly divides the scale factor curve into preset segments according to the distance between the refracted ray and the center of the lens.
Alternatively, the scale_g scale factor curve may be uniformly divided into 5 segments, 8 segments, 10 segments, or the like for fitting. It should be noted that a segment of the scale factor curve corresponds to one region of the lens, one display area in the original image, and one display area in the image displayed by the VR head-mounted display device 100.
And S12, the adjusting device 300 fits each segmented curve by adopting a linear function.
By way of example, the linear function may be as shown in equations 1-6.
scale_g = m*r + n    (formula 1-6)

where scale_g is the scale factor, and m and n are fitting coefficients.
Taking the example of a uniform division of the curve into 5 segments, a linear fit will yield 5 linear functions. Referring to fig. 14, a schematic diagram of a fitting result of a linear fitting according to an embodiment of the present application is shown. Taking the example of a uniform division of the curve into 10 segments, a linear fit will yield 10 linear functions. Referring to fig. 15, a schematic diagram of a fitting result of yet another linear fitting provided by an embodiment of the present application is shown.
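The piecewise fitting of steps S11 and S12 can be sketched as follows. The least-squares line fit is an assumption (the patent does not name the fitting algorithm), and the sample curve uses the shape of formula 1-2 with made-up coefficients:

```python
# Sketch of steps S11-S12: uniformly split the scale-factor curve over r
# and fit a line scale_g = m*r + n in each segment.
import numpy as np

def piecewise_linear_fit(r, scale, num_segments):
    """Return a list of (m, n, r_lo, r_hi), one tuple per segment."""
    edges = np.linspace(r.min(), r.max(), num_segments + 1)
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r <= hi)
        m, n = np.polyfit(r[mask], scale[mask], 1)  # degree-1 least squares
        fits.append((m, n, lo, hi))
    return fits

# Hypothetical curve with the shape of formula 1-2 (made-up coefficients):
r = np.linspace(0.0, 2.0, 200)
scale = 1.0 + 0.02 * r**2 + 0.005 * r**4 + 0.001 * r**6
fits = piecewise_linear_fit(r, scale, 5)   # 5 linear functions, as in fig. 14
```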
S105, the adjusting device 300 adjusts fitting coefficients of the linear function corresponding to the display area with poor display effect.
The image that VR head mounted display device 100 displays on display screen 180A corresponds to the original image. If some display areas have poor display effects in the image displayed by the VR headset display device 100, it indicates that the anti-distortion and anti-dispersion models corresponding to the display areas are not suitable, and the adjusting device 300 may adjust the fitting coefficients of the linear functions corresponding to the display areas with poor display effects, so as to further improve the display effects of the VR headset display device 100.
In one possible implementation, the poorly displayed display area is typically located at the edges of the original image. The display area with the poor display effect may be a preset display area. Optionally, in some embodiments, the adjustment device 300 may also adjust a linear function corresponding to each display area in the original image.
In one possible implementation, the user may wear the VR headset display device 100, directly observe an image displayed by the VR headset display device 100, and determine a display area with a poor display effect.
Referring to fig. 16, a schematic diagram of a user wearing the VR head-mounted display device 100 is provided according to an embodiment of the application. The VR head-mounted display device 100 includes two display screens (i.e., the display screen 180A and the display screen 180B), which correspond to two lenses (i.e., the lens 182A and the lens 182B), respectively. Specifically, the display screen 180A corresponds to the lens 182A, and the display screen 180B corresponds to the lens 182B. The user can view the image displayed on the display screen 180A through the lens 182A to determine the display areas with a poor display effect.
In another possible implementation, the image displayed by the VR head-mounted display device 100 may be captured by a photographing device. The display image captured by the photographing device can represent the image observed by the human eye when viewing the VR head-mounted display device 100. The photographing device transmits the captured display image to the adjusting device 300. Alternatively, the photographing device may be a part of the adjusting device 300, or may be a device connected to the adjusting device 300 through a wired or wireless connection. The adjusting device 300 compares the display image with the original image, calculates the similarity between a display area in the display image and the corresponding display area in the original image, and determines a display area whose similarity is lower than a preset value as a display area with a poor display effect. The position of a display area in the display image is the same as the position of the corresponding display area in the original image.
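A hedged sketch of this camera-based check follows. The patent does not specify the similarity metric, so a simple mean-absolute-difference score per region (pixel values assumed normalized to [0, 1]) stands in for it here:

```python
# Hedged sketch: flag display areas whose similarity to the original image
# falls below a preset value. The metric is an assumption for illustration.
import numpy as np

def poor_regions(display_img, original_img, region_masks, threshold=0.9):
    """Return indices of display areas whose similarity is below threshold.

    region_masks is a list of boolean arrays, one per display area."""
    poor = []
    for idx, mask in enumerate(region_masks):
        diff = np.abs(display_img[mask] - original_img[mask]).mean()
        similarity = 1.0 - diff   # 1.0 means the regions match exactly
        if similarity < threshold:
            poor.append(idx)
    return poor
```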
Specifically, the fitting coefficients of the linear function corresponding to a display area with a poor display effect are adjusted to optimize the image display effect of that area. For example, the display areas with poor display effects may be the annular area formed by the circle with a radius of 1.2 cm and the circle with a radius of 1.6 cm (the display image corresponding to the area 4 of fig. 12 described above), abbreviated as the third display area for convenience of description, and the annular area formed by the circle with a radius of 1.6 cm and the circle with a radius of 2.0 cm, abbreviated as the fourth display area.
Taking the third display area as an example, the implementation process of adjusting the fitting coefficients is introduced below. The piecewise function corresponding to the third display area is scale_g = m4*r + n4, 1.2 < r ≤ 1.6.
The linear function includes two fitting coefficients, i.e., m4 and n4. Since the linear function only includes two variable parameters, the adjustment is convenient. Alternatively, the adjusting device 300 may increase or decrease the two fitting coefficients by one or more unit values. For example, the unit value may be 0.01, 0.05, 0.1, etc., or the length of one pixel.
Referring to table 1, a set of preset adjustment parameters is exemplarily shown. Table 1 includes a plurality of possible adjustment parameters; specifically, the rows in table 1 represent adjustment values of m4, and the columns in table 1 represent adjustment values of n4. The position of each cell indicates one way to adjust the two fitting coefficients of the linear function. Illustratively, the adjustment indicated by (-0.04, -0.04) is: decrease m4 by 0.04 and decrease n4 by 0.04.
TABLE 1
The first adjustment of m4 and n4 may be to move up one step from the initial position, where the adjustment parameter (0, 0) is located, i.e., (0, -0.01). That is, the value of m4 is kept unchanged, and the value of n4 is decreased by 0.01. The modified piecewise function is scale_g = m4*r + (n4 - 0.01), 1.2 < r ≤ 1.6.
Next, the adjusted anti-distortion inverse dispersion model is transmitted to the VR head mounted display device 100.
The adjusted anti-distortion and anti-dispersion model is shown in the following formulas:

scale_g = m4*r + (n4 - 0.01), 1.2 < r ≤ 1.6

scale_r = scale_g * (1 + k4 + k5*r^2)    (formula 1-3)

scale_b = scale_g * (1 + k6 + k7*r^2)    (formula 1-4)
Thereafter, the VR head mounted display apparatus 100 processes the original image data according to the adjusted anti-distortion anti-dispersion model and displays an image on the display screen 180A according to the processed image data.
In one possible implementation, the user re-observes the image displayed by the VR head mounted display device 100. If the display effect of the third display area meets the optimization requirement, stopping adjusting the fitting coefficient of the linear function corresponding to the third display area; if the display effect of the third display area does not reach the optimization requirement, continuously adjusting the fitting coefficient of the corresponding linear function of the third display area according to the above-described mode until the display effect of the third display area reaches the optimization requirement.
Optionally, whether the image displayed by the VR headset 100 meets the optimization requirement may be determined by the visual perception of the user. The original image represented by the original image data contains a plurality of squares and circles, so that a user can observe the display effect of the image conveniently.
In another possible implementation, the image displayed by VR head mounted display device 100 may be re-captured by the capturing means. The photographing means transmits the display image photographed again to the adjustment apparatus 300. The adjusting device 300 compares the display area corresponding to the third display area in the display image shot again with the third display area in the original image, and calculates the similarity of the two display areas. If the similarity is not lower than the preset value (can be regarded as meeting the optimization requirement), stopping adjusting the fitting coefficient of the linear function corresponding to the third display area; if the similarity is lower than the preset value (which can be regarded as not meeting the optimization requirement), continuously adjusting the fitting coefficient of the linear function corresponding to the third display area according to the above-described mode until the similarity is not lower than the preset value. And the position of the display area corresponding to the third display area in the display image is the same as the position of the third display area in the original image.
Optionally, continuing to adjust the fitting coefficients of the linear function corresponding to the third display area may mean continuing to adjust the values of m4 and n4. For example, referring to table 1, based on the first adjustment, the second adjustment may move one more step to the left, i.e., (-0.01, -0.01): the value of m4 is decreased by 0.01, and the value of n4 is decreased by 0.01. The piecewise function after this modification is scale_g = (m4 - 0.01)*r + (n4 - 0.01), 1.2 < r ≤ 1.6. The adjustment parameters of the fitting coefficients may be selected from the set of preset adjustment parameters shown in table 1, and the adjusting device 300 selects the adjustment parameters sequentially (e.g., from small to large) or randomly. Thereafter, the adjusting device 300 transmits the re-adjusted anti-distortion and anti-dispersion model to the VR head-mounted display device 100. Next, the VR head-mounted display device 100 processes the original image data according to the re-adjusted anti-distortion and anti-dispersion model, and displays an image on the display screen 180A according to the processed image data. The adjusting device 300 may continuously adjust the fitting coefficients of the linear function corresponding to the third display area according to the image display effect fed back by the VR head-mounted display device 100, until the display effect reaches the optimization requirement or the similarity is not lower than the preset value.
In other embodiments, the adjusting device 300 may traverse the adjustment parameters shown in table 1, where each adjustment value may correspond to one generated anti-distortion and anti-dispersion model. The adjusting device 300 sends each anti-distortion and anti-dispersion model to the VR head-mounted display device 100, and the VR head-mounted display device 100 may feed back different display images according to the different anti-distortion and anti-dispersion models. The adjusting device 300 may determine, from the display images, the display image with the best display effect (or the display image with the highest similarity), and determine the fitting coefficients of the linear function corresponding to the third display area according to the anti-distortion and anti-dispersion model corresponding to that display image. Alternatively, the similarity values may be recorded in table 1.
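The traversal of a preset adjustment grid like table 1 can be sketched as below; `score_fn` is a stand-in for the real feedback loop (send the model to the VR device, capture the displayed image, compute similarity), which this sketch cannot reproduce:

```python
# Hedged sketch: each (dm, dn) candidate perturbs the segment's fitting
# coefficients, the resulting image is scored, and the best pair is kept.

def best_adjustment(m, n, candidates, score_fn):
    """Return the adjusted (m, n) pair with the highest score."""
    best, best_score = (m, n), score_fn(m, n)
    for dm, dn in candidates:
        cand = (m + dm, n + dn)
        s = score_fn(*cand)
        if s > best_score:
            best, best_score = cand, s
    return best

# A 9x9 grid of adjustments in steps of 0.01, analogous to table 1:
grid = [(dm / 100, dn / 100) for dm in range(-4, 5) for dn in range(-4, 5)]
```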
In addition, after the fitting coefficients of the linear function corresponding to the third display area are determined, the adjusting device 300 adjusts the fitting coefficients of the linear function corresponding to the fourth display area. The adjustment method may refer to the method of adjusting the fitting coefficients of the linear function corresponding to the third display area, which is not described again here.
After the fitting coefficients of the linear function corresponding to the fourth display area are determined, the adjusting device 300 obtains the final anti-distortion and anti-dispersion model. Illustratively, the final anti-distortion and anti-dispersion model may be as follows:

scale_r = scale_g * (1 + k4 + k5*r^2)    (formula 1-3)

scale_b = scale_g * (1 + k6 + k7*r^2)    (formula 1-4)

where scale_g is given by the piecewise linear functions with the adjusted fitting coefficients.
S106, the adjusting device 300 sends the final anti-distortion and anti-dispersion model to the VR head-mounted display device 100, so that the VR head-mounted display device 100 adjusts the display of the display screen 180A according to the final anti-distortion and anti-dispersion model.
It should be noted that, referring to the above procedure, the adjusting device 300 may also obtain the anti-distortion and anti-dispersion model corresponding to the display screen 180B. The adjusting device 300 sends the anti-distortion and anti-dispersion model corresponding to the display screen 180B to the VR head-mounted display device 100, so that the VR head-mounted display device 100 adjusts the display of the display screen 180B according to the anti-distortion and anti-dispersion model corresponding to the display screen 180B.
In some embodiments, the number of display screens in VR head mounted display device 100 is one, and the display screens are divided into two display areas, one corresponding to each lens. Illustratively, the lens 182A corresponds to a first display area and the lens 182B corresponds to a second display area. Then in step S103, the VR head mounted display device 100 processes the original image data according to the reference anti-distortion and anti-dispersion model and displays an image on the first display area of the display screen according to the processed image data. In the subsequent adjustment process, the VR head-mounted display device 100 displays an image on the first display area of the display screen. The subsequent flow may refer to step S104-step S106 in the above, and the adjusting apparatus 300 may determine the corresponding anti-distortion anti-dispersion model of the first display area. Likewise, the tuning device 300 may determine a corresponding anti-distortion inverse dispersion model of the second display area according to a similar procedure.
The adjusting device 300 sends the anti-distortion and anti-dispersion model corresponding to the first display area and the anti-distortion and anti-dispersion model corresponding to the second display area to the VR head-mounted display device 100, so that the VR head-mounted display device 100 adjusts the display of the first display area according to the anti-distortion and anti-dispersion model corresponding to the first display area, and adjusts the display of the second display area according to the anti-distortion and anti-dispersion model corresponding to the second display area.
In one possible application scenario, the debugger of the VR head-mounted display device 100 may adjust the anti-distortion inverse dispersion model of the VR head-mounted display device 100 by using the adjusting device 300 in the manner described above, so that the adjusted anti-distortion inverse dispersion model is more fit to the optical characteristics of each region in the lens of the VR head-mounted display device 100. In the process of adjusting the anti-distortion anti-dispersion model, adjusting the anti-distortion anti-dispersion model corresponding to one region does not affect the display effect of other regions. Finally, a final anti-distortion inverse dispersion model may be determined for use by the VR head mounted display device 100 in processing the image data. The VR head mounted display device 100 stores the final anti-distortion inverse dispersion model. In the subsequent use process of the VR head-mounted display device 100, the VR head-mounted display device 100 may process the image data according to the final anti-distortion and anti-dispersion model, and display an image on a display screen according to the processed image data.
The above embodiments describe a method of determining the corresponding anti-distortion anti-dispersion model for display screen 180A. After VR head-mounted display device 100 receives the anti-distortion anti-dispersion model from the adjusting device 300, VR head-mounted display device 100 adjusts the display of display screen 180A according to that model. A specific description of the manner in which VR head-mounted display device 100 displays an image is provided below. In the process of determining the anti-distortion anti-dispersion model, the VR head-mounted display device 100 processes the original image data according to the reference anti-distortion anti-dispersion model and displays an image on the display screen 180A according to the processed image data; likewise, in the adjustment process, the VR head-mounted display device 100 processes the original image data according to the anti-distortion anti-dispersion model under adjustment and displays the image on the display screen 180A according to the processed image data. Both processes may refer to the following description.
Referring to fig. 17, a flowchart of a method for VR device to display an image is provided.
S201, the VR head-mounted display device 100 receives the anti-distortion anti-dispersion model corresponding to the display screen 180A sent by the adjustment device 300.
After VR head-mounted display device 100 receives the inverse distortion inverse dispersion model corresponding to display screen 180A, VR head-mounted display device 100 stores the inverse distortion inverse dispersion model corresponding to display screen 180A.
S202, the VR head-mounted display device 100 receives image data of an image to be displayed on the display screen 180A transmitted by another device.
Specifically, the image data of the image includes coordinates and color values of each pixel point on the image.
In some embodiments, the other device may be a smart phone, a computer, or a server. The other device may be connected to the VR head-mounted display device 100 by wire or wirelessly. In the process of determining the anti-distortion anti-dispersion model, the image data received by the VR head-mounted display device 100 may be sent by the adjusting device 300.
In some embodiments, the image data of the image to be displayed by the display 180A is the same as the image data of the image to be displayed by the display 180B.
In some embodiments, the image data of the image to be displayed by display 180A is not the same as the image data of the image to be displayed by display 180B. Illustratively, the display 180A is to display a first image and the display 180B is to display a second image. The VR head-mounted display device 100 processes the display image of the display screen 180A according to the anti-distortion and anti-dispersion model corresponding to the display screen 180A and the image data of the first image. The VR head-mounted display device 100 processes the display image of the display screen 180B according to the anti-distortion and anti-dispersion model corresponding to the display screen 180B and the image data of the second image.
It should be noted that the display screen resolution of the VR head-mounted display device 100 is generally larger than the resolution of the original image represented by the image data. For example, the original image may be 1600×1200 pixels while the display screen is 1700×1300 pixels; the display screen then displays the image within a 1600×1200 region, and the remaining area may be filled with pixels displayed as black (color value (0, 0, 0)).
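As a rough illustration of that black-fill behavior (the function name, the centering choice, and the plain nested-list image representation are assumptions for this sketch, not details from the patent):

```python
def place_on_display(image, disp_w=1700, disp_h=1300):
    """Center an image on a larger display buffer, filling the rest black."""
    img_h = len(image)          # e.g. 1200 rows
    img_w = len(image[0])       # e.g. 1600 columns
    black = (0, 0, 0)           # color value of the filled pixels
    buf = [[black] * disp_w for _ in range(disp_h)]
    y0 = (disp_h - img_h) // 2  # top-left corner of the centered image region
    x0 = (disp_w - img_w) // 2
    for y in range(img_h):
        for x in range(img_w):
            buf[y0 + y][x0 + x] = image[y][x]
    return buf
```

Whether the image region is centered or anchored elsewhere on the screen is not specified here; centering is simply the most common convention.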
S203, the VR head-mounted display device 100 determines coordinates and color values of each pixel point in the display screen 180A after adjustment according to the coordinates, color values, and the anti-distortion inverse dispersion model of each pixel point in the image before adjustment.
Illustratively, the upper-right corner pixel in the original image represented by the image data is taken as an example. The VR head-mounted display device 100 determines the coordinates and color value of the adjusted pixel point in the display screen 180A according to the coordinates and color value of that pixel point before adjustment and the anti-distortion anti-dispersion model.
VR head mounted display device 100 may determine the location of a pixel in a manner that establishes a coordinate system on the display screen. It should be noted that, the positive direction of the x-axis of the rectangular coordinate system established on the display screen is the same as the positive direction of the x-axis of the original image in the adjustment apparatus 300, and the positive direction of the y-axis of the rectangular coordinate system established on the display screen is the same as the positive direction of the y-axis of the original image in the adjustment apparatus 300. In some embodiments, the manner in which the rectangular coordinate system is established on the display screen is the same as the manner in which the rectangular coordinate system is established on the original image in the adjustment device 300.
Referring to fig. 18, a schematic diagram of an original image according to an embodiment of the present application is provided. In fig. 18, a rectangular coordinate system is established with the center of the display screen as the origin O(0, 0), the length of one pixel as the unit length, the horizontal direction of the display screen as the x-axis, and the vertical direction as the y-axis. The coordinates of the pixel point at the upper right corner of the original image are (a, b), and the color value of that pixel point is (R, G, B).
The following describes the implementation of VR head-mounted display device 100 in determining the coordinates and color values of the adjusted pixel point in display screen 180A according to the coordinates, color value, and anti-distortion anti-dispersion model of the upper-right corner pixel point in the image before adjustment.
S21, VR head-mounted display device 100 calculates a distance l between the one pixel point and the origin of coordinates.
Specifically, the distance may be calculated as shown in formula 1-9.
S22, the VR head-mounted display device 100 substitutes the distance l into the anti-distortion anti-dispersion model to calculate the scale factors scale_g, scale_r, and scale_b.
The distance l is substituted for r, the distance between the refracted ray and the center of the lens.
Illustratively, when the distance l falls in the segment 1.6 < r ≤ 2, scale_g = m_5·r + n_5. After scale_g is determined, the values of scale_r and scale_b are determined according to formulas 1-3 and 1-4.
S23, the VR head-mounted display device 100 determines the adjusted coordinates according to the original coordinates (a, b) and the scale factors scale_g, scale_r, and scale_b.
Specifically, the adjusted coordinates are calculated according to formulas 1-5. By calculation, the coordinates of the three color components, (r_x0, r_y0), (g_x0, g_y0), and (b_x0, b_y0), can be obtained.
S24, the VR head-mounted display device 100 determines the color values corresponding to the coordinates of the three color components according to the color values of the original coordinates.
Here, (r_x0, r_y0) are the coordinates of the red component and display the red color value; the color value corresponding to (r_x0, r_y0) is (R, 0, 0). (g_x0, g_y0) are the coordinates of the green component and display the green color value; the color value corresponding to (g_x0, g_y0) is (0, G, 0). (b_x0, b_y0) are the coordinates of the blue component and display the blue color value; the color value corresponding to (b_x0, b_y0) is (0, 0, B).
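Steps S21 to S24 can be sketched as follows. The piecewise coefficients, the fixed per-channel ratios standing in for formulas 1-3 and 1-4, and the simple coordinate scaling standing in for formulas 1-5 are all assumptions for illustration; the patent's actual formulas define the real model.

```python
import math

def scale_g(r, segments):
    """Illustrative piecewise-linear green scale-factor model.
    segments is a list of (r_lo, r_hi, m, n) tuples, e.g. the patent's
    scale_g = m_5*r + n_5 on 1.6 < r <= 2 would be one entry."""
    for r_lo, r_hi, m, n in segments:
        if r_lo < r <= r_hi:
            return m * r + n
    return 1.0  # assumed fallback outside all segments

def transform_pixel(a, b, segments, kr=1.01, kb=0.99):
    """Map one source pixel (a, b) to the coordinates of its three
    color components on the display (kr, kb are assumed ratios)."""
    l = math.hypot(a, b)          # step S21: distance to the coordinate origin
    sg = scale_g(l, segments)     # step S22: substitute l for r in the model
    sr, sb = sg * kr, sg * kb     # stand-in for formulas 1-3 and 1-4
    # step S23: scale the original coordinates per component (formulas 1-5)
    red = (a * sr, b * sr)        # displays (R, 0, 0)
    green = (a * sg, b * sg)      # displays (0, G, 0)
    blue = (a * sb, b * sb)       # displays (0, 0, B)
    return red, green, blue
```

Step S24 then writes the R, G, and B channel values of the original pixel to the three returned coordinates, as described above.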
Referring to fig. 19, a schematic diagram of a display image on the display screen 180A according to an embodiment of the present application is shown. The coordinates (a, b) of the upper-right corner pixel point in the original image correspond in fig. 19 to the coordinates (r_x0, r_y0), (g_x0, g_y0), and (b_x0, b_y0) of the three color components. In addition, pixels in the display screen to which no image data corresponds are displayed in black.
Referring to the description above, the VR head-mounted display device 100 may also determine the processed coordinates and color values corresponding to other pixels in the image, which will not be described herein.
S204, VR head-mounted display device 100 displays an image on display screen 180A according to the coordinates and color values of each adjusted pixel point in display screen 180A.
In some embodiments, VR head-mounted display device 100 may determine the coordinates and color values of the other pixels in the image in the same manner as for the upper-right corner pixel, and then display the image according to the coordinates and color values of each pixel point.
It should be noted that, in a similar manner, VR head-mounted display device 100 may process the image data of display screen 180B according to the inverse distortion inverse dispersion model of display screen 180B transmitted by adjustment device 300. Then, an image is displayed on the display screen 180B based on the processed image data. The VR headset 100 may process the image data of the display 180A and the image data of the display 180B at the same time.
In other embodiments, the VR head-mounted display device 100 has one display screen that is divided into two display areas, one for each lens; illustratively, a first display area for lens 182A and a second display area for lens 182B. The VR head-mounted display device 100 may establish a rectangular coordinate system in each of the two display areas. Referring to fig. 20, a schematic diagram of the rectangular coordinate systems on the display screen according to an embodiment of the present application is shown. Illustratively, one rectangular coordinate system is established with the center of the first display area as the origin O1(0, 0), with the length of one pixel as the unit length, the horizontal direction of the display screen as the x-axis, and the vertical direction as the y-axis; a second rectangular coordinate system is established in the same way with the center of the second display area as the origin O2(0, 0). In a similar manner as described above, the VR head-mounted display device 100 may process the image data of the first display area according to the anti-distortion anti-dispersion model of the first display area and then display the image on the first display area according to the processed image data. In addition, the VR head-mounted display device 100 may process the image data of the second display area according to the anti-distortion anti-dispersion model of the second display area and then display the image on the second display area according to the processed image data.
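A minimal sketch of mapping an absolute screen pixel into the per-area coordinate systems of fig. 20 (the left/first and right/second split, the upward-positive y-axis, and integer centering are assumptions for illustration):

```python
def area_coords(px, py, screen_w, screen_h):
    """Map an absolute screen pixel (px right, py down from the top-left
    corner) to (area, x, y) in that area's coordinate system, whose
    origin is the center of the area."""
    half = screen_w // 2
    if px < half:
        area, cx = "first", half // 2          # center O1 of the left half
    else:
        area, cx = "second", half + half // 2  # center O2 of the right half
    # y grows upward in the established coordinate system, as in fig. 20
    return area, px - cx, screen_h // 2 - py
```

Each area's model is then applied to coordinates expressed in that area's own system, so adjusting one area leaves the other unaffected.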
In this way, the VR head-mounted display device 100 can process the image data to be displayed according to the anti-distortion and anti-dispersion model of the display screen determined by the adjusting device 300, so as to improve the visual effect of the display image.
Based on the embodiments described in the foregoing, the present application provides a method of anti-distortion and anti-dispersion. Referring to fig. 21, a flow chart of a method for anti-distortion and anti-dispersion according to an embodiment of the present application is shown. As shown in fig. 21, the method includes:
S301, the adjusting device sends image data corresponding to a first image and a first anti-distortion anti-dispersion model to the virtual reality VR device, where the first image includes a plurality of display areas.
In some embodiments, the first image includes a plurality of circles, the circles having a radius that is a multiple of the segmentation distance, the plurality of circles being used to indicate the locations of the plurality of display regions in the first image. The segmentation distance is determined by the adjusting device according to a preset segment number and a maximum value of a first distance, and the first distance is a distance between light rays refracted by a lens of the VR device and the center of the lens.
In some embodiments, the first image further includes a plurality of squares of the same area. This facilitates the user to observe the display effect of the image.
For example, the first image may refer to the original image shown in fig. 13. Illustratively, the first anti-distortion inverse dispersion model may refer to the reference anti-distortion inverse dispersion model in the corresponding embodiment of fig. 8, which may refer to equations 1-2, 1-3, 1-4, and 1-5.
S302, after the VR equipment receives image data corresponding to the first image and the first anti-distortion anti-dispersion model sent by the adjusting equipment, the VR equipment processes the image data according to the first anti-distortion anti-dispersion model, and displays a second image according to the processed image data.
For example, the implementation of step S302 may refer to the description of step S203 to step S204 in the embodiment corresponding to fig. 17.
S303, the adjusting device obtains a processing function corresponding to each display area in the plurality of display areas according to the first anti-distortion anti-dispersion model.
The embodiment of step S303 may refer to the description of step S104 in fig. 8.
In some embodiments, the adjusting device obtains a processing function corresponding to each display area of the plurality of display areas according to the first anti-distortion anti-dispersion model as follows: the adjusting device determines a segmentation distance according to a preset number of segments and the maximum value of a first distance, where the first distance is the distance between a light ray refracted by a lens of the VR device and the center of the lens; the adjusting device uniformly divides a scale factor curve in the first anti-distortion anti-dispersion model into the preset number of segments based on the segmentation distance, where the scale factor curve represents the correspondence between the first distance and a scale factor, and the scale factor is used by the VR device to process the image data; and the adjusting device performs a linear fit on each of the segmented curves to obtain the processing functions corresponding to the plurality of display areas in the first image. The first anti-distortion anti-dispersion model may refer to the reference anti-distortion anti-dispersion model in the embodiment corresponding to fig. 8, and the preset number of segments may refer to the preset number of segments introduced in fig. 8. A schematic diagram of the processing functions may refer to fig. 14, and the corresponding fitting result may refer to fig. 15.
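The segmentation-and-fitting step can be sketched as follows, assuming the scale factor curve is available as sampled (r, scale) arrays; `np.polyfit` performs the per-segment linear fit. The function name and sampling representation are illustrative assumptions.

```python
import numpy as np

def fit_segments(r, scale, n_segments):
    """Split the scale-factor curve into n equal-length segments of r
    (the segmentation distance = max(r) / n) and fit a line to each,
    returning (r_lo, r_hi, slope, intercept) tuples."""
    seg_len = r.max() / n_segments        # the segmentation distance
    funcs = []
    for i in range(n_segments):
        r_lo, r_hi = i * seg_len, (i + 1) * seg_len
        mask = (r >= r_lo) & (r <= r_hi)
        m, n = np.polyfit(r[mask], scale[mask], 1)  # linear fit m*r + n
        funcs.append((r_lo, r_hi, m, n))
    return funcs
```

Each returned (slope, intercept) pair plays the role of the (m_i, n_i) coefficients of one display area's processing function, which the adjusting device can then tweak independently.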
S304, the adjusting device adjusts coefficients of processing functions corresponding to a first display area, wherein the first display area is one of the plurality of display areas.
The embodiment of step S304 may refer to the description of step S105 in fig. 8. The first display area may refer to the display area with the poor display effect described in step S105. In this way, the adjusting device 300 may adjust the processing functions corresponding to the display areas with poor display effects, so as to improve the display effect of the VR device.
In some embodiments, the similarity between the first display area and a second display area is lower than a preset value, where the second display area is a display area in a third image and its position in the third image is the same as the position of the first display area in the first image. The third image is the image formed when the second image displayed by the VR device is observed through a lens of the VR device, and the second image is the image displayed by the VR device based on the image data and the first anti-distortion anti-dispersion model. The second display area may be, for example, the annular area between the circle with a radius of 1.2 cm and the circle with a radius of 1.6 cm (the display image corresponding to the area 4 of fig. 12 described above), or the annular area between the circle with a radius of 1.6 cm and the circle with a radius of 2.0 cm.
In some embodiments, the third image is an image formed by a second image displayed by the VR device as viewed by a user wearing the VR device through a lens of the VR device. In other embodiments, the third image is an image obtained by photographing the second image with a photographing device obtained by the adjusting apparatus.
In some embodiments, the coefficients of the processing function corresponding to the first display area are adjusted by presetting adjustment parameters in an adjustment parameter set. For example, the preset adjustment parameter set may refer to the preset adjustment parameter set shown in table 1.
In some embodiments, the processing function corresponding to the first display area is a linear function, and the coefficients of the processing function corresponding to the first display area include a first parameter and a second parameter. For example, the processing function corresponding to the first display area may be: scale_g = m_4·r + n_4, for 1.2 < r ≤ 1.6, where the first parameter is m_4 and the second parameter is n_4.
S305, the adjusting device sends a second anti-distortion anti-dispersion model to the VR device, wherein the second anti-distortion anti-dispersion model comprises the adjusted processing function corresponding to the first display area.
For example, the adjusted processing function corresponding to the first display area may be: scale_g = m_4·r + (n_4 − 0.01), for 1.2 < r ≤ 1.6. The second anti-distortion anti-dispersion model may refer to formulas 1-7, 1-3, 1-4, and 1-5.
In some embodiments, the method further comprises: and stopping adjusting the coefficients of the processing functions corresponding to the first display area when the adjustment parameter traversal in the adjustment parameter set is completed.
In some embodiments, the method further comprises: the adjusting device generates a plurality of second anti-distortion anti-dispersion models, each of which includes a processing function corresponding to the first display area after one adjustment. The adjusting device then determines a third anti-distortion anti-dispersion model from the plurality of second anti-distortion anti-dispersion models: the third model includes the processing function corresponding to the first display area whose resulting third display area has a higher similarity to the first display area than the third display areas produced by the processing functions contained in the other second models. Here, the third display area is a display area in a fifth image, and its position in the fifth image is the same as the position of the first display area in the first image; the fifth image is the image formed when the fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is the image displayed by the VR device based on the image data and one of the second anti-distortion anti-dispersion models. The adjusting device sends the third anti-distortion anti-dispersion model to the VR device, so that the VR device processes image data according to the third anti-distortion anti-dispersion model. Illustratively, the third anti-distortion anti-dispersion model may refer to the final anti-distortion anti-dispersion model in the embodiment corresponding to fig. 8.
Specifically, the third anti-distortion anti-dispersion model may refer to formulas 1-8, 1-3, 1-4, and 1-5.
In some embodiments, the method further comprises: when the similarity between a third display area and the first display area is higher than or equal to the preset value, stopping adjusting the coefficient of the processing function corresponding to the first display area, wherein the third display area is a display area in a fifth image, and the position of the third display area in the fifth image is the same as the position of the first display area in the first image; the fifth image is an image formed by a fourth image displayed by the VR device and observed through a lens of the VR device, and the fourth image is an image displayed by the VR device based on the image data and one of the second anti-distortion anti-dispersion models.
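The adjustment loop sketched below combines the traversal of the preset adjustment parameter set with the similarity stop condition from these embodiments; `evaluate`, the additive adjustment of the intercept, and the threshold value are all illustrative assumptions, since the patent does not fix these details.

```python
def adjust_region(base_m, base_n, params, evaluate, threshold=0.95):
    """Try each preset adjustment parameter on the intercept of the
    region's processing function and return the best (m, n) found.
    `evaluate(m, n)` is assumed to display the adjusted image, capture
    the resulting display area, and return its similarity score."""
    best_mn = (base_m, base_n)
    best_score = evaluate(base_m, base_n)
    for delta in params:                  # traverse the preset parameter set
        m, n = base_m, base_n + delta     # e.g. n_4 -> n_4 - 0.01
        score = evaluate(m, n)
        if score > best_score:            # keep the most similar candidate
            best_mn, best_score = (m, n), score
        if score >= threshold:            # stop once similar enough
            break
    return best_mn                        # stop anyway when the set is exhausted
```

This mirrors the two stop conditions above: the loop ends either when a candidate reaches the preset similarity or when the parameter set traversal completes, and the best candidate defines the model that is finally sent to the VR device.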
In some embodiments, the fifth image is an image obtained by photographing the fourth image by a photographing device acquired by the adjusting apparatus.
In some embodiments, the method further comprises: the adjusting device determines a fourth anti-distortion anti-dispersion model, which includes the processing function corresponding to the first display area at the time the adjustment is stopped; the adjusting device sends the fourth anti-distortion anti-dispersion model to the VR device, so that the VR device processes image data according to the fourth anti-distortion anti-dispersion model. Illustratively, the fourth anti-distortion anti-dispersion model may refer to the final anti-distortion anti-dispersion model in the embodiment corresponding to fig. 8; specifically, it may refer to formulas 1-8, 1-3, 1-4, and 1-5.

S306, after the VR device receives the second anti-distortion anti-dispersion model sent by the adjusting device, the VR device processes the image data according to the second anti-distortion anti-dispersion model and displays a fourth image according to the processed image data.
For example, the implementation of step S306 may refer to the description of step S203 to step S204 in the corresponding example of fig. 17.
In some embodiments, the lenses of the VR device include a first lens and a second lens, the display screen of the VR device includes a first display screen and a second display screen, the first lens corresponds to the first display screen, and the second lens corresponds to the second display screen; the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the first display screen, and the second sub-model is used for correcting the display of the second display screen; the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the first display screen, and the fourth model is used for correcting the display of the second display screen. Illustratively, the display of the VR device may refer to display 180 in fig. 5, the first display may be display 180A, and the second display may be display 180B.
In some embodiments, the lenses of the VR device include a first lens and a second lens, the display screen including a left display area and a right display area, the first lens corresponding to the left display area of the display screen and the second lens corresponding to the right display area of the display screen; the first anti-distortion inverse dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the left display area, and the second sub-model is used for correcting the display of the right display area; the second anti-distortion inverse dispersion model includes a third model for correcting the display of the left display area and a fourth model for correcting the display of the right display area. For example, the display screen of the VR device may refer to the display screen in fig. 20, with the left display area being the first display area shown in fig. 20 and the right display area being the second display area shown in fig. 20.
The embodiments of the present application may be arbitrarily combined to achieve different technical effects.
The present application also provides a computer readable storage medium comprising a computer program or instructions which, when run on an electronic device, cause the electronic device to perform a method of anti-distortion and anti-dispersion as described in the above embodiments.
The application also provides a computer program product comprising a computer program or instructions which, when run on an electronic device, cause the electronic device to perform a method of anti-distortion and anti-dispersion as described in the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (32)

1. A method of anti-distortion and anti-dispersion, the method comprising:
the method comprises the steps that an adjusting device sends image data corresponding to a first image and a first anti-distortion and anti-dispersion model to a Virtual Reality (VR) device, wherein the first image comprises a plurality of display areas; the first anti-distortion inverse dispersion model is used by the VR device to process the image data;
the adjusting equipment obtains a processing function corresponding to each display area in the plurality of display areas according to the first anti-distortion and anti-dispersion model; comprising the following steps: determining a segmentation distance according to the preset segment number and the maximum value of the first distance; uniformly dividing a scale factor curve in the first anti-distortion inverse dispersion model into curves with the preset segment numbers based on the segment distances; performing linear fitting on each curve in the curves with the preset segments to obtain processing functions corresponding to a plurality of display areas in the first image, wherein the scale factor curve is used for representing the corresponding relation between the first distance and a scale factor, the scale factor is used for processing the image data by the VR device, and the first distance is the distance between the light refracted by the lens of the VR device and the center of the lens;
the adjusting device adjusts coefficients of a processing function corresponding to a first display area, the processing function corresponding to the first display area being a linear function, and the first display area being a display area in the plurality of display areas;
the adjusting device sends a second anti-distortion anti-dispersion model to the VR device, the second anti-distortion anti-dispersion model including the adjusted processing function corresponding to the first display area, the second anti-distortion anti-dispersion model being used by the VR device to process the image data.
2. The method of claim 1, wherein the first image includes a plurality of circles, the circles having a radius that is a multiple of the segmentation distance, the plurality of circles being used to indicate the locations of the plurality of display regions in the first image.
3. The method according to claim 1 or 2, wherein the similarity between the first display area and a second display area is lower than a preset value, the second display area is a display area in a third image, and the position of the second display area in the third image is the same as the position of the first display area in the first image;
The third image is an image formed by a second image displayed by the VR device and observed through a lens of the VR device, and the second image is an image displayed by the VR device based on the image data and the first anti-distortion anti-dispersion model.
4. A method according to claim 3, wherein the third image is an image obtained by capturing the second image by a capturing device obtained by the adjustment apparatus.
5. The method of claim 1, wherein the coefficients of the processing function corresponding to the first display region are adjusted by adjusting parameters in a preset set of adjusting parameters.
6. The method of claim 5, wherein the coefficients of the processing function corresponding to the first display region comprise a first parameter and a second parameter.
7. The method of claim 5, wherein the method further comprises:
and stopping adjusting the coefficients of the processing functions corresponding to the first display area when the adjustment parameter traversal in the adjustment parameter set is completed.
8. The method of claim 7, wherein the method further comprises:
The adjusting device generates a plurality of second anti-distortion anti-dispersion models, wherein the second anti-distortion anti-dispersion models comprise processing functions corresponding to the first display area after one-time adjustment;
the adjusting device determines a third inverse distortion inverse dispersion model from the plurality of second inverse distortion inverse dispersion models, wherein the third inverse distortion inverse dispersion model includes the processing function corresponding to the first display area whose corresponding third display area has a higher similarity to the first display area than the third display areas corresponding to the processing functions for the first display area contained in the other second inverse distortion inverse dispersion models;
the third display area is a display area in a fifth image, the position of the third display area in the fifth image is the same as the position of the first display area in the first image, the fifth image is an image formed by a fourth image displayed by the VR device and observed through a lens of the VR device, and the fourth image is an image displayed by the VR device based on the image data and one of the second anti-distortion and anti-dispersion models;
The adjustment device sends the third distorted inverse dispersion model to the VR device to cause the VR device to process image data in accordance with the third distorted inverse dispersion model.
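Claims 7 and 8 describe traversing a preset adjustment-parameter set, generating one candidate model per adjustment, and keeping the model whose rendered display area best matches the reference. A minimal sketch of that selection step, assuming the linear processing function is a coefficient pair `(k, b)`; the function name, the `deltas` set, and the `similarity` callback are illustrative assumptions, not names from the patent:

```python
def best_model(base_k, base_b, deltas, similarity):
    """Traverse a preset set of (dk, db) adjustment parameters, build one
    candidate processing function per adjustment, and keep the candidate
    whose rendered third display area is most similar to the first
    display area. The similarity callback stands in for the patent's
    capture-through-the-lens-and-compare step."""
    candidates = [(base_k + dk, base_b + db) for dk, db in deltas]
    return max(candidates, key=similarity)  # highest similarity wins
```

In the patent's flow the similarity would be measured on images captured through the VR lens; here an arbitrary scoring function takes its place.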
9. The method of claim 5, wherein the method further comprises:
stopping the adjustment of the coefficients of the processing function corresponding to the first display area when the similarity between a third display area and the first display area is higher than or equal to a preset value, wherein the third display area is a display area in a fifth image, and the position of the third display area in the fifth image is the same as the position of the first display area in the first image;
the fifth image is the image formed when a fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is an image displayed by the VR device based on the image data and a second anti-distortion and anti-dispersion model.
10. The method according to claim 9, wherein the method further comprises:
the adjustment device determines a fourth anti-distortion and anti-dispersion model, wherein the fourth anti-distortion and anti-dispersion model comprises the processing function corresponding to the first display area at the time the adjustment is stopped;
the adjustment device sends the fourth anti-distortion and anti-dispersion model to the VR device, so that the VR device processes the image data according to the fourth anti-distortion and anti-dispersion model.
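Claims 9 and 10 add an early-stopping variant of the traversal in claim 7: adjustment halts as soon as the rendered area is similar enough to the reference. A sketch under the same assumed `(k, b)` coefficient representation and hypothetical `similarity` callback:

```python
def adjust_until_similar(coeffs, deltas, similarity, threshold):
    """Apply adjustment parameters one at a time and stop as soon as the
    rendered third display area is at least `threshold`-similar to the
    first display area (claims 9-10); otherwise continue until the
    adjustment parameter set is exhausted (claim 7)."""
    for dk, db in deltas:
        coeffs = (coeffs[0] + dk, coeffs[1] + db)
        if similarity(coeffs) >= threshold:
            break  # preset similarity reached: stop adjusting
    return coeffs
```

The coefficients in force when the loop stops are what claim 10's fourth model would carry back to the VR device.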
11. The method according to claim 8 or 9, wherein the fifth image is an image that the adjustment device obtains from a capture apparatus photographing the fourth image.
12. A method of anti-distortion and anti-dispersion comprising:
the VR device receives image data corresponding to a first image and a first anti-distortion and anti-dispersion model, both sent by the adjustment device, wherein the first image comprises a plurality of display areas;
the VR device processes the image data according to the first anti-distortion and anti-dispersion model, and displays a second image according to the processed image data;
the VR device receives a second anti-distortion and anti-dispersion model sent by the adjustment device, wherein the second anti-distortion and anti-dispersion model comprises a processing function, adjusted by the adjustment device, corresponding to a first display area, and the first display area is one of the plurality of display areas; the processing function corresponding to each of the plurality of display areas is obtained by uniformly dividing a scale-factor curve in the first anti-distortion and anti-dispersion model into a preset number of curve segments according to the preset number of segments and the maximum value of a first distance, and then linearly fitting each of the curve segments; the processing function corresponding to the first display area is a linear (first-degree) function; the scale-factor curve represents the correspondence between the first distance and a scale factor, the scale factor is used by the VR device to process the image data, and the first distance is the distance between light refracted by a lens of the VR device and the center of the lens;
the VR device processes the image data according to the second anti-distortion and anti-dispersion model and displays a fourth image according to the processed image data.
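The segmentation-and-fit step recited in claim 12 can be sketched as follows: the scale-factor curve s(r) over [0, r_max] is cut into a preset number of equal segments, and each segment is fitted with a first-degree (linear) function, yielding one processing function per display area. The toy barrel-distortion curve and all names here are illustrative assumptions, not the patent's actual model:

```python
import numpy as np

def fit_segment_functions(scale_factor, r_max, num_segments):
    """Uniformly divide the scale-factor curve on [0, r_max] into
    num_segments pieces and linearly fit each piece, yielding one
    (slope, intercept) processing function per display area."""
    seg_len = r_max / num_segments  # the segmentation distance
    functions = []
    for i in range(num_segments):
        r = np.linspace(i * seg_len, (i + 1) * seg_len, 50)
        k, b = np.polyfit(r, scale_factor(r), deg=1)  # first-degree fit
        functions.append((k, b))
    return seg_len, functions

# Toy scale-factor curve s(r) = 1 + 0.2 * r**2 (illustrative only)
seg_len, funcs = fit_segment_functions(lambda r: 1 + 0.2 * r**2, 1.0, 8)
```

Adjusting the coefficients of one display area then means perturbing the `(k, b)` pair of the corresponding segment, which is the knob the adjustment device turns in the preceding claims.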
13. The method of claim 12, wherein the first image includes a plurality of circles, the circles having a radius that is a multiple of a segmentation distance, the segmentation distance being determined by a preset number of segments and a maximum value of the first distance, the plurality of circles being used to indicate the locations of the plurality of display regions in the first image.
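Claim 13's calibration pattern (concentric circles whose radii are multiples of the segmentation distance, marking the display-area boundaries) could be generated along these lines; the function name and the pixels-per-unit scale are assumptions for illustration:

```python
def circle_radii_px(num_segments, r_max, px_per_unit):
    """Radii, in pixels, of the concentric circles that indicate the
    display areas: the i-th radius is i times the segmentation
    distance, converted to pixels."""
    seg = r_max / num_segments  # the segmentation distance
    return [round(i * seg * px_per_unit) for i in range(1, num_segments + 1)]
```

Each ring between consecutive circles then corresponds to one segment of the scale-factor curve, i.e. one independently adjustable processing function.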
14. The method of claim 12 or 13, wherein the lens of the VR device comprises a first lens and a second lens, the display of the VR device comprises a first display and a second display, the first lens corresponds to the first display, and the second lens corresponds to the second display;
the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the first display screen, and the second sub-model is used for correcting the display of the second display screen;
the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the first display screen, and the fourth model is used for correcting the display of the second display screen.
15. The method of claim 12 or 13, wherein the lens of the VR device comprises a first lens and a second lens, the display screen comprises a left display area and a right display area, the first lens corresponds to the left display area of the display screen, and the second lens corresponds to the right display area of the display screen;
the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the left display area, and the second sub-model is used for correcting the display of the right display area;
the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the left display area, and the fourth model is used for correcting the display of the right display area.
16. An adjustment device comprising one or more processors and a memory, the memory coupled with the one or more processors, the memory for storing program code, the one or more processors invoking the program code to cause the adjustment device to:
transmitting image data corresponding to a first image and a first anti-distortion and anti-dispersion model to a VR device, wherein the first image comprises a plurality of display areas, and the first anti-distortion and anti-dispersion model is used by the VR device to process the image data;
obtaining a processing function corresponding to each display area in the plurality of display areas according to the first anti-distortion and anti-dispersion model, comprising: determining a segmentation distance according to a preset number of segments and the maximum value of a first distance; uniformly dividing a scale-factor curve in the first anti-distortion and anti-dispersion model into the preset number of curve segments based on the segmentation distance; and linearly fitting each of the curve segments to obtain the processing functions corresponding to the plurality of display areas in the first image, wherein the scale-factor curve represents the correspondence between the first distance and a scale factor, the scale factor is used by the VR device to process the image data, and the first distance is the distance between light refracted by the lens of the VR device and the center of the lens;
adjusting coefficients of a processing function corresponding to a first display area, wherein the processing function corresponding to the first display area is a linear (first-degree) function, and the first display area is one of the plurality of display areas;
And sending a second anti-distortion anti-dispersion model to the VR device, wherein the second anti-distortion anti-dispersion model comprises an adjusted processing function corresponding to the first display area, and the second anti-distortion anti-dispersion model is used for processing the image data by the VR device.
17. The adjustment device of claim 16, wherein the first image comprises a plurality of circles, the circles having a radius that is a multiple of the segmentation distance, the plurality of circles being used to indicate the locations of the plurality of display regions in the first image.
18. The adjustment device according to claim 16 or 17, characterized in that the similarity of the first display area and a second display area is lower than a preset value, the second display area being a display area in a third image, the position of the second display area in the third image being the same as the position of the first display area in the first image;
the third image is an image formed by a second image displayed by the VR device and observed through a lens of the VR device, and the second image is an image displayed by the VR device based on the image data and the first anti-distortion anti-dispersion model.
19. The adjustment device of claim 18, wherein the third image is an image that the adjustment device obtains from a capture apparatus photographing the second image.
20. The adjustment device of claim 16, wherein the coefficients of the processing function corresponding to the first display area are adjusted using adjustment parameters in a preset adjustment parameter set.
21. The adjustment device of claim 20, wherein the coefficients of the processing function corresponding to the first display area comprise a first parameter and a second parameter.
22. The adjustment device of claim 20, characterized in that the one or more processors call the program code to cause the adjustment device to further perform the following operations:
stopping the adjustment of the coefficients of the processing function corresponding to the first display area when traversal of the adjustment parameters in the adjustment parameter set is completed.
23. The adjustment device of claim 22, wherein the one or more processors call the program code to cause the adjustment device to further:
generating a plurality of second anti-distortion and anti-dispersion models, wherein each second anti-distortion and anti-dispersion model comprises the processing function corresponding to the first display area after one adjustment;
determining a third anti-distortion and anti-dispersion model from the plurality of second anti-distortion and anti-dispersion models, wherein the third anti-distortion and anti-dispersion model comprises a processing function corresponding to the first display area, and the similarity between the third display area corresponding to that processing function and the first display area is higher than the similarity between the first display area and the third display areas corresponding to the processing functions corresponding to the first display area contained in the other second anti-distortion and anti-dispersion models;
the third display area is a display area in a fifth image, and the position of the third display area in the fifth image is the same as the position of the first display area in the first image; the fifth image is the image formed when a fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is an image displayed by the VR device based on the image data and one of the second anti-distortion and anti-dispersion models;
sending the third anti-distortion and anti-dispersion model to the VR device, so that the VR device processes the image data according to the third anti-distortion and anti-dispersion model.
24. The adjustment device of claim 20, characterized in that the one or more processors call the program code to cause the adjustment device to further perform the following operations:
stopping the adjustment of the coefficients of the processing function corresponding to the first display area when the similarity between a third display area and the first display area is higher than or equal to a preset value, wherein the third display area is a display area in a fifth image, and the position of the third display area in the fifth image is the same as the position of the first display area in the first image;
the fifth image is the image formed when a fourth image displayed by the VR device is observed through a lens of the VR device, and the fourth image is an image displayed by the VR device based on the image data and a second anti-distortion and anti-dispersion model.
25. The adjustment device of claim 24, wherein the one or more processors call the program code to cause the adjustment device to further:
determining a fourth anti-distortion and anti-dispersion model, wherein the fourth anti-distortion and anti-dispersion model comprises the processing function corresponding to the first display area at the time the adjustment is stopped;
sending the fourth anti-distortion and anti-dispersion model to the VR device, so that the VR device processes the image data according to the fourth anti-distortion and anti-dispersion model.
26. The adjustment device according to claim 23 or 24, wherein the fifth image is an image that the adjustment device obtains from a capture apparatus photographing the fourth image.
27. A VR device, the VR device comprising: one or more processors, memory, lenses, and a display screen;
the memory is coupled to the one or more processors, the memory is for storing program code that the one or more processors call to cause the VR device to:
receiving image data corresponding to a first image and a first anti-distortion and anti-dispersion model sent by an adjusting device, wherein the first image comprises a plurality of display areas;
processing the image data according to the first anti-distortion and anti-dispersion model, and displaying a second image on the display screen according to the processed image data;
receiving a second anti-distortion and anti-dispersion model sent by the adjustment device, wherein the second anti-distortion and anti-dispersion model comprises a processing function, adjusted by the adjustment device, corresponding to a first display area, and the first display area is one of the plurality of display areas; the processing function corresponding to each of the plurality of display areas is obtained by uniformly dividing a scale-factor curve in the first anti-distortion and anti-dispersion model into a preset number of curve segments according to the preset number of segments and the maximum value of a first distance, and then linearly fitting each of the curve segments; the processing function corresponding to the first display area is a linear (first-degree) function; the scale-factor curve represents the correspondence between the first distance and a scale factor, the scale factor is used by the VR device to process the image data, and the first distance is the distance between light refracted by a lens of the VR device and the center of the lens;
processing the image data according to the second anti-distortion and anti-dispersion model, and displaying a fourth image on the display screen according to the processed image data.
28. The VR device of claim 27, wherein the first image includes a plurality of circles having a radius that is a multiple of a segment distance, the segment distance being determined by a preset number of segments and a maximum value of the first distance, the plurality of circles being used to indicate the locations of the plurality of display areas in the first image.
29. The VR device of claim 27 or 28, wherein the lens of the VR device comprises a first lens and a second lens, the display of the VR device comprises a first display and a second display, the first lens corresponds to the first display, and the second lens corresponds to the second display;
the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the first display screen, and the second sub-model is used for correcting the display of the second display screen;
the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the first display screen, and the fourth model is used for correcting the display of the second display screen.
30. The VR device of claim 27 or 28, wherein the lens of the VR device comprises a first lens and a second lens, the display screen comprising a left display area and a right display area, the first lens corresponding to the left display area of the display screen and the second lens corresponding to the right display area of the display screen;
the first anti-distortion and anti-dispersion model comprises a first sub-model and a second sub-model, wherein the first sub-model is used for correcting the display of the left display area, and the second sub-model is used for correcting the display of the right display area;
the second anti-distortion and anti-dispersion model comprises a third model and a fourth model, wherein the third model is used for correcting the display of the left display area, and the fourth model is used for correcting the display of the right display area.
31. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-11.
32. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 12-15.
CN202011197969.4A 2020-10-31 2020-10-31 Method for anti-distortion and anti-dispersion and related equipment Active CN114449237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011197969.4A CN114449237B (en) 2020-10-31 2020-10-31 Method for anti-distortion and anti-dispersion and related equipment


Publications (2)

Publication Number Publication Date
CN114449237A CN114449237A (en) 2022-05-06
CN114449237B true CN114449237B (en) 2023-09-29

Family

ID=81357021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011197969.4A Active CN114449237B (en) 2020-10-31 2020-10-31 Method for anti-distortion and anti-dispersion and related equipment

Country Status (1)

Country Link
CN (1) CN114449237B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU7652187A (en) * 1986-08-08 1988-02-11 Corning Glass Works Optical fiber dispersion transformer
CA2146384A1 (en) * 1995-04-05 1996-10-06 Joseph Ip Chromatic Dispersion Compensation Device
CN101408616A (en) * 2008-11-24 2009-04-15 江南大学 Inverse synthetic aperture radar imaging distance aligning method applicable to low signal-noise ratio data
CN101895771A (en) * 2010-07-09 2010-11-24 中国科学院长春光学精密机械与物理研究所 Luminance and chrominance separately-acquiring and hybrid-correction method of LED display screen
US8917329B1 (en) * 2013-08-22 2014-12-23 Gopro, Inc. Conversion between aspect ratios in camera
CN105791789A (en) * 2016-04-28 2016-07-20 努比亚技术有限公司 Head-mounted equipment, display equipment and method of automatically adjusting display output
KR20160097640A (en) * 2015-02-09 2016-08-18 하이네트(주) Security control apparatus using wide-angle lens
CN106572342A (en) * 2016-11-10 2017-04-19 北京奇艺世纪科技有限公司 Image anti-distortion and anti-dispersion processing method, device and virtual reality device
WO2017167107A1 (en) * 2016-03-28 2017-10-05 腾讯科技(深圳)有限公司 Image displaying method, method for manufacturing irregular screen having curved surface and head-mounted display apparatus
CN110557626A (en) * 2019-07-31 2019-12-10 华为技术有限公司 image display method and electronic equipment
CN111784615A (en) * 2016-03-25 2020-10-16 北京三星通信技术研究有限公司 Method and device for processing multimedia information
WO2020215214A1 (en) * 2019-04-23 2020-10-29 深圳市大疆创新科技有限公司 Image processing method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090002574A1 (en) * 2007-06-29 2009-01-01 Samsung Electronics Co., Ltd. Method and a system for optical design and an imaging device using an optical element with optical aberrations
US9818201B2 (en) * 2014-12-22 2017-11-14 Lucasfilm Entertainment Company Ltd. Efficient lens re-distortion
US20190110028A1 (en) * 2016-03-21 2019-04-11 Thomson Licensing Method for correcting aberration affecting light-field data
US10373297B2 (en) * 2016-10-26 2019-08-06 Valve Corporation Using pupil location to correct optical lens distortion
US10672363B2 (en) * 2018-09-28 2020-06-02 Apple Inc. Color rendering for images in extended dynamic range mode



Similar Documents

Publication Publication Date Title
CN108594997B (en) Gesture skeleton construction method, device, equipment and storage medium
WO2020192458A1 (en) Image processing method and head-mounted display device
CN107580209B (en) Photographing imaging method and device of mobile terminal
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
WO2021036429A1 (en) Decoding method, encoding method, and apparatus
KR102385365B1 (en) Electronic device and method for encoding image data in the electronic device
TW201503047A (en) Variable resolution depth representation
CN110062246B (en) Method and device for processing video frame data
CN107248137B (en) Method for realizing image processing and mobile terminal
CN111028144B (en) Video face changing method and device and storage medium
CN113056692A (en) Lens assembly and electronic device including the same
WO2021238821A1 (en) Quick matching method and head-mounted electronic device
CN111741303B (en) Deep video processing method and device, storage medium and electronic equipment
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN113038165B (en) Method, apparatus and storage medium for determining encoding parameter set
CN110248197B (en) Voice enhancement method and device
CN111103975B (en) Display method, electronic equipment and system
EP3881116B1 (en) Lens assembly and electronic device including the same
CN114257920B (en) Audio playing method and system and electronic equipment
CN114449237B (en) Method for anti-distortion and anti-dispersion and related equipment
US11494885B2 (en) Method for synthesizing image on reflective object on basis of attribute of reflective object included in different image, and electronic device
CN112565735B (en) Virtual reality measuring and displaying method, device and system
CN111294905B (en) Image processing method, image processing device, storage medium and electronic apparatus
WO2022127612A1 (en) Image calibration method and device
WO2021057420A1 (en) Method for displaying control interface and head-mounted display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant