CN109886876A - A near-eye display method based on human visual characteristics - Google Patents

A near-eye display method based on human visual characteristics

Info

Publication number
CN109886876A
Authority
CN
China
Prior art keywords
subregion
display
eye
image
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910138436.XA
Other languages
Chinese (zh)
Inventor
季渊
高钦
余云森
陈文栋
穆廷洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lumicore Microelectronics (Shanghai) Co., Ltd.
Original Assignee
Lumicore Microelectronics (Shanghai) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lumicore Microelectronics (Shanghai) Co., Ltd.
Priority to CN201910138436.XA priority Critical patent/CN109886876A/en
Publication of CN109886876A publication Critical patent/CN109886876A/en
Priority to PCT/CN2020/076512 priority patent/WO2020173414A1/en
Priority to US17/060,840 priority patent/US11438564B2/en
Pending legal-status Critical Current


Abstract

The invention discloses a near-eye display method based on human visual characteristics. A near-eye display screen is first divided into n display sub-regions; the marginal spatial frequency corresponding to each of the n display sub-regions is then obtained; according to these marginal spatial frequencies, the video image data of n corresponding image layers are created and rendered from the input video image, one for each of the n display sub-regions; the video image data of the n image layers are transmitted to the near-eye display; finally, the video image data of the n image layers are reconstructed and stitched to generate an image that matches the gaze effect of the human eye, which is shown on the near-eye display screen. The control method for a near-eye display provided by the invention can substantially reduce the amount of data transferred from the image source to the near-eye display, lower the transmission bandwidth, support higher display resolution and refresh rate, reduce power consumption, match the spatial distribution characteristics of human vision, and also alleviate dizziness.

Description

A near-eye display method based on human visual characteristics
Technical field
The present invention relates to the field of displays, and in particular to a near-eye display method based on human visual characteristics.
Background technique
A near-eye display is a new type of display that forms a large field of view through an optical system and is usually located near the human eye. It can be used in wearable near-eye display scenarios such as virtual/augmented reality helmets or glasses. As virtual/augmented reality applications keep raising their requirements on display resolution and refresh rate, the amount of display data required by the display system grows sharply, and the transmission bandwidth available with current technology cannot satisfy the data transmission requirements of virtual/augmented reality applications.
Considering that the video image source transmitted to a near-eye display system contains a large amount of visual perception redundancy, transmitting and displaying redundancy that the human visual system cannot perceive is a waste of the limited network bandwidth and of the terminal device. If this redundancy can be removed, the amount of transmitted image data can be greatly reduced, thereby mitigating the technical problems caused by the huge volume of video image data.
Because every region of a conventional flat-panel display uses the same physical pixel pitch and the same driving method, conventional image data compression methods mainly exploit parameters of human visual characteristics in the color domain to reduce redundant information, and rarely consider the spatial distribution characteristics of the human eye.
Summary of the invention
In view of this, the purpose of the present invention is to provide a near-eye display method based on human visual characteristics, thereby reducing the data transmission bandwidth. Spatially, human visual acuity is highest at the center of the visual field and decreases as the viewing angle increases. The present invention therefore contemplates a control method for a near-eye display that provides high display quality at the center of the image and low display quality at the image edges, with the display quality decreasing spatially from the center outward, so as to reduce the amount of transmitted data.
To achieve the above goals, the present invention provides a near-eye display method based on human visual characteristics, the method comprising:
dividing, according to the human eye fixation point, the near-eye display screen into n display sub-regions, including a centrally located human-eye gaze sub-region;
obtaining the marginal spatial frequency corresponding to each of the n display sub-regions;
creating and rendering, according to the marginal spatial frequencies of the n display sub-regions, the video image data of n corresponding image layers from the input video image, one for each of the n display sub-regions;
transmitting the video image data of the n image layers to the near-eye display;
reconstructing and stitching the video image data of the n image layers to generate an image that matches the gaze effect of the human eye, and displaying it on the near-eye display screen.
Further, the human-eye gaze effect at least includes:
a display effect that uses a relatively high amount of image information in the human-eye gaze sub-region,
a display effect that uses a relatively low amount of image information in the edge sub-regions,
and a display effect that uses an amount of image information between the highest and the lowest in the intermediate sub-regions lying between the human-eye gaze sub-region and the edge sub-regions;
where the amount of image information is described by the pixel spatial resolution and the pixel gray-value bit depth of the image.
Further, the n display sub-regions are divided, either in quantized steps or continuously, according to the retinal eccentricity from the human eye to the near-eye display screen, and include ring-shaped sub-regions that expand from the human-eye gaze sub-region toward the edge and/or corner sub-regions without display content.
Further, the resolution and detail formed by the n display sub-regions conform to a foveated image of the human visual system, and the marginal spatial frequency corresponding to each display sub-region decreases as the retinal eccentricity increases.
Further, the foveated image is obtained by a geometric mapping method, a filtering method, or a hierarchical method, and corresponds to n image layers; the n image layers can be described by an image pyramid, and after the n image layers are combined and stitched on the mapping plane of the image pyramid they form the foveated image presented on the near-eye display screen.
Further, the image pyramid is one of a Gaussian pyramid, a Laplacian pyramid, a difference pyramid, or a mipmap pyramid.
Further, the marginal spatial frequency is obtained from an empirical formula or from a human-eye model formula; the parameters of the empirical formula include the retinal eccentricity, the half-resolution eccentricity constant, the human-eye contrast sensitivity threshold, and the spatial frequency attenuation coefficient, while the parameters of the human-eye model formula include the retinal eccentricity, the distance from the pixel to the fixation point, and a configurable filter coefficient.
Further, the specific steps of obtaining the marginal spatial frequency corresponding to each of the n display sub-regions include:
setting the marginal spatial frequency of each of the n display sub-regions to the maximum of the marginal spatial frequencies corresponding to all physical pixel positions in that display sub-region, or to a fixed value close to that maximum.
Further, the specific steps of creating and rendering the video image data of the n corresponding image layers include:
obtaining, according to the physical positions of the n display sub-regions on the near-eye display screen, the video image data of the positions corresponding to the n display sub-regions from the input video image;
applying down-sampling filters of different ratios to the video image data of the positions corresponding to the n display sub-regions, respectively, to generate the video image data of the n image layers, where the image spatial frequency of each image layer after the down-sampling filtering is equal or close to the marginal spatial frequency of the corresponding display sub-region;
obtaining the down-sampling coefficients of the video image data of the n image layers.
Further, the specific steps of creating and rendering the video image data of the n corresponding image layers further comprise the step of adding the low-order bits of a pixel onto the surrounding pixels, thereby reducing the pixel data bit depth.
Further, the video image data of the n image layers are transmitted to the near-eye display by wired or wireless communication, over different channels or over the same channel at different times, where a channel is a physical channel or a logical channel.
Further, the specific steps of reconstructing and stitching the video image data of the n image layers include:
reconstructing the video image data of the n image layers, restoring the image resolution and gray values to the resolution and gray values corresponding to the near-eye display screen;
reserving overlap regions between adjacent display sub-regions and performing multi-resolution image stitching in them, where the step of reserving overlap regions includes judging which image data fall in an overlap region, and fusing the images of the overlap region with different weights to form a complete picture.
Further, the steps of reconstruction and stitching include image interpolation, image resampling, image enhancement, bilateral filtering, and pixel bit-depth expansion calculations.
Further, the center point of the human-eye gaze sub-region is obtained in real time by eye tracking, and the delay from obtaining the center point to displaying the image on the near-eye display is imperceptible to the user.
Further, the eye tracking includes tracking according to feature changes of the eyeball and its periphery, tracking according to changes of the iris angle, or projecting an infrared beam onto the iris and extracting features for tracking.
Further, the near-eye display screen either displays two independent images, one for the left eye and one for the right eye, or comprises two independent screens for the left eye and the right eye respectively; both the independent images and the independent screens can each be divided into several display sub-regions including a human-eye gaze sub-region.
Compared with the prior art, the present invention has the following obvious substantive features and significant improvements:
(1) The near-eye display method provided by the invention can substantially reduce the amount of data transferred from the image source to the near-eye display, thereby lowering the transmission bandwidth, supporting higher display resolution and refresh rate, and reducing system power consumption.
(2) The data compression and restoration methods provided by the invention match the spatial characteristics of the human eye; the compression efficiency is high, the amount of data computation is small, the image restoration effect is good, and dizziness can also be alleviated.
(3) The invention provides multiple wired and wireless transmission modes, making transmission more flexible.
(4) The invention provides an eye-tracking scheme to control the fixation point in real time, which is more practical.
(5) The invention provides both monocular and binocular near-eye display schemes, which is more practical.
Detailed description of the invention
Fig. 1 is a flowchart of the first embodiment of the present invention;
Fig. 2 is a schematic diagram of one partition of a near-eye display screen;
Fig. 3 is a schematic diagram of another partition of a near-eye display screen;
Fig. 4 is a schematic diagram of an image pyramid corresponding to n image layers;
Fig. 5 is a schematic diagram of a foveated image synthesized from an image pyramid corresponding to n image layers;
Fig. 6 is an example of the relationship between retinal eccentricity and marginal spatial frequency;
Fig. 7 is an example of the relationship between pixel position and marginal spatial frequency.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment one:
Referring to Fig. 1, a flowchart of a near-eye display method of the present invention is shown, comprising the steps of:
Step S100: dividing, according to the human eye fixation point, the near-eye display screen into n display sub-regions, including a centrally located human-eye gaze sub-region.
Step S110: obtaining the marginal spatial frequency corresponding to each of the n display sub-regions.
Step S120: creating and rendering, according to the marginal spatial frequencies of the n display sub-regions, the video image data of n corresponding image layers from the input video image, one for each of the n display sub-regions.
Step S130: transmitting the video image data of the n image layers to the near-eye display.
Step S140: reconstructing and stitching the video image data of the n image layers to generate an image that matches the gaze effect of the human eye, and displaying it on the near-eye display screen.
To illustrate the technical solution and beneficial effects of the first embodiment of the invention more clearly, further explanation is given below.
Referring to Fig. 2, a first partition schematic of the near-eye display screen of the above technical solution is shown. The display screen T30 is divided into n display sub-regions, where n is an integer greater than 1. According to the marginal spatial frequencies corresponding to the n display sub-regions, corresponding video image data are created for the n display sub-regions from the input video image, where the resolutions of the video image data corresponding to different display sub-regions differ. The display screen T30 is divided into n display sub-regions according to a quantization of the retinal eccentricity from the human eye to the display sub-regions, and at least includes a human-eye gaze area at the center and square or rectangular ring-shaped regions expanding from the center toward the edge. The widths of the square or rectangular ring regions need not be equal, but each ring region is at least one pixel wide. The number and size of the display sub-regions are configured according to user demand. Quantized division is a quantization process for partitioning the display T30: it uses a limited number of division regions and simplifies the partitioning and computation; the alternative is continuous division, which can achieve the best match with the human eye.
Obviously, the partition method described here is only one of the partition methods of the present invention, not all of them.
Referring to Fig. 3, a partition schematic of the near-eye display screen of another technical solution is shown. The display screen T40 is divided into n display sub-regions, where n is an integer greater than 1. This partition method is basically the same as the first partition method described above, the difference being that the near-eye display screen T40 is divided into n display sub-regions according to the human viewing angle and at least includes a circular or elliptical human-eye gaze area at the center and circular or elliptical ring regions expanding from the center toward the edge. The widths of the circular or elliptical ring regions need not be equal, but each ring region is at least one pixel wide. The number and size of the display sub-regions are configured according to user demand. In particular, when a circular eyepiece is aligned with the near-eye display, the video image data at the four corners T50 of the display screen need not be transmitted, further reducing the amount of transmitted data.
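As an illustration of the ring-based partitioning described above, the following minimal sketch assigns each pixel to a display sub-region index by quantizing its distance from the fixation point (the region radii, counts, and function names are illustrative assumptions, not values prescribed by the embodiments):

```python
import numpy as np

def partition_by_eccentricity(width, height, gaze_xy, ring_bounds_px):
    """Assign each pixel to a display sub-region index.

    gaze_xy        : (x, y) pixel position of the human-eye fixation point.
    ring_bounds_px : increasing radii (in pixels) that close each ring;
                     pixels beyond the last radius (e.g. the screen corners
                     behind a circular eyepiece) get index len(ring_bounds_px),
                     which may be treated as "no display content".
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # searchsorted maps distance 0..r0 -> region 0, r0..r1 -> region 1, ...
    return np.searchsorted(np.asarray(ring_bounds_px), dist, side="right")

# Example: a 1920x1080 screen with a gaze sub-region of radius 200 px,
# two intermediate rings, and everything outside 900 px treated as corner area.
regions = partition_by_eccentricity(1920, 1080, gaze_xy=(960, 540),
                                    ring_bounds_px=[200, 450, 900])
```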
The above method can reduce the data transmission bandwidth while preserving the gaze effect of the human eye. The pixel at the fixation point is perceived by the human eye at the highest resolution, and the resolution at which the remaining pixels are perceived decreases as their distance from the fixation point increases; the larger the pixel distance, the lower the perceived resolution. According to this characteristic, the display can be made to achieve the human-eye gaze effect. In particular, this gaze effect further includes: a display effect that uses a high (or the highest) amount of image information in the human-eye gaze sub-region, a display effect that uses a low (or the lowest) amount of image information in the edge sub-regions far from the gaze sub-region, and a display effect that uses an amount of image information between the high (or highest) and low (or lowest) amounts in the intermediate sub-regions lying between the gaze sub-region and the edge sub-regions.
Further, the amount of image information is characterized by the number of pixels and the pixel gray-value bit depth of the image. This means the amount of transmitted data can be reduced on the one hand by compressing the spatial data of the image, and on the other hand by compressing the gray-level bit depth of the image. For example, 24-30 bit color data or 8-10 bit monochrome data may be used for the central human-eye gaze sub-region, 10-18 bit color data or 3-6 bit monochrome data may be used to characterize the gray levels of the image in the edge sub-regions far from the gaze sub-region, and even fewer data bits may be used the further a sub-region is from the gaze sub-region.
The technical solution described in this embodiment of the invention uses the marginal spatial frequency to keep a faithful high-resolution display in the human-eye gaze region while displaying a low-resolution image in the peripheral region of sight, simulating the gaze effect of the human eye and ensuring the user experience.
Embodiment two:
This embodiment further details the method of dividing the near-eye display screen into n display sub-regions in step S100 of Embodiment One.
First, the resolution and detail formed by the n display sub-regions conform to a foveated image of the human visual system, so that the foveated image shown on the screen corresponds to the fovea of the human retina and the image resolution and detail on the screen are consistent with the human-eye model. In the foveated image, the marginal spatial frequency corresponding to each display sub-region decreases as the retinal eccentricity increases.
In particular, the foveated image can be obtained by a geometric mapping method, a filtering method, or a hierarchical method.
Each is illustrated in detail below:
The geometric mapping method combines geometrically non-uniform sampling with a spatially varying adaptive coordinate transform. For example, a log-polar mapping or a superpixel transform maps the important target object at the center of sight to the high-resolution area in the middle of the coordinate system and maps unimportant content to the edge area of the image, forming a foveated image.
The filtering method realizes image resampling by using low-pass or band-pass filters, whose cut-off frequencies are chosen according to the local sampling density of the retina. A series of images is obtained by filtering with a limited bank of filters at different sampling frequencies, and the foveated image is obtained by stitching them.
The hierarchical method builds an image pyramid by blurring the input image to different degrees and obtains the foveated image after mapping the image pyramid. The blurring methods of different degrees include resampling (scaling) of the image spatial resolution and image processing of the pixel color depth, among others. For example, a Gaussian pyramid, Laplacian pyramid, difference pyramid, mixed Gaussian low-pass pyramid, or mipmap pyramid can be used to describe the foveated image.
Referring to Fig. 4, a schematic diagram of the image pyramid corresponding to the n image layers is shown. It describes n image layers from the same image source, arranged in a pyramid with resolution gradually decreasing from bottom to top. The pyramid may be a Gaussian pyramid, Laplacian pyramid, difference-of-Gaussian pyramid, mixed Gaussian low-pass pyramid, or mipmap pyramid. A Gaussian pyramid is a group of images formed by Gaussian filtering (Gaussian convolution) and down-sampling. In one example, the image is first enlarged by a factor of two, then Gaussian-filtered, and the image produced by the Gaussian filtering is down-sampled; repeating this through the image pyramid yields n image layers. In particular, taking the difference of two adjacent images in the image pyramid gives a difference image and thus a difference-of-Gaussian pyramid. Furthermore, a Laplacian pyramid can be used to reconstruct high-level images from low-level pyramid images by up-sampling, and is used together with the Gaussian pyramid. In another example, the image pyramid is generated by mipmap mapping, for instance by establishing a relationship between the Gaussian kernel radius and the mipmap level m, so as to reduce the amount of Gaussian filtering computation.
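A minimal sketch of building such a Gaussian pyramid by repeatedly filtering and down-sampling a grayscale image (the kernel width, down-sampling factor, and layer count are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, n_layers, sigma=1.0, factor=2):
    """Return n_layers images; each layer is the previous one Gaussian-filtered
    and then decimated by `factor` (layer 0 is the full-resolution input)."""
    layers = [image.astype(np.float32)]
    for _ in range(n_layers - 1):
        blurred = gaussian_filter(layers[-1], sigma=sigma)
        layers.append(blurred[::factor, ::factor])   # simple decimation
    return layers
```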
Referring to Fig. 5, the foveated image synthesized from the image pyramid corresponding to the n image layers is shown. Each image layer represents one display sub-region on the near-eye display screen; after these display sub-regions are combined and stitched on the mapping plane, the foveated image presented on the near-eye display screen is formed. It can be seen that the parts of the image pyramid that are not synthesized into the foveated image are no longer transmitted. In particular, the stitching further includes a boundary filtering process to make the image junctions smoother.
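On the display side, the layers could be recombined into a foveated image roughly as follows; this sketch assumes the ring partition of the earlier sketch and simple nearest-neighbor up-sampling, and leaves out the boundary blending treated in Embodiment Seven:

```python
import numpy as np

def compose_foveated(layers, regions, factor=2):
    """Pick, for every pixel, the value from the pyramid layer assigned to its
    display sub-region (layer k was decimated by factor**k).  Region indices
    with no corresponding layer (e.g. 'no content' corner areas) stay zero."""
    h, w = regions.shape
    out = np.zeros((h, w), dtype=np.float32)
    for k, layer in enumerate(layers):
        # Nearest-neighbor upsample layer k back to full resolution.
        up = np.repeat(np.repeat(layer, factor ** k, axis=0), factor ** k, axis=1)
        up = up[:h, :w]
        mask = regions == k
        out[mask] = up[mask]
    return out
```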
Embodiment three:
This embodiment further details the calculation of the marginal spatial frequency in Embodiment One. The marginal spatial frequency indicates the greatest level of detail that the human eye can perceive in the current region: high-frequency signal information above this frequency will not be perceived by the human eye.
In the first example, a calculation of the marginal spatial frequency through an empirical formula is shown:
First, a mathematical relationship is established between the human-eye contrast sensitivity corresponding to the physical pixel positions in a particular display sub-region and the spatial frequency of that display sub-region. A preferred form of this relationship can be described by formula (1) and formula (2):
where CS(f, e) is the human-eye contrast sensitivity, CT(f, e) is the visible contrast threshold, f is the spatial frequency, e is the retinal eccentricity, e2 is the half-resolution eccentricity constant, CT0 is the human-eye contrast sensitivity threshold, and a is the spatial frequency attenuation coefficient. Further, a = 0.05-0.25, e2 = 2.0-3.0, CT0 = 1/128-1/16. In particular, a = 0.106, e2 = 2.3, CT0 = 1/64 satisfy most image requirements and are used for fitting the contrast sensitivity model.
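The formulas themselves are not reproduced in this text. Given the parameters listed above, they presumably follow the well-known Geisler-Perry contrast-threshold model; a hedged reconstruction is:

```latex
% Assumed reconstruction of formulas (1) and (2), based on the parameter list:
CT(f, e) = CT_0 \exp\!\left( a\, f\, \frac{e + e_2}{e_2} \right) \tag{1}
CS(f, e) = \frac{1}{CT(f, e)} \tag{2}
```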
Then, the human-eye contrast sensitivity CS(f, e) is set to a maximum value or to a weighted average; the maximum value is preferred, and a weighted average slightly below the maximum is also acceptable.
Finally, the spatial frequency calculated from formula (1) and formula (2) is the marginal spatial frequency.
Fig. 6 reflects the relationship between the retinal eccentricity and the marginal spatial frequency calculated in this embodiment; the abscissa is the retinal eccentricity and the ordinate is the marginal spatial frequency.
It should be pointed out that formula (1) and formula (2) are a preferred inverse-and-exponential relationship, but do not represent all possible functional relationships. Any mathematical relationship in which, with the human-eye contrast sensitivity CS(f, e) held fixed, an increase of the retinal eccentricity e leads to a decrease of the spatial frequency f may be used.
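Under the assumed reconstruction above, solving for the frequency at which the contrast threshold reaches the maximum displayable contrast gives a marginal frequency that decreases with eccentricity, as in Fig. 6. A small sketch (parameter values taken from the preferred values quoted above; the function name is illustrative):

```python
import numpy as np

def marginal_spatial_frequency(e, a=0.106, e2=2.3, ct0=1/64, cs_max=1.0):
    """Critical frequency f_c(e): the spatial frequency at which the assumed
    threshold CT0*exp(a*f*(e+e2)/e2) reaches 1/cs_max (cs_max=1 corresponds
    to full displayable contrast).  Assumed inversion of formulas (1)-(2)."""
    e = np.asarray(e, dtype=float)
    return e2 * np.log(1.0 / (ct0 * cs_max)) / (a * (e + e2))

# Example: marginal frequency at the fovea vs. 20 degrees of eccentricity.
print(marginal_spatial_frequency([0.0, 20.0]))   # cycles/degree, decreasing
```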
In the second example, a calculation of the marginal spatial frequency through a human-eye model formula is shown:
First, Gaussian filtering is applied to the original image using a 3 x 3 Gaussian convolution function according to formula (3):
where I(l) denotes the l-th layer of the texture pyramid image, p is the position of the point being processed, G is the 3 x 3 Gaussian convolution function containing the Gaussian weight of each neighboring pixel, the sum runs over the eight pixels adjacent to and centered on that point, and wG is the reciprocal of the sum of the Gaussian weights. The texture pyramid image of each layer is computed starting from l = 0.
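Formula (3) is likewise not reproduced in the text; from the description above it is presumably a normalized 3 x 3 Gaussian convolution of roughly the following form (a hedged reconstruction, with N8(p) denoting the eight neighbors of p):

```latex
% Assumed reconstruction of formula (3):
I^{(l+1)}(p) = w_G \sum_{q \in N_8(p) \cup \{p\}} G(q - p)\, I^{(l)}(q),
\qquad w_G = \Big(\sum_{q} G(q - p)\Big)^{-1} \tag{3}
```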
Next, filtering and down-sampling are used to reduce the length and width of the image to 1/1.5 to 1/10 of the original, preferably 1/2, so as to build the texture pyramid.
Then, the angle corresponding to the fixation point for each pixel, i.e. the eccentricity, is calculated according to formula (4);
where Nv is the distance from the observer to the fixation-point plane and d(x) is the distance from the pixel to the fixation point.
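Formula (4) is also omitted from the text; given the two quantities above, the eccentricity is presumably the visual angle subtended between the pixel and the fixation point, i.e. (assumed reconstruction):

```latex
% Assumed reconstruction of formula (4):
e(x) = \arctan\!\left(\frac{d(x)}{N_v}\right) \tag{4}
```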
Finally, the spatial frequency is calculated according to formula (5), and Gaussian filtering is used to remove the high-frequency signals from the image information.
where fc is the spatial frequency, σ is the kernel radius of the Gaussian filter (the configurable filter coefficient), e is the retinal eccentricity, and d(x) is the distance from the pixel to the fixation point.
Fig. 7 reflects the relationship between the pixel position and the marginal spatial frequency calculated in this embodiment; the abscissa is the pixel position in the plane and the ordinate is the marginal spatial frequency. This example includes four image layers; other examples may include more image layers to further reduce the amount of transmitted image data. The number of image layers depends on the speed of the computing device: the faster the computing device, the more layers can be supported and the more the amount of transmitted image data can be reduced.
Embodiment four:
This embodiment details the specific step S110 of Embodiment One, obtaining the marginal spatial frequency corresponding to each of the n display sub-regions, which includes:
setting the marginal spatial frequency of a display sub-region to the maximum of the marginal spatial frequencies corresponding to all physical pixel positions in that display sub-region, or to some fixed value close to the maximum. The latter means using slightly lower image quality in exchange for greater data compression and a smaller data transmission bandwidth.
Embodiment five:
This embodiment details the specific step S120 of Embodiment One, creating and rendering the video image data of the n corresponding image layers, which includes:
First, according to the physical positions of the n display sub-regions on the near-eye display screen, the video image data of the positions corresponding to the n display sub-regions are obtained from the input video image;
Then, according to the marginal spatial frequency of each display sub-region, down-sampling filters of different ratios are applied to the video image data of the positions corresponding to the n display sub-regions, generating the video image data of the n image layers; the image spatial frequency of each image layer after the down-sampling filtering is equal or close to the marginal spatial frequency of the corresponding display sub-region, so that the down-sampling filtering compresses the data according to the marginal spatial frequency acceptable to the human eye. It should be noted that the image spatial frequency after down-sampling filtering can also be made slightly lower than the marginal spatial frequency of the sub-region, achieving a higher compression ratio at the cost of a slight loss in image quality.
Meanwhile, the down-sampling coefficients of the video image data of the n image layers are obtained.
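A minimal sketch of how such a down-sampling coefficient might be derived from the marginal spatial frequency, assuming the full-resolution Nyquist frequency of the display is known and a power-of-two decimation is used (the names and numbers are illustrative):

```python
import numpy as np

def downsample_coefficient(f_margin, f_nyquist_full):
    """Choose the largest power-of-two decimation factor whose resulting
    Nyquist frequency still stays at or above the sub-region's marginal
    spatial frequency, so no perceivable detail is lost."""
    # After decimation by k, the representable frequency is f_nyquist_full / k.
    k = int(2 ** np.floor(np.log2(max(f_nyquist_full / f_margin, 1.0))))
    return max(k, 1)

# Example: full-resolution Nyquist of 40 cycles/degree, a peripheral
# sub-region whose marginal frequency is 5 cycles/degree -> decimate by 8.
print(downsample_coefficient(5.0, 40.0))
```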
Further, the filtering and down-sampling include down-sampling filtering of the image spatial resolution and down-sampling of the image gray-level resolution: the former includes averaging the gray values of several pixels, and the latter includes adding the low-order gray bits of a pixel onto neighboring pixels to reduce the pixel bit depth. In particular, the specific steps of creating and rendering the video image data of the n corresponding image layers further comprise the process of adding the low-order bits of a pixel onto the surrounding pixels, thereby reducing the pixel data bit depth.
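Adding a pixel's discarded low-order bits onto its neighbors is essentially error-diffusion dithering; the following one-dimensional sketch illustrates the idea (the specific diffusion scheme is an assumption, the embodiments do not prescribe one):

```python
import numpy as np

def reduce_bit_depth_with_diffusion(row, drop_bits):
    """Quantize an 8-bit scan line to (8 - drop_bits) bits, pushing each
    pixel's quantization error (its low-order bits) onto the next pixel."""
    step = 1 << drop_bits
    out = np.zeros_like(row, dtype=np.int32)
    carry = 0
    for i, value in enumerate(row.astype(np.int32)):
        value += carry
        quant = (value // step) * step          # keep only the high bits
        carry = value - quant                   # low bits diffused forward
        out[i] = min(max(quant, 0), 255)
    return (out >> drop_bits).astype(np.uint8)  # stored at reduced depth

# Example: reduce an 8-bit gradient line to 4 bits per pixel.
line = np.arange(0, 256, 4, dtype=np.uint8)
print(reduce_bit_depth_with_diffusion(line, drop_bits=4))
```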
Further, the down-sampling filtering can be carried out as a foveated computation, which includes filtering the human-eye gaze sub-region with a smaller down-sampling coefficient to retain more image information, and filtering the sub-regions far from the gaze sub-region with a larger down-sampling coefficient to compress away more image information. Further, the foveated computation can use the geometric mapping method, the filtering method, or the hierarchical method, and includes mathematical operations such as wavelet transform, Gaussian filtering, convolution, quantization, texture pyramids (including mipmap pyramids), data compression and/or gray-level dithering. In particular, the foveated computation method may be the method of the third embodiment.
Embodiment six:
This embodiment details the specific step S130 of Embodiment One, transmitting the video image data of the n image layers to the near-eye display, comprising:
transmitting the video image data of the n image layers to the near-eye display by wired or wireless communication, over different channels or over the same channel at different times, where a channel is a physical channel or a logical channel.
Embodiment seven:
This embodiment details the specific step S140 of Embodiment One, reconstructing and stitching the video image data of the n image layers to generate an image that matches the human-eye gaze effect, comprising:
reconstructing the video image data of the n image layers, so that the image resolution and gray values are restored to the resolution and gray values corresponding to the near-eye display screen;
reserving overlap regions between adjacent display sub-regions and performing multi-resolution image stitching in them, where the step of reserving overlap regions includes judging which image data fall in an overlap region, and fusing the images of the overlap region with different weights to form a complete picture.
Further, the processes of reconstruction and stitching include image interpolation, image resampling, image enhancement, bilateral filtering, and pixel bit-depth expansion calculations. In particular, the image interpolation may use nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, spline interpolation, edge-based image interpolation, or region-based image interpolation.
In particular, in order to reduce boundary-line artifacts when stitching the texture image layers, an intermediate transition band is added: a certain overlap region is reserved between adjacent texture image layers during stitching, and the pixels in the overlap region are fused with different weights to form an image fusion transition band, which can be described by formula (6):
I_F(i, j) = γ(i, j) I_A(i, j) + (1 - γ(i, j)) I_B(i, j)    (6)
where I_F is a pixel in the image fusion transition band, I_A is a pixel of the current layer, and I_B is a pixel of the next image layer; by controlling the value of γ, pixels of different image layers can be fused with each other, and the value of γ ranges from 0 to 1.
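A minimal sketch of the transition-band fusion of formula (6), assuming a linear weight ramp for γ across a reserved overlap band between two horizontally adjacent layers (the ramp shape and overlap width are illustrative choices):

```python
import numpy as np

def blend_overlap(layer_a, layer_b, overlap):
    """Fuse the last `overlap` columns of layer_a with the first `overlap`
    columns of layer_b using I_F = gamma*I_A + (1-gamma)*I_B, with gamma
    ramping linearly from 1 to 0 across the transition band."""
    gamma = np.linspace(1.0, 0.0, overlap)[None, :]        # per-column weight
    fused = gamma * layer_a[:, -overlap:] + (1 - gamma) * layer_b[:, :overlap]
    return np.hstack([layer_a[:, :-overlap], fused, layer_b[:, overlap:]])

# Example: stitch two 64x64 tiles that share an 8-pixel-wide overlap band.
a = np.full((64, 64), 200.0)
b = np.full((64, 64), 50.0)
print(blend_overlap(a, b, overlap=8).shape)   # (64, 120)
```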
Embodiment eight:
This embodiment adds, on the basis of Embodiments One to Seven, a method of obtaining the human-eye fixation point in real time by eye tracking. When the position of the eyeball changes, the display sub-regions of the near-eye display screen are immediately repartitioned according to the new fixation point, the center point of the human-eye gaze sub-region is recalculated, and the displayed image is recomputed. The delay from obtaining the human-eye fixation point to displaying the image on the near-eye display is imperceptible to the user, so as to avoid dizziness. In particular, considering that the center point of the human-eye gaze sub-region has a certain error, the division of the display sub-regions and the setting of the marginal spatial frequencies should reserve a sufficient margin to ensure that the error is not easily perceived by the human eye.
Further, the eye tracking includes tracking according to feature changes of the eyeball and its periphery, tracking according to changes of the iris angle, or projecting an infrared beam onto the iris and extracting features for tracking.
Embodiment nine:
The near-eye display screen may display two independent images, one for the left eye and one for the right eye of the user; the independent images can each be divided into several display sub-regions including a human-eye gaze sub-region. In particular, the two independent images can each reflect the tracked human-eye fixation point and the corresponding display sub-regions.
In another example, the near-eye display screen comprises two independent screens for the left eye and the right eye of the user respectively; the independent screens can each be divided into several display sub-regions including a human-eye gaze sub-region. In particular, the two independent screens can each reflect the tracked human-eye fixation point and the corresponding display sub-regions.

Claims (16)

1. A near-eye display method based on human visual characteristics, characterized in that the method comprises:
dividing, according to the human eye fixation point, a near-eye display screen into n display sub-regions, including a centrally located human-eye gaze sub-region;
obtaining the marginal spatial frequency corresponding to each of the n display sub-regions;
creating and rendering, according to the marginal spatial frequencies of the n display sub-regions, the video image data of n corresponding image layers from an input video image, one for each of the n display sub-regions;
transmitting the video image data of the n image layers to the near-eye display;
reconstructing and stitching the video image data of the n image layers to generate an image that matches the gaze effect of the human eye, and displaying it on the near-eye display screen.
2. The near-eye display method according to claim 1, characterized in that the human-eye gaze effect at least includes:
a display effect that uses a relatively high amount of image information in the human-eye gaze sub-region,
a display effect that uses a relatively low amount of image information in the edge sub-regions,
and a display effect that uses an amount of image information between the highest and the lowest in the intermediate sub-regions lying between the human-eye gaze sub-region and the edge sub-regions;
and the amount of image information is described by the pixel spatial resolution and the pixel gray-value bit depth of the image.
3. The near-eye display method according to claim 1, characterized in that the n display sub-regions are divided, either in quantized steps or continuously, according to the retinal eccentricity from the human eye to the near-eye display screen, and include ring-shaped sub-regions expanding from the human-eye gaze sub-region toward the edge and/or corner sub-regions without display content.
4. The near-eye display method according to claim 1, characterized in that the resolution and detail formed by the n display sub-regions conform to a foveated image of the human visual system, and the marginal spatial frequency corresponding to each display sub-region decreases as the retinal eccentricity increases.
5. The near-eye display method according to claim 4, characterized in that the foveated image is obtained by a geometric mapping method, a filtering method, or a hierarchical method, and corresponds to n image layers; the n image layers can be described by an image pyramid, and after the n image layers are combined and stitched on the mapping plane of the image pyramid they form the foveated image presented on the near-eye display screen.
6. The near-eye display method according to claim 5, characterized in that the image pyramid is one of a Gaussian pyramid, a Laplacian pyramid, a difference pyramid, or a mipmap pyramid.
7. The near-eye display method according to claim 1, characterized in that the marginal spatial frequency is obtained from an empirical formula or from a human-eye model formula; the parameters of the empirical formula include the retinal eccentricity, the half-resolution eccentricity constant, the human-eye contrast sensitivity threshold, and the spatial frequency attenuation coefficient, and the parameters of the human-eye model formula include the retinal eccentricity, the distance from the pixel to the fixation point, and a configurable filter coefficient.
8. The near-eye display method according to claim 1, characterized in that the specific steps of obtaining the marginal spatial frequency corresponding to each of the n display sub-regions include:
setting the marginal spatial frequency of each of the n display sub-regions to the maximum of the marginal spatial frequencies corresponding to all physical pixel positions in that display sub-region, or to a fixed value close to that maximum.
9. The near-eye display method according to claim 1, characterized in that the specific steps of creating and rendering the video image data of the n corresponding image layers include:
obtaining, according to the physical positions of the n display sub-regions on the near-eye display screen, the video image data of the positions corresponding to the n display sub-regions from the input video image;
applying down-sampling filters of different ratios to the video image data of the positions corresponding to the n display sub-regions, respectively, to generate the video image data of the n image layers, where the image spatial frequency of each image layer after the down-sampling filtering is equal or close to the marginal spatial frequency of the corresponding display sub-region;
obtaining the down-sampling coefficients of the video image data of the n image layers.
10. The near-eye display method according to claim 1, characterized in that the specific steps of creating and rendering the video image data of the n corresponding image layers further include the step of adding the low-order bits of a pixel onto the surrounding pixels, thereby reducing the pixel data bit depth.
11. The near-eye display method according to claim 1, characterized in that the video image data of the n image layers are transmitted to the near-eye display by wired or wireless communication, over different channels or over the same channel at different times, where a channel is a physical channel or a logical channel.
12. The near-eye display method according to claim 1, characterized in that the specific steps of reconstructing and stitching the video image data of the n image layers include:
reconstructing the video image data of the n image layers, so that the image resolution and gray values are restored to the resolution and gray values corresponding to the near-eye display screen;
reserving overlap regions between adjacent display sub-regions and performing multi-resolution image stitching in them, where the step of reserving overlap regions includes judging which image data fall in an overlap region, and fusing the images of the overlap region with different weights to form a complete picture.
13. The near-eye display method according to claim 12, characterized in that the steps of reconstruction and stitching include image interpolation, image resampling, image enhancement, bilateral filtering, and pixel bit-depth expansion calculations.
14. The near-eye display method according to claim 1, characterized in that the center point of the human-eye gaze sub-region is obtained in real time by eye tracking, and the delay from obtaining the center point to displaying the image on the near-eye display is imperceptible to the user.
15. The near-eye display method according to claim 14, characterized in that the eye tracking includes tracking according to feature changes of the eyeball and its periphery, tracking according to changes of the iris angle, or projecting an infrared beam onto the iris and extracting features for tracking.
16. The near-eye display method according to claim 1, characterized in that the near-eye display screen either displays two independent images, one for the left eye and one for the right eye of a user, or comprises two independent screens for the left eye and the right eye respectively; both the independent images and the independent screens can each be divided into several display sub-regions including a human-eye gaze sub-region.
CN201910138436.XA 2019-02-25 2019-02-25 A near-eye display method based on human visual characteristics Pending CN109886876A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910138436.XA CN109886876A (en) 2019-02-25 2019-02-25 A near-eye display method based on human visual characteristics
PCT/CN2020/076512 WO2020173414A1 (en) 2019-02-25 2020-02-25 Human vision characteristic-based near-eye display method and device
US17/060,840 US11438564B2 (en) 2019-02-25 2020-10-01 Apparatus and method for near-eye display based on human visual characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910138436.XA CN109886876A (en) 2019-02-25 2019-02-25 A near-eye display method based on human visual characteristics

Publications (1)

Publication Number Publication Date
CN109886876A true CN109886876A (en) 2019-06-14

Family

ID=66929343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910138436.XA Pending CN109886876A (en) 2019-02-25 2019-02-25 A near-eye display method based on human visual characteristics

Country Status (1)

Country Link
CN (1) CN109886876A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160267716A1 (en) * 2015-03-11 2016-09-15 Oculus Vr, Llc Eye tracking for display resolution adjustment in a virtual reality system
CN106856010A (en) * 2015-12-09 2017-06-16 想象技术有限公司 Retina female is rendered
CN106412563A (en) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Image display method and apparatus
CN107516335A (en) * 2017-08-14 2017-12-26 歌尔股份有限公司 The method for rendering graph and device of virtual reality
CN108665521A (en) * 2018-05-16 2018-10-16 京东方科技集团股份有限公司 Image rendering method, device, system, computer readable storage medium and equipment
CN109242943A (en) * 2018-08-21 2019-01-18 腾讯科技(深圳)有限公司 A kind of image rendering method, device and image processing equipment, storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIN ZHANG et al.: "Real-time foveation filtering using nonlinear Mipmap interpolation", Visual Computer *
ZHANG XIN: "Research on expressive rendering algorithms based on human visual physiology and psychology", Wanfang Dissertation Database *
YANG ZHONGLEI: "Simulation of gaze effects based on eye physiology", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173414A1 (en) * 2019-02-25 2020-09-03 昀光微电子(上海)有限公司 Human vision characteristic-based near-eye display method and device
US11438564B2 (en) 2019-02-25 2022-09-06 Lumicore Microelectronics Shanghai Co. Ltd. Apparatus and method for near-eye display based on human visual characteristics
CN112070657A (en) * 2020-08-14 2020-12-11 昀光微电子(上海)有限公司 Image processing method, device, system, equipment and computer storage medium
CN112070657B (en) * 2020-08-14 2024-02-27 昀光微电子(上海)有限公司 Image processing method, device, system, equipment and computer storage medium
CN112272294A (en) * 2020-09-21 2021-01-26 苏州唐古光电科技有限公司 Display image compression method, device, equipment and computer storage medium
CN112468820A (en) * 2020-11-26 2021-03-09 京东方科技集团股份有限公司 Image display method and image display system
CN112468820B (en) * 2020-11-26 2023-08-15 京东方科技集团股份有限公司 Image display method and image display system
CN112991169A (en) * 2021-02-08 2021-06-18 辽宁工业大学 Image compression method and system based on image pyramid and generation countermeasure network
CN113362450A (en) * 2021-06-02 2021-09-07 聚好看科技股份有限公司 Three-dimensional reconstruction method, device and system
CN113362450B (en) * 2021-06-02 2023-01-03 聚好看科技股份有限公司 Three-dimensional reconstruction method, device and system

Similar Documents

Publication Publication Date Title
CN109886876A (en) A kind of nearly eye display methods based on visual characteristics of human eyes
US20210325684A1 (en) Eyewear devices with focus tunable lenses
JP7415931B2 (en) Image display control using real-time compression within the image peripheral area
Weier et al. Perception‐driven accelerated rendering
Banks et al. 3D Displays
US11900578B2 (en) Gaze direction-based adaptive pre-filtering of video data
CN107516335A (en) The method for rendering graph and device of virtual reality
Jones et al. Degraded reality: using VR/AR to simulate visual impairments
US11438564B2 (en) Apparatus and method for near-eye display based on human visual characteristics
CN108076384B (en) image processing method, device, equipment and medium based on virtual reality
CN109933268A (en) A kind of nearly eye display device based on visual characteristics of human eyes
CN110378914A (en) Rendering method and device, system, display equipment based on blinkpunkt information
Mohanto et al. An integrative view of foveated rendering
CN106484116A (en) The treating method and apparatus of media file
CN105763865A (en) Naked eye 3D augmented reality method and device based on transparent liquid crystals
CN112040222B (en) Visual saliency prediction method and equipment
CN111757090A (en) Real-time VR image filtering method, system and storage medium based on fixation point information
CN106780313A (en) Image processing method and device
Bektaş et al. GeoGCD: Improved visual search via gaze-contingent display
CN112272294B (en) Display image compression method, device, equipment and computer storage medium
CN114418857A (en) Image display method and device, head-mounted display equipment and storage medium
Çöltekin Foveation for 3D visualization and stereo imaging
CN103824250A (en) GPU-based image tone mapping method
Kim et al. Selective foveated ray tracing for head-mounted displays
Hsieh et al. Learning to perceive: Perceptual resolution enhancement for VR display with efficient neural network processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination