CN112835453B - Method, apparatus and storage medium for simulating interface effect when focusing human eyes - Google Patents

Method, apparatus and storage medium for simulating interface effect when focusing human eyes

Info

Publication number
CN112835453B
Authority
CN
China
Prior art keywords
layer
user interface
graphical user
blurring
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110239627.2A
Other languages
Chinese (zh)
Other versions
CN112835453A (en)
Inventor
张鑫磊
曲梦瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110239627.2A priority Critical patent/CN112835453B/en
Publication of CN112835453A publication Critical patent/CN112835453A/en
Application granted granted Critical
Publication of CN112835453B publication Critical patent/CN112835453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Abstract

The application relates to the field of computer graphics and provides a method, a device and a computer-readable storage medium for simulating the interface effect produced when the human eye focuses, in which the focus position of the human eye is determined with only a small amount of computing resources and the interface effect at the time of focusing is then simulated. The method comprises the following steps: determining a focus point on the graphical user interface based on the height and width of the display screen of the device, or through an angular motion detection device of the device; setting the ambiguity of the layer where the focus point is located to 0; when the focus point moves, determining an offset value of any one layer according to the maximum range of the focus point determined based on the height and width of the display screen of the device, or according to the maximum deflection radian of the device; and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between that layer and the layer where the focus point is located. The technical scheme of the application requires an extremely small amount of calculation and can closely simulate the effect seen on a graphical user interface when the human eye focuses in a real scene.

Description

Method, apparatus and storage medium for simulating interface effect when focusing human eyes
Technical Field
The present invention relates to the field of computer graphics, and in particular, to a method, apparatus, and storage medium for simulating an interface effect when focusing a human eye.
Background
In the field of computers, interface interaction is a main way for users to exchange information with computer software, and enriching the interaction experience of interfaces becomes an important means for improving the user experience. In general, a scene seen by a human eye varies with a focus position of the human eye, and thus, when an interface effect simulating the focus of the human eye is required, the focus position of the human eye is an important reference.
In existing methods for simulating the interface effect during focusing of human eyes, a camera is used to capture images of the eyes, which are transmitted to a processor and analyzed with artificial intelligence (AI) technology to obtain the focus position of the eyes in real time; the interface effect at the time of focusing is then simulated according to that focus position.
Although the above-mentioned existing method for simulating the interface effect when the human eye focuses can obtain the focus position of the eyes, it does so at the cost of a huge amount of computing resources, and is therefore not an optimal solution on smart mobile terminals or other lightweight devices where computing resources are limited and precious.
Disclosure of Invention
The application provides a method, a device and a storage medium for simulating the interface effect when the human eye focuses, in which the focus position of the human eye is determined with only a small amount of computing resources and the interface effect at the time of focusing is then simulated.
In one aspect, the present application provides a method for simulating an interface effect when focusing a human eye, including:
determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
setting the ambiguity of the layer where the focusing point is located to be 0;
when the focusing point moves, determining an offset value of any one layer according to the maximum range of the focusing point determined based on the height and the width of the display screen of the device, or according to the maximum deflection radian of the device, wherein the offset proportion of any one layer is related to the depth of that layer;
and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located.
Optionally, when the focusing point moves, determining the offset value of any layer according to the maximum deflection radian of the device includes: obtaining the offset ratio F of any one layer on the graphical user interface from a configuration file; multiplying the maximum offset value C of any one layer on the graphical user interface by the offset proportion F of any one layer on the graphical user interface, wherein the result of multiplication is determined as the offset value of any one layer when the focusing point moves, and the maximum offset value C of any one layer on the graphical user interface is determined by the maximum deflection radian of the device.
Optionally, before multiplying the maximum offset value C of any one layer on the graphical user interface by the offset ratio F of any one layer on the graphical user interface, the method further includes: increasing a design resolution of the graphical user interface; limiting the maximum offset value C of any layer on the graphical user interface within the range of the added value of the design resolution of the graphical user interface.
Optionally, when the focusing point moves, determining the offset value of any layer according to the maximum deflection radian of the device includes: acquiring the deflection radian d of the device from the angular motion detection device in real time; and calculating, according to the linear function c = f(d) = d·C/D, the offset value c of any one layer when the focusing point moves, wherein C is the maximum offset value of any one layer on the graphical user interface and D is the maximum deflection radian of the device.
Optionally, generating, by a blurring algorithm, a blurred image of any one layer on the graphical user interface according to a depth difference between that layer and the layer where the focusing point is located includes: acquiring a clear image of that layer; determining the ambiguity of that layer according to the depth difference d between its depth and that of the layer where the focusing point is located, wherein the ambiguity is related to n, n = [d/s], s is the maximum layer depth on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] denotes rounding the enclosed result; and carrying out Gaussian blur processing on the clear image according to the ambiguity of that layer, generating a blurred image of that layer on the graphical user interface.
Optionally, generating, by a blurring algorithm, a blurred image of any one layer on the graphical user interface according to a depth difference between that layer and the layer where the focusing point is located includes: determining a blurring radius for Gaussian blurring according to the depth difference between that layer and the layer where the focusing point is located; in the horizontal direction of the layer, on the basis of the blurring radius and the abscissa of each pixel point, comparing, for any one pixel point Pi, the color value of the pixel point Pi with the horizontal blurring threshold value of the pixel point Pi, and if the color value of the pixel point Pi is within the horizontal blurring threshold range, reserving the color value of the pixel point Pi, otherwise taking the horizontal blurring threshold value as the color value of the pixel point Pi; in the longitudinal direction of the layer, on the basis of the blurring radius and the ordinate of each pixel point, comparing, for any one pixel point Pj, the color value of the pixel point Pj with the longitudinal blurring threshold value of the layer, and if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, reserving the color value of the pixel point Pj, otherwise taking the longitudinal blurring threshold value of the layer as the color value of the pixel point Pj; and merging the layer subjected to blurring processing in the transverse direction and the layer subjected to blurring processing in the longitudinal direction to obtain a blurred image of that layer on the graphical user interface.
In another aspect, the present application provides a method for simulating an interface effect when focusing a human eye, the method comprising:
determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
setting the ambiguity of the layer where the focusing point is located to be 0;
when the focusing point moves, any one layer on the graphical user interface is moved from the current position to the target position;
and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located.
Optionally, while moving any layer on the graphical user interface from the current position to the target position, the method further includes: interpolation is carried out on the low-resolution area brightness map area of any one layer on the graphical user interface by adopting an interpolation model through a nearest neighbor interpolation method, so that a high-resolution area brightness map is obtained; calculating a loss function of the interpolation model, and summing the smoothness of the high-resolution area brightness map; projecting the falling direction of the loss function to a feasible direction and determining a falling step length; and correcting the pixel brightness value of the high-resolution area brightness map to reduce the value of the loss function.
Optionally, generating, by a blurring algorithm, a blurred image of any one layer on the graphical user interface according to a depth difference between any one layer on the graphical user interface and a layer where the focusing point is located, including: acquiring a clear image of any one layer on the graphical user interface; determining the ambiguity of any one layer on the graphical user interface according to the depth difference d between the depth of any one layer on the graphical user interface and the depth of the layer where the focusing point is located, wherein the ambiguity is related to n, n=d/s, and s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity; and carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface, and generating a blurred image of any one layer on the graphical user interface.
Optionally, generating, by a blurring algorithm, a blurred image of any one layer on the graphical user interface according to a depth difference between that layer and the layer where the focusing point is located includes: determining a blurring radius for Gaussian blurring according to the depth difference between that layer and the layer where the focusing point is located; in the horizontal direction of the layer, on the basis of the blurring radius and the abscissa of each pixel point, comparing, for any one pixel point Pi, the color value of the pixel point Pi with the horizontal blurring threshold value of the pixel point Pi, and if the color value of the pixel point Pi is within the horizontal blurring threshold range, reserving the color value of the pixel point Pi, otherwise taking the horizontal blurring threshold value as the color value of the pixel point Pi; in the longitudinal direction of the layer, on the basis of the blurring radius and the ordinate of each pixel point, comparing, for any one pixel point Pj, the color value of the pixel point Pj with the longitudinal blurring threshold value of the layer, and if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, reserving the color value of the pixel point Pj, otherwise taking the longitudinal blurring threshold value of the layer as the color value of the pixel point Pj;
And merging the layer subjected to the blurring processing in the transverse direction and the layer subjected to the blurring processing in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
In a third aspect, the present application provides an apparatus for simulating an interface effect when focusing a human eye, comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and a width of a display screen of the device or by angular movement detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focusing point is located to be 0;
the offset value determining module is used for determining, when the focusing point moves, an offset value of any one layer according to the maximum range of the focusing point determined based on the height and the width of the display screen of the device, or according to the maximum deflection radian of the device, wherein the offset proportion of any one layer is related to the depth of that layer;
and the generation module is used for generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between the any one layer on the graphical user interface and the layer where the focusing point is located.
In a fourth aspect, the present application provides an apparatus for simulating an interface effect when focusing a human eye, comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and a width of a display screen of the device or by angular movement detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focusing point is located to be 0;
the interpolation module is used for moving any one layer on the graphical user interface from the current position to the target position when the focusing point moves;
and the generation module is used for generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between the any one layer on the graphical user interface and the layer where the focusing point is located.
In a fifth aspect, the present application provides a computer device comprising a memory having a computer program stored therein and a processor executing the steps of the method of simulating the effect of an interface when a human eye is focused as described in any of the embodiments above by invoking the computer program stored in the memory.
In a sixth aspect, the present application provides a computer readable storage medium storing a computer program adapted to be loaded by a processor for performing the steps of the method of simulating an effect of an interface when a human eye is focused as described in any of the embodiments above.
According to the technical scheme provided by the application, on the one hand, the focusing point on the graphical user interface is determined based on the height and the width of the display screen of the device, or through the angular motion detection device of the device, so that no complex calculation is needed, and the amount of calculation required by the technical scheme is extremely small compared with the huge resource cost of the prior art; on the other hand, setting the ambiguity of the layer where the focusing point is located to 0 is equivalent to guiding the human eye to the clearest layer area, so that the subsequent layer offset and layer blurring based on the movement of the focusing point can better simulate the effect of a graphical user interface when the human eye focuses in a real scene.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for simulating an interface effect when focusing a human eye according to an embodiment of the present application;
FIG. 2 is a schematic representation of the depth of different layers on a graphical user interface provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the graphical user interface illustrated in FIG. 2 after a layer shift provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of the graphical user interface of FIG. 2 after a layer shift according to an embodiment of the present application;
FIG. 5 is a flow chart of a method for simulating an interface effect when focusing on a human eye according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for simulating an interface effect when focusing a human eye according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for simulating an interface effect when focusing a human eye according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of an apparatus for simulating an interface effect when focusing a human eye according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In this specification, adjectives such as first and second may be used solely to distinguish one element or action from another element or action without necessarily requiring or implying any actual such relationship or order. Where the environment permits, reference to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but may be one or more of the element, component, or step, etc.
In the present specification, for convenience of description, the dimensions of the various parts shown in the drawings are not drawn in actual scale.
The application provides a method for simulating an interface effect when focusing human eyes, which is shown in fig. 1, and mainly comprises steps S101 to S104, and is described in detail as follows:
step S101: the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by angular movement detection means of the device.
Unlike the prior art, which analyzes a human eye image captured by a camera through AI technology and therefore requires great computing power to run an AI neural network, in the embodiment of the present application the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or through the angular motion detection device of the device. Determining the focus point based on the height and width of the display screen is intended for traditional terminals such as personal computers, while determining the focus point through the angular motion detection device is intended for mobile intelligent terminals such as smart phones and tablet computers. Before describing how the focus point is determined for these two types of devices, two basic facts are first clarified: firstly, the graphical user interface is refreshed at a certain frequency, for example 60 Hz, and each refresh corresponds to the update of one frame; secondly, to facilitate subsequent calculation, the range of values of the focus point is normalized to [-1, 1], i.e. the abscissa of the focus point satisfies x ∈ [-1, 1] and the ordinate satisfies y ∈ [-1, 1].
A method for determining the focus point on the graphical user interface based on the height and width of the display screen of the device is as follows: first, the geometric center of the graphical user interface of the device is defined as the origin of the focus point, with coordinates (0, 0), and the upper, lower, left and right boundaries of the graphical user interface are defined as ±1; the coordinates of the focus point on any frame of the graphical user interface are denoted (x, y), and the position (a, b) of the mouse on the graphical user interface and the interface width W and height H are obtained through the operating system interface; further assuming that the origin of the mouse coordinates is at the lower left corner of the graphical user interface (on different systems the mouse origin may differ), let w = W/2 and h = H/2; the abscissa of the focus point is then x = (a − w)/w and the ordinate is y = (b − h)/h.
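By way of illustration, the mouse-based calculation above can be sketched as follows; the function name and example values are illustrative and not part of the patent text.

```python
# Minimal sketch of the mouse-based focus-point normalization described above.

def focus_point_from_mouse(a: float, b: float, W: float, H: float) -> tuple:
    """Map a mouse position (a, b), with origin at the lower-left corner of a
    W x H interface, to normalized focus-point coordinates in [-1, 1]."""
    w, h = W / 2.0, H / 2.0          # half-width and half-height; screen centre = origin
    x = (a - w) / w                  # abscissa of the focus point
    y = (b - h) / h                  # ordinate of the focus point
    return x, y

# Example: on a 1920 x 1080 screen the centre maps to (0, 0) and the
# top-right corner maps to (1, 1).
print(focus_point_from_mouse(960, 540, 1920, 1080))    # -> (0.0, 0.0)
print(focus_point_from_mouse(1920, 1080, 1920, 1080))  # -> (1.0, 1.0)
```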
As for determining the focus point on the graphical user interface through the angular motion detection device of the device, it is first explained that in the embodiment of the present application the angular motion detection device may be a gyroscope or the like. Secondly, since the gyroscope only reflects the rotation of the device, it is agreed that the deflection state of the gyroscope at the moment the graphical user interface is entered represents the focus point being at the origin, i.e. the coordinates of the focus point at that moment are (0, 0); it is further agreed that when the device is tilted left, right, forward or backward by an arc of m = π/4, the abscissa or ordinate of the focus point is defined as ±1. Based on these two conventions, determining the focus point on the graphical user interface through the angular motion detection device of the device may specifically be: when the device deflects, the angular velocity of the gyroscope's deflection is obtained through the device interface and multiplied directly by the time increment to obtain the deflection radian (a, b, c), where a represents left-right tilt, b represents front-back tilt and c represents in-out tilt; the abscissa x = x' + a/m and the ordinate y = y' + b/m of the focus point can then be obtained, where (x', y') are the coordinates of the focus point before the deflection.
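A minimal sketch of the gyroscope-based update described above is given below, assuming an angular-velocity source provided by the device; the names and the clamping step are illustrative additions.

```python
import math

M = math.pi / 4  # tilt arc agreed to correspond to a focus coordinate of +/-1

def update_focus_point(prev_x, prev_y, angular_velocity, dt):
    """angular_velocity is (wa, wb, wc): left-right, front-back, in-out rates (rad/s)."""
    a = angular_velocity[0] * dt     # left-right deflection arc in this frame
    b = angular_velocity[1] * dt     # front-back deflection arc in this frame
    x = prev_x + a / M
    y = prev_y + b / M
    # keep the focus point inside the normalized range [-1, 1]
    return max(-1.0, min(1.0, x)), max(-1.0, min(1.0, y))
```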
Step S102: the ambiguity of the layer in which the focus point is located is set to 0.
The ambiguity of a layer on the graphical user interface ranges from 0 to a certain maximum value; an ambiguity of 0 is the minimum, meaning that the layer is the clearest. In the embodiment of the present application, the ambiguity of the layer where the focus point is located is set to 0 because, in a real scene, when a certain layer or region of the image on the graphical user interface is the clearest, the human eye will involuntarily move to that layer or region. Therefore, setting the ambiguity of the layer where the focus point determined in step S101 is located to 0 is equivalent to guiding the human eye to the clearest layer area, so that the subsequent layer offset and layer blurring based on the movement of the focus point can better simulate the effect of the graphical user interface when the human eye focuses in a real scene.
Step S103: when the focus point moves, determining an offset value of any one layer according to the maximum range of the focus point determined based on the height and width of the display screen of the device, or according to the maximum deflection radian of the device, wherein the offset proportion of any one layer is related to the depth of that layer.
In the embodiment of the present application, determining the offset value of any one layer according to the maximum deflection radian of the device when the focus point moves means that layers of different depths are offset to different degrees as the focus point moves, so that the sense of depth experienced by the human eye during focusing can be simulated through the visual effect of a two-dimensional plane, and the user can experience a sense of space in a two-dimensional graphical user interface. Before explaining the technical solution of step S103, the depth of a layer is briefly described. The depth of a layer means the distance between the layer and the screen (since the background and the screen are usually in the same plane, the depth of a layer can also be understood as the distance between the layer and the background); another definition of layer depth is the distance between the layer and the observer. Under either definition, the visual perception of the layer on the two-dimensional plane is the same for the observer or user: the farther away a layer is perceived to be, the greater its depth; conversely, the closer a layer is perceived to be, the smaller its depth.
As an embodiment of the present application, when the focal point moves, determining the offset value of any layer according to the maximum deflection radian of the device may be implemented through step S1031 and step S1032, which are described as follows:
Step S1031: and obtaining the offset ratio F of any one layer on the graphical user interface from the configuration file.
In this embodiment of the present application, the offset ratio F of a layer indicates the degree to which the layer is offset when the focus point moves. The offset ratio F of any layer ILi is related to the depth of the layer ILi: the smaller the depth of the layer ILi, the larger its offset ratio F; conversely, the larger the depth of the layer ILi, the smaller its offset ratio F. The offset ratio F of a layer has a maximum value of 1 and a minimum value of −1. For example, for three layers IL1, IL2 and IL3 with depths of 0, 5 and 10 respectively, the offset ratios may be set as F1 = (10 − 0)/10 = 1 for layer IL1, F2 = (10 − 5)/10 = 1/2 for layer IL2, and F3 = (10 − 10)/10 = 0 for layer IL3. In this embodiment of the present application, the offset ratio may be set in advance for any layer on the graphical user interface and saved in a configuration file; when needed, the offset ratio F of a layer can be read directly from the configuration file according to the identifier of the layer.
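The worked example above can be sketched as follows; the function name is illustrative and the depth-to-ratio rule is simply the linear relation used in the example.

```python
# Sketch of the example: the offset ratio of a layer decreases linearly as its depth increases.

def offset_ratio(depth: float, max_depth: float) -> float:
    return (max_depth - depth) / max_depth

depths = {"IL1": 0, "IL2": 5, "IL3": 10}
ratios = {name: offset_ratio(d, 10) for name, d in depths.items()}
print(ratios)  # {'IL1': 1.0, 'IL2': 0.5, 'IL3': 0.0}
```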
Step S1032: multiplying the maximum offset value C of any one layer on the graphical user interface by the offset proportion F of any one layer on the graphical user interface, wherein the maximum offset value C of any one layer on the graphical user interface is determined by the maximum deflection radian of the device, and the result of multiplication is determined as the offset value of any one layer when the focusing point moves.
As previously described, the offset ratio F of any layer ILi on the graphical user interface may be set directly to any value within [−1, 1], with a maximum of 1 meaning that the layer offset is at its maximum. In fact, the maximum offset value C of the layer ILi may also be determined by the maximum deflection radian of the device: when the deflection radian of the device reaches its maximum, the layer ILi also reaches the maximum offset value C, which corresponds to the offset ratio F of the layer reaching its maximum, i.e. F = 1. Based on this, in the embodiment of the present application, the maximum offset value C of any layer on the graphical user interface is multiplied by the offset ratio F of that layer, and the result of the multiplication is determined as the offset value of the layer when the focus point moves. It should be noted that for PC devices there is generally no maximum deflection radian; therefore, for a PC device, when the focus point is at the maximum range determined based on the height and width of the display screen, the offset value of the layer is also at its maximum, i.e. the maximum offset value C. Determining the offset value of any layer based on the maximum range of the focus point determined from the height and width of the display screen thus also amounts to multiplying the maximum offset value C of the layer by its offset ratio F and taking the result as the offset value of the layer when the focus point moves.
Since the offset ratio F takes different values for layers of different depths, the embodiment corresponding to steps S1031 and S1032 achieves the effect that layers of different depths are offset by different amounts when the focus point moves. As shown in fig. 2, take as an example two layers of different depths on a game interface, the layer labeled 'fun playing method' and the layer labeled 'ancient task': clearly, the depth of the 'fun playing method' layer is smaller and the depth of the 'ancient task' layer is larger. Fig. 3 shows an example of the offsets of the various layers of the interface of fig. 2 as the focus point moves. By comparison, the offset amplitude of the 'fun playing method' layer differs from that of the 'ancient task' layer; specifically, the offset amplitude of the 'fun playing method' layer is larger than that of the 'ancient task' layer.
The above embodiment also considers the case in which a layer is offset excessively when the focus point moves, so that in the transverse or longitudinal direction one end of the layer goes beyond the screen resolution range while the other end leaves a blank area with no pixels. To avoid this, in the above embodiment, the design resolution of the graphical user interface may be increased before multiplying the maximum offset value C of any layer by its offset ratio F, and the maximum offset value C of any layer may then be limited to the range of the increase in the design resolution. For example, if the design resolution of the graphical user interface is increased by 200 in the transverse direction (200 in each of the left and right directions) and by 113 in the longitudinal direction (113 in each of the up and down directions), limiting the maximum offset value C of any layer to the range of the increase means that C lies within [−200, 200] in the transverse direction and within [−113, 113] in the longitudinal direction. As shown in fig. 4, the left column shows the case where, after the maximum offset value C of a layer exceeds the device resolution, part of the layer protrudes outside the screen; the right column shows the case where, after the design resolution of the graphical user interface has been increased, the maximum offset value C of the layer is limited to the range of the increase in the design resolution.
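The combination of steps S1031/S1032 and the clamping just described can be sketched as follows; the margin values 200 and 113 come from the example above, and the function names are illustrative.

```python
MAX_OFFSET_X = 200   # lateral margin added to the design resolution (example value)
MAX_OFFSET_Y = 113   # longitudinal margin added to the design resolution (example value)

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def layer_offset(max_offset_c_x, max_offset_c_y, ratio_f):
    """Offset of one layer: the maximum offset value C, kept within the margin
    added to the design resolution, scaled by the layer's offset ratio F."""
    cx = clamp(max_offset_c_x, -MAX_OFFSET_X, MAX_OFFSET_X)
    cy = clamp(max_offset_c_y, -MAX_OFFSET_Y, MAX_OFFSET_Y)
    return cx * ratio_f, cy * ratio_f

print(layer_offset(200, 113, 0.5))   # -> (100.0, 56.5)
```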
For determining the offset value of any one layer on the graphical user interface, a linear relationship may also be used; that is, as another embodiment of the present application, when the focus point moves, determining the offset value of any layer according to the maximum deflection radian of the device may be implemented through step S'1031 and step S'1032, which are described as follows:
step S'1031: the deflection radian d of the device is obtained in real time from the angular motion detection means.
As previously mentioned, the angular motion detection means may be an integrated gyroscope on the device which can acquire the deflection arc d of the device in real time.
Step S'1032: and according to a linear function c=f (D) =dC/D, calculating an offset value C of any one layer on the graphical user interface when the focusing point moves, wherein C is the maximum offset value of any one layer on the graphical user interface, and D is the maximum deflection radian of the device.
According to the technical solutions of step S'1031 and step S'1032 in the foregoing embodiment, the effect can be achieved that, as the device gradually deflects from 0 to a certain maximum radian, any layer on the graphical user interface gradually moves from an offset of 0 to its maximum offset value.
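The linear mapping c = f(d) = d·C/D can be sketched as follows; the default maximum radian π/4 and the clamping are illustrative choices.

```python
import math

def offset_from_deflection(d: float, C: float, D: float = math.pi / 4) -> float:
    """Offset value c of a layer for deflection radian d, given the layer's
    maximum offset value C and the device's maximum deflection radian D."""
    d = max(-D, min(D, d))   # keep the deflection radian within [-D, D]
    return d * C / D

print(offset_from_deflection(0.0, 200))           # 0.0   (no deflection)
print(offset_from_deflection(math.pi / 8, 200))   # 100.0 (half the maximum radian)
```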
Step S104: and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located.
It should be noted that in the embodiment of the present application a "blurred image" should be understood in a broad sense: it is not necessarily an unclear image, but an image having some degree of blur, and that degree may be small or large. A large blur means the image is less clear; a small blur means the image is clearer; when the blur is at its minimum (for example, a blur of 0), the image is the sharpest.
As an embodiment of the present application, according to the depth difference between any one layer on the gui and the layer where the focus point is located, generating the blurred image of any one layer on the gui through the blurring algorithm may be implemented through steps S1041 to S1043, which are described as follows:
step S1041: a clear image of any one layer on the graphical user interface is obtained.
Specifically, obtaining a clear image of any one layer on the graphical user interface may be done as follows: acquiring an original clear image of the layer on the graphical user interface; for each pixel point in the original clear image, applying a color matrix to its first pixel value to obtain a corresponding second pixel value; and generating the clear image of the layer according to the second pixel value corresponding to each pixel point, wherein each pixel point has an alpha channel and at least one color channel.
Step S1042: and determining the ambiguity of any one layer on the graphical user interface according to the depth difference d between the any one layer on the graphical user interface and the layer where the focusing point is located, wherein the ambiguity is related to n, n= [ d/s ], s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] represents the result rounding within the pair [ ].
In the embodiment of the present application, the value n associated with the ambiguity actually indicates how far from the sampled pixel point the Gaussian blur processing reaches, i.e. n rings of pixel points around the sampled pixel point are taken, and the value of n determines the ambiguity of the resulting layer. On the other hand, since the time consumed by the calculation gradually increases as n increases, i.e. as the number of surrounding pixel points increases, the upper limit of n must be restricted. In the embodiment of the present application, s is a coefficient set by the user to limit the size of n; s may be set to the maximum layer depth on the graphical user interface divided by the maximum ambiguity, where the maximum ambiguity does not exceed 10 and is generally taken as 10.
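The computation of n described above can be sketched as follows; the function name and the cap at the maximum ambiguity are illustrative.

```python
MAX_AMBIGUITY = 10   # upper limit of the ambiguity, generally taken as 10

def ambiguity_n(depth_diff: float, max_layer_depth: float) -> int:
    """n = [d / s], where s = max_layer_depth / MAX_AMBIGUITY and [ ] denotes rounding."""
    s = max_layer_depth / MAX_AMBIGUITY
    n = round(depth_diff / s)
    return min(n, MAX_AMBIGUITY)   # cap n to bound the processing cost

print(ambiguity_n(5, 10))    # -> 5
print(ambiguity_n(10, 10))   # -> 10 (layer farthest from the focused layer)
```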
Step S1043: and carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface, and generating a blurred image of any one layer on the graphical user interface.
As another embodiment of the present application, according to the depth difference between any one layer on the graphical user interface and the layer where the focus point is located, generating the blurred image of that layer through the blurring algorithm may be implemented through steps S'1041 to S'1044, which are described below.
Step S'1041: and determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located.
Similar to step S1042 in the foregoing embodiment, the value of n, i.e. how many rings of pixel points around the sampled pixel point are included in the Gaussian blur processing, is determined according to the depth difference d between any one layer on the graphical user interface and the layer where the focus point is located, where n = [d/s], s is the maximum layer depth on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] denotes rounding the enclosed result. Once the value of n is determined, the blur radius of the Gaussian blur processing can be determined. The maximum ambiguity does not exceed 10 and is generally taken as 10.
Step S'1042: and comparing the color value of the pixel Pi with a horizontal blurring threshold value of the layer on the basis of the blurring radius and the abscissa of each pixel point on the horizontal direction of any one of the pixel points Pi on the basis of the blurring radius and the abscissa of each pixel point on the graphical user interface, if the color value of the pixel point Pi is within the horizontal blurring threshold value range of the layer, reserving the color value of the pixel point Pi, otherwise, taking the horizontal blurring threshold value of the layer as the color value of the pixel point Pi.
In this embodiment of the present application, the horizontal blur threshold of a layer is a color value that determines whether a pixel point on the layer should be blurred in the horizontal direction. Specifically, the color value of the pixel point Pi is compared with the horizontal blur threshold of the layer: if the color value of the pixel point Pi is within the horizontal blur threshold range of the layer, the color value of the pixel point Pi is preserved; otherwise, the horizontal blur threshold of the layer is used as the color value of the pixel point Pi.
Step S'1043: and comparing the color value of the pixel point Pj with a longitudinal blurring threshold value of the layer on the basis of the blurring radius and the ordinate of each pixel point on the longitudinal direction of any one of the pixel points Pj on the basis of the blurring radius and the ordinate of any one of the pixel points on the graphical user interface, if the color value of the pixel point Pj is within the longitudinal blurring threshold value range of the layer, reserving the color value of the pixel point Pj, otherwise, taking the longitudinal blurring threshold value of the layer as the color value of the pixel point Pj.
Step S'1042 processes the layer in the transverse direction; similarly, the longitudinal blur threshold of the layer is also a color value, which determines whether a pixel point on the layer should be blurred in the longitudinal direction. Specifically, the color value of the pixel point Pj is compared with the longitudinal blur threshold of the layer: if the color value of the pixel point Pj is within the longitudinal blur threshold range of the layer, the color value of the pixel point Pj is preserved; otherwise, the longitudinal blur threshold of the layer is used as the color value of the pixel point Pj.
Step S'1044: and merging the image layer subjected to the blurring processing in the transverse direction and the image layer subjected to the blurring processing in the longitudinal direction to obtain a blurred image of any one image layer on the graphical user interface.
Unlike steps S1041 to S1043 in the foregoing embodiment, steps S'1041 to S'1044 in this embodiment of the present application achieve the blurring of the layer with only two blurring passes, one in the horizontal direction and one in the vertical direction of the layer, so the amount of computation is small, the processing efficiency is high, and the consumption of memory and computing resources is reduced.
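A very rough sketch of the two-pass scheme of steps S'1041 to S'1044 is given below. The patent text does not specify how the per-layer blur thresholds are derived or how the two passes are merged, so here the threshold is assumed to be the local mean within the blur radius (with a tolerance band) and the merge is assumed to be a simple average; these are illustrative assumptions only.

```python
import numpy as np

def _pass_1d(values: np.ndarray, radius: int, tol: float) -> np.ndarray:
    """One blur pass along a 1-D strip: pixels whose color value stays within
    `tol` of the local mean (the assumed blur threshold) are kept; the rest
    are replaced by that threshold."""
    out = values.astype(float)
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        threshold = values[lo:hi].mean()
        if abs(values[i] - threshold) > tol:
            out[i] = threshold
    return out

def two_pass_blur(layer: np.ndarray, radius: int, tol: float = 8.0) -> np.ndarray:
    """layer: 2-D array of grayscale color values; radius: Gaussian blur radius."""
    horiz = np.stack([_pass_1d(row, radius, tol) for row in layer])       # horizontal pass
    vert = np.stack([_pass_1d(col, radius, tol) for col in layer.T]).T    # vertical pass
    return (horiz + vert) / 2.0   # merge the horizontally and vertically blurred layers
```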
As can be seen from the method for simulating the interface effect when the human eye focuses illustrated in fig. 1, on the one hand, determining the focus point on the graphical user interface based on the height and width of the display screen of the device, or through the angular motion detection device of the device, requires no complex calculation, so the amount of calculation required by the technical scheme of the application is extremely small compared with the huge resource cost of the prior art; on the other hand, setting the ambiguity of the layer where the focus point is located to the minimum is equivalent to guiding the human eye to the clearest layer area, so that the layer offset and layer blurring based on the movement of the focus point can better simulate the effect of a graphical user interface when the human eye focuses in a real scene.
Referring to fig. 5, another embodiment of a method for simulating an interface effect during focusing of a human eye according to the present invention mainly includes steps S501 to S504, which are described in detail below:
step S501: the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by angular movement detection means of the device.
The implementation scheme of step S501 is identical to the implementation scheme of step S101 in the foregoing embodiment, and explanation of related terms, concepts, etc. can refer to the description of the implementation scheme of step S101 in the foregoing embodiment, which is not repeated herein.
Step S502: the ambiguity of the layer in which the focus point is located is set to 0.
The implementation scheme of step S502 is identical to the implementation scheme of step S102 in the foregoing embodiment, and explanation of related terms, concepts, etc. can refer to the description of the implementation scheme of step S102 in the foregoing embodiment, which is not repeated herein.
Step S503: when the focus point is moved, any one layer on the graphical user interface is moved from the current position to the target position.
Specifically, when the focus point moves, moving any one layer on the graphical user interface from the current position to the target position may be done by controlling the layer to interpolate from the current position to the target position according to the linear interpolation function p = (1 − t)·a + t·b, where p is the real-time coordinate of the layer during interpolation, a is the current position of the layer, b is the target position of the layer, and t is the interpolation ratio, with t ∈ [0, 1]. With this linear interpolation algorithm, an effect similar to inertia is produced when the layers shift on the graphical user interface, so that the layer movement does not look stiff and appears smoother.
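The linear interpolation p = (1 − t)·a + t·b can be sketched as follows; the per-frame increment of t is an illustrative choice, not specified by the text.

```python
def lerp(a: float, b: float, t: float) -> float:
    return (1.0 - t) * a + t * b

def move_layer(current, target, t):
    """current, target: (x, y) layer positions; t in [0, 1]."""
    return lerp(current[0], target[0], t), lerp(current[1], target[1], t)

# Advancing t a little each frame gives the inertia-like easing described above.
start, target, t = (0.0, 0.0), (120.0, 60.0), 0.0
while t < 1.0:
    t = min(1.0, t + 0.1)          # illustrative per-frame increment
    pos = move_layer(start, target, t)
```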
Considering that moving any one layer on the graphical user interface from the current position to the target position by the linear interpolation algorithm, or other causes, may damage the original image of the layer, for example by reducing its resolution, in the embodiment of the present application this damage may be addressed, at the same time as the layer is moved from the current position to the target position by the linear interpolation algorithm, through steps S5031 to S5034, which are described below:
step S5031: and adopting an interpolation model to interpolate the luminance map area of the low-resolution area of any one layer on the graphical user interface by a nearest neighbor interpolation method, so as to obtain the luminance map of the high-resolution area.
Step S5032: and calculating a loss function of the interpolation model, and summing the smoothness of the brightness map of the high-resolution area.
Step S5033: projecting the falling direction of the loss function of the interpolation model to a feasible direction and determining a falling step size.
Step S5034: the pixel luminance values of the high resolution area luminance map are corrected such that the value of the loss function of the interpolation model is reduced.
With the above embodiment, the damage to the original image of the layer when the layer shifts is effectively suppressed, the search direction and step size for the loss function of the interpolation model can be determined rapidly, the amount of calculation is reduced, and the execution of the interpolation algorithm is accelerated.
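A very rough sketch of steps S5031 to S5034 is given below. The patent text does not define the loss function or the projection of the descent direction, so this sketch simply upsamples by nearest-neighbour interpolation and then corrects pixel brightness values to reduce an assumed smoothness loss by gradient descent; it is an illustration under those assumptions, not the actual method.

```python
import numpy as np

def nearest_neighbor_upscale(low: np.ndarray, scale: int) -> np.ndarray:
    return np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

def smoothness_loss(img: np.ndarray) -> float:
    # assumed loss: sum of squared differences between neighbouring pixels
    return float(np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2))

def refine(low: np.ndarray, scale: int, steps: int = 50, lr: float = 0.1) -> np.ndarray:
    high = nearest_neighbor_upscale(low.astype(float), scale)
    for _ in range(steps):
        # gradient of smoothness_loss with respect to each pixel (descent direction)
        grad = np.zeros_like(high)
        grad[:-1, :] += 2 * (high[:-1, :] - high[1:, :])
        grad[1:, :] += 2 * (high[1:, :] - high[:-1, :])
        grad[:, :-1] += 2 * (high[:, :-1] - high[:, 1:])
        grad[:, 1:] += 2 * (high[:, 1:] - high[:, :-1])
        high -= lr * grad           # correct pixel brightness values, reducing the loss
    return high
```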
Step S504: and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located.
Similar to the foregoing embodiment step S104, as an embodiment of the present application, step S504 may be implemented through steps S5041 to S5043, which are described as follows:
step S5041: a clear image of any one layer on the graphical user interface is obtained.
Specifically, this may be done as follows: acquiring an original clear image of any one layer on the graphical user interface; for each pixel point in the original clear image, applying a color matrix to its first pixel value to obtain a corresponding second pixel value; and generating the clear image of the layer according to the second pixel value corresponding to each pixel point, wherein each pixel point has an alpha channel and at least one color channel.
Step S5042: and determining the ambiguity of any one layer on the graphical user interface according to the depth difference d between the any one layer on the graphical user interface and the layer where the focusing point is located, wherein the ambiguity is related to n, n= [ d/s ], s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity, wherein the symbol [ ] represents the result within the pair [ ] is rounded, and the maximum ambiguity is not more than 10, and is generally taken as 10.
In the embodiment of the present application, the value n associated with the ambiguity actually indicates how far from the sampled pixel point the Gaussian blur processing reaches, i.e. n rings of pixel points around the sampled pixel point are taken, and the value of n determines the ambiguity of the resulting layer. On the other hand, since the time consumed by the calculation gradually increases as n increases, i.e. as the number of surrounding pixel points increases, the upper limit of n must be restricted. In the embodiment of the present application, s is a coefficient set by the user to limit the size of n, and s may be set to the maximum layer depth on the graphical user interface divided by 10.
Step S5043: and carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface, and generating a blurred image of any one layer on the graphical user interface.
Similar to step S104 of the previous embodiment, as another embodiment of the present application, step S504 may be implemented through steps S'5041 to S'5044, which are described below.
Step S'5041: and determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located.
Similar to step S5042 in the foregoing embodiment, the value of n, i.e. how many rings of pixel points around the sampled pixel point are included in the Gaussian blur processing, is determined according to the depth difference d between any one layer on the graphical user interface and the layer where the focus point is located, where n = [d/s], s is the maximum layer depth on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] denotes rounding the enclosed result. Once the value of n is determined, the blur radius of the Gaussian blur processing can be determined. The maximum ambiguity does not exceed 10 and is generally taken as 10.
Step S'5042: and comparing the color value of the pixel Pi with a horizontal blurring threshold value of the layer on the basis of the blurring radius and the abscissa of each pixel point on the horizontal direction of any one of the pixel points Pi on the basis of the blurring radius and the abscissa of each pixel point on the graphical user interface, if the color value of the pixel point Pi is within the horizontal blurring threshold value range of the layer, reserving the color value of the pixel point Pi, otherwise, taking the horizontal blurring threshold value of the layer as the color value of the pixel point Pi.
In this embodiment of the present application, the horizontal blur threshold of a layer is a color value that determines whether a pixel point on the layer should be blurred in the horizontal direction. Specifically, the color value of the pixel point Pi is compared with the horizontal blur threshold of the layer: if the color value of the pixel point Pi is within the horizontal blur threshold range of the layer, the color value of the pixel point Pi is preserved; otherwise, the horizontal blur threshold of the layer is used as the color value of the pixel point Pi.
Step S'5043: and comparing the color value of the pixel point Pj with a longitudinal blurring threshold value of the layer on the basis of the blurring radius and the ordinate of each pixel point on the longitudinal direction of any one of the pixel points Pj on the basis of the blurring radius and the ordinate of any one of the pixel points on the graphical user interface, if the color value of the pixel point Pj is within the longitudinal blurring threshold value range of the layer, reserving the color value of the pixel point Pj, otherwise, taking the longitudinal blurring threshold value of the layer as the color value of the pixel point Pj.
Step S'5042 processes the layer in the transverse direction; similarly, the longitudinal blur threshold of the layer is also a color value, which determines whether a pixel point on the layer should be blurred in the longitudinal direction. Specifically, the color value of the pixel point Pj is compared with the longitudinal blur threshold of the layer: if the color value of the pixel point Pj is within the longitudinal blur threshold range of the layer, the color value of the pixel point Pj is preserved; otherwise, the longitudinal blur threshold of the layer is used as the color value of the pixel point Pj.
Step S'5044: merging the layer subjected to blurring processing in the transverse direction and the layer subjected to blurring processing in the longitudinal direction to obtain the blurred image of any one layer on the graphical user interface.
Unlike steps S5041 to S5043 in the foregoing embodiments, steps S'5041 to S'5044 in this embodiment of the present application can blur a layer with only two blurring passes, one in the horizontal direction and one in the vertical direction of the layer, so the amount of computation is small, the processing efficiency is high, and the consumption of memory and computing resources is reduced.
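Purely as an illustration of the two-pass idea, the following sketch blurs a grayscale layer with one horizontal and one vertical pass. The per-layer thresholds, the box-average weighting (used here in place of true Gaussian weights), and the 50/50 merge are all assumptions, since the embodiment does not fix them numerically:

```python
# Hedged sketch of the two-pass blurring of steps S'5041 to S'5044.
# `layer` is a 2D list of grayscale color values; thresholds are hypothetical.

def clamp_to_threshold(value, threshold):
    # keep the color value if it lies within the threshold range, else use the threshold
    return value if value <= threshold else threshold

def horizontal_pass(layer, radius, threshold):
    h, w = len(layer), len(layer[0])
    out = [row[:] for row in layer]
    for y in range(h):
        for x in range(w):
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            samples = [clamp_to_threshold(layer[y][i], threshold) for i in xs]
            out[y][x] = sum(samples) / len(samples)   # box average along x
    return out

def vertical_pass(layer, radius, threshold):
    h, w = len(layer), len(layer[0])
    out = [row[:] for row in layer]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            samples = [clamp_to_threshold(layer[j][x], threshold) for j in ys]
            out[y][x] = sum(samples) / len(samples)   # box average along y
    return out

def blur_layer(layer, radius, h_threshold, v_threshold):
    a = horizontal_pass(layer, radius, h_threshold)
    b = vertical_pass(layer, radius, v_threshold)
    # merge the transversely and longitudinally blurred layers (simple blend here)
    return [[(a[y][x] + b[y][x]) / 2 for x in range(len(layer[0]))]
            for y in range(len(layer))]
```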
As can be seen from the method for simulating the interface effect when the human eye focuses illustrated in fig. 5: on the one hand, determining the focusing point on the graphical user interface based on the height and width of the display screen of the device, or by the angular motion detection device of the device, requires no complex calculation, so the amount of calculation required by the technical scheme of the present application is extremely small compared with the huge resource cost of the prior art; on the other hand, setting the ambiguity of the layer where the focusing point is located to the minimum is equivalent to guiding the human eye to the clearest layer area, so that the layered offset and regional blurring of layers based on the movement of the focusing point better simulate the effect of a graphical user interface when the human eye focuses in a real scene.
Referring to fig. 6, an apparatus for simulating an interface effect when focusing a human eye according to an embodiment of the present application may include a focus point determining module 601, a setting module 602, an offset value determining module 603, and a generating module 604, which are described in detail below:
a focus point determination module 601 for determining a focus point on the graphical user interface based on the height and width of the display screen of the device or by angular movement detection means of the device;
a setting module 602, configured to set an ambiguity of a layer where a focus point is located to 0;
an offset value determining module 603, configured to determine, when the focus point moves, an offset value of any one layer on the graphical user interface according to a maximum range of the focus point or a maximum deflection radian of the device, where the maximum range and the maximum deflection radian are determined based on a height and a width of a display screen of the device, and an offset ratio of any one layer on the graphical user interface is related to a depth of any one layer on the graphical user interface;
and the generating module 604 is configured to generate a blurred image of any one layer on the graphical user interface through a blurring algorithm according to a depth difference between any one layer on the graphical user interface and a layer where the focusing point is located.
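To make the module split concrete, the following is a minimal, hypothetical wiring of the four modules; none of the class or method names come from the patent, and each body is reduced to the single responsibility described above:

```python
# Hypothetical illustration of the apparatus in fig. 6; names are not from the patent.

class FocusPointDeterminer:                 # module 601
    def determine(self, screen_width, screen_height):
        # simplest case: take the centre of the display screen as the focusing point
        return (screen_width / 2, screen_height / 2)

class FocusLayerSetter:                     # module 602
    def apply(self, focus_layer):
        focus_layer.ambiguity = 0           # the focused layer stays fully sharp

class OffsetValueDeterminer:                # module 603
    def offset(self, max_offset_c, offset_ratio_f):
        return max_offset_c * offset_ratio_f

class BlurredImageGenerator:                # module 604
    def generate(self, layer, focus_layer, blur_fn):
        depth_diff = abs(layer.depth - focus_layer.depth)
        return blur_fn(layer.image, depth_diff)
```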
Optionally, in the apparatus illustrated in fig. 6, the offset value determining module 603 may include an offset ratio obtaining unit and a first calculating unit, where:
The offset proportion obtaining unit is used for obtaining the offset proportion F of any one layer on the graphical user interface from the configuration file;
the first calculation unit is used for multiplying the maximum offset value C of any one layer on the graphical user interface by the offset proportion F of that layer, and determining the result of the multiplication as the offset value of that layer when the focusing point moves, wherein the maximum offset value C of any one layer on the graphical user interface is determined by the maximum deflection radian of the device.
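Assuming, for illustration, that the offset proportion F of each layer is read from a simple per-layer configuration table, the first calculation unit reduces to one multiplication (all names and values below are hypothetical):

```python
# Hypothetical configuration file contents: deeper layers get larger offset ratios.
LAYER_CONFIG = {
    "background": {"offset_ratio": 1.0},
    "midground":  {"offset_ratio": 0.6},
    "ui_front":   {"offset_ratio": 0.2},
}

def layer_offset(layer_name: str, max_offset_c: float) -> float:
    # offset value of the layer when the focusing point moves: C * F
    f = LAYER_CONFIG[layer_name]["offset_ratio"]
    return max_offset_c * f

# e.g. with a maximum offset C of 40 px, the background shifts 40 px
# while the front UI layer shifts only 8 px.
```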
Optionally, the apparatus illustrated in fig. 6 may further include a resolution increasing module 701 and a limiting module 702; an apparatus for simulating an interface effect when focusing a human eye according to another embodiment of the present application is shown in fig. 7 and includes:
a resolution increasing module 701 for increasing the design resolution of the graphical user interface;
the limiting module 702 is configured to limit the maximum offset value C of any one layer on the graphical user interface to within the range of the increase in the design resolution of the graphical user interface.
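The point of these two optional modules is that layers can be shifted without exposing empty screen edges. A hedged sketch of the constraint, assuming the design resolution is enlarged by a fixed margin on each side, might be:

```python
# Assumption: the design resolution has been increased by `margin` pixels per side,
# so any layer's maximum offset C must stay within that added margin.

def clamp_max_offset(max_offset_c: float, margin: float) -> float:
    return min(max_offset_c, margin)

# e.g. growing the design resolution from 1280x720 to 1360x800 adds a 40 px
# margin on each side, so the maximum offset C is capped at 40 px.
```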
Optionally, in the apparatus illustrated in fig. 6, the offset value determining module 603 may include a deflection arc obtaining unit and a second calculating unit, where:
The deflection radian acquisition unit is used for acquiring the deflection radian d of the equipment from the angular motion detection device in real time;
and the second calculation unit is used for calculating the offset value c of any one layer on the graphical user interface when the focusing point moves according to the linear function c = f(d) = dC/D, wherein d is the deflection radian of the device acquired in real time, C is the maximum offset value of any one layer on the graphical user interface, and D is the maximum deflection radian of the device.
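Under the assumption that the deflection radian d is polled from the angular motion detection device every frame, the linear mapping c = f(d) = dC/D is a one-liner; the device read shown in the comment is hypothetical:

```python
MAX_DEFLECTION_D = 0.5    # assumed maximum deflection radian of the device
MAX_OFFSET_C = 40.0       # assumed maximum offset value of the layer, in pixels

def offset_from_deflection(d: float) -> float:
    # c = f(d) = d * C / D: the offset grows linearly with the deflection radian
    d = max(-MAX_DEFLECTION_D, min(MAX_DEFLECTION_D, d))
    return d * MAX_OFFSET_C / MAX_DEFLECTION_D

# per-frame usage (hypothetical device API):
# d = angular_motion_device.read_deflection_radian()
# layer.x = layer.base_x + offset_from_deflection(d) * layer.offset_ratio
```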
Optionally, in the apparatus illustrated in fig. 6, the generating module 604 may include a clear image acquiring unit, a third calculating unit, and a blurred image generating unit, where:
the clear image acquisition unit is used for acquiring a clear image of any one image layer on the graphical user interface;
a third calculation unit, configured to determine the ambiguity of any one layer on the graphical user interface according to the depth difference d between that layer and the layer where the focusing point is located, where the ambiguity is related to n, n = [d/s], s is the maximum depth of the layers on the graphical user interface divided by the maximum ambiguity, the symbol [ ] denotes rounding the result within [ ], and the maximum ambiguity does not exceed 10 and is generally taken as 10;
and the blurred image generating unit is used for carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface and generating a blurred image of any one layer on the graphical user interface.
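Where a true Gaussian kernel is wanted rather than the two-pass threshold scheme described earlier, the clear-image route of these three units can be sketched as follows; Pillow is used only as an illustrative stand-in, since the patent names no library:

```python
# Hedged sketch of the clear-image / third-calculation / blurred-image units.
from PIL import Image, ImageFilter

MAX_AMBIGUITY = 10

def blur_layer_image(clear_image: Image.Image, depth_diff: float,
                     max_layer_depth: float) -> Image.Image:
    s = max_layer_depth / MAX_AMBIGUITY
    n = int(depth_diff / s)          # ambiguity of this layer, n = [d/s]
    if n == 0:
        return clear_image           # same depth as the focused layer: stay sharp
    return clear_image.filter(ImageFilter.GaussianBlur(radius=n))
```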
Optionally, in the apparatus illustrated in fig. 6, the generating module 604 may include a blur radius determining unit, a first comparing unit, a second comparing unit, and a merging unit, where:
the blurring radius determining unit is used for determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focusing point is located;
the first comparison unit is used for comparing, in the horizontal direction of any one layer on the graphical user interface and on the basis of the blurring radius and the abscissa of each pixel point, the color value of any one pixel point Pi in the horizontal direction with the horizontal blurring threshold of the layer; if the color value of the pixel point Pi is within the horizontal blurring threshold range of the layer, the color value of the pixel point Pi is retained; otherwise, the horizontal blurring threshold of the layer is used as the color value of the pixel point Pi;
the second comparison unit is used for comparing, in the longitudinal direction of any one layer on the graphical user interface and on the basis of the blurring radius and the ordinate of each pixel point, the color value of any one pixel point Pj in the longitudinal direction with the longitudinal blurring threshold of the layer; if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, the color value of the pixel point Pj is retained; otherwise, the longitudinal blurring threshold of the layer is used as the color value of the pixel point Pj;
And the merging unit is used for merging the image layer subjected to the blurring processing in the transverse direction and the image layer subjected to the blurring processing in the longitudinal direction to obtain a blurred image of any one image layer on the graphical user interface.
Referring to fig. 8, an apparatus for simulating an interface effect when focusing a human eye according to an embodiment of the present application may include a focus point determining module 601, a setting module 602, an interpolation module 801, and a generating module 604, which are described in detail below:
a focus point determination module 601 for determining a focus point on the graphical user interface based on the height and width of the display screen of the device or by angular movement detection means of the device;
a setting module 602, configured to set an ambiguity of a layer where a focus point is located to 0;
an interpolation module 801, configured to move any one of the layers on the graphical user interface from the current position to the target position when the focus point moves;
and the generating module 604 is configured to generate a blurred image of any one layer on the graphical user interface through a blurring algorithm according to a depth difference between any one layer on the graphical user interface and a layer where the focusing point is located.
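The interpolation module 801 described above moves a layer toward its target gradually rather than jumping there; a minimal sketch, in which the per-frame easing factor is an assumption, is:

```python
# Hedged sketch of the interpolation module: ease each layer toward its target
# position a little every frame instead of snapping to it.

def step_towards(current: float, target: float, ease: float = 0.2) -> float:
    # move a fixed fraction of the remaining distance per frame
    return current + (target - current) * ease

# per-frame usage for one layer (attribute names are illustrative):
# layer.x = step_towards(layer.x, layer.target_x)
# layer.y = step_towards(layer.y, layer.target_y)
```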
Optionally, in the apparatus illustrated in fig. 8, the generating module 604 may include a clear image acquiring unit, a third calculating unit, and a blurred image generating unit, where:
The clear image acquisition unit is used for acquiring a clear image of any one image layer on the graphical user interface;
a third calculation unit, configured to determine the ambiguity of any one layer on the graphical user interface according to the depth difference d between that layer and the layer where the focusing point is located, where the ambiguity is related to n, n = [d/s], s is the maximum depth of the layers on the graphical user interface divided by the maximum ambiguity, the symbol [ ] denotes rounding the result within [ ], and the maximum ambiguity does not exceed 10 and is generally taken as 10;
and the blurred image generating unit is used for carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface and generating a blurred image of any one layer on the graphical user interface.
Optionally, in the apparatus illustrated in fig. 8, the generating module 604 may include a blur radius determining unit, a first comparing unit, a second comparing unit, and a merging unit, where:
the blurring radius determining unit is used for determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focusing point is located;
The first comparison unit is used for comparing, in the horizontal direction of any one layer on the graphical user interface and on the basis of the blurring radius and the abscissa of each pixel point, the color value of any one pixel point Pi in the horizontal direction with the horizontal blurring threshold of the layer; if the color value of the pixel point Pi is within the horizontal blurring threshold range of the layer, the color value of the pixel point Pi is retained; otherwise, the horizontal blurring threshold of the layer is used as the color value of the pixel point Pi;
the second comparison unit is used for comparing, in the longitudinal direction of any one layer on the graphical user interface and on the basis of the blurring radius and the ordinate of each pixel point, the color value of any one pixel point Pj in the longitudinal direction with the longitudinal blurring threshold of the layer; if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, the color value of the pixel point Pj is retained; otherwise, the longitudinal blurring threshold of the layer is used as the color value of the pixel point Pj;
and the merging unit is used for merging the image layer subjected to the blurring processing in the transverse direction and the image layer subjected to the blurring processing in the longitudinal direction to obtain a blurred image of any one image layer on the graphical user interface.
As can be seen from the description of the above technical solutions, on one hand, since the focal point on the graphical user interface is determined based on the height and the width of the display screen of the device, or the focal point on the graphical user interface is determined by the angular motion detection device of the device, complex calculation is not required, and therefore, compared with the huge resource cost in the prior art, the calculation amount required by the technical solution of the present application is extremely small; on the other hand, when the ambiguity of the layer where the focusing point is located is set to be minimum, the method is equivalent to guiding human eyes to the clearest layer area, so that the layering offset and the regional ambiguity of the layer based on the movement of the focusing point can better simulate the effect of a graphical user interface when the human eyes focus in a real scene.
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 9, the computer device 9 of this embodiment mainly includes: a processor 90, a memory 91 and a computer program 92 stored in the memory 91 and executable on the processor 90, for example a program for a method of simulating the effect of an interface when a human eye is focused. The steps in the above-described embodiments of the method for simulating the human eye focusing interface effect are implemented by the processor 90 executing the computer program 92, for example, steps S101 to S104 shown in fig. 1 or steps S501 to S504 shown in fig. 5. Alternatively, the processor 90 implements the functions of the modules/units in the above-described apparatus embodiments when executing the computer program 92, for example, the functions of the focus point determination module 601, the setting module 602, the offset value determination module 603, and the generation module 604 shown in fig. 6, or the functions of the focus point determination module 601, the setting module 602, the interpolation module 801, and the generation module 604 shown in fig. 8.
Illustratively, the computer program 92 for a method of simulating the effects of an interface when a human eye is in focus, consists essentially of: determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device; setting the ambiguity of the layer where the focusing point is located to be 0; when the focusing point moves, determining an offset value of any one layer on the graphical user interface according to a maximum range of the focusing point or a maximum deflection radian of the device, which is determined based on the height and the width of the display screen of the device, wherein the offset proportion of any one layer on the graphical user interface is related to the depth of any one layer on the graphical user interface; generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located; alternatively, the computer program 92 for a method of simulating an interface effect when focusing a human eye mainly includes: determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device; setting the ambiguity of the layer where the focusing point is located to be 0; when the focusing point moves, any one layer on the graphical user interface is moved from the current position to the target position; and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located. The computer program 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to complete the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing particular functions for describing the execution of the computer program 92 in the computer device 9. 
For example, the computer program 92 may be divided into functions of a focus point determination module 601, a setting module 602, an offset value determination module 603, and a generation module 604 (a module in a virtual device), each of which specifically functions as follows: a focus point determination module 601 for determining a focus point on the graphical user interface based on the height and width of the display screen of the device or by angular movement detection means of the device; a setting module 602, configured to set an ambiguity of a layer where a focus point is located to 0; an offset value determining module 603, configured to determine, when the focus point moves, an offset value of any one layer on the graphical user interface according to a maximum range of the focus point or a maximum deflection radian of the device, where the maximum range and the maximum deflection radian are determined based on a height and a width of a display screen of the device, and an offset ratio of any one layer on the graphical user interface is related to a depth of any one layer on the graphical user interface; the generating module 604 is configured to generate a blurred image of any one layer on the graphical user interface through a blurring algorithm according to a depth difference between any one layer on the graphical user interface and a layer where the focusing point is located; alternatively, the computer program 92 may be divided into functions of the focus point determination module 601, the setting module 602, the interpolation module 801, and the generation module 604 (modules in the virtual device), each of which has the following specific functions: a focus point determination module 601 for determining a focus point on the graphical user interface based on the height and width of the display screen of the device or by angular movement detection means of the device; a setting module 602, configured to set an ambiguity of a layer where a focus point is located to 0; an interpolation module 801, configured to move any one of the layers on the graphical user interface from the current position to the target position when the focus point moves; and the generating module 604 is configured to generate a blurred image of any one layer on the graphical user interface through a blurring algorithm according to a depth difference between any one layer on the graphical user interface and a layer where the focusing point is located.
The computer device 9 may include, but is not limited to, a processor 90 and a memory 91. It will be appreciated by those skilled in the art that fig. 9 is merely an example of the computer device 9 and is not intended to limit the computer device 9, which may include more or fewer components than shown, combine certain components, or have different components; for example, the computer device may also include input-output devices, network access devices, a bus, and the like.
The processor 90 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 91 may be an internal storage unit of the computer device 9, such as a hard disk or a memory of the computer device 9. The memory 91 may also be an external storage device of the computer device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the computer device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the computer device 9. The memory 91 is used to store computer programs and other programs and data required by the computer device. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that the above-described functional units and modules are merely illustrated for convenience and brevity of description, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above device may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a non-transitory computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiments, or may be implemented by a computer program to instruct related hardware, where the computer program for a method of simulating an interface effect when focusing on a human eye may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above, that is, determining the focus point on the graphical user interface based on the height and width of the display screen of the device or by the angular motion detection device of the device; setting the ambiguity of the layer where the focusing point is located to be 0; when the focusing point moves, determining an offset value of any one layer on the graphical user interface according to a maximum range of the focusing point or a maximum deflection radian of the device, which is determined based on the height and the width of the display screen of the device, wherein the offset proportion of any one layer on the graphical user interface is related to the depth of any one layer on the graphical user interface; generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located; alternatively, a computer program for a method of simulating an interface effect when focusing a human eye may be stored in a computer readable storage medium, which when executed by a processor, implements the steps of the various method embodiments described above, i.e., the computer program 92 for a method of simulating an interface effect when focusing a human eye mainly comprises: determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device; setting the ambiguity of the layer where the focusing point is located to be 0; when the focusing point moves, any one layer on the graphical user interface is moved from the current position to the target position; and generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, executable files or in some intermediate form, etc. The non-transitory computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. 
It should be noted that the content contained in the non-transitory computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-transitory computer readable medium does not include electrical carrier signals and telecommunication signals. The above embodiments are only for illustrating the technical solution of the present application and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
The foregoing detailed description of the embodiments has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of example only, and is not intended to limit the scope of the invention.

Claims (12)

1. A method of simulating an interface effect when a human eye is focused, the method comprising:
determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
setting the ambiguity of the layer where the focusing point is located to be 0;
when the focusing point moves, determining an offset value of any one layer according to the maximum range of the focusing point or the maximum deflection radian of the device, which is determined based on the height and the width of the display screen of the device, wherein the offset proportion of any one layer is related to the depth of any one layer;
generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located, wherein the blurred image comprises the following steps:
Determining a blurring radius for Gaussian blurring according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located;
in the horizontal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the abscissa of each pixel point, for any one pixel point Pi in the horizontal direction, comparing the color value of the pixel point Pi with the horizontal blurring threshold of the layer; if the color value of the pixel point Pi is within the horizontal blurring threshold range of the layer, retaining the color value of the pixel point Pi; otherwise, taking the horizontal blurring threshold of the layer as the color value of the pixel point Pi;
in the longitudinal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the ordinate of each pixel point, for any one pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with the longitudinal blurring threshold of the layer; if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, retaining the color value of the pixel point Pj; otherwise, taking the longitudinal blurring threshold of the layer as the color value of the pixel point Pj;
and merging the layer subjected to the blurring processing in the transverse direction and the layer subjected to the blurring processing in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
2. The method of simulating the effect of an interface in the focusing of a human eye according to claim 1, wherein determining the offset value of any one of the layers according to the maximum deflection radian of the device when the focus point is moved comprises:
obtaining the offset ratio F of any one layer on the graphical user interface from a configuration file;
multiplying the maximum offset value C of any one layer on the graphical user interface by the offset proportion F of any one layer on the graphical user interface, wherein the result of multiplication is determined as the offset value of any one layer when the focusing point moves, and the maximum offset value C of any one layer on the graphical user interface is determined by the maximum deflection radian of the device.
3. The method of simulating an effect of an interface in a focus of a human eye according to claim 2, wherein prior to multiplying the maximum offset value C of any one of the layers on the graphical user interface by the offset ratio F of any one of the layers on the graphical user interface, the method further comprises:
increasing a design resolution of the graphical user interface;
limiting the maximum offset value C of any layer on the graphical user interface within the range of the added value of the design resolution of the graphical user interface.
4. The method of simulating the effect of an interface in the focusing of a human eye according to claim 1, wherein determining the offset value of any one of the layers according to the maximum deflection radian of the device when the focus point is moved comprises:
acquiring the deflection radian d of the equipment from the angular motion detection device in real time;
and according to a linear function c=f (D) =dC/D, calculating an offset value C of any one layer when the focusing point moves, wherein C is the maximum offset value of any one layer on the graphical user interface, and D is the maximum deflection radian of the equipment.
5. The method for simulating the effect of an interface when focusing a human eye according to claim 1, wherein generating the blurred image of any one layer on the graphical user interface by a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located comprises:
acquiring a clear image of any one layer on the graphical user interface;
determining the ambiguity of any one layer on the graphical user interface according to the depth difference d between the depth of any one layer on the graphical user interface and the layer where the focusing point is located, wherein the ambiguity is related to n, n= [ d/s ], s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] represents rounding the result within [ ];
And carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface, and generating a blurred image of any one layer on the graphical user interface.
6. A method of simulating an interface effect when a human eye is focused, the method comprising:
determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
setting the ambiguity of the layer where the focusing point is located to be 0;
when the focusing point moves, any one layer on the graphical user interface is moved from the current position to the target position;
generating a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located, wherein the blurred image comprises the following steps:
determining a blurring radius for Gaussian blurring according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located;
in the horizontal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the abscissa of each pixel point, for any one pixel point Pi in the horizontal direction, comparing the color value of the pixel point Pi with the horizontal blurring threshold of the layer; if the color value of the pixel point Pi is within the horizontal blurring threshold range of the layer, retaining the color value of the pixel point Pi; otherwise, taking the horizontal blurring threshold of the layer as the color value of the pixel point Pi;
in the longitudinal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the ordinate of each pixel point, for any one pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with the longitudinal blurring threshold of the layer; if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, retaining the color value of the pixel point Pj; otherwise, taking the longitudinal blurring threshold of the layer as the color value of the pixel point Pj;
and merging the layer subjected to the blurring processing in the transverse direction and the layer subjected to the blurring processing in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
7. The method of simulating an effect of an eye-focusing interface of claim 6, wherein, in moving any one of the layers on the graphical user interface from the current position to the target position, the method further comprises:
Interpolation is carried out on the low-resolution area brightness map area of any one layer on the graphical user interface by adopting an interpolation model through a nearest neighbor interpolation method, so that a high-resolution area brightness map is obtained;
calculating a loss function of the interpolation model, and summing the smoothness of the high-resolution area brightness map;
projecting the falling direction of the loss function to a feasible direction and determining a falling step length;
and correcting the pixel brightness value of the high-resolution area brightness map to reduce the value of the loss function.
8. The method for simulating the effect of an interface in focusing a human eye according to claim 6, wherein generating the blurred image of any one layer on the graphical user interface by a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer on which the focusing point is located comprises:
acquiring a clear image of any one layer on the graphical user interface;
determining the ambiguity of any one layer on the graphical user interface according to the depth difference d between the depth of any one layer on the graphical user interface and the depth of the layer where the focusing point is located, wherein the ambiguity is related to n, n=d/s, and s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity;
And carrying out Gaussian blur processing on the clear image according to the blur degree of any one layer on the graphical user interface, and generating a blurred image of any one layer on the graphical user interface.
9. An apparatus for simulating the effects of an interface when a human eye is focused, the apparatus comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and a width of a display screen of the device or by angular movement detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focusing point is located to be 0;
the offset value determining module is used for determining an offset value of any one layer according to the maximum range of the focusing point or the maximum deflection radian of the device, which is determined based on the height and the width of the display screen of the device, when the focusing point moves, wherein the offset proportion of any one layer is related to the depth of any one layer;
a generating module, configured to generate a blurred image of any one layer on the graphical user interface through a blurring algorithm according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located, including: determining a blurring radius for Gaussian blurring according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located; in the horizontal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the abscissa of each pixel point, for any one pixel point Pi in the horizontal direction, comparing the color value of the pixel point Pi with the horizontal blurring threshold of the layer, and if the color value of the pixel point Pi is within the horizontal blurring threshold range of the layer, retaining the color value of the pixel point Pi, otherwise taking the horizontal blurring threshold of the layer as the color value of the pixel point Pi; in the longitudinal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the ordinate of each pixel point, for any one pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with the longitudinal blurring threshold of the layer, and if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, retaining the color value of the pixel point Pj, otherwise taking the longitudinal blurring threshold of the layer as the color value of the pixel point Pj; and merging the layer subjected to blurring processing in the transverse direction and the layer subjected to blurring processing in the longitudinal direction to obtain the blurred image of any one layer on the graphical user interface.
10. An apparatus for simulating the effects of an interface when a human eye is focused, the apparatus comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and a width of a display screen of the device or by angular movement detection means of the device;
The setting module is used for setting the ambiguity of the layer where the focusing point is located to be 0;
the interpolation module is used for moving any one layer on the graphical user interface from the current position to the target position when the focusing point moves;
the generating module is configured to generate, according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located, a blurred image of any one layer on the graphical user interface through a blurring algorithm, including: determining a blurring radius for Gaussian blurring according to the depth difference between any one layer on the graphical user interface and the layer where the focusing point is located; in the horizontal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the abscissa of each pixel point, for any one pixel point Pi in the horizontal direction, comparing the color value of the pixel point Pi with the horizontal blurring threshold of the layer, and if the color value of the pixel point Pi is within the horizontal blurring threshold range of the layer, retaining the color value of the pixel point Pi, otherwise taking the horizontal blurring threshold of the layer as the color value of the pixel point Pi; in the longitudinal direction of any one layer on the graphical user interface, on the basis of the blurring radius and the ordinate of each pixel point, for any one pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with the longitudinal blurring threshold of the layer, and if the color value of the pixel point Pj is within the longitudinal blurring threshold range of the layer, retaining the color value of the pixel point Pj, otherwise taking the longitudinal blurring threshold of the layer as the color value of the pixel point Pj; and merging the layer subjected to blurring processing in the transverse direction and the layer subjected to blurring processing in the longitudinal direction to obtain the blurred image of any one layer on the graphical user interface.
11. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, carries out the steps of the method of simulating an interface effect when focusing a human eye according to any one of claims 1 to 5 or the steps of the method of simulating an interface effect when focusing a human eye according to any one of claims 6 to 8.
12. A computer-readable storage medium storing a computer program, characterized in that the computer program when executed by a processor realizes the steps of the method of simulating an interface effect when focusing a human eye according to any one of claims 1 to 5 or the steps of the method of simulating an interface effect when focusing a human eye according to any one of claims 6 to 8.
CN202110239627.2A 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes Active CN112835453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110239627.2A CN112835453B (en) 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110239627.2A CN112835453B (en) 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes

Publications (2)

Publication Number Publication Date
CN112835453A CN112835453A (en) 2021-05-25
CN112835453B true CN112835453B (en) 2023-05-09

Family

ID=75934555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110239627.2A Active CN112835453B (en) 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes

Country Status (1)

Country Link
CN (1) CN112835453B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986228A (en) * 2018-07-06 2018-12-11 网易(杭州)网络有限公司 The method and device shown for virtual reality median surface

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI238348B (en) * 2002-05-13 2005-08-21 Kyocera Corp Portable information terminal, display control device, display control method, and recording media
CN103226436A (en) * 2013-03-06 2013-07-31 广东欧珀移动通信有限公司 Man-machine interaction method and system of intelligent terminal
WO2015184412A1 (en) * 2014-05-30 2015-12-03 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
CN105590294B (en) * 2014-11-18 2019-02-05 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN107003734B (en) * 2014-12-23 2019-12-17 美达视野股份有限公司 Device, method and system for coupling visual accommodation and visual convergence to the same plane at any depth of an object of interest
US9652125B2 (en) * 2015-06-18 2017-05-16 Apple Inc. Device, method, and graphical user interface for navigating media content
GB201709199D0 (en) * 2017-06-09 2017-07-26 Delamont Dean Lindsay IR mixed reality and augmented reality gaming system
CN108769545A (en) * 2018-06-12 2018-11-06 Oppo(重庆)智能科技有限公司 A kind of image processing method, image processing apparatus and mobile terminal

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986228A (en) * 2018-07-06 2018-12-11 网易(杭州)网络有限公司 The method and device shown for virtual reality median surface

Also Published As

Publication number Publication date
CN112835453A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
US11756223B2 (en) Depth-aware photo editing
CN108848367B (en) Image processing method and device and mobile terminal
CN113256781B (en) Virtual scene rendering device, storage medium and electronic equipment
CN108830923B (en) Image rendering method and device and storage medium
US20140078170A1 (en) Image processing apparatus and method, and program
US20220375042A1 (en) Defocus Blur Removal and Depth Estimation Using Dual-Pixel Image Data
WO2019198570A1 (en) Video generation device, video generation method, program, and data structure
CN109285122B (en) Method and equipment for processing image
CN110363837B (en) Method and device for processing texture image in game, electronic equipment and storage medium
CN109145688A (en) The processing method and processing device of video image
CN111583329B (en) Augmented reality glasses display method and device, electronic equipment and storage medium
CN108734712B (en) Background segmentation method and device and computer storage medium
CN112835453B (en) Method, apparatus and storage medium for simulating interface effect when focusing human eyes
CN111652794B (en) Face adjusting and live broadcasting method and device, electronic equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN113496506A (en) Image processing method, device, equipment and storage medium
US20160148415A1 (en) Depth of field synthesis using ray tracing approximation
CN113810755B (en) Panoramic video preview method and device, electronic equipment and storage medium
CN113126944B (en) Depth map display method, display device, electronic device, and storage medium
US9230305B2 (en) Summed area computation using ripmap of partial sums
CN115294493A (en) Visual angle path acquisition method and device, electronic equipment and medium
CN111652025B (en) Face processing and live broadcasting method and device, electronic equipment and storage medium
CN111651033B (en) Face driving display method and device, electronic equipment and storage medium
CN111652807B (en) Eye adjusting and live broadcasting method and device, electronic equipment and storage medium
CN111652024B (en) Face display and live broadcast method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant