CN112835453A - Method, device and storage medium for simulating interface effect when human eyes are focused


Info

Publication number
CN112835453A
CN112835453A (application CN202110239627.2A)
Authority
CN
China
Prior art keywords
layer
user interface
graphical user
focus point
pixel point
Prior art date
Legal status
Granted
Application number
CN202110239627.2A
Other languages
Chinese (zh)
Other versions
CN112835453B (English)
Inventor
张鑫磊
曲梦瑶
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110239627.2A
Publication of CN112835453A
Application granted
Publication of CN112835453B

Classifications

    • G06F 3/013: Eye tracking input arrangements (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing; G06F 3/00: Input/output arrangements; G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer; G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality)
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser (G06F 3/048: Interaction techniques based on graphical user interfaces [GUI])
    • G06F 9/451: Execution arrangements for user interfaces (G06F 9/00: Arrangements for program control; G06F 9/06: Program control using stored programs; G06F 9/44: Arrangements for executing specific programs)

Abstract

The application relates to the field of computer graphics and provides a method, a device, and a computer-readable storage medium for simulating the interface effect produced when human eyes focus, in which the focusing position of the human eyes is determined using only a small amount of computing resources and the interface effect at focusing is then simulated. The method comprises the following steps: determining a focus point on the graphical user interface based on the height and width of the device display screen or by an angular motion detection device of the device; setting the ambiguity (degree of blur) of the layer where the focus point is located to 0; when the focus point moves, determining an offset value for any layer according to the maximum range of the focus point, determined from the height and width of the display screen, or according to the maximum deflection radian of the device; and generating a blurred image of any layer on the graphical user interface through a blur algorithm according to the depth difference between that layer and the layer where the focus point is located. The technical scheme of the application requires an extremely small amount of calculation and can closely simulate the effect of a graphical user interface when human eyes focus in a real scene.

Description

Method, device and storage medium for simulating interface effect when human eyes are focused
Technical Field
The invention relates to the field of computer graphics, in particular to a method, equipment and a storage medium for simulating an interface effect when human eyes focus.
Background
In the computer field, interface interaction is a main way of exchanging information between users and computer software, and a rich interface interaction experience has become an important means of improving user experience. Generally, the scene seen by human eyes varies with the position at which the eyes focus, so the focusing position of the eyes is an important reference whenever an interface effect simulating human-eye focusing is required.
In existing methods for simulating the interface effect when human eyes focus, a camera is mainly used to capture images of the eyes, which are transmitted to a processor; the captured images are analyzed with Artificial Intelligence (AI) technology to obtain the focusing position of the eyes in real time, and the interface effect at focusing is then simulated according to that position.
Although this conventional approach to simulating the interface effect of human-eye focusing can obtain the focusing position, it does so at the cost of consuming a huge amount of computing resources, and it is therefore not an optimal solution for smart mobile terminals or other lightweight devices whose resources are precious.
Disclosure of Invention
The application provides a method, a device, and a storage medium for simulating the interface effect when human eyes focus, which can determine the focusing position of the human eyes using only a small amount of computing resources and thereby simulate the interface effect at focusing.
In one aspect, the present application provides a method for simulating an interface effect when human eyes focus, including:
determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device;
setting the ambiguity of the layer where the focus point is located to be 0;
when the focus point moves, determining an offset value of any one layer according to the maximum range of the focus point determined based on the height and the width of a display screen of the device or the maximum deflection radian of the device, wherein the offset proportion of any one layer is related to the depth of any one layer;
and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
Optionally, when the focus point moves, determining an offset value of any one of the layers according to a maximum deflection radian of the device includes: acquiring the offset proportion F of any layer on the graphical user interface from a configuration file; and multiplying the maximum offset value C of any layer on the graphical user interface by the offset proportion F of that layer, the result of the multiplication being determined as the offset value of the layer when the focus point moves, wherein the maximum offset value C of any layer on the graphical user interface is determined by the maximum deflection radian of the device.
Optionally, before the step of multiplying the maximum offset value C of any one layer on the gui by the offset ratio F of any one layer on the gui, the method further includes: increasing a design resolution of the graphical user interface; and limiting the maximum offset value C of any layer on the graphical user interface within the range of the increased value of the design resolution of the graphical user interface.
Optionally, when the focus point moves, determining an offset value of any one of the layers according to a maximum deflection radian of the device includes: acquiring the deflection radian d of the device in real time from the angular motion detection device; and calculating the offset value c of any layer when the focus point moves according to the linear function c = f(d) = d × C/D, wherein C is the maximum offset value of any layer on the graphical user interface and D is the maximum deflection radian of the device.
Optionally, the generating a blurred image of any layer on the gui through a blurring algorithm according to a depth difference between any layer on the gui and the layer where the focus point is located includes: acquiring a clear image of any layer on the graphical user interface; determining the ambiguity of any layer on the graphical user interface according to the depth difference d between that layer and the layer where the focus point is located, wherein the ambiguity is related to n, n = [d/s], s is the maximum depth of a layer on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] denotes rounding the enclosed result; and performing Gaussian blur processing on the clear image according to the ambiguity of any layer on the graphical user interface to generate a blurred image of that layer.
Optionally, the generating a blurred image of any layer on the gui through a blurring algorithm according to a depth difference between any layer on the gui and the layer where the focus point is located includes: determining the blurring radius for performing Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located; in the transverse direction of any one layer on the graphical user interface, based on the fuzzification radius and the abscissa of each pixel point, comparing the color value of the pixel point Pi with a layer transverse fuzzy threshold value aiming at any pixel point Pi in the transverse direction, if the color value of the pixel point Pi is within the range of the layer transverse fuzzy threshold value, retaining the color value of the pixel point Pi, and if not, taking the layer transverse fuzzy threshold value as the color value of the pixel point Pi; in the longitudinal direction of any one layer on the graphical user interface, based on the fuzzification radius and the vertical coordinate of each pixel point, aiming at any pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with a layer longitudinal fuzzy threshold, if the color value of the pixel point Pj is within the range of the layer longitudinal fuzzy threshold, retaining the color value of the pixel point Pj, and if not, taking the layer longitudinal fuzzy threshold as the color value of the pixel point Pj; and combining the layer subjected to the fuzzification treatment in the transverse direction and the layer subjected to the fuzzification treatment in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
In another aspect, the present application provides a method for simulating an interface effect when a human eye focuses, the method comprising:
determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device;
setting the ambiguity of the layer where the focus point is located to be 0;
when the focus point moves, moving any one layer on the graphical user interface from the current position to the target position;
and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
Optionally, while moving any layer on the graphical user interface from the current position to the target position, the method further includes: performing interpolation of a nearest neighbor interpolation method on a low-resolution area brightness image area of any image layer on the graphical user interface by adopting an interpolation model to obtain a high-resolution area brightness image; calculating a loss function of the interpolation model, and summing smoothness of the high-resolution area brightness map; projecting the descending direction of the loss function to a feasible direction and determining a descending step length; and correcting the brightness value of the pixel of the brightness image of the high-resolution area to reduce the value of the loss function.
Optionally, the generating a blurred image of any layer on the gui through a blurring algorithm according to a depth difference between any layer on the gui and the layer where the focus point is located includes: acquiring a clear image of any layer on the graphical user interface; determining the ambiguity of any layer on the graphical user interface according to the depth difference d between that layer and the layer where the focus point is located, wherein the ambiguity is related to n, n = [d/s], s is the maximum depth of a layer on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] denotes rounding the enclosed result; and performing Gaussian blur processing on the clear image according to the ambiguity of any layer on the graphical user interface to generate a blurred image of that layer.
Optionally, the generating a blurred image of any layer on the gui through a blurring algorithm according to a depth difference between any layer on the gui and the layer where the focus point is located includes: determining the blurring radius for performing Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located; in the transverse direction of any one layer on the graphical user interface, based on the fuzzification radius and the abscissa of each pixel point, comparing the color value of the pixel point Pi with a layer transverse fuzzy threshold value aiming at any pixel point Pi in the transverse direction, if the color value of the pixel point Pi is within the range of the layer transverse fuzzy threshold value, retaining the color value of the pixel point Pi, and if not, taking the layer transverse fuzzy threshold value as the color value of the pixel point Pi; in the longitudinal direction of any one layer on the graphical user interface, based on the fuzzification radius and the vertical coordinate of each pixel point, aiming at any pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with a layer longitudinal fuzzy threshold, if the color value of the pixel point Pj is within the range of the layer longitudinal fuzzy threshold, retaining the color value of the pixel point Pj, and if not, taking the layer longitudinal fuzzy threshold as the color value of the pixel point Pj;
and combining the layer subjected to the fuzzification treatment in the transverse direction and the layer subjected to the fuzzification treatment in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
In a third aspect, the present application provides an apparatus for simulating an interface effect when human eyes focus, comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focus point is located to be 0;
an offset value determining module, configured to determine, when the focus point moves, an offset value of any one layer according to the maximum range of the focus point determined based on the height and the width of the device display screen or the maximum deflection radian of the device, where an offset ratio of any one layer is related to the depth of any one layer;
and the generating module is used for generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
In a fourth aspect, the present application provides an apparatus for simulating an interface effect when human eyes focus, comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focus point is located to be 0;
the interpolation module is used for moving any one layer on the graphical user interface from the current position to the target position when the focus point moves;
and the generating module is used for generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
In a fifth aspect, the present application provides a computer device including a memory and a processor, where the memory stores a computer program, and the processor executes, by calling the computer program stored in the memory, the steps of the method for simulating the interface effect when human eyes focus according to any one of the above embodiments.
In a sixth aspect, the present application provides a computer-readable storage medium that stores a computer program, the computer program being suitable for being loaded by a processor to execute the steps of the method for simulating the interface effect when human eyes focus according to any one of the above embodiments.
As can be seen from the above technical solutions, on one hand, whether the focus point on the gui is determined based on the height and width of the display screen of the device or by the angular motion detection device of the device, the determination can be achieved without complicated calculation, so the amount of calculation required by the technical solutions of the present application is extremely small compared with the huge resource cost of the prior art; on the other hand, setting the ambiguity of the layer where the focus point is located to 0 guides the eye to the clearest layer area, so the subsequent hierarchical offset and regional blurring of the layers based on the movement of the focus point can better simulate the effect of a graphical user interface when the eye focuses in a real scene.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flowchart of a method for simulating an interface effect when human eyes focus according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating depths of different layers on a graphical user interface provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an example of the graphical user interface of FIG. 2 after layer shifting according to an embodiment of the present application;
FIG. 4 is a schematic diagram of limiting the maximum layer offset within the increased design resolution for the graphical user interface illustrated in FIG. 2 according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for simulating an interface effect when human eyes focus according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for simulating an interface effect when human eyes focus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for simulating an interface effect when human eyes focus according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of an apparatus for simulating an interface effect when human eyes focus according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this specification, adjectives such as first and second may only be used to distinguish one element or action from another, without necessarily requiring or implying any actual such relationship or order. References to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but rather to one or more of the element, component, or step, etc., where the context permits.
In the present specification, the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The application provides a method for simulating an interface effect when human eyes are focused, as shown in fig. 1, the method mainly comprises steps S101 to S104, which are detailed as follows:
step S101: the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by angular motion detection means of the device.
Different from the prior art, in which human eye images shot by a camera are analyzed through AI technology and huge calculation power is needed to run the AI neural network, in the embodiment of the application a focus point on the graphical user interface is determined based on the height and width of the device display screen or through an angular motion detection device of the device: determination based on the height and width of the display screen is intended for traditional terminals such as a personal computer, while determination through the angular motion detection device is intended for mobile intelligent terminals such as a smart phone or a tablet computer. Before describing how the focus point on the gui is determined for these two types of devices, two basic facts are first clarified: first, the gui is refreshed at a certain frequency, for example 60 Hz, and each refresh is equivalent to a frame update of the gui; second, to facilitate subsequent calculation, the value range of the focus point is normalized to [-1, 1], that is, the abscissa x ∈ [-1, 1] and the ordinate y ∈ [-1, 1].
The method for determining the focus point on the graphical user interface based on the height and width of the device display screen specifically comprises the following steps: firstly, the geometric center of the graphical user interface of the device is defined as the origin of the focus point, with coordinates (0, 0), and the upper, lower, left and right boundaries of the graphical user interface are defined as ±1; the coordinates of the focus point on any frame of the graphical user interface are expressed as (x, y), and the position (a, b) of the mouse on the graphical user interface and the width W and height H of the interface are obtained through an operating-system interface; further, assuming that the origin of the mouse coordinates is at the lower left corner of the gui (the origin may differ between systems), let w = W/2 and h = H/2; then the abscissa of the focus point is x = (a - w)/w and the ordinate is y = (b - h)/h.
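Purely as an illustration (a minimal Python sketch, not part of the claimed method; the function and variable names are hypothetical, and the mouse position and interface size are assumed to be obtainable through an operating-system API), the mapping just described is:

    def focus_point_from_mouse(a, b, W, H):
        """Map a mouse position (a, b), with origin at the lower-left corner of the
        interface, to a normalized focus point (x, y) in [-1, 1] x [-1, 1]."""
        w, h = W / 2.0, H / 2.0   # half width and half height of the interface
        x = (a - w) / w           # abscissa of the focus point
        y = (b - h) / h           # ordinate of the focus point
        return x, y

    # For example, a 1920x1080 interface with the cursor at the centre gives (0.0, 0.0).
    print(focus_point_from_mouse(960, 540, 1920, 1080))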
As for determining the focus point on the gui by the angular motion detection device of the device, it is first noted that, in the embodiment of the present application, the angular motion detection device may be a gyroscope or the like. Since the gyroscope can only reflect the rotation of the device, two conventions are adopted: the deflection state of the gyroscope at the moment the gui is entered represents the focus point at the origin, that is, the coordinates of the focus point are (0, 0); and when the device is tilted left-right or back-forth by a radian of m = π/4, the abscissa or ordinate of the focus point is defined as ±1. Based on these two conventions, the determination of the focus point on the graphical user interface by the angular motion detection device may specifically be: let the coordinates of the focus point on the current frame of the gui be (x, y) and those on the last frame be (x', y'); when the device deflects, the angular velocity of the gyroscope is obtained through the device interface and multiplied by the time increment to obtain the deflection radians (a, b, c) (here a represents the left-right tilt, b the front-back tilt, and c the inside-outside tilt); then the abscissa of the focus point is x = x' + a/m and the ordinate is y = y' + b/m.
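A comparable sketch for the gyroscope path (again with hypothetical names; the per-frame angular velocity is assumed to be readable from a device API, and the clamping to [-1, 1] is an added safeguard that the description above does not state):

    import math

    M = math.pi / 4  # radian at which the focus coordinate is defined as +/-1

    def focus_point_from_gyro(prev_x, prev_y, angular_velocity, dt, m=M):
        """Update the focus point (x', y') of the previous frame from the radians of
        deflection accumulated during this frame."""
        # radians of deflection this frame: a = left-right, b = front-back, c = inside-outside
        a, b, c = (v * dt for v in angular_velocity)
        x = max(-1.0, min(1.0, prev_x + a / m))  # clamp to [-1, 1] (added assumption)
        y = max(-1.0, min(1.0, prev_y + b / m))
        return x, y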
Step S102: and setting the ambiguity of the layer where the focus point is located as 0.
The ambiguity of a certain layer on the graphical user interface is from 0 to a certain numerical value, and the ambiguity of a layer is 0, namely the ambiguity is minimum, which means that the layer is clearest. In the embodiment of the present application, the ambiguity of the layer where the focus point is located is set to 0, which takes into account that in an actual scene, when a certain layer or a certain region of an image on a graphical user interface is clearest, human eyes may involuntarily move to the layer or the region. Therefore, when the ambiguity of the layer where the focus point is located, which is determined in step S101, is set to 0, it is equivalent to guiding the human eye to the clearest layer region, and therefore, the subsequent hierarchical shift and regional ambiguity of the layer based on the movement of the focus point can better simulate the effect of the graphical user interface when the human eye focuses on the layer in the real scene.
Step S103: when the focus point moves, determining an offset value of any one layer according to the maximum range of the focus point or the maximum deflection radian of the device, which is determined based on the height and the width of the display screen of the device, wherein the offset proportion of any one layer is related to the depth of any one layer.
In the embodiment of the application, when the focus point moves, the offset value of any layer is determined according to the maximum deflection radian of the equipment, which means that when the focus point moves, the layers with different depths are offset to different degrees, so that the depth experience of human eyes during focusing can be simulated through the visual effect of a two-dimensional plane, and a user can experience a spatial sense in a two-dimensional graphical user interface. Before explaining the technical solution of step S103, the depth of the layer is briefly described here. The depth of a layer may refer to a distance between the layer and a screen (since a background and a screen are generally in the same plane, the depth of a layer may also be understood as a distance between the layer and the background), and another definition of the depth of a layer may also refer to a distance between the layer and a viewer. In any definition, the layer on the two-dimensional plane gives the viewer or user the visual feeling of: when a certain layer feels farther to a user, the depth of the layer is larger, and conversely, when a certain layer feels closer to the user, the depth of the layer is smaller.
As an embodiment of the present application, when the focus point moves, determining the offset value of any one layer according to the maximum deflection radian of the device may be implemented by steps S1031 and S1032, which are described as follows:
step S1031: and acquiring the offset proportion F of any layer on the graphical user interface from the configuration file.
In this embodiment of the application, the offset proportion F of a layer indicates the degree to which the layer is offset when the focus point moves. The offset proportion F of any layer ILi is related to the depth of the layer ILi: the smaller the depth of the layer ILi, the larger its offset proportion F; conversely, the larger the depth, the smaller the offset proportion. The offset proportion F of a layer has a maximum value of 1 and a minimum value of -1. For example, if three layers IL1, IL2 and IL3 have depths of 0, 5 and 10 respectively, their offset proportions may be set as F1 = (10 - 0)/10 = 1, F2 = (10 - 5)/10 = 1/2, and F3 = (10 - 10)/10 = 0. In the embodiment of the application, the offset proportion of any layer on the graphical user interface can be set in advance, the offset proportions F of the layers are then stored in a configuration file, and when needed the offset proportion F of a layer can be read directly from the configuration file according to the layer's identifier.
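By way of illustration only, building and reading such a configuration might look as follows; the rule F = (max_depth - depth) / max_depth merely reproduces the numeric example above and is an assumption, not something the application prescribes:

    def make_offset_config(depths, max_depth):
        """Hypothetical configuration: layer identifier -> offset proportion F."""
        return {layer_id: (max_depth - d) / max_depth for layer_id, d in depths.items()}

    offset_config = make_offset_config({"IL1": 0, "IL2": 5, "IL3": 10}, max_depth=10)
    # -> {"IL1": 1.0, "IL2": 0.5, "IL3": 0.0}

    def get_offset_proportion(layer_id, config=offset_config):
        # step S1031: read the offset proportion F of the layer from the configuration
        return config[layer_id]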
Step S1032: multiplying the maximum offset value C of any layer on the graphical user interface by the offset proportion F of any layer on the graphical user interface, and determining the multiplied result as the offset value of any layer when the focus point moves, wherein the maximum offset value C of any layer on the graphical user interface is determined by the maximum deflection radian of the equipment.
As mentioned above, the shift ratio F of any layer ILi on the gui can be directly set to any value between [ -1, 1], and the maximum value is 1, which means that the layer shift reaches the maximum value at this time. In practice, the maximum offset value C of the layer ILi may also be determined by the maximum deflection radian of the device, i.e. when the deflection radian of the device reaches the maximum value, the layer ILi also reaches the maximum offset value C, at which time the offset proportion F also corresponding to the layer reaches the maximum value, i.e. F equals 1. Based on the above facts, in the embodiment of the present application, the maximum offset value C of any one layer on the graphical user interface is multiplied by the offset ratio F of any one layer on the graphical user interface, and the multiplication result is determined as the offset value of any one layer when the focus point moves. It should be noted here that, for PC devices, there is generally no maximum deflection radian of the device. Therefore, for the PC device, when the focus point is actually in the maximum range determined based on the height and width of the device display screen, at this time, the offset value of the layer is also the maximum, that is, the maximum offset value C, so the offset value of any layer is determined according to the maximum range of the focus point determined based on the height and width of the device display screen, actually, the maximum offset value C of any layer on the graphical user interface is multiplied by the offset proportion F of any layer on the graphical user interface, and the multiplication result is determined as the offset value of any layer when the focus point moves.
Because the shift ratio F of the layers with different depths has different values, the embodiments corresponding to steps S1031 and S1032 can actually achieve the effect that the layers with different depths have different shift amplitudes when the focus point moves. As shown in fig. 2, the two layers are different in depth on a certain game interface, and the two layers are marked with the characters of "fun play" and "ancient task", so that obviously, the depth of the layer "fun play" is small, and the depth of the layer "ancient task" is large. Referring to fig. 3, an example of the shift of each layer when the focus point moves in the interface illustrated in fig. 2 is shown. Through comparison, the shift range of the image layer 'fun play' is different from that of the image layer 'ancient task', and specifically, the shift range of the image layer 'fun play' is larger than that of the image layer 'ancient task'.
In the above embodiment, a case is also considered where when the focus point is moved, the layer shift is too large, or even in the lateral or longitudinal direction, one end of the layer is out of the screen resolution range, and the other end is blank without pixels. To avoid this situation, in the above embodiment, before multiplying the maximum offset value C of any one layer on the graphical user interface by the offset ratio F of any one layer on the graphical user interface, the design resolution of the graphical user interface may be increased, and then, the maximum offset value C of any one layer on the graphical user interface may be limited within the range of the increased value of the design resolution of the graphical user interface. For example, if the design resolution of the graphical user interface is increased by 200 in the lateral direction (increased by 200 in both the left and right directions) and increased by 113 in the longitudinal direction (increased by 113 in both the up and down directions), the maximum offset value C of any one layer on the graphical user interface is limited within the range of the increased value of the design resolution of the graphical user interface, meaning that the maximum offset value C of any one layer on the graphical user interface is within [ -200, 200] in the lateral direction and within [ -113, 113] in the longitudinal direction. As shown in fig. 4, the left column is a situation where the maximum offset value C of the layer on the gui exceeds the device resolution, and the layer protrudes out of the screen, and the right column is a situation where the maximum offset value C of the layer on the gui is limited within the range of the increased value of the design resolution of the gui after the design resolution of the gui is increased for the influence of the left column.
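Steps S1031 and S1032, together with the limiting just described, can be sketched roughly as follows (hypothetical names; the increase values 200 and 113 are taken from the example above):

    def layer_offset(c_max_x, c_max_y, F, extra_x=200, extra_y=113):
        """Offset of a layer when the focus point reaches its maximum range.
        (c_max_x, c_max_y) is the maximum offset value C; F is the layer's offset proportion."""
        cx = max(-extra_x, min(extra_x, c_max_x))  # limit C within the added design resolution
        cy = max(-extra_y, min(extra_y, c_max_y))
        return cx * F, cy * F                      # step S1032: offset value = C x F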
For determining the offset value of any one layer on the gui, it can also be implemented by a linear relationship, that is, as another embodiment of the present application, when the focus point moves, according to the maximum deflection radian of the device, determining the offset value of any one layer can be implemented by steps S '1031 and S' 1032, which are described as follows:
step S' 1031: and acquiring the deflection radian d of the equipment in real time from the angular motion detection device.
As mentioned above, the angular motion detection device may be a gyroscope integrated on the device, which can acquire the deflection radian d of the device in real time.
Step S' 1032: and calculating the offset value C of any layer on the graphical user interface when the focus point moves according to a linear function C ═ f (D) ═ dC/D, wherein C is the maximum offset value of any layer on the graphical user interface, and D is the maximum deflection radian of the device.
According to the technical solutions of step S '1031 and step S' 1032 in the above embodiments, when the device gradually deflects from the radian 0 to a certain maximum radian, an effect that any one layer on the graphical user interface gradually reaches the maximum offset value from 0 can be achieved.
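Expressed as code, the linear mapping of step S' 1032 is simply the following (a sketch with hypothetical names; clamping d to the maximum radian is an added assumption):

    def offset_from_deflection(d, C, D):
        """c = f(d) = d * C / D: the layer offset grows linearly from 0 to the maximum
        offset value C as the device deflection d grows from 0 to the maximum radian D."""
        d = max(-D, min(D, d))  # assumption: deflection is clamped to the maximum radian
        return d * C / D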
Step S104: and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
It should be noted that, in the embodiments of the present application, a blurred image should be understood in a broad sense, and cannot be regarded as a blurred or unclear image, but should be understood as an image having a certain degree of blurring, which may be a small value or a large value. When the degree of blur is large, it means that the image is not sharp, and when the degree of blur is small, it means that the image is sharp, for example, when the degree of blur is minimum (for example, the degree of blur is 0), the image is sharpest.
As an embodiment of the present application, according to a depth difference between any one layer on the gui and the layer where the focus point is located, generating a blurred image of any one layer on the gui through a blur algorithm may be implemented through steps S1041 to S1043, which are described as follows:
step S1041: and acquiring a clear image of any layer on the graphical user interface.
Specifically, the obtaining of the clear image of any layer on the graphical user interface may be: acquiring an original clear image of any image layer on a graphical user interface, and then calculating a first pixel value of each pixel point by adopting a color matrix aiming at each pixel point in the original clear image to obtain a second pixel value corresponding to each pixel point; and generating a clear image of any layer on the graphical user interface according to the second pixel value corresponding to each pixel point, wherein each pixel point is provided with an alpha channel and at least one color channel.
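One possible reading of the colour-matrix step, offered only as a sketch; the 4x4 matrix shape and the identity example are illustrative assumptions:

    def apply_color_matrix(pixel_rgba, matrix):
        """Multiply an (r, g, b, a) pixel value (the first pixel value) by a 4x4 colour
        matrix to obtain the second pixel value used to build the clear layer image."""
        return tuple(
            sum(matrix[row][col] * pixel_rgba[col] for col in range(4))
            for row in range(4)
        )

    identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    # The identity matrix leaves the pixel unchanged; a real matrix would adjust colour/alpha.
    print(apply_color_matrix((0.2, 0.4, 0.6, 1.0), identity))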
Step S1042: and determining the ambiguity of any layer on the graphical user interface according to the depth difference d between any layer on the graphical user interface and the layer where the focus point is located, wherein the ambiguity is related to n, n is [ d/s ], s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity, and the symbol [ ] represents the rounding of the result within [ ].
In the embodiment of the application, n related to the ambiguity is actually a value indicating how far away from a sampling pixel point in gaussian blur processing, specifically, n circles of pixel points near the sampling pixel point, and the value of n determines the finally obtained ambiguity of a layer; on the other hand, as the value of n increases, that is, the number of pixels around a pixel increases, the time consumed for calculation gradually increases, and therefore the upper limit of n needs to be limited; in this embodiment of the present application, s is a coefficient set by a user to limit the size of n, and s may be set as the maximum depth of the layer divided by the maximum ambiguity on the graphical user interface, where the maximum ambiguity does not exceed 10, and is generally 10.
Step S1043: and performing Gaussian blur processing on the clear image according to the blur degree of any layer on the graphical user interface to generate a blurred image of any layer on the graphical user interface.
As another embodiment of the present application, according to a depth difference between any one layer on the gui and the layer where the focus point is located, generating a blurred image of any one layer on the gui through a blurring algorithm may be implemented through steps S '1041 to S' 1044, which are described below.
Step S' 1041: and determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
Similar to step S1042 in the foregoing embodiment, the value of n, that is, how many circles of pixel points around a sampled pixel point are used in the Gaussian blur processing, is determined according to the depth difference d between any one layer on the gui and the layer where the focus point is located, where n = [d/s], s is the maximum depth of a layer on the gui divided by the maximum ambiguity, and the symbol [ ] indicates rounding the enclosed result; once the value of n is determined, the blurring radius of the Gaussian blur processing can be determined; the maximum ambiguity does not exceed 10 and is generally taken as 10.
Step S' 1042: in the horizontal direction of any one layer on the graphical user interface, the fuzzification radius and the abscissa of each pixel point are taken as the basis, the color value of the pixel point Pi and the horizontal fuzzy threshold of the layer are compared aiming at any one horizontal pixel point Pi, if the color value of the pixel point Pi is within the range of the horizontal fuzzy threshold of the layer, the color value of the pixel point Pi is reserved, and otherwise, the horizontal fuzzy threshold of the layer is taken as the color value of the pixel point Pi.
In this embodiment of the application, the horizontal fuzzy threshold of the layer is a color value that decides whether a given pixel point on the layer should be blurred in the horizontal direction: specifically, the color value of the pixel point Pi is compared with the horizontal fuzzy threshold of the layer; if the color value of the pixel point Pi is within the range of the horizontal fuzzy threshold, the color value of the pixel point Pi is retained, otherwise the horizontal fuzzy threshold of the layer is taken as the color value of the pixel point Pi.
Step S' 1043: and in the longitudinal direction of any one layer on the graphical user interface, based on the fuzzification radius and the vertical coordinate of each pixel point, aiming at any one pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with the longitudinal fuzzy threshold of the layer, if the color value of the pixel point Pj is within the range of the longitudinal fuzzy threshold of the layer, keeping the color value of the pixel point Pj, and otherwise, taking the longitudinal fuzzy threshold of the layer as the color value of the pixel point Pj.
The foregoing step S' 1042 processes the layer in the horizontal direction; similarly, the longitudinal fuzzy threshold of the layer is also a color value, which determines whether a given pixel point on the layer should be blurred in the longitudinal direction: specifically, the color value of the pixel point Pj is compared with the longitudinal fuzzy threshold of the layer; if the color value of the pixel point Pj is within the range of the longitudinal fuzzy threshold, the color value of the pixel point Pj is retained, otherwise the longitudinal fuzzy threshold of the layer is used as the color value of the pixel point Pj.
Step S' 1044: and combining the layer subjected to the fuzzification treatment in the horizontal direction and the layer subjected to the fuzzification treatment in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
Unlike steps S1041 to S1043 in the foregoing embodiment, in step S '1041 to step S' 1044 in this embodiment, only two cycles of blurring processing need to be performed in the horizontal direction and the vertical direction of the layer, respectively, to perform blurring on the layer, so that the amount of computation is small, the processing efficiency is high, and the consumption of memory and computational resources is reduced.
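Read literally, steps S' 1041 to S' 1044 could be sketched as below; how the per-layer horizontal and vertical fuzzy thresholds are chosen, how the blurring radius selects pixels, and how the two passes are combined are not specified above, so the corresponding choices here (plain threshold parameters, greyscale pixel values, averaging of the two passes) are assumptions:

    def threshold_pass(pixels, threshold):
        """One blurring pass: keep a pixel's colour value if it lies within the
        threshold, otherwise replace it with the threshold value."""
        return [p if p <= threshold else threshold for p in pixels]

    def blur_layer(rows, h_threshold, v_threshold):
        # horizontal pass: process every row of the layer
        horiz = [threshold_pass(row, h_threshold) for row in rows]
        # vertical pass: process every column of the layer
        cols = [threshold_pass(list(col), v_threshold) for col in zip(*rows)]
        vert = [list(r) for r in zip(*cols)]
        # combine the two passes (averaging is an assumed combination rule)
        return [[(a + b) / 2 for a, b in zip(hr, vr)] for hr, vr in zip(horiz, vert)]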
As can be seen from the method for simulating the interface effect when human eyes focus illustrated in fig. 1, on one hand, whether the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by the angular motion detection device of the device, the determination can be achieved without complex calculation, so the amount of calculation required by the technical scheme of the present application is very small compared with the huge resource cost of the prior art; on the other hand, when the ambiguity of the layer where the focus point is located is set to the minimum, the eye is guided to the clearest layer area, so the subsequent hierarchical offset and regional blurring of the layers based on the movement of the focus point can better simulate the effect of a graphical user interface when the eye focuses in a real scene.
Referring to fig. 5, a method for simulating an interface effect when human eyes focus according to another embodiment of the present application mainly includes steps S501 to S504, which are detailed as follows:
step S501: the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by angular motion detection means of the device.
The implementation of step S501 is completely the same as the implementation of step S101 in the foregoing embodiment, and the explanation of related terms, concepts, and the like can refer to the description of step S101 in the foregoing embodiment, which is not described herein again.
Step S502: and setting the ambiguity of the layer where the focus point is located as 0.
The implementation of step S502 is completely the same as the implementation of step S102 in the foregoing embodiment, and the explanation of related terms, concepts, and the like may refer to the description of step S102 in the foregoing embodiment, which is not described herein again.
Step S503: when the focus point moves, any layer on the graphical user interface is moved from the current position to the target position.
Specifically, when the focus point moves, any one layer on the graphical user interface may be moved from the current position to the target position according to the linear interpolation function p = (1 - t) × a + t × b, which controls the layer to be interpolated from the current position to the target position, where p is the real-time coordinate of the layer during interpolation, a is the current position of the layer, b is the target position of the layer, and t is the interpolation proportion, t ∈ [0, 1]. With this linear interpolation, an effect similar to inertia is produced when a layer on the graphical user interface is shifted, so that the shift does not look rigid and appears smoother.
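A per-frame sketch of this interpolation (hypothetical names; in practice t would be advanced a little at every interface refresh):

    def lerp(a, b, t):
        """p = (1 - t) * a + t * b, with t in [0, 1]."""
        return (1 - t) * a + t * b

    def move_layer(current, target, t):
        # interpolate both coordinates of the layer from its current position
        # towards the target position, which is reached when t = 1
        return lerp(current[0], target[0], t), lerp(current[1], target[1], t)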
Moving a layer from the current position to the target position by the linear interpolation algorithm, or other causes, may damage the original image of the layer, for example by decreasing its resolution. In the embodiment of the present application, therefore, while any one layer on the graphical user interface is moved from the current position to the target position by the linear interpolation algorithm, this damage to the original image of the layer may be addressed by steps S5031 to S5034, described as follows:
step S5031: and performing interpolation of a nearest neighbor interpolation method on the low-resolution area brightness image area of any image layer on the graphical user interface by adopting an interpolation model to obtain a high-resolution area brightness image.
Step S5032: and calculating a loss function of the interpolation model, and summing the smoothness of the brightness map of the high-resolution area.
Step S5033: and projecting the descending direction of the loss function of the interpolation model to the feasible direction and determining the descending step length.
Step S5034: the luminance values of the pixels of the luminance map of the high-resolution area are corrected so that the value of the loss function of the interpolation model is reduced.
By means of this embodiment, the damage to the original image of the layer during the layer shift is effectively suppressed, the descent direction and step length of the loss function of the interpolation model can be determined quickly, the amount of calculation is reduced, and the execution of the interpolation algorithm is accelerated.
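Steps S5031 to S5034 are stated at a high level; the sketch below is only one possible reading, and every modelling choice in it (the nearest-neighbour factor, the squared-difference smoothness loss, the neighbour-mean descent step, and the clamping that plays the role of the projection onto the feasible set) is an assumption rather than something prescribed above:

    def nearest_neighbour_upscale(lum, factor):
        """Step S5031: nearest-neighbour interpolation of a low-resolution luminance map."""
        return [[lum[i // factor][j // factor]
                 for j in range(len(lum[0]) * factor)]
                for i in range(len(lum) * factor)]

    def smoothness_loss(img):
        """Step S5032: sum of squared differences between neighbouring pixels."""
        loss = 0.0
        for i in range(len(img)):
            for j in range(len(img[0])):
                if i + 1 < len(img):
                    loss += (img[i][j] - img[i + 1][j]) ** 2
                if j + 1 < len(img[0]):
                    loss += (img[i][j] - img[i][j + 1]) ** 2
        return loss

    def correct_luminance(img, step=0.1):
        """Steps S5033/S5034: move each luminance value a small step towards the mean of
        its neighbours (a descent direction for the loss) and clamp to the valid range."""
        out = [row[:] for row in img]
        for i in range(len(img)):
            for j in range(len(img[0])):
                nbrs = [img[x][y] for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < len(img) and 0 <= y < len(img[0])]
                target = sum(nbrs) / len(nbrs) if nbrs else img[i][j]
                out[i][j] = min(1.0, max(0.0, img[i][j] + step * (target - img[i][j])))
        return out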
Step S504: and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
Similar to step S104 in the previous embodiment, step S504 can be implemented in step S5041 to step S5043 as an embodiment of the present application, and is described as follows:
step S5041: and acquiring a clear image of any layer on the graphical user interface.
Specifically, it may be: acquiring an original clear image of any image layer on a graphical user interface, and then calculating a first pixel value of each pixel point by adopting a color matrix aiming at each pixel point in the original clear image to obtain a second pixel value corresponding to each pixel point; and generating a clear image of any layer on the graphical user interface according to the second pixel value corresponding to each pixel point, wherein each pixel point is provided with an alpha channel and at least one color channel.
Step S5042: and determining the ambiguity of any layer on the graphical user interface according to the depth difference d between any layer on the graphical user interface and the layer where the focus point is located, wherein the ambiguity is related to n, n is [ d/s ], s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity, wherein a symbol [ ] represents rounding the result within [ ], and the maximum ambiguity does not exceed 10, and is generally 10.
In the embodiment of the application, n related to the ambiguity is actually a value indicating how far away from a sampling pixel point in gaussian blur processing, specifically, n circles of pixel points near the sampling pixel point, wherein the value of n determines the finally obtained ambiguity of a layer; on the other hand, as the value of n increases, that is, the number of pixels around a pixel increases, the time consumed for calculation gradually increases, and therefore the upper limit of n needs to be limited; in this embodiment of the application, s is a coefficient set by a user to limit the size of n, and s may be set as the maximum depth of the layer divided by 10 on the gui.
Step S5043: and performing Gaussian blur processing on the clear image according to the blur degree of any layer on the graphical user interface to generate a blurred image of any layer on the graphical user interface.
Similar to step S104 of the previous embodiment, step S504 can be implemented by step S '5041 to step S' 5044 as another embodiment of the present application, which is described below.
Step S' 5041: and determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
Similar to step S5042 in the foregoing embodiment, the value of n, that is, how many circles of pixel points around a sampled pixel point are used in the Gaussian blur processing, is determined according to the depth difference d between any one layer on the gui and the layer where the focus point is located, where n = [d/s], s is the maximum depth of a layer on the gui divided by the maximum ambiguity, and the symbol [ ] indicates rounding the enclosed result; once the value of n is determined, the blurring radius of the Gaussian blur processing can be determined; the maximum ambiguity does not exceed 10 and is generally taken as 10.
Step S' 5042: in the horizontal direction of any one layer on the graphical user interface, the fuzzification radius and the abscissa of each pixel point are taken as the basis, the color value of the pixel point Pi and the horizontal fuzzy threshold of the layer are compared aiming at any one horizontal pixel point Pi, if the color value of the pixel point Pi is within the range of the horizontal fuzzy threshold of the layer, the color value of the pixel point Pi is reserved, and otherwise, the horizontal fuzzy threshold of the layer is taken as the color value of the pixel point Pi.
In this embodiment of the application, the horizontal fuzzy threshold of the layer is a color value that decides whether a given pixel point on the layer should be blurred in the horizontal direction: specifically, the color value of the pixel point Pi is compared with the horizontal fuzzy threshold of the layer; if the color value of the pixel point Pi is within the range of the horizontal fuzzy threshold, the color value of the pixel point Pi is retained, otherwise the horizontal fuzzy threshold of the layer is taken as the color value of the pixel point Pi.
Step S' 5043: and in the longitudinal direction of any one layer on the graphical user interface, based on the fuzzification radius and the vertical coordinate of each pixel point, aiming at any one pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with the longitudinal fuzzy threshold of the layer, if the color value of the pixel point Pj is within the range of the longitudinal fuzzy threshold of the layer, keeping the color value of the pixel point Pj, and otherwise, taking the longitudinal fuzzy threshold of the layer as the color value of the pixel point Pj.
The foregoing step S' 5042 processes the layer in the horizontal direction; similarly, the longitudinal fuzzy threshold of the layer is also a color value, which determines whether a given pixel point on the layer should be blurred in the longitudinal direction: specifically, the color value of the pixel point Pj is compared with the longitudinal fuzzy threshold of the layer; if the color value of the pixel point Pj is within the range of the longitudinal fuzzy threshold, the color value of the pixel point Pj is retained, otherwise the longitudinal fuzzy threshold of the layer is used as the color value of the pixel point Pj.
Step S' 5044: and combining the layer subjected to the fuzzification treatment in the horizontal direction and the layer subjected to the fuzzification treatment in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
Unlike steps S5041 to S5043 of the foregoing embodiments, steps S '5041 to S' 5044 of the present embodiment only need to perform twice cycles of blurring processing in the horizontal direction and the vertical direction of the layer, respectively, to perform blurring of the layer, so that the amount of computation is small, the processing efficiency is high, and the consumption of memory and computational resources is reduced.
As can be seen from the method for simulating the interface effect when the human eye focuses illustrated in fig. 5, on one hand, whether the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by the angular motion detection device of the device, the determination can be made without complicated calculation, so that the amount of calculation required by the technical solution of the present application is very small compared with the huge resource cost of the prior art; on the other hand, since the ambiguity of the layer where the focus point is located is set to the minimum, the human eye is guided to the clearest layer area, and therefore the subsequent hierarchical offset and regional blurring of the layers based on the movement of the focus point can better simulate the effect of a graphical user interface when the human eye focuses in a real scene.
Referring to fig. 6, an apparatus for simulating the interface effect when the human eye focuses according to an embodiment of the present application may include a focus point determining module 601, a setting module 602, an offset value determining module 603, and a generating module 604, which are described in detail as follows:
a focus point determination module 601 for determining a focus point on a graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device;
a setting module 602, configured to set the ambiguity of the layer where the focus point is located to be 0;
an offset value determining module 603, configured to determine, when the focus point moves, an offset value of any layer on the graphical user interface according to a maximum range of the focus point determined based on a height and a width of a display screen of the device or a maximum deflection radian of the device, where an offset ratio of any layer on the graphical user interface is related to a depth of any layer on the graphical user interface;
the generating module 604 is configured to generate a blurred image of any layer on the graphical user interface through a blur algorithm according to a depth difference between any layer on the graphical user interface and the layer where the focus point is located.
Optionally, in the apparatus illustrated in fig. 6, the offset value determining module 603 may include an offset ratio obtaining unit and a first calculating unit, where:
the offset ratio acquiring unit is used for acquiring the offset ratio F of any layer on the graphical user interface from the configuration file;
the first calculation unit is used for multiplying the maximum offset value C of any layer on the graphical user interface by the offset proportion F of any layer on the graphical user interface, and determining the multiplication result as the offset value of any layer on the graphical user interface when the focus point moves, wherein the maximum offset value C of any layer on the graphical user interface is determined by the maximum deflection radian of the device.
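For illustration only, a minimal sketch of the offset ratio acquiring unit and the first calculating unit follows; the configuration structure, layer names, and ratio values are invented for the example and are not taken from the disclosure.

```python
# Assumed per-layer configuration; the disclosure only states that the offset
# proportion F is read from a configuration file and is related to layer depth.
layer_config = {
    "background": {"offset_ratio": 1.0},
    "midground":  {"offset_ratio": 0.5},
    "foreground": {"offset_ratio": 0.1},
}

def layer_offset(layer_name: str, max_offset_c: float) -> float:
    """Offset value applied to the layer when the focus point moves: offset = C * F."""
    ratio_f = layer_config[layer_name]["offset_ratio"]   # F, read from the configuration
    return max_offset_c * ratio_f

print(layer_offset("midground", 40.0))   # -> 20.0
```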
Optionally, the apparatus illustrated in fig. 6 may further include a resolution increasing module 701 and a limiting module 702, as shown in fig. 7, which illustrates an apparatus for simulating the interface effect when the human eye focuses according to another embodiment of the present application, wherein:
a resolution increasing module 701 for increasing a design resolution of the graphical user interface;
a limiting module 702, configured to limit the maximum offset value C of any layer on the graphical user interface within the range of the increased value of the design resolution of the graphical user interface.
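For illustration only, the following sketch shows one possible way to bound the maximum offset value C by the amount of the resolution increase; the resolution figures are assumed values, not taken from the disclosure.

```python
base_resolution = (1280, 720)       # assumed original design resolution
increased_resolution = (1360, 800)  # assumed design resolution after the increase

# The increase available on each axis bounds the maximum offset value C,
# so that moving a layer never exposes the screen edge.
max_offset_x = increased_resolution[0] - base_resolution[0]   # 80
max_offset_y = increased_resolution[1] - base_resolution[1]   # 80

def clamp_max_offset(c: float) -> float:
    """Keep the maximum offset value C within the resolution increase."""
    return min(c, max_offset_x, max_offset_y)

print(clamp_max_offset(100.0))   # -> 80
```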
Alternatively, in the apparatus illustrated in fig. 6, the offset value determining module 603 may include a deflection radian acquiring unit and a second calculating unit, wherein:
a deflection radian acquisition unit for acquiring a deflection radian d of the device from the angular motion detection device in real time;
and the second calculating unit is used for calculating the offset value c of any layer on the graphical user interface when the focus point moves according to the linear function c = f(d) = d·C/D, where d is the deflection radian of the device, C is the maximum offset value of any layer on the graphical user interface, and D is the maximum deflection radian of the device.
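For illustration only, a minimal sketch of this linear mapping follows; the clamping of the deflection radian to the device's maximum range is an added assumption and the numbers are invented.

```python
def offset_from_deflection(d: float, max_offset_c: float, max_deflection_d: float) -> float:
    """Linear mapping c = f(d) = d * C / D from deflection radian to layer offset."""
    d = max(-max_deflection_d, min(d, max_deflection_d))   # assumed clamp to the device's range
    return d * max_offset_c / max_deflection_d

# Example: at half of the maximum deflection, the layer is offset by half of C.
print(offset_from_deflection(0.25, 40.0, 0.5))             # -> 20.0
```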
Optionally, in the apparatus illustrated in fig. 6, the generating module 604 may include a sharp image acquiring unit, a third calculating unit, and a blurred image generating unit, where:
the clear image acquisition unit is used for acquiring a clear image of any layer on the graphical user interface;
a third calculating unit, configured to determine the ambiguity of any layer on the graphical user interface according to the depth difference d between that layer and the layer where the focus point is located, where the ambiguity is related to n, n = [d/s], s is the maximum depth of the layers on the graphical user interface divided by the maximum ambiguity, the symbol [ ] indicates rounding the result within [ ], and the maximum ambiguity does not exceed 10 and is generally taken as 10;
and the blurred image generating unit is used for performing Gaussian blur processing on the clear image according to the blur degree of any layer on the graphical user interface to generate a blurred image of any layer on the graphical user interface.
Optionally, in the apparatus illustrated in fig. 6, the generating module 604 may include a blur radius determining unit, a first comparing unit, a second comparing unit, and a merging unit, where:
the blurring radius determining unit is used for determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any one layer on the graphical user interface and the layer where the focus point is located;
the first comparison unit is used for, in the transverse direction of any layer on the graphical user interface and based on the fuzzification radius and the abscissa of each pixel point, comparing, for any transverse pixel point Pi, the color value of the pixel point Pi with the layer transverse fuzzy threshold; if the color value of the pixel point Pi is within the range of the layer transverse fuzzy threshold, the color value of the pixel point Pi is retained, otherwise the layer transverse fuzzy threshold is used as the color value of the pixel point Pi;
the second comparison unit is used for, in the longitudinal direction of any layer on the graphical user interface and based on the fuzzification radius and the vertical coordinate of each pixel point, comparing, for any longitudinal pixel point Pj, the color value of the pixel point Pj with the layer longitudinal fuzzy threshold; if the color value of the pixel point Pj is within the range of the layer longitudinal fuzzy threshold, the color value of the pixel point Pj is retained, otherwise the layer longitudinal fuzzy threshold is used as the color value of the pixel point Pj;
and the merging unit is used for merging the layer blurred in the transverse direction and the layer blurred in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
Referring to fig. 8, an apparatus for simulating the interface effect when the human eye focuses according to an embodiment of the present application may include a focus point determining module 601, a setting module 602, an interpolation module 801, and a generating module 604, which are described in detail as follows:
a focus point determination module 601 for determining a focus point on a graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device;
a setting module 602, configured to set the ambiguity of the layer where the focus point is located to be 0;
an interpolation module 801, configured to move any one layer on the graphical user interface from a current position to a target position when the focus point moves;
the generating module 604 is configured to generate a blurred image of any layer on the graphical user interface through a blur algorithm according to a depth difference between any layer on the graphical user interface and the layer where the focus point is located.
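The disclosure does not fix how the interpolation module 801 moves a layer toward the target position; for illustration only, the sketch below uses simple per-frame easing, which is an assumption rather than the claimed method.

```python
def step_toward(current: float, target: float, easing: float = 0.2) -> float:
    """Move one step from the current position toward the target by linear interpolation."""
    return current + (target - current) * easing

# A few frames of movement of a layer coordinate toward x = 100.
x = 0.0
for _ in range(5):
    x = step_toward(x, 100.0)
print(round(x, 1))                 # -> 67.2
```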
Optionally, in the apparatus illustrated in fig. 8, the generating module 604 may include a sharp image acquiring unit, a third calculating unit, and a blurred image generating unit, where:
the clear image acquisition unit is used for acquiring a clear image of any layer on the graphical user interface;
a third calculating unit, configured to determine the ambiguity of any layer on the graphical user interface according to the depth difference d between that layer and the layer where the focus point is located, where the ambiguity is related to n, n = [d/s], s is the maximum depth of the layers on the graphical user interface divided by the maximum ambiguity, the symbol [ ] indicates rounding the result within [ ], and the maximum ambiguity does not exceed 10 and is generally taken as 10;
and the blurred image generating unit is used for performing Gaussian blur processing on the clear image according to the blur degree of any layer on the graphical user interface to generate a blurred image of any layer on the graphical user interface.
Optionally, in the apparatus illustrated in fig. 8, the generating module 604 may include a blur radius determining unit, a first comparing unit, a second comparing unit, and a merging unit, where:
the blurring radius determining unit is used for determining the blurring radius for carrying out Gaussian blurring processing according to the depth difference between any one layer on the graphical user interface and the layer where the focus point is located;
the first comparison unit is used for, in the transverse direction of any layer on the graphical user interface and based on the fuzzification radius and the abscissa of each pixel point, comparing, for any transverse pixel point Pi, the color value of the pixel point Pi with the layer transverse fuzzy threshold; if the color value of the pixel point Pi is within the range of the layer transverse fuzzy threshold, the color value of the pixel point Pi is retained, otherwise the layer transverse fuzzy threshold is used as the color value of the pixel point Pi;
the second comparison unit is used for, in the longitudinal direction of any layer on the graphical user interface and based on the fuzzification radius and the vertical coordinate of each pixel point, comparing, for any longitudinal pixel point Pj, the color value of the pixel point Pj with the layer longitudinal fuzzy threshold; if the color value of the pixel point Pj is within the range of the layer longitudinal fuzzy threshold, the color value of the pixel point Pj is retained, otherwise the layer longitudinal fuzzy threshold is used as the color value of the pixel point Pj;
and the merging unit is used for merging the layer blurred in the transverse direction and the layer blurred in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
As can be seen from the above description of the technical solutions, on one hand, whether the focus point on the graphical user interface is determined based on the height and width of the display screen of the device or by the angular motion detection device of the device, the determination can be made without complicated calculation, so that the amount of calculation required by the technical solution of the present application is extremely small compared with the huge resource cost of the prior art; on the other hand, since the ambiguity of the layer where the focus point is located is set to the minimum, the human eye is guided to the clearest layer area, and therefore the subsequent hierarchical offset and regional blurring of the layers based on the movement of the focus point can better simulate the effect of a graphical user interface when the human eye focuses in a real scene.
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 9, the computer device 9 of this embodiment mainly includes: a processor 90, a memory 91, and a computer program 92 stored in the memory 91 and executable on the processor 90, such as a program implementing the method for simulating the interface effect when the human eye focuses. The processor 90 executes the computer program 92 to implement the steps in the above-mentioned embodiments of the method for simulating the interface effect when the human eye focuses, such as steps S101 to S104 shown in fig. 1 or steps S501 to S504 shown in fig. 5. Alternatively, when executing the computer program 92, the processor 90 implements the functions of the modules/units in the above-described apparatus embodiments, such as the functions of the focus point determining module 601, the setting module 602, the offset value determining module 603, and the generating module 604 shown in fig. 6, or the focus point determining module 601, the setting module 602, the interpolation module 801, and the generating module 604 shown in fig. 8.
Illustratively, the computer program 92 of the method for simulating an interface effect when focusing on the human eye mainly comprises: determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device; setting the ambiguity of the layer where the focus point is located as 0; when the focus point moves, determining an offset value of any layer on the graphical user interface according to the maximum range of the focus point or the maximum deflection radian of the equipment, which is determined based on the height and the width of the display screen of the equipment, wherein the offset proportion of any layer on the graphical user interface is related to the depth of any layer on the graphical user interface; generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located; alternatively, the computer program 92 for simulating the interface effect when the human eye is focused mainly comprises: determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device; setting the ambiguity of the layer where the focus point is located as 0; when the focus point moves, moving any one layer on the graphical user interface from the current position to the target position; and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located. The computer program 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to accomplish the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 92 in the computer device 9. 
For example, the computer program 92 may be divided into functions of a focus point determining module 601, a setting module 602, an offset value determining module 603, and a generating module 604 (modules in a virtual device), and the specific functions of each module are as follows: a focus point determination module 601 for determining a focus point on a graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device; a setting module 602, configured to set the ambiguity of the layer where the focus point is located to be 0; an offset value determining module 603, configured to determine, when the focus point moves, an offset value of any layer on the graphical user interface according to a maximum range of the focus point determined based on a height and a width of a display screen of the device or a maximum deflection radian of the device, where an offset ratio of any layer on the graphical user interface is related to a depth of any layer on the graphical user interface; the generating module 604 is configured to generate a blurred image of any layer on the graphical user interface through a blur algorithm according to a depth difference between any layer on the graphical user interface and a layer where the focus point is located; alternatively, the computer program 92 may be divided into the functions of the focus point determining module 601, the setting module 602, the interpolation module 801, and the generation module 604 (modules in the virtual device), and the specific functions of each module are as follows: a focus point determination module 601 for determining a focus point on a graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device; a setting module 602, configured to set the ambiguity of the layer where the focus point is located to be 0; an interpolation module 801, configured to move any one layer on the graphical user interface from a current position to a target position when the focus point moves; the generating module 604 is configured to generate a blurred image of any layer on the graphical user interface through a blur algorithm according to a depth difference between any layer on the graphical user interface and the layer where the focus point is located.
The computer device 9 may include, but is not limited to, a processor 90 and a memory 91. Those skilled in the art will appreciate that fig. 9 is merely an example of the computer device 9 and is not intended to limit the computer device 9, which may include more or fewer components than those shown, combine some of the components, or include different components; for example, the computer device may also include input/output devices, network access devices, buses, and the like.
The processor 90 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 91 may be an internal storage unit of the computer device 9, such as a hard disk or memory of the computer device 9. The memory 91 may also be an external storage device of the computer device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the computer device 9. Further, the memory 91 may also include both an internal storage unit and an external storage device of the computer device 9. The memory 91 is used for storing the computer program and other programs and data required by the computer device. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as required to different functional units and modules, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the above-described apparatus/computer device embodiments are merely illustrative; for instance, the division into modules or units is only a division by logical function, and other division manners are possible in actual implementation, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a non-transitory computer readable storage medium. Based on such understanding, all or part of the processes in the method of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium, and when executed by a processor, may implement the steps of the embodiments of the method described above, that is, determining a focus point on a graphical user interface based on the height and width of a display screen of a device or by an angular motion detection device of the device; setting the ambiguity of the layer where the focus point is located as 0; when the focus point moves, determining an offset value of any layer on the graphical user interface according to the maximum range of the focus point determined based on the height and the width of the display screen of the device or the maximum deflection radian of the device, wherein the offset proportion of any layer on the graphical user interface is related to the depth of any layer on the graphical user interface; and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located. Alternatively, a computer program for simulating the interface effect when the human eye focuses may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of the above embodiments of the method, that is, the computer program 92 for simulating the interface effect when the human eye focuses mainly includes: determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device; setting the ambiguity of the layer where the focus point is located as 0; when the focus point moves, moving any one layer on the graphical user interface from the current position to the target position; and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The non-transitory computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the non-transitory computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdictions; for example, in some jurisdictions, in accordance with legislation and patent practice, a non-transitory computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
The above-mentioned embodiments, objects, technical solutions and advantages of the present application are described in further detail. It should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present application should be included in the scope of the present application.

Claims (14)

1. A method for simulating an interface effect when a human eye is focused, the method comprising:
determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device;
setting the ambiguity of the layer where the focus point is located to be 0;
when the focus point moves, determining an offset value of any one layer according to the maximum range of the focus point determined based on the height and the width of a display screen of the device or the maximum deflection radian of the device, wherein the offset proportion of any one layer is related to the depth of any one layer;
and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
2. The method for simulating an interface effect when focusing on human eyes as claimed in claim 1, wherein said determining an offset value of any one of said layers according to a maximum deflection radian of said device when said focus point moves comprises:
acquiring the offset proportion F of any layer on the graphical user interface from a configuration file;
and multiplying the maximum offset value C of any layer on the graphical user interface by the offset proportion F of any layer on the graphical user interface, wherein the result of the multiplication is determined as the offset value of any layer when the focus point moves, and the maximum offset value C of any layer on the graphical user interface is determined by the maximum deflection radian of the equipment.
3. The method for simulating an interface effect when focusing on human eyes as claimed in claim 2, wherein before said multiplying the maximum offset value C of any one layer on said graphical user interface by the offset proportion F of any one layer on said graphical user interface, the method further comprises:
increasing a design resolution of the graphical user interface;
and limiting the maximum offset value C of any layer on the graphical user interface within the range of the increased value of the design resolution of the graphical user interface.
4. The method for simulating an interface effect when focusing on human eyes as claimed in claim 1, wherein said determining an offset value of any one of said layers according to a maximum deflection radian of said device when said focus point moves comprises:
acquiring the deflection radian d of the equipment from the angular motion detection device in real time;
and calculating the offset value c of any layer when the focus point moves according to the linear function c = f(d) = d·C/D, wherein d is the deflection radian of the equipment, C is the maximum offset value of any layer on the graphical user interface, and D is the maximum deflection radian of the equipment.
5. The method for simulating an interface effect when focusing by human eyes according to claim 1, wherein the generating a blurred image of any layer on the gui through a blurring algorithm according to a depth difference between any layer on the gui and a layer where the focus point is located comprises:
acquiring a clear image of any layer on the graphical user interface;
determining the ambiguity of any layer on the graphical user interface according to the depth difference d between any layer on the graphical user interface and the layer where the focus point is located, wherein the ambiguity is related to n, n = [d/s], s is the maximum ambiguity divided by the maximum depth of the layer on the graphical user interface, and the symbol [ ] represents rounding the result within [ ];
and performing Gaussian blur processing on the clear image according to the blur degree of any layer on the graphical user interface to generate a blurred image of any layer on the graphical user interface.
6. The method for simulating an interface effect when focusing by human eyes according to claim 1, wherein the generating a blurred image of any layer on the gui through a blurring algorithm according to a depth difference between any layer on the gui and a layer where the focus point is located comprises:
determining the blurring radius for performing Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located;
in the transverse direction of any one layer on the graphical user interface, based on the fuzzification radius and the abscissa of each pixel point, comparing the color value of the pixel point Pi with a layer transverse fuzzy threshold value aiming at any pixel point Pi in the transverse direction, if the color value of the pixel point Pi is within the range of the layer transverse fuzzy threshold value, retaining the color value of the pixel point Pi, and if not, taking the layer transverse fuzzy threshold value as the color value of the pixel point Pi;
in the longitudinal direction of any one layer on the graphical user interface, based on the fuzzification radius and the vertical coordinate of each pixel point, aiming at any pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with a layer longitudinal fuzzy threshold, if the color value of the pixel point Pj is within the range of the layer longitudinal fuzzy threshold, retaining the color value of the pixel point Pj, and if not, taking the layer longitudinal fuzzy threshold as the color value of the pixel point Pj;
and combining the layer subjected to the fuzzification treatment in the transverse direction and the layer subjected to the fuzzification treatment in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
7. A method for simulating an interface effect when a human eye is focused, the method comprising:
determining a focus point on the graphical user interface based on the height and width of the device display screen or by angular motion detection means of the device;
setting the ambiguity of the layer where the focus point is located to be 0;
when the focus point moves, moving any one layer on the graphical user interface from the current position to the target position;
and generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
8. The method for simulating an interface effect when focusing on human eyes as claimed in claim 7, wherein said moving any one layer on said graphical user interface from a current position to a target position further comprises:
performing nearest neighbor interpolation on a low-resolution area brightness map of any layer on the graphical user interface by using an interpolation model to obtain a high-resolution area brightness map;
calculating a loss function of the interpolation model, and summing smoothness of the high-resolution area brightness map;
projecting the descending direction of the loss function to a feasible direction and determining a descending step length;
and correcting the brightness value of the pixel of the brightness image of the high-resolution area to reduce the value of the loss function.
9. The method according to claim 7, wherein the generating the blurred image of any one of the layers on the gui by the blur algorithm according to the depth difference between any one of the layers on the gui and the layer where the focus point is located comprises:
acquiring a clear image of any layer on the graphical user interface;
determining the ambiguity of any layer on the graphical user interface according to the depth difference d between any layer on the graphical user interface and the layer where the focus point is located, wherein the ambiguity is related to n, n is d/s, and s is the maximum depth of the layer on the graphical user interface divided by the maximum ambiguity;
and performing Gaussian blur processing on the clear image according to the blur degree of any layer on the graphical user interface to generate a blurred image of any layer on the graphical user interface.
10. The method according to claim 7, wherein the generating the blurred image of any one of the layers on the gui by the blur algorithm according to the depth difference between any one of the layers on the gui and the layer where the focus point is located comprises:
determining the blurring radius for performing Gaussian blurring processing according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located;
in the transverse direction of any one layer on the graphical user interface, based on the fuzzification radius and the abscissa of each pixel point, comparing the color value of the pixel point Pi with a layer transverse fuzzy threshold value aiming at any pixel point Pi in the transverse direction, if the color value of the pixel point Pi is within the range of the layer transverse fuzzy threshold value, retaining the color value of the pixel point Pi, and if not, taking the layer transverse fuzzy threshold value as the color value of the pixel point Pi;
in the longitudinal direction of any one layer on the graphical user interface, based on the fuzzification radius and the vertical coordinate of each pixel point, aiming at any pixel point Pj in the longitudinal direction, comparing the color value of the pixel point Pj with a layer longitudinal fuzzy threshold, if the color value of the pixel point Pj is within the range of the layer longitudinal fuzzy threshold, retaining the color value of the pixel point Pj, and if not, taking the layer longitudinal fuzzy threshold as the color value of the pixel point Pj;
and combining the layer subjected to the fuzzification treatment in the transverse direction and the layer subjected to the fuzzification treatment in the longitudinal direction to obtain a blurred image of any layer on the graphical user interface.
11. An apparatus for simulating an interface effect when focusing on a human eye, the apparatus comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focus point is located to be 0;
an offset value determining module, configured to determine, when the focus point moves, an offset value of any one layer according to the maximum range of the focus point determined based on the height and the width of the device display screen or the maximum deflection radian of the device, where an offset ratio of any one layer is related to the depth of any one layer;
and the generating module is used for generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
12. An apparatus for simulating an interface effect when focusing on a human eye, the apparatus comprising:
a focus point determination module for determining a focus point on the graphical user interface based on a height and width of a display screen of the device or by angular motion detection means of the device;
the setting module is used for setting the ambiguity of the layer where the focus point is located to be 0;
the interpolation module is used for moving any one layer on the graphical user interface from the current position to the target position when the focus point moves;
and the generating module is used for generating a fuzzy image of any layer on the graphical user interface through a fuzzy algorithm according to the depth difference between any layer on the graphical user interface and the layer where the focus point is located.
13. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the computer program performs the steps of the method for simulating an interface effect when focusing on the human eye of any one of claims 1 to 6 or the steps of the method for simulating an interface effect when focusing on the human eye of any one of claims 7 to 10.
14. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for simulating an interface effect when focusing on the human eye of any one of claims 1 to 6, or carries out the steps of the method for simulating an interface effect when focusing on the human eye of any one of claims 7 to 10.
CN202110239627.2A 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes Active CN112835453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110239627.2A CN112835453B (en) 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes

Publications (2)

Publication Number Publication Date
CN112835453A true CN112835453A (en) 2021-05-25
CN112835453B CN112835453B (en) 2023-05-09

Family

ID=75934555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110239627.2A Active CN112835453B (en) 2021-03-04 2021-03-04 Method, apparatus and storage medium for simulating interface effect when focusing human eyes

Country Status (1)

Country Link
CN (1) CN112835453B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040100479A1 (en) * 2002-05-13 2004-05-27 Masao Nakano Portable information terminal, display control device, display control method, and computer readable program therefor
CN103226436A (en) * 2013-03-06 2013-07-31 广东欧珀移动通信有限公司 Man-machine interaction method and system of intelligent terminal
CN104349153A (en) * 2013-08-06 2015-02-11 宏达国际电子股份有限公司 Image processing methods and systems in accordance with depth information
CN106537219A (en) * 2014-05-30 2017-03-22 奇跃公司 Methods and system for creating focal planes in virtual and augmented reality
CN105590294A (en) * 2014-11-18 2016-05-18 联想(北京)有限公司 Image-processing method and electronic equipment
CN107003734A (en) * 2014-12-23 2017-08-01 美达公司 Vision accommodation and vision convergence are coupled to conplane equipment, the method and system of any depth of object of interest
CN106257391A (en) * 2015-06-18 2016-12-28 苹果公司 Equipment, method and graphic user interface for navigation medium content
US20200368616A1 (en) * 2017-06-09 2020-11-26 Dean Lindsay DELAMONT Mixed reality gaming system
CN108769545A (en) * 2018-06-12 2018-11-06 Oppo(重庆)智能科技有限公司 A kind of image processing method, image processing apparatus and mobile terminal
CN108986228A (en) * 2018-07-06 2018-12-11 网易(杭州)网络有限公司 The method and device shown for virtual reality median surface

Also Published As

Publication number Publication date
CN112835453B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
US11756223B2 (en) Depth-aware photo editing
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
EP3958207A2 (en) Method and apparatus for video frame interpolation, and electronic device
US20140078170A1 (en) Image processing apparatus and method, and program
CN112218107B (en) Live broadcast rendering method and device, electronic equipment and storage medium
CN111275801A (en) Three-dimensional picture rendering method and device
CN111311482A (en) Background blurring method and device, terminal equipment and storage medium
US20220375042A1 (en) Defocus Blur Removal and Depth Estimation Using Dual-Pixel Image Data
CN111507997A (en) Image segmentation method, device, equipment and computer storage medium
JP2019186762A (en) Video generation apparatus, video generation method, program, and data structure
CN111371983A (en) Video online stabilization method and system
CN109145688A (en) The processing method and processing device of video image
CN114494046A (en) Touch trajectory processing method, device, terminal, storage medium and program product
WO2024067320A1 (en) Virtual object rendering method and apparatus, and device and storage medium
CN111583329B (en) Augmented reality glasses display method and device, electronic equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN111652794B (en) Face adjusting and live broadcasting method and device, electronic equipment and storage medium
CN112835453B (en) Method, apparatus and storage medium for simulating interface effect when focusing human eyes
CN113496506A (en) Image processing method, device, equipment and storage medium
CN113810755B (en) Panoramic video preview method and device, electronic equipment and storage medium
CN113256785B (en) Image processing method, apparatus, device and medium
CN115205456A (en) Three-dimensional model construction method and device, electronic equipment and storage medium
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN110689609B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111652025B (en) Face processing and live broadcasting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant