CN109697001B - Interactive interface display method and device, storage medium and electronic device


Info

Publication number: CN109697001B (granted publication of application CN201711000972.0A; published earlier as CN109697001A)
Authority: CN (China)
Legal status: Active
Inventor: 沈超
Original and current assignee: Tencent Technology Shenzhen Co Ltd
Other languages: Chinese (zh)
Priority: CN201711000972.0A; PCT/CN2018/111650 (WO2019080870A1)
Prior art keywords: dimensional, interactive interface, display mode, data, display

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484 Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an interactive interface display method and device, a storage medium and an electronic device. The method comprises the following steps: displaying a three-dimensional interactive interface in a virtual reality scene of a target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application; acquiring an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction instructs that a first operation be executed on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application; and in response to the operation instruction, executing the first operation on the target object and displaying the three-dimensional interactive interface according to a second display mode, wherein the second display mode identifies the first operation by adopting a display mode different from the first display mode. The invention solves the technical problem in the related art that no feedback can be given on a user's operation.

Description

Interactive interface display method and device, storage medium and electronic device
Technical Field
The invention relates to the field of the Internet, and in particular to an interactive interface display method and device, a storage medium and an electronic device.
Background
Virtual reality (VR), also called virtual technology or virtual environment, uses computer simulation to generate a three-dimensional virtual world and provides the user with simulated senses such as vision, so that the user feels present in the scene and can observe things in the three-dimensional space in real time and without limitation. VR may be implemented in a "software + hardware device" manner.
Common VR software includes Steam and Oculus. Steam is a digital distribution, digital rights management and social platform used for publishing, selling and subsequently updating digital software and games; it supports operating systems such as Windows, OS X and Linux, and is currently the largest PC digital game platform in the world. Oculus VR is a virtual reality technology company.
Steam's hardware product is the Steam VR, a fully functional 360-degree room-scale virtual reality experience. The development kit includes a head-mounted display, two single-hand controllers, and a positioning system that can track the display and the controllers in space simultaneously; combined with the other devices offered on Steam, it delivers a high-end virtual reality experience.
The hardware products of Oculus VR are the Oculus Rift, a realistic virtual reality head-mounted display, and the Oculus Touch, which is currently commercially available. The Oculus Touch is the motion capture handle of the Oculus Rift and is used together with a spatial positioning system. It adopts a bracelet-like design that allows a camera to track the user's hands while sensors track finger movement, and at the same time it gives the user a convenient way to grip.
With such hardware products, a user can experience a virtual reality scene. However, during the experience, when the user touches an object in the virtual reality scene, the scene cannot feed the touch operation back to the user, and the user therefore cannot know whether the object has been touched.
No effective solution has yet been proposed for the technical problem in the related art that a user's operation cannot be fed back.
Disclosure of Invention
The embodiments of the invention provide an interactive interface display method and device, a storage medium and an electronic device, so as to at least solve the technical problem in the related art that a user's operation cannot be fed back.
According to an aspect of the embodiments of the present invention, there is provided a display method of an interactive interface, the method comprising: displaying a three-dimensional interactive interface in a virtual reality scene of a target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application; acquiring an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction instructs that a first operation be executed on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application; and in response to the operation instruction, executing the first operation on the target object and displaying the three-dimensional interactive interface according to a second display mode, wherein the second display mode identifies the first operation by adopting a display mode different from the first display mode.
According to another aspect of the embodiments of the present invention, there is also provided a display device of an interactive interface, the device comprising: a first display unit, configured to display a three-dimensional interactive interface in a virtual reality scene of a target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application; an acquisition unit, configured to acquire an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction instructs that a first operation be executed on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application; and a second display unit, configured to, in response to the operation instruction, execute the first operation on the target object and display the three-dimensional interactive interface according to a second display mode, wherein the second display mode identifies the first operation by adopting a display mode different from the first display mode.
In the embodiments of the invention, a three-dimensional interactive interface is displayed in a virtual reality scene of a target application according to a first display mode, the three-dimensional interactive interface being used for setting the target application; an operation instruction of a first account is acquired, the operation instruction instructing that a first operation be executed on a target object on the three-dimensional interactive interface, the first operation being used for setting the target application; and in response to the operation instruction, the first operation is executed on the target object and the three-dimensional interactive interface is displayed according to a second display mode, the second display mode identifying the first operation by adopting a display mode different from the first display mode. Because a display mode different from the one used when the target object is not touched is adopted, the user's first operation is fed back. This solves the technical problem in the related art that a user's operation cannot be fed back, and achieves the technical effect of feeding back the user's operation.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for a display method of an interactive interface according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative method of displaying an interactive interface in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative interactive interface according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of an alternative interactive interface according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of an alternative interactive interface according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of an alternative panel parameter according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating alternative texture information according to an embodiment of the present invention;
FIG. 8 is a flow chart of an alternative method of displaying an interactive interface in accordance with an embodiment of the present invention;
FIG. 9 is a schematic view of a display device of an alternative interactive interface according to an embodiment of the present invention;
FIG. 10 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the present invention are explained as follows:
HUD: head Up Displays (HUDs), hereinafter referred to as HUDs, are flight aids for aircraft and may be used in other fields, such as games.
According to an embodiment of the present invention, a method embodiment of a display method of an interactive interface is provided.
Alternatively, in this embodiment, the above display method of the interactive interface may be applied to a hardware environment formed by the server 102, the terminal 104, and the VR glasses 106 shown in fig. 1. As shown in fig. 1, the server 102 is connected to the terminal 104 via a network, which includes but is not limited to a wide area network, a metropolitan area network, or a local area network; the terminal 104 is not limited to a PC, a mobile phone, a tablet computer, etc. The display method of the interactive interface according to the embodiment of the present invention may be executed by the server 102, by the terminal 104, or by the server 102 and the terminal 104 together. When the method is executed by the terminal 104, it may be executed by a client installed on the terminal, and the result of executing the method may be displayed by the VR glasses 106.
When the display method of the interactive interface of the embodiment of the invention is executed by the terminal alone, the program code corresponding to the method of the present application may be executed directly on the terminal.
Fig. 2 is a flowchart of an alternative display method of an interactive interface according to an embodiment of the present invention. As shown in fig. 2, the method may include the following steps:
step S202, displaying a three-dimensional interactive interface in a virtual reality scene of the target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application.
The virtual reality scene can be realized in a "software + hardware device" manner: the target application is the software that realizes the virtual reality scene, and the hardware device is used for displaying the three-dimensional interactive interface. The first display mode may be the default display mode of the three-dimensional interactive interface in the target application.
Optionally, the target applications described above include, but are not limited to, social applications and gaming applications.
Step S204, acquiring an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction is used for instructing to execute a first operation on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application.
The first account identifies a virtual object in the virtual reality scene, and the actions of the virtual object in the scene are directed by the real-world user of the first account; for example, the virtual object performs an operation that the user instructs it to perform, or follows an action the user makes in reality.
The three-dimensional interactive interface comprises one or more operation controls (such as buttons and sliders), and the area where each operation control is located can be understood as a target object. The operation instruction is generated when the virtual object touches the three-dimensional interactive interface.
Step S206, in response to the operation instruction, executing a first operation on the target object and displaying the three-dimensional interactive interface according to a second display mode, wherein the second display mode identifies the first operation by adopting a display mode different from the first display mode.
When a virtual object in the virtual reality scene touches the three-dimensional interactive interface, the interface is displayed in a mode different from the one used when it is not touched, that is, in the second display mode. Displaying in the second display mode is feedback on the virtual object's first operation, and thus feedback on the user's operation: when the user observes that the display mode has changed, the user knows that the first operation touched the three-dimensional interactive interface.
Through steps S202 to S206, a three-dimensional interactive interface used for setting the target application is displayed in the virtual reality scene of the target application according to the first display mode; an operation instruction of a first account is acquired, the operation instruction instructing that a first operation for setting the target application be executed on a target object on the three-dimensional interactive interface; and in response to the operation instruction, the first operation is executed on the target object and the three-dimensional interactive interface is displayed according to the second display mode, which identifies the first operation by adopting a display mode different from the first display mode. Because a display mode different from the one used when the target object is not touched is adopted, the user's first operation is fed back, which solves the technical problem in the related art that a user's operation cannot be fed back and achieves the technical effect of feeding back the user's operation.
For applications in non-VR environments (such as social applications and games), the applicant has recognized that real-world touch feedback and in-application touch feedback are split:
(1) Console games and PC games, for example, are non-touch-screen games in a 2D display environment. Whether they are played with a gamepad or with a keyboard and mouse, no real-world touch mechanism exists, because the player always holds the operating controls (the touch mechanism is converted into operating the controls), that is, the player is always touching the controls. To let the player experience a collision when designing such a game, for example the touch relationship in which the player's character is hit by another player's bullet, visual and auditory means are generally used, and vibrating the gamepad is also commonly used to reinforce the player's perception. But in general, real touch feedback and in-game touch feedback are divorced, because the player has not actually performed the touch.
(2) For another example, in touch-screen games, typified by mobile phone games, the biggest change is that the player really performs the action of touching: because the 2D screen really exists, the player receives real tactile feedback. This real tactile interaction is the greatest advantage of touch-screen games and is why players can play click-based games on them with a very real feeling, since accuracy and naturalness of operation are what every player desires. However, from the perspective of the virtual world versus the real world, real touch feedback and in-game touch feedback are still split: the virtual world is a 3D world and the real world is also a 3D world, but for a mobile game only the 2D screen serves as a window, so the player's perception is folded as if by a mirror. The player's real finger touches the screen, and a mapping relation is then needed to affect the operation of the virtual world, which weakens the player's sense of immersion and identification.
Therefore, for touch feedback in a VR virtual reality environment, applications in non-VR environments do not provide a usable implementation mechanism.
Further, through an analysis of VR virtual reality, the applicant recognized that the greatest advantage brought by a VR environment is that the user can perceive the changes and feeling of 3D space: the user's sense of real 3D spatial position corresponds one-to-one with positions in the virtual world, so that for the user, real actions and in-game actions are not split but completely fused. For touch feedback, this provides the environment and the possibility in which the user's real touch feedback and the in-game touch feedback are not separated from each other, and the user experience can be enhanced by strengthening visual feedback to simulate tactile feedback. Current VR devices, however, have no capability for true haptic simulation.
Within the VR application of the present application, several implementations of touch feedback are provided:
(1) Vibration feedback: whenever a collision occurs, a vibration is triggered to prompt the user, without affecting the user's operation;
(2) Triggering a logic change as feedback: for example, the hand in the virtual world should always follow the position of the real hand, but if a virtual table in the virtual world blocks the player's hand while no such table exists in the real world, the real hand can reach the position the table occupies in the virtual world. In this case the virtual hand stays at the edge of the table instead of penetrating into it along with the real hand. This is a common handling method (under the first method, the hand would penetrate into the table and a vibration would be triggered);
(3) Changing the position of the touched object: again, the virtual hand follows the real hand, and a virtual table blocks the player's hand while no table exists in reality, so the real hand can reach the table's position in the virtual world. If the virtual hand continues to push, the table is pushed away from its original position by the hand and appears to move in the virtual world.
Of the above three ways, the third feels the most realistic, but it changes the game scene and is therefore not applicable to components that should not affect game logic, such as 3D panels used to display the UI. The second can also be regarded as a kind of visual enhancement, but because the virtual hand the player sees and the real hand position the player feels are inconsistent, it gives the player a sense of incongruity. The first is relatively simple and crude, and its visual feedback is not very comfortable. All three kinds of feedback therefore affect the user experience.
To further improve the user experience, the application also provides a feedback mode in the VR environment: a method of realizing a visual feedback effect based on finger click operations. With this visual feedback effect, when the player's finger clicks on a plane, the plane shows a rippling dynamic effect, so that visual feedback simulates and expresses tactile feedback. This approach is therefore particularly suited to 3D panels that display UI content in the virtual world; for interactions with such panels, it enhances the visual representation of the accuracy of the user's click, letting the user know exactly where the click landed.
Embodiments of the present application are further detailed below in conjunction with the steps shown in fig. 2.
In the technical solution provided in step S202, a three-dimensional interactive interface is displayed in a virtual reality scene of a target application according to a first display mode.
FIG. 3 shows the first display mode of the three-dimensional interactive interface, i.e., the default display mode. In the three-dimensional interactive interface, the user or the virtual object can set the target application, for example configure a certain function of the target application, and the function can be embodied in the form of an icon (i.e., a target object).
In the technical solution provided in step S204, the operation instruction for acquiring the first account includes, but is not limited to, the following implementation manners:
(1) An operation instruction generated according to the user's operation behavior
When the user operates, the user's position data is acquired in real time through a positioning device, the user's action is mapped onto the virtual object according to the acquired position data, and if the virtual object's operation touches the three-dimensional interactive interface, the operation instruction is triggered and generated;
(2) An operation instruction triggered by an input operation of an input device
The input device may be part of the hardware device, or a device connected to the hardware device, through which the user can control a virtual object in the virtual reality scene; when the virtual object is controlled by the input device to perform a setting on the three-dimensional interaction panel, the operation instruction is generated.
In the technical solution provided in step S206, in response to the operation instruction, a first operation is performed on the target object, and the three-dimensional interactive interface is displayed in a second display manner.
The second display mode identifies the first operation by adopting a display mode different from the first display mode, which includes but is not limited to the following forms:
(1) the second display mode differs from the first display mode in color, such as the background color, the font color, or the overall color of the three-dimensional interactive interface;
(2) the second display mode differs from the first display mode in the background picture;
(3) the second display mode differs from the first display mode in how the content of the three-dimensional interactive interface is displayed.
The first two are easier to implement, so the following description focuses on the third: when the three-dimensional interactive interface is displayed according to the second display mode, the three-dimensional texture indicated by the second display mode is determined, the texture is formed at least in a first area of the three-dimensional interactive interface, and the three-dimensional interactive interface with the three-dimensional texture formed at least in the first area is displayed.
The first area is the area where the target object is located on the three-dimensional interactive interface, that is, the position clicked by the virtual object.
Optionally, displaying the three-dimensional interactive interface with the preset three-dimensional texture formed at least in the first area, as indicated by the second display mode, may be implemented as follows: the three-dimensional interactive interface with the three-dimensional texture is displayed within a preset time period, wherein the distance between the three-dimensional texture displayed at a first moment within the preset time period and the target object is smaller than the distance between the three-dimensional texture displayed at a second moment and the target object, the second moment within the preset time period being later than the first moment.
It should be noted that when the three-dimensional interactive interface with the three-dimensional texture is displayed within the preset time period, the three-dimensional texture may be displayed near the target object; the display effect is better if the three-dimensional texture is centered on the target object.
The three-dimensional texture comprises three-dimensional ripples. Displaying the three-dimensional interactive interface with the three-dimensional texture within the preset time period can be realized through the following steps:
step S2062, displaying the three-dimensional interactive interface with the first three-dimensional ripple formed at the first moment, wherein the first three-dimensional ripple takes the target object as the center. Step S2062 may be realized by the following substeps (step one and step two):
the method comprises the steps of firstly, obtaining a first data set and a second data set, wherein the first data set comprises a plurality of first data, each first data is used for indicating the position of one vertex of a grid panel at a first moment, the grid panel is used for displaying a three-dimensional interactive interface in a second area, the second area is the area where the three-dimensional interactive interface displayed according to a first display mode is located, the second data set comprises a plurality of second data, and each second data is used for indicating the position of a normal line of one vertex of the grid panel at the first moment.
(1) Obtaining operation strength of first operation indicated in operation instruction
For each operation strength, an initial offset produced at the position of a target vertex (denoted the first vertex) under the influence of that strength may be preconfigured. As time goes on, the range influenced by the operation strength expands to other regions, i.e., to the vertices where the diffused ripple lies (denoted second vertices; the radius of the ripple containing a second vertex is greater than that of the ripple containing the first vertex), but the offset produced there is smaller than the initial offset. Meanwhile, at the position where the ripple originally arose (the first vertex), the influence of the operation strength decreases, that is, the offset produced at the position of the first vertex becomes smaller than the initial offset. The specifics can be configured; an alternative configuration is as follows:
offset y = y0 - a·t, where y0 is the initial offset, t is the time elapsed since the click on the three-dimensional interactive interface, and a is a constant representing the amount of offset that decays each second.
Alternatively, the offset may be non-linear with respect to time, such as a quadratic curve, a logarithmic curve, or the like.
When acquiring the position offset corresponding to the operation strength, the position offset of each vertex may be acquired in the above manner.
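As a minimal illustrative sketch (not the patent's own code; the function names, the row-major grid layout, and the 1/(1+dist) distance falloff are assumptions), the linearly decaying offset above could be applied per vertex like this:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Linear decay of a click-induced offset: y = y0 - a*t, clamped at zero.
// y0: initial offset configured for the operation strength,
// a : offset decayed per second, t: seconds since the click.
float DecayedOffset(float y0, float a, float t) {
    return std::max(0.0f, y0 - a * t);
}

// Adds the decayed offset to every vertex of a w x h mesh panel stored
// row-major in 'heights'. Vertices farther from the clicked vertex
// (cx, cy) receive a smaller share, mimicking the ripple that expands
// while weakening.
void ApplyOffsets(std::vector<float>& heights, int w, int h,
                  int cx, int cy, float y0, float a, float t) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float dx = float(x - cx), dy = float(y - cy);
            float dist = std::sqrt(dx * dx + dy * dy);
            heights[y * w + x] += DecayedOffset(y0, a, t) / (1.0f + dist);
        }
    }
}
```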
(2) Acquiring first data for indicating a first position according to the position offset
The first position is determined according to the position offset and a second position, the second position being where the target vertex was located before being offset.
The first data is position data indicating where the target vertex is located after the offset.
Optionally, in order to make the resulting curve smoother, the following data optimization process may be performed.
(3) Data optimization process
The first data of each vertex is averaged with the first data of its adjacent vertices. For example, a target vertex (denoted the third vertex) is averaged with the adjacent vertex closer to the target object, and the third vertex is also averaged with the adjacent vertex farther from the target object.
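A minimal sketch of this averaging pass, under the assumption that the vertex displacement data live in a row-major w × h grid (names are illustrative, not from the patent):

```cpp
#include <vector>

// One smoothing pass: each vertex is replaced by the average of itself
// and its four axis-aligned neighbours (fewer at the borders). Run every
// frame, this spreads the ripple outward and gradually flattens it.
std::vector<float> SmoothPass(const std::vector<float>& in, int w, int h) {
    std::vector<float> out(in.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = in[y * w + x];
            int n = 1;
            if (x > 0)     { sum += in[y * w + x - 1]; ++n; }
            if (x < w - 1) { sum += in[y * w + x + 1]; ++n; }
            if (y > 0)     { sum += in[(y - 1) * w + x]; ++n; }
            if (y < h - 1) { sum += in[(y + 1) * w + x]; ++n; }
            out[y * w + x] = sum / float(n);
        }
    }
    return out;
}
```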
(4) Obtaining second data in a second data set
The second data indicates the normal of one vertex of the mesh at the first moment, which may be the vector of that normal.
A vertex typically lies at the intersection of four meshes, so there are four normals at the vertex, one for each mesh (each corresponding to a plane). One vertex is thus equivalent to four vectors, and in this application the second data may refer to a vector having a binding relationship with those four vectors, such as their average, or a vector in some other relationship to them.
After the second data is obtained, data optimization may be performed in a manner similar to that of the first data, which is not repeated here.
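As a hedged sketch of one such "vector having a binding relationship with the four normals" (here a central-difference normal of the displaced heightfield, which on a regular grid matches, up to scale, the average of the four adjacent face normals; the grid-spacing parameter 'cell' and all names are assumptions):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Per-vertex normal of the displaced heightfield via central differences.
// 'cell' is the grid spacing; borders are clamped.
Vec3 VertexNormal(const std::vector<float>& hgt, int w, int h,
                  int x, int y, float cell) {
    auto H = [&](int ix, int iy) {
        ix = std::max(0, std::min(w - 1, ix));
        iy = std::max(0, std::min(h - 1, iy));
        return hgt[iy * w + ix];
    };
    float dhdx = (H(x + 1, y) - H(x - 1, y)) / (2.0f * cell);
    float dhdy = (H(x, y + 1) - H(x, y - 1)) / (2.0f * cell);
    return Normalize({-dhdx, -dhdy, 1.0f});  // z: undisturbed panel normal
}
```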
Step two: the meshes of the mesh panel are rendered according to the first data set and the second data set, so as to display the three-dimensional interactive interface with the first three-dimensional ripple, wherein the material of the mesh panel is liquid and the first three-dimensional texture is a texture produced by disturbing the liquid.
Optionally, rendering the meshes of the mesh panel according to the first data set and the second data set comprises: determining the light and shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, wherein the target mesh is the mesh currently to be rendered in the mesh panel; and rendering the material of the target mesh according to the light and shadow information.
The light and shadow information includes one or more of incident direction, reflection angle, refraction angle, etc. of the light.
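A minimal sketch of how the per-vertex normal can feed such light and shadow information (a bare Lambert diffuse term only; the real material's reflection and refraction handling shown in fig. 7 is not reproduced here, and all names are assumptions):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse (Lambert) term: the vertex normal from the second data set and
// the unit direction toward the light decide how bright the rippled
// surface appears, which is what makes the disturbance visible.
float DiffuseIntensity(Vec3 normal, Vec3 towardLight) {
    return std::max(0.0f, Dot(normal, towardLight));
}
```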
Step S2064, displaying the three-dimensional interactive interface with the second three-dimensional ripple at the second moment, wherein the second three-dimensional ripple is the ripple formed after the first three-dimensional ripple diffuses.
The implementation of step S2064 is similar to that of step S2062, except that when the first data and the second data are acquired, the attenuation of the position offset by the second moment needs to be taken into account.
As an alternative embodiment, the following describes an embodiment of the present application in detail from the product side.
For the interactive interface shown in fig. 3 (i.e., a semitransparent UI displayed on a 3D Mesh) and the operation control on it (i.e., the target object), the user can click the operation control with a finger (as shown in fig. 4). When the user's finger touches the panel, the panel produces a disturbance effect (i.e., a ripple) at the touched place, with vibrating ripples and a diffusion effect just as when a finger touches a water surface, as shown in fig. 5; the visibility parameter of the effect can be adjusted. The effect belongs to the clicked Mesh, but at the same time, if the clicked position is an interactive element on the UI panel, such as a button, the button's own visual response also occurs simultaneously.
The invention also provides a preferred embodiment, which details the implementation of the product from the technical side.
(1) Structural logic with respect to the entire product
The logical structure of the implemented component comprises a Mesh panel (3D Mesh) onto which UI widgets can be pasted; fig. 6 shows the parameters of this panel. It should be noted that the panel uses far more vertices than the minimum: for example, the panel used here comprises 1000 × 512 mesh vertices and their material, rather than a simple quad with only four vertices, because the ripple effect described below is produced by changing true vertex positions, and enough vertices are needed to realize it. The second item shown in fig. 6 is a material named "Water Material Widget", whose concrete implementation is shown in fig. 7; this material achieves the final effect of the 3D panel that can be seen in the game. The implementation mainly involves three aspects: SlateUI, i.e., pasting the target UI Widget panel onto the Mesh as a map; a pre-computed normal map (corresponding to the second data set); and a pre-computed position disturbance map of the panel (corresponding to the first data set).
Slate is a cross-platform UI framework that can be used as the UI of applications (e.g., the UE4 Editor), tools, or games; the Slate UI here is the in-game UI, equivalent to the HUD in a game.
The UI Widget described above is a basic UI component: simply a rectangle that can be positioned at will on the screen as needed. The widget has an area, is invisible at runtime, and is an ideal container for holding other components.
The Mesh is a grid that can produce striking effects such as terrain and water surfaces; a created Mesh mainly comprises three parameters: vertices, triangles, and the number of segments.
(2) Regarding the operating logic of the whole product:
as shown in fig. 8:
in step S801, when the finger touches the panel or leaves the panel, information of the clicked or touched position, such as coordinates, is obtained.
In step S802, the strength of the click is set; the initial strength can be determined from the speed of the finger's movement per unit time.
In step S803, the clicked point (i.e., Draw Material to Render Target A) is drawn.
In step S804, the animation of each frame is drawn, which can be implemented as follows.
The finally displayed 3D mesh panel mentioned above changes every frame. Starting from the "first frame" node, i.e., the first frame in the game, the first task is to obtain the current map describing the mesh panel's vertex displacement information (the first data), and then modify the map to smooth the original data, so that the ripple slowly diffuses outward like a water wave until the surface is completely smooth. Specifically, the data around each position in the map is averaged and the result is rendered onto a map (i.e., the image is updated); this is the further work of "Draw Material to Render Target A". Also in the first frame of the game, the next task is to update the normals (i.e., Draw Material to Render Target AN), which is similar to the above: obtain the current map describing the mesh panel's vertex normal information, and then compute a new normal map using the similar surrounding-data-averaging algorithm. What needs to be done next is to obtain the current UI Widget rendering result (i.e., update the UI panel) and likewise draw it onto a pre-prepared map dedicated to storing the UI Widget rendering; this is what the "UI Widget" step does. At this point, all the data required for the material of the finally displayed 3D mesh is prepared.
It should be noted that in the above steps, more than one copy of each type of map needs to be prepared, at least two (i.e., at least two copies of the first data set and of the second data set respectively), in order to prevent read-write access conflicts and avoid waiting during rendering. For example, at frame N the result obtained by modifying the data of map A is stored in map B, and at frame N+1 the result obtained by modifying the data of map B is stored in map A. In the subsequent steps, the result of map A is used at frame N and the result of map B at frame N+1, which avoids data access conflicts across multiple renderings.
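A sketch of this double-buffering ("ping-pong") scheme in plain C++ (the render-target machinery is abstracted away; SmoothPass is the averaging pass sketched earlier, and all names are assumptions):

```cpp
#include <vector>

std::vector<float> SmoothPass(const std::vector<float>& in, int w, int h);
// (defined in the earlier smoothing sketch)

// Two copies of the displacement map: frame N reads A and writes B,
// frame N+1 reads B and writes A, so a map is never read and written
// in the same pass and rendering never waits on an access conflict.
struct PingPong {
    std::vector<float> a, b;
    bool readA = true;

    const std::vector<float>& Read() const { return readA ? a : b; }
    std::vector<float>&       Write()      { return readA ? b : a; }
    void Swap() { readA = !readA; }
};

void UpdateFrame(PingPong& maps, int w, int h) {
    maps.Write() = SmoothPass(maps.Read(), w, h);
    maps.Swap();  // roles flip for the next frame
}
```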
The preparation of the data required by the "Water Material_Widget" material is now complete, so the next three steps use the prepared data on the finally displayed 3D mesh panel to obtain the desired effect. The step of updating the mesh vertex positions obtains the map of computed mesh vertex displacement data, reads the data on the map corresponding to each vertex position of the 3D panel, and then shifts the vertices of the 3D panel in world space according to that data; this is why the panel shows water-surface-like position changes after being clicked. Because the panel is transparent, its normals must be updated in real time for a correct display effect: the step of updating the mesh material obtains the prepared normal map and adjusts the normal of each vertex of the 3D mesh panel according to the positional correspondence. The step of updating the mesh map obtains and stores the map currently represented by the UI Widget and pastes it directly onto the mesh panel for display. The UI mesh panel seen in the game can thus change in response to finger-click interactions.
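A hedged sketch of these "update" steps on the CPU side (vertex layout and types are assumptions; in the actual product this is done through the "Water Material_Widget" material on the GPU):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct PanelVertex { Vec3 position; Vec3 normal; };

// Applies the prepared maps to the displayed w x h mesh panel each frame:
// every vertex takes its displacement from the disturbance map and its
// shading normal from the normal map ("update the mesh vertex positions"
// and "update the mesh material"); the UI texture is simply re-pasted.
void ApplyMapsToPanel(std::vector<PanelVertex>& verts, int w, int h,
                      const std::vector<float>& displacement,
                      const std::vector<Vec3>& normals) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            verts[i].position.z = displacement[i];  // offset along panel normal
            verts[i].normal = normals[i];
        }
    }
}
```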
The above is done every frame, and it makes the whole panel become smoother until it is completely calm, like a water surface. When the finger touches the panel or leaves the panel, the surface fluctuates as if a stone had been thrown into the water, and a response is produced; this is realized as the flow shown in fig. 8 above. When a "finger touches the panel" or "finger leaves the panel" event occurs, the position and intensity of the finger click are first confirmed and these two parameters are mapped to the corresponding positions of the stored vertex displacement map; then a large circle is drawn at the corresponding position on the map according to the intensity, greatly offsetting the vertices at that position, which produces the fluctuation effect.
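A minimal sketch of stamping this click impulse into the displacement map (the intensity-to-radius scaling and the assumption that the click position is already in map coordinates are illustrative, not from the patent):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// On a "finger touches the panel" / "finger leaves the panel" event, the
// click position (cx, cy) and intensity are stamped as a filled circle of
// large offsets: the "stone thrown into water" impulse that the per-frame
// smoothing then spreads and damps.
void StampClick(std::vector<float>& displacement, int w, int h,
                int cx, int cy, float intensity) {
    int radius = std::max(1, int(std::lround(intensity * 8.0f)));  // assumed scale
    for (int y = cy - radius; y <= cy + radius; ++y) {
        for (int x = cx - radius; x <= cx + radius; ++x) {
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            float dx = float(x - cx), dy = float(y - cy);
            if (dx * dx + dy * dy <= float(radius * radius))
                displacement[y * w + x] = intensity;  // large vertex offset
        }
    }
}
```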
This invention mainly describes a method of realizing a visual feedback effect based on finger click operations in a VR environment. The virtual reality environment provides a setting for experiences that the real world cannot offer, and in particular it greatly improves visual and auditory immersion. However, current VR devices still have the critical disadvantage that the means of human-machine interaction input and output are limited; in particular, on the output side there is only vibration besides vision and hearing. With the visual feedback effect realized here, when the player's finger clicks on a plane, the plane shows a rippling dynamic effect, and tactile feedback is simulated and expressed through visual feedback. By strengthening visual feedback in this way, the user can be more fully immersed in the virtual world.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to an embodiment of the invention, a display device of an interactive interface is also provided for implementing the above display method of the interactive interface. Fig. 9 is a schematic diagram of a display device of an alternative interactive interface according to an embodiment of the present invention; as shown in fig. 9, the device may include: a first display unit 91, an acquisition unit 93, and a second display unit 95.
The first display unit 91 is configured to display a three-dimensional interactive interface in a virtual reality scene of the target application according to a first display manner, where the three-dimensional interactive interface is used to set the target application.
The virtual reality scene can be realized in a "software + hardware device" manner: the target application is the software that realizes the virtual reality scene, and the hardware device is used for displaying the three-dimensional interactive interface. The first display mode may be the default display mode of the three-dimensional interactive interface in the target application.
Optionally, the target applications described above include, but are not limited to, social applications and gaming applications.
The obtaining unit 93 is configured to obtain an operation instruction of a first account, where the first account is an account of a target application, the operation instruction is used to instruct to execute a first operation on a target object on a three-dimensional interactive interface, and the first operation is used to set the target application.
The first account identifies a virtual object in the virtual reality scene, and the actions of the virtual object in the scene are directed by the real-world user of the first account; for example, the virtual object performs an operation that the user instructs it to perform, or follows an action the user makes in reality.
The three-dimensional interactive interface comprises one or more operation controls (such as operation buttons, sliders and the like), and the area where each operation control is located can be understood as a target object. The operation instruction is generated when the virtual object touches the three-dimensional interactive interface.
The second display unit 95 is configured to, in response to the operation instruction, execute a first operation on the target object and display the three-dimensional interactive interface according to a second display mode, wherein the second display mode identifies the first operation by adopting a display mode different from the first display mode.
When a virtual object in the virtual reality scene touches the three-dimensional interactive interface, the interface is displayed in a mode different from the one used when it is not touched, that is, in the second display mode. Displaying in the second display mode is feedback on the virtual object's first operation, and thus feedback on the user's operation: when the user observes that the display mode has changed, the user knows that the first operation touched the three-dimensional interactive interface.
It should be noted that the first display unit 91 in this embodiment may be configured to execute step S202 in this embodiment, the obtaining unit 93 in this embodiment may be configured to execute step S204 in this embodiment, and the second display unit 95 in this embodiment may be configured to execute step S206 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Through the above modules, a three-dimensional interactive interface used for setting the target application is displayed in the virtual reality scene of the target application according to the first display mode; an operation instruction of a first account is acquired, the operation instruction instructing that a first operation for setting the target application be executed on a target object on the three-dimensional interactive interface; and in response to the operation instruction, the first operation is executed on the target object and the three-dimensional interactive interface is displayed according to the second display mode, which identifies the first operation by adopting a display mode different from the first display mode. Because a display mode different from the one used when the target object is not touched is adopted, the user's first operation is fed back, which solves the technical problem in the related art that a user's operation cannot be fed back and achieves the technical effect of feeding back the user's operation.
The second display unit is further configured to display a three-dimensional interactive interface with a three-dimensional texture formed at least in a first area according to an indication of a second display mode, where the first area is an area where a target object is located on the three-dimensional interactive interface.
Optionally, the second display unit is further configured to display the three-dimensional interactive interface formed with the three-dimensional texture within a preset time period, where a distance between the three-dimensional texture displayed at a first time within the preset time period and the target object is smaller than a distance between the three-dimensional texture displayed at a second time and the target object, where the second time within the preset time period is later than the first time.
The second display unit described above may include: the first display module is used for displaying a three-dimensional interactive interface with a first three-dimensional ripple formed at a first moment, wherein the first three-dimensional ripple takes a target object as a center; and the second display module is used for displaying the three-dimensional interactive interface with the second three-dimensional ripple at a second moment, wherein the second three-dimensional ripple is the ripple formed after the first three-dimensional ripple is diffused.
Optionally, the first display module includes: an obtaining submodule, used for obtaining a first data set and a second data set, wherein the first data set comprises a plurality of first data, each first data indicating the position of one vertex of a mesh panel at the first moment, the mesh panel is used for displaying the three-dimensional interactive interface in a second area, the second area is the area where the three-dimensional interactive interface displayed according to the first display mode is located, the second data set comprises a plurality of second data, and each second data indicates the normal of one vertex of the mesh panel at the first moment; and a display submodule, used for rendering the meshes of the mesh panel according to the first data set and the second data set so as to display the three-dimensional interactive interface with the first three-dimensional ripple, wherein the material of the mesh panel is liquid and the first three-dimensional texture is a texture produced by disturbing the liquid.
The obtaining submodule is further configured to obtain an operation strength of the first operation indicated in the operation instruction; acquiring a position offset corresponding to the operation strength, wherein the position offset is used for indicating an offset generated at the position of a target vertex under the influence of the operation strength, and the target vertex is any one vertex of a grid of the grid panel; and acquiring first data for indicating a first position according to the position offset, wherein the first position is determined according to the position offset and a second position, and the second position is the position of the target vertex before the target vertex is offset.
The display submodule is further configured to: determine light and shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, where the target mesh is the mesh currently to be rendered in the mesh panel; and render the material of the target mesh according to the light and shadow information.
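As one plausible reading of "light and shadow information" (offered as a sketch, not as the patented rendering path), the renderer could combine each vertex's position-derived data with its normal (the second data) in a Lambertian term and modulate the liquid material with the result:

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]


def lambert_shade(normal: Vec3, light_dir: Vec3, base_color: Vec3) -> Vec3:
    """Light-and-shadow term for one vertex of the target mesh.

    `normal` (second data) and `light_dir` are assumed to be unit
    vectors; their dot product gives the diffuse intensity used to
    render the material of the target mesh.
    """
    n_dot_l = max(0.0, normal[0] * light_dir[0]
                       + normal[1] * light_dir[1]
                       + normal[2] * light_dir[2])
    return (base_color[0] * n_dot_l,
            base_color[1] * n_dot_l,
            base_color[2] * n_dot_l)
```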
The biggest advantage of a VR environment is that the user can perceive changes in, and the feel of, 3D space: the user's sense of real 3D spatial position corresponds one-to-one with positions in the virtual world, so that for the user, real actions and in-game actions are not split apart but completely fused. For touch feedback, this provides an environment and an opportunity in which the user's real touch feedback and the in-game touch feedback are not separated from each other, and the user experience can be improved by enhancing visual feedback to simulate touch feedback. Current VR devices, however, have no capability for haptic simulation.
The application provides a display apparatus for an interactive interface in a VR environment that achieves a visual feedback effect: when a player's finger clicks a plane, the plane shows a rippling dynamic effect, and this visual feedback simulates and expresses tactile feedback. The approach is therefore particularly suited to 3D panels that display UI content in a virtual world; for interaction with such panels, it enhances the visual representation of the accuracy of the user's click, letting the user know exactly where the click landed.
It should be noted here that the above modules correspond to the same examples and application scenarios as the corresponding method steps, but are not limited to the disclosure of the above embodiments. It should also be noted that, as part of the apparatus, the above modules may run in a hardware environment as shown in Fig. 1 and may be implemented by software or by hardware, where the hardware environment includes a network environment.
According to an embodiment of the present invention, a server or terminal for implementing the above display method of an interactive interface is further provided.
Fig. 10 is a block diagram of a terminal according to an embodiment of the present invention. As shown in Fig. 10, the terminal may include: one or more processors 1001 (only one is shown in Fig. 10), a memory 1003, and a transmission apparatus 1005 (such as the transmission apparatus in the above embodiments). As shown in Fig. 10, the terminal may further include an input/output device 1007.
The memory 1003 may be used to store software programs and modules, such as the program instructions/modules corresponding to the method and apparatus for displaying an interactive interface in the embodiments of the present invention. By running the software programs and modules stored in the memory 1003, the processor 1001 executes various functional applications and data processing, that is, implements the above display method of an interactive interface. The memory 1003 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1003 may further include memory located remotely from the processor 1001, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission apparatus 1005 is used to receive or send data via a network and can also be used for data transmission between the processor and the memory. Examples of the network may include wired and wireless networks. In one example, the transmission apparatus 1005 includes a Network Interface Controller (NIC), which can be connected to a router and other network devices via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission apparatus 1005 is a Radio Frequency (RF) module, which is used to communicate with the Internet wirelessly.
In particular, the memory 1003 is used to store an application program.
The processor 1001 may call, via the transmission apparatus 1005, the application program stored in the memory 1003 to perform the following steps:
displaying a three-dimensional interactive interface in a virtual reality scene of a target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application;
acquiring an operation instruction of a first account, wherein the first account is an account of a target application, the operation instruction is used for indicating that a first operation is executed on a target object on a three-dimensional interactive interface, and the first operation is used for setting the target application;
and responding to the operation instruction, executing a first operation on the target object, and displaying the three-dimensional interactive interface according to a second display mode, wherein the second display mode is used for identifying the first operation by adopting a display mode different from the first display mode.
The processor 1001 is further configured to perform the following steps:
acquiring a first data set and a second data set, wherein the first data set includes a plurality of first data, each first data indicating the position of one vertex of a mesh panel at a first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display mode is located, the second data set includes a plurality of second data, and each second data indicates the normal of one vertex of the mesh panel at the first moment;

and rendering the meshes of the mesh panel according to the first data set and the second data set so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, wherein the material of the mesh panel is set to liquid, and the first three-dimensional ripple is a texture generated by the disturbance of the liquid.
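For a concrete picture of how a liquid-material mesh panel can produce such ripples, one common technique, offered here only as an assumption about a possible implementation and not as the method this document actually uses, is the classic two-buffer height-field water simulation: each frame, every interior vertex's height moves toward the average of its neighbours, and vertex normals for the second data set are then re-derived from the heights before rendering:

```python
from typing import List

Grid = List[List[float]]


def step_height_field(h_prev: Grid, h_curr: Grid, damping: float = 0.99) -> Grid:
    """One step of the classic two-buffer height-field water scheme.

    A disturbance written into h_curr (e.g. at the vertex hit by the
    first operation) spreads outward on later steps, producing the
    diffusing three-dimensional ripple; damping makes it die out.
    """
    rows, cols = len(h_curr), len(h_curr[0])
    h_next = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbours = (h_curr[i - 1][j] + h_curr[i + 1][j] +
                          h_curr[i][j - 1] + h_curr[i][j + 1]) / 2.0
            h_next[i][j] = (neighbours - h_prev[i][j]) * damping
    return h_next


# Poke the surface where the first operation landed, then step twice:
h0 = [[0.0] * 8 for _ in range(8)]
h1 = [[0.0] * 8 for _ in range(8)]
h1[4][4] = 1.0                      # disturbance at the touched vertex
h2 = step_height_field(h0, h1)      # the ripple begins to spread
h3 = step_height_field(h1, h2)      # ...and diffuses farther
```

Per-vertex normals can then be approximated from finite differences of neighbouring heights before the meshes are rendered.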
By adopting this embodiment of the present invention, a three-dimensional interactive interface is displayed in a virtual reality scene of the target application according to a first display mode, where the three-dimensional interactive interface is used for setting the target application; an operation instruction of a first account is acquired, where the operation instruction instructs that a first operation be executed on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application; and in response to the operation instruction, the first operation is executed on the target object and the three-dimensional interactive interface is displayed according to a second display mode, where the second display mode identifies the first operation by using a display mode different from the first display mode, that is, different from the display mode used when the target object is not touched. The user's first operation is thus fed back, which solves the technical problem in the related art that user operations cannot be fed back and achieves the technical effect of feeding back the user's operation.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments; details are not repeated here.
It can be understood by those skilled in the art that the structure shown in Fig. 10 is only illustrative, and the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a handheld computer, a Mobile Internet Device (MID), or a PAD. Fig. 10 does not limit the structure of the above electronic device. For example, the terminal may include more or fewer components (e.g., network interfaces, display devices) than shown in Fig. 10, or have a configuration different from that shown in Fig. 10.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for executing the display method of an interactive interface.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S21, displaying a three-dimensional interactive interface in a virtual reality scene of the target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application;

S22, acquiring an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction is used for instructing that a first operation be executed on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application;

S23, responding to the operation instruction, executing the first operation on the target object, and displaying the three-dimensional interactive interface according to a second display mode, wherein the second display mode is used for identifying the first operation by adopting a display mode different from the first display mode.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
S31, acquiring a first data set and a second data set, wherein the first data set includes a plurality of first data, each first data indicating the position of one vertex of a mesh panel at a first moment, the mesh panel is used to display the three-dimensional interactive interface in a second area, the second area is the area in which the three-dimensional interactive interface displayed according to the first display mode is located, the second data set includes a plurality of second data, and each second data indicates the normal of one vertex of the mesh panel at the first moment;

S32, rendering the meshes of the mesh panel according to the first data set and the second data set so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, wherein the material of the mesh panel is set to liquid, and the first three-dimensional ripple is a texture generated by the disturbance of the liquid.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments; details are not repeated here.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media capable of storing program code.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (12)

1. A display method of an interactive interface is characterized by comprising the following steps:
displaying a three-dimensional interactive interface in a virtual reality scene of a target application according to a first display mode, wherein the three-dimensional interactive interface is used for setting the target application;
acquiring an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction is used for instructing to execute a first operation on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application;
responding to the operation instruction, executing the first operation on the target object, and displaying, according to an indication of a second display mode, the three-dimensional interactive interface with a preset three-dimensional texture formed in a first area, wherein the second display mode is used for identifying the first operation by adopting a display mode different from the first display mode, the first area is the area where the target object is located on the three-dimensional interactive interface, and, in a case that the target object is an interactive object, the visual expression changes while the first area forms a vibration ripple and a diffusing three-dimensional texture.
2. The method according to claim 1, wherein displaying the three-dimensional interactive interface with the preset three-dimensional texture formed in the first area according to the indication of the second display mode comprises:
displaying the three-dimensional interactive interface formed with the three-dimensional texture within a preset time period, wherein a distance between the three-dimensional texture displayed at a first time and the target object within the preset time period is smaller than a distance between the three-dimensional texture displayed at a second time and the target object, wherein the second time within the preset time period is later than the first time.
3. The method of claim 2, wherein the three-dimensional texture comprises three-dimensional ripples, and wherein displaying the three-dimensional interactive interface formed with the three-dimensional texture for a preset time period comprises:
displaying the three-dimensional interactive interface formed with a first three-dimensional ripple at the first time, wherein the first three-dimensional ripple is centered on the target object;
and displaying the three-dimensional interactive interface formed with a second three-dimensional ripple at the second moment, wherein the second three-dimensional ripple is formed after the first three-dimensional ripple is diffused.
4. The method of claim 3, wherein displaying the three-dimensional interactive interface formed with the first three-dimensional ripple at the first time comprises:
acquiring a first data set and a second data set, wherein the first data set comprises a plurality of first data, each first data being used for indicating the position of one vertex of a mesh panel at the first time, the mesh panel is used for displaying the three-dimensional interactive interface in a second area, the second area is the area where the three-dimensional interactive interface is displayed according to the first display mode, the second data set comprises a plurality of second data, and each second data is used for indicating the normal of one vertex of the mesh panel at the first time;

rendering the meshes of the mesh panel according to the first data set and the second data set so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, wherein the material of the mesh panel is set to liquid, and the first three-dimensional ripple is a texture generated by the disturbance of the liquid.
5. The method of claim 4, wherein rendering the meshes of the mesh panel according to the first data set and the second data set comprises:

determining light and shadow information of a target mesh according to the first data and the second data of the vertices of the target mesh, wherein the target mesh is the mesh currently to be rendered in the mesh panel;

and rendering the material of the target mesh according to the light and shadow information.
6. The method of claim 4, wherein obtaining a first set of data comprises obtaining the first data for each vertex of a mesh of the mesh panel as follows:
acquiring the operation strength of the first operation indicated in the operation instruction;
acquiring a position offset corresponding to the operation strength, wherein the position offset is used for indicating an offset generated at a position of a target vertex under the influence of the operation strength, and the target vertex is any one vertex of a mesh of the mesh panel;
and acquiring the first data for indicating a first position according to the position offset, wherein the first position is determined according to the position offset and a second position, and the second position is the position of the target vertex before the target vertex is offset.
7. A display device for an interactive interface, comprising:
the system comprises a first display unit, a second display unit and a third display unit, wherein the first display unit is used for displaying a three-dimensional interactive interface in a virtual reality scene of a target application according to a first display mode, and the three-dimensional interactive interface is used for setting the target application;
an obtaining unit, configured to obtain an operation instruction of a first account, wherein the first account is an account of the target application, the operation instruction is used for instructing that a first operation be performed on a target object on the three-dimensional interactive interface, and the first operation is used for setting the target application;
and a second display unit, configured to respond to the operation instruction, execute the first operation on the target object, and display, according to an indication of a second display mode, the three-dimensional interactive interface with a preset three-dimensional texture formed in a first area, wherein the second display mode is used for identifying the first operation by adopting a display mode different from the first display mode, the first area is the area where the target object is located on the three-dimensional interactive interface, and, in a case that the target object is an interactive object, the visual expression changes while the first area forms a vibration ripple and a diffusing three-dimensional texture.
8. The apparatus according to claim 7, wherein the second display unit is further configured to display the three-dimensional interactive interface formed with the three-dimensional texture within a preset time period, wherein a distance between the three-dimensional texture displayed at a first time and the target object within the preset time period is smaller than a distance between the three-dimensional texture displayed at a second time and the target object, wherein the second time within the preset time period is later than the first time.
9. The apparatus of claim 8, wherein the second display unit comprises:
a first display module, configured to display the three-dimensional interactive interface formed with a first three-dimensional ripple at the first time, wherein the first three-dimensional ripple is centered on the target object;

and a second display module, configured to display the three-dimensional interactive interface formed with a second three-dimensional ripple at the second time, wherein the second three-dimensional ripple is the ripple formed after the first three-dimensional ripple has diffused.
10. The apparatus of claim 9, wherein the first display module comprises:
an obtaining submodule, configured to obtain a first data set and a second data set, wherein the first data set includes a plurality of first data, each of the first data indicating the position of one vertex of a mesh panel at the first time, the mesh panel is used for displaying the three-dimensional interactive interface in a second area, the second area is the area where the three-dimensional interactive interface is displayed according to the first display mode, the second data set includes a plurality of second data, and each of the second data indicates the normal of one vertex of the mesh panel at the first time;

and a display submodule, configured to render the meshes of the mesh panel according to the first data set and the second data set so as to display the three-dimensional interactive interface formed with the first three-dimensional ripple, wherein the material of the mesh panel is set to liquid, and the first three-dimensional ripple is a texture generated by the disturbance of the liquid.
11. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 6.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 6 by means of the computer program.
CN201711000972.0A 2017-10-24 2017-10-24 Interactive interface display method and device, storage medium and electronic device Active CN109697001B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711000972.0A CN109697001B (en) 2017-10-24 2017-10-24 Interactive interface display method and device, storage medium and electronic device
PCT/CN2018/111650 WO2019080870A1 (en) 2017-10-24 2018-10-24 Interaction interface display method and device, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711000972.0A CN109697001B (en) 2017-10-24 2017-10-24 Interactive interface display method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN109697001A CN109697001A (en) 2019-04-30
CN109697001B true CN109697001B (en) 2021-07-27

Family

ID=66227798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711000972.0A Active CN109697001B (en) 2017-10-24 2017-10-24 Interactive interface display method and device, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN109697001B (en)
WO (1) WO2019080870A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112587927B (en) * 2020-12-29 2023-07-07 苏州幻塔网络科技有限公司 Prop control method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183276A (en) * 2007-12-13 2008-05-21 上海交通大学 Interactive system based on CCD camera projector technology
CN103474007A (en) * 2013-08-27 2013-12-25 湖南华凯创意展览服务有限公司 Interactive display method and system
CN104281260A (en) * 2014-06-08 2015-01-14 朱金彪 Method and device for operating computer and mobile phone in virtual world and glasses adopting method and device
CN106775258A (en) * 2017-01-04 2017-05-31 虹软(杭州)多媒体信息技术有限公司 The method and apparatus that virtual reality is interacted are realized using gesture control
CN106774824A (en) * 2016-10-26 2017-05-31 网易(杭州)网络有限公司 Virtual reality exchange method and device
CN106896915A (en) * 2017-02-15 2017-06-27 传线网络科技(上海)有限公司 Input control method and device based on virtual reality

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289337A (en) * 2010-06-18 2011-12-21 上海三旗通信科技有限公司 Brand new display method of mobile terminal interface
CN102430244B (en) * 2011-12-30 2014-11-05 领航数位国际股份有限公司 Method for generating visual man-machine interaction by touching with finger
US9378592B2 (en) * 2012-09-14 2016-06-28 Lg Electronics Inc. Apparatus and method of providing user interface on head mounted display and head mounted display thereof
CN104460988B (en) * 2014-11-11 2017-12-22 陈琦 A kind of input control method of smart mobile phone virtual reality device
US10296086B2 (en) * 2015-03-20 2019-05-21 Sony Interactive Entertainment Inc. Dynamic gloves to convey sense of touch and movement for virtual objects in HMD rendered environments
US9851799B2 (en) * 2015-09-25 2017-12-26 Oculus Vr, Llc Haptic surface with damping apparatus
CN105630160A (en) * 2015-12-21 2016-06-01 黄鸣生 Virtual reality using interface system

Also Published As

Publication number Publication date
WO2019080870A1 (en) 2019-05-02
CN109697001A (en) 2019-04-30

Similar Documents

Publication Publication Date Title
US10754531B2 (en) Displaying a three dimensional user interface
EP3223116B1 (en) Multiplatform based experience generation
US9905052B2 (en) System and method for controlling immersiveness of head-worn displays
CN109557998B (en) Information interaction method and device, storage medium and electronic device
CN108273265A (en) The display methods and device of virtual objects
CN111192354A (en) Three-dimensional simulation method and system based on virtual reality
CN111167120A (en) Method and device for processing virtual model in game
CN106445157B (en) Method and device for adjusting picture display direction
CN109725956B (en) Scene rendering method and related device
CN112684970B (en) Adaptive display method and device of virtual scene, electronic equipment and storage medium
US20230405452A1 (en) Method for controlling game display, non-transitory computer-readable storage medium and electronic device
WO2019166005A1 (en) Smart terminal, sensing control method therefor, and apparatus having storage function
CN109697001B (en) Interactive interface display method and device, storage medium and electronic device
CN110215686A (en) Display control method and device, storage medium and electronic equipment in scene of game
CN110025953B (en) Game interface display method and device, storage medium and electronic device
CN115115814A (en) Information processing method, information processing apparatus, readable storage medium, and electronic apparatus
CN114504808A (en) Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus
CN114299203A (en) Processing method and device of virtual model
CN116774835B (en) Interaction method, device and storage medium in virtual environment based on VR handle
Jung et al. Web-based 3D virtual experience using unity and leap motion
CN106484114B (en) Interaction control method and device based on virtual reality
CN113941143A (en) Virtual card processing method, nonvolatile storage medium and electronic device
VRED DEVELOPMENT OF AN INDUSTRIAL LINE PARAMETRIC EDITOR IN VIRTUAL REALITY
CN115576420A (en) Control method and device for feedback behavior of virtual object
CN116271832A (en) Editing method, device, medium, electronic device and program product for virtual image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant