CN108334273B - Information display method and device, storage medium, processor and terminal - Google Patents

Information display method and device, storage medium, processor and terminal

Info

Publication number
CN108334273B
CN108334273B (application CN201810135433.6A)
Authority
CN
China
Prior art keywords
interaction area
template
preset interaction
touch operation
time length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810135433.6A
Other languages
Chinese (zh)
Other versions
CN108334273A (en)
Inventor
陈贤 (Chen Xian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201810135433.6A priority Critical patent/CN108334273B/en
Publication of CN108334273A publication Critical patent/CN108334273A/en
Application granted granted Critical
Publication of CN108334273B publication Critical patent/CN108334273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Abstract

The invention discloses an information display method and device, a storage medium, a processor and a terminal. The method comprises the following steps: simulating condensation of a specific fog-shaped body and displaying it in a preset interaction area of a graphical user interface, detecting a first touch operation in the preset interaction area, and acquiring first position information of the first touch operation; creating a first template according to the first position information; obtaining a picture bottom plate of the specific fog-shaped body, and cutting the picture bottom plate according to the first template; and displaying the remaining pixels of the cut picture bottom plate in the preset interaction area. The invention solves the technical problem of low interactivity of user interfaces in the related art.

Description

Information display method and device, storage medium, processor and terminal
Technical Field
The invention relates to the field of games, in particular to an information display method and device, a storage medium, a processor and a terminal.
Background
In terminal (e.g. mobile phone) games, the user interface (UI) is the part the user touches most frequently during play and gives the user the most direct feeling of the game, so the appearance of the UI largely determines the user's first impression of the game. The UIs of existing mainstream terminal games mainly use simple operation controls, text and picture display, and sequence-frame animation. Such UIs are concise and elegant and provide basic interaction and display functions, but their interactivity is only average and their display mode is single, which is not enough to leave a deep impression on users.
In the related art, there are some solutions for realizing animation effects, including:
First, the sequence-frame animation scheme: most game UIs use sequence-frame animation to present visual effects. Sequence-frame animation is a relatively simple solution to implement, since only different animations need to be played according to the user's different operations. However, if a rich interaction effect is required, a very large number of animations is necessary, and the representation form of each animation is fixed in advance in the cocos project. This approach has the following technical drawbacks: special effects such as water-mist condensation and erasure, finger-press mist marks, and dynamic water-flow effects cannot be realized; a large amount of UI resources is needed to support different operation modes and different button effects; and large-scale sequence-frame animation increases memory consumption, which cannot be avoided by scripts and adapts poorly to different devices.
Second, drawing tracks with a draw interface: tracks can be drawn on the UI layer through the draw() interface of OpenGL ES or the drawPrimitives() interface provided by cocos2d-x, which approximately simulates the effects of drawing and erasing traces. However, although this method supports regular shapes well, a user-defined track must be stitched together point by point, aliasing occurs easily, and because drawing adds new colors on top of the original layer, the effect of revealing the background after erasing cannot be simulated. This approach has the following technical drawbacks: drawing covers color pixels over the original, so the effect of revealing a background after erasing cannot be simulated; an irregular shape must be composed of a series of points, which produces severe aliasing and is computationally expensive; and the process of a trace fading away is difficult to realize, because if a script records variables to control the displayed trace, the logic is complex and the cost is huge.
Third, RenderTexture to realize an erasing effect: RenderTexture is a class provided in cocos2d-x for dynamically rendering to a texture; visual elements can be selectively rendered onto the texture between the begin() and end() methods of RenderTexture by calling the elements' visit() methods. RenderTexture is useful for quickly realizing screen capture, and drawing or erasing can also be realized by setting a blend mode among the elements. However, the erase function of RenderTexture has drawbacks: it cannot simulate a recovery process, it cannot display erasable and non-erasable content overlapped in the same area, and its support across platforms is poor. This approach has the following technical drawbacks: the biggest problem with RenderTexture is that the image cannot be restored, because the erasing function requires setting the eraser sprite's blend function to (GL_ONE, GL_ZERO); once GL_ZERO acts on the original background, the background pixel information is lost, so even if the background layer is redrawn, a slow recovery process cannot be simulated. Another problem is that RenderTexture may misbehave on different devices, especially Android devices, where display bugs can occur.
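The blend-function limitation described above can be checked numerically. The sketch below (plain Python standing in for the GPU blend stage, not cocos2d-x code) shows that with glBlendFunc(GL_ONE, GL_ZERO) the destination pixel is replaced entirely by the transparent eraser source, so the original background value is unrecoverable:

```python
def blend(src, dst, src_factor, dst_factor):
    """OpenGL-style per-channel blend: result = src*src_factor + dst*dst_factor."""
    return tuple(s * src_factor + d * dst_factor for s, d in zip(src, dst))

background = (0.8, 0.6, 0.4, 1.0)   # original RGBA pixel of the fog layer
eraser     = (0.0, 0.0, 0.0, 0.0)   # fully transparent eraser sprite

# glBlendFunc(GL_ONE, GL_ZERO): the destination is overwritten by the source.
result = blend(eraser, background, src_factor=1.0, dst_factor=0.0)
print(result)  # (0.0, 0.0, 0.0, 0.0)
```

Once the pixel is zeroed, no later pass can read the old value back, which is why a gradual recovery cannot be simulated with this approach.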
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an information display method and device, a storage medium, a processor and a terminal, which are used for at least solving the technical problem that the interactivity of user interfaces in the related art is not high.
According to an aspect of the embodiments of the present invention, there is provided an information display method applied to a terminal presenting a graphical user interface, including: simulating condensation and displaying a specific fog-shaped body in a preset interaction area of a graphical user interface, detecting a first touch operation in the preset interaction area, and acquiring first position information of the first touch operation; creating a first template according to the first position information; obtaining a picture bottom plate of a specific fog-shaped body, and cutting the picture bottom plate of the specific fog-shaped body according to a first template; and displaying the residual pixels of the cut picture bottom plate in a preset interaction area.
Optionally, simulating condensation of the specific fog in the preset interaction area is realized by: and adjusting the transparency of the picture bottom plate of the specific fog-shaped body, and displaying in a preset interaction area.
Optionally, after detecting the first touch operation in the preset interaction area, the method further includes: and acquiring a plurality of touch points of the first touch operation, wherein the plurality of touch points form a first preset track in a preset interaction area.
Optionally, creating the first template according to the location information includes: creating a plurality of first templates according to the position information of the plurality of touch points, wherein one first template is created corresponding to one touch point, and the shape of the first template is the same as that of the corresponding touch point; or creating a first template according to the first predetermined track, wherein the shape of the first template is the same as the first predetermined track.
Optionally, after displaying the remaining pixels of the cropped picture backplane in the preset interaction area, the method further includes: and gradually reducing the size of the first template to zero so as to recover the residual pixels of the specific fog body and display the residual pixels in the preset interaction area.
Optionally, the gradually reducing the size of the first template to zero so that the restoring of the remaining pixels of the specific fog and the displaying in the preset interaction area include: and in the process that the size of the first template is gradually reduced to zero, cutting the picture bottom plate by using the first template in the process, and displaying the residual pixels of the cut picture bottom plate in a preset interaction area.
Optionally, the method further comprises: acquiring first time length of a first touch operation, wherein the first template is gradually increased and the area displayed by the fog body is gradually reduced along with the increase of the first time length, and the first time length is the time length of the first touch operation acting on a preset interaction area; and/or when the first touch operation is finished and the touch point of the first touch operation leaves the preset interaction area, acquiring a second time length, and gradually reducing the first template and the area displayed by the fog body along with the increase of the second time length, wherein the second time length is the time length between the current time and the time when the touch point leaves the preset interaction area.
Optionally, the method further comprises: detecting a second touch operation in a preset interaction area; acquiring second position information of a second touch operation; creating a second template having a liquid particle effect according to the liquid particle pattern at a position corresponding to the second position information; and displaying the liquid particles generated by the simulation of the second template on the position corresponding to the second position information under the condition that the touch point of the second touch operation leaves the preset interaction area.
Optionally, displaying the liquid particles generated by the second template simulation at the position corresponding to the second position information comprises: and displaying the second template and hiding the pixel content of the background picture in the preset interaction area at the position corresponding to the second position information.
Optionally, after displaying the liquid particles generated by the simulation of the second template on the position corresponding to the second position information, the method further comprises: controlling the liquid particles to move towards the gravity direction of the terminal in a preset interaction area; the liquid particles are released when they move to the boundary of the preset interaction area.
Optionally, the controlling the liquid particles to move in the preset interaction region along the gravity direction of the terminal comprises: reducing the coordinate of the second template in the longitudinal axis direction in the preset interaction area; adding an offset to the coordinates of the second template in the direction of the transverse axis in the preset interaction area; the direction of the transverse axis is a direction perpendicular to the gravity direction of the terminal in the plane where the preset interaction area is located, and the direction of the longitudinal axis is a direction perpendicular to the direction of the transverse axis in the plane where the preset interaction area is located.
Optionally, releasing the liquid particles when the liquid particles move to the boundary of the preset interaction area comprises: and removing the second template from the list when the second template moves to the boundary of the preset interaction area.
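The droplet behaviour in the last three paragraphs reduces to a per-frame update: decrease the second template's coordinate along the longitudinal axis (toward gravity), add a small offset along the transverse axis, and remove the template from the list once it crosses the boundary. A sketch, with the fall speed and offset range chosen arbitrarily for illustration:

```python
import random

def update_droplets(droplets, fall_speed=2.0, wobble=0.5, floor_y=0.0):
    """Move each droplet toward gravity with a small lateral offset; release those past the boundary."""
    survivors = []
    for x, y in droplets:
        y -= fall_speed                          # reduce the longitudinal-axis coordinate
        x += random.uniform(-wobble, wobble)     # add an offset on the transverse axis
        if y > floor_y:                          # still inside the preset interaction area
            survivors.append((x, y))
    return survivors                             # boundary-crossers are removed from the list

drops = update_droplets([(5.0, 10.0), (3.0, 1.0)])
assert len(drops) == 1 and drops[0][1] == 8.0    # the low droplet was released at the boundary
```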
Optionally, after detecting the first touch operation in the preset interaction area, the method further includes: condensing the particular mist at a second location a predetermined distance from the first location; acquiring a third time length of the first touch operation, and gradually fading in the condensed foggy body at the second position along with the increase of the third time length, wherein the third time length is the time length of the first touch operation acting on the preset interaction area; and/or when the first touch operation is finished and the touch point of the first touch operation leaves the preset interaction area, acquiring a fourth time length, and gradually fading in the condensed fog at the second position along with the increase of the fourth time length, wherein the fourth time length is the time length between the current time and the time when the touch point leaves the preset interaction area.
Optionally, condensing the particular mist at a second location a predetermined distance from the first location comprises: and creating a designated picture at the second position, wherein the area corresponding to the second position in the designated picture is transparent, and the other areas except the area in the designated picture are condensed with the specific fog.
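The "designated picture" described above (transparent at the pressed area, fog condensed elsewhere) can be sketched as a mask over a pixel grid; the circular hole shape and its radius are illustrative assumptions, not part of the claims:

```python
def surrounding_fog_mask(width, height, hole_center, hole_radius):
    """Mask for the designated picture: 1 = condensed fog, 0 = transparent pressed area."""
    cx, cy = hole_center
    return [[0 if (x - cx) ** 2 + (y - cy) ** 2 <= hole_radius ** 2 else 1
             for x in range(width)]
            for y in range(height)]

mask = surrounding_fog_mask(5, 5, hole_center=(2, 2), hole_radius=1)
assert mask[2][2] == 0    # transparent where the touch acts
assert mask[0][0] == 1    # fog condensed in the surrounding area
```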
Optionally, detecting a third touch operation in the preset interaction area, and acquiring third position information of the third touch operation; modifying the state of the control at the third position corresponding to the third position information.
Optionally, after modifying the state of the control at the third position corresponding to the third position information, the method further comprises: and under the condition that the time for the touch point of the third touch operation to leave the preset interaction area exceeds a preset threshold value, restoring the state of the control to the initial state.
Optionally, modifying the state of the control at the third position corresponding to the third position information comprises: reducing the transparency of the picture of the control.
Optionally, restoring the state of the control to the initial state comprises: and restoring the transparency of the picture of the control to the initial transparency of the picture.
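The control-state behaviour of the last few paragraphs amounts to an opacity toggle with a timeout: change the control picture's transparency on touch, and restore the initial value once the touch point has been away longer than the preset threshold. The opacity values and threshold below are illustrative assumptions:

```python
class FogButton:
    """Minimal model of a control whose picture changes transparency under touch and later recovers."""
    def __init__(self, initial_opacity=255, pressed_opacity=80, restore_after=1.5):
        self.initial_opacity = initial_opacity
        self.pressed_opacity = pressed_opacity
        self.restore_after = restore_after      # preset threshold, in seconds
        self.opacity = initial_opacity

    def on_touch(self):
        self.opacity = self.pressed_opacity     # modify the state of the control

    def on_release_elapsed(self, seconds):
        if seconds > self.restore_after:        # threshold exceeded: restore initial state
            self.opacity = self.initial_opacity

btn = FogButton()
btn.on_touch()
assert btn.opacity == 80
btn.on_release_elapsed(2.0)
assert btn.opacity == 255
```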
According to an aspect of the embodiments of the present invention, there is provided an information display apparatus applied to a terminal presenting a graphical user interface, including: the first detection module is used for simulating condensation and displaying a specific fog-shaped body in a preset interaction area of the graphical user interface, detecting a first touch operation in the preset interaction area and acquiring first position information of the first touch operation; the first creating module is used for creating a first template according to the first position information; the cutting module is used for acquiring the picture bottom plate of the specific fog-shaped body and cutting the picture bottom plate of the specific fog-shaped body according to the first template; and the first display module is used for displaying the residual pixels of the cut picture bottom plate in the preset interaction area.
Optionally, the apparatus further comprises: and the reduction module is used for gradually reducing the size of the first template to zero so as to recover the residual pixels of the specific fog body and display the residual pixels in the preset interaction area.
Optionally, the apparatus further comprises: the second detection module is used for detecting a second touch operation in the preset interaction area and acquiring second position information of the second touch operation; a second creating module for creating a second template having a liquid particle effect according to the liquid particle pattern at a position corresponding to the second position information; and the second display module is used for displaying the liquid particles generated by the simulation of the second template on the position corresponding to the second position information under the condition that the touch point of the second touch operation leaves the preset interaction area.
Optionally, the apparatus further comprises: a condensing module for condensing the specific mist at a second location a predetermined distance from the first location; the first obtaining module is used for obtaining a third time length of the first touch operation, wherein the condensed fog body at the second position gradually fades in along with the increase of the third time length, and the third time length is the time length of the first touch operation acting on the preset interaction area; and/or the fourth time length is acquired when the first touch operation is finished and the touch point of the first touch operation leaves the preset interaction area, wherein the condensed fog body at the second position gradually fades in along with the increase of the fourth time length, and the fourth time length is the time length between the current time and the time when the touch point leaves the preset interaction area.
Optionally, the apparatus further comprises: the third detection module is used for detecting a third touch operation in the preset interaction area and acquiring third position information of the third touch operation; and the modification module is used for modifying the state of the control at the third position corresponding to the third position information.
According to an aspect of the embodiments of the present invention, there is provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute any one of the above-described information display methods.
According to an aspect of the embodiments of the present invention, there is provided a processor configured to execute a program, where the program executes to perform any one of the information display methods described above.
According to an aspect of an embodiment of the present invention, there is provided a terminal including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the information display methods described above.
In the embodiments of the present invention, when condensation of a specific fog-shaped body is simulated in the interaction area of the graphical user interface, a first template is created based on the first position information of a first touch operation acting on the interaction area, the picture bottom plate of the specific fog-shaped body is cut according to the created first template, and the remaining pixels of the cut picture bottom plate are displayed in the preset interaction area. In this way, the specific fog-shaped body is erased at the position of the interaction area on which the first touch operation acts; that is, erasing of the specific fog-shaped body is realized by template cutting. Because the position on which the first touch operation acts and the size of the first template determine the erasing position and the erased area, more display effects can be produced, the interactivity of the interface is increased, and the technical problem of low user-interface interactivity in the related art is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an information display method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a UI interface provided in accordance with a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a UI interface provided in accordance with a preferred embodiment of the present invention after the UI interface is pressed;
FIG. 4 is a schematic illustration of a water mist for wiping provided according to a preferred embodiment of the present invention to form a clear track;
FIG. 5 is a schematic structural diagram of a ClippingNode scheme provided in accordance with a preferred embodiment of the present invention;
FIG. 6 is a schematic illustration of finger pressure producing ambient fogging provided in accordance with a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of a water droplet shedding simulation provided in accordance with a preferred embodiment of the present invention;
FIG. 8 is a schematic illustration of dissipation of a button provided in accordance with a preferred embodiment of the present invention;
FIG. 9(a) is a schematic illustration of a status bar icon provided with a water mist element in accordance with a preferred embodiment of the present invention;
FIG. 9(b) is a diagram of a status bar icon corresponding to a status of a player when the status changes during a game according to a preferred embodiment of the present invention;
FIG. 10 is a structural block diagram of an information display apparatus provided according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an information display method, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Fig. 1 is a schematic flow chart of an information display method provided according to an embodiment of the present invention, which may be applied to a terminal presenting a graphical user interface, as shown in fig. 1, and the method includes the following steps:
step S102, simulating condensation of a specific fog-shaped body and displaying it in a preset interaction area of a graphical user interface, detecting a first touch operation in the preset interaction area, and acquiring first position information of the first touch operation;
step S104, a first template is established according to the first position information;
step S106, obtaining a picture bottom plate of the specific fog-shaped body, and cutting the picture bottom plate of the specific fog-shaped body according to the first template;
and step S108, displaying the residual pixels of the cut picture bottom plate in a preset interaction area.
Through the above steps, when condensation of the specific fog-shaped body is simulated in the interaction area of the graphical user interface, a first template is created based on the first position information of the first touch operation acting on the interaction area, the picture bottom plate of the specific fog-shaped body is cut according to the created first template, and the remaining pixels of the cut picture bottom plate are displayed in the preset interaction area. In this way, the specific fog-shaped body is erased at the position of the interaction area on which the first touch operation acts; that is, erasing of the specific fog-shaped body is realized by template cutting. Because the position on which the first touch operation acts and the size of the first template determine the erasing position and the erased area, more display effects can be produced, the interactivity of the interface is increased, and the technical problem of low user-interface interactivity in the related art is solved.
Optionally, in this embodiment, the terminal may include, but is not limited to, at least one of the following: mobile phones, tablet computers, notebook computers, desktop PCs, digital televisions and other hardware devices that require data to be displayed.
Alternatively, in this embodiment, the specific fog-shaped body may be, but is not limited to, a mist-like liquid such as water mist or an oil stain. The first touch operation may be, but is not limited to, a pressing operation, a clicking operation, a sliding operation, and the like.
The first position information may be information of a position in the preset interaction area where the first touch operation is applied, and the size of the first template may be determined by a force (i.e., the pressure sensed by the screen) applied to the interaction area by the first touch operation, but is not limited thereto.
It should be noted that steps S102 to S108 may be realized using a ClippingNode scheme: the picture bottom plate may correspond to the bottom-plate node in the ClippingNode scheme, the first template may be the template node in the ClippingNode scheme, and step S108 may be realized by displaying, in an inverted display manner, the information on the bottom plate other than the template. That is, the ClippingNode scheme is applied to the erasing process of the specific fog-shaped body, and compared with prior-art schemes, more display effects can be obtained.
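A minimal, engine-free sketch of the inverted clipping behaviour described above: cells of the picture bottom plate covered by the template are cut away, and only the remaining pixels are kept. The grid, cell values, and the function name erase_with_template are illustrative, not cocos2d-x API:

```python
def erase_with_template(base, template_cells):
    """Return the base grid with template-covered cells cut away (inverted clipping)."""
    return [
        [None if (x, y) in template_cells else px
         for x, px in enumerate(row)]
        for y, row in enumerate(base)
    ]

# 4x4 fog bottom plate; a 2x2 template around the touch point covers (1,1)..(2,2)
fog = [["fog"] * 4 for _ in range(4)]
template = {(1, 1), (2, 1), (1, 2), (2, 2)}

remaining = erase_with_template(fog, template)
assert remaining[1][1] is None        # erased under the template
assert remaining[0][0] == "fog"       # remaining pixels still displayed
```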
In one embodiment of the invention, simulating condensation of the specific fog-shaped body in the preset interaction area is realized as follows: the transparency of the picture bottom plate of the specific fog-shaped body is adjusted, and the result is displayed in the preset interaction area. Taking water mist as the specific fog-shaped body, the picture bottom plate may be a water-mist background picture.
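The transparency adjustment that simulates condensation can be sketched as an opacity ramp; the target opacity and step count below are arbitrary illustrative values:

```python
def condensation_opacities(target=200, steps=5):
    """Opacity of the fog bottom plate at each animation step, ramping from 0 toward target."""
    return [round(target * i / steps) for i in range(1, steps + 1)]

print(condensation_opacities())  # [40, 80, 120, 160, 200]
```

In cocos2d-x terms this would correspond to driving the sprite's opacity over time (e.g. with a fade action), but the exact mechanism is an implementation choice.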
Optionally, a corresponding erase track may be formed in the interaction area along with the movement of the touch point of the first touch operation on the screen, and therefore, in an embodiment of the present invention, after the first touch operation is detected in step S102, the method further includes: and acquiring a plurality of touch points of the first touch operation, wherein the plurality of touch points form a first preset track in a preset interaction area. The step S104 may be represented as: creating a plurality of first templates according to the position information of the plurality of touch points, wherein one first template is created corresponding to one touch point, and the shape of the first template is the same as that of the corresponding touch point; or creating a first template according to the first predetermined track, wherein the shape of the first template is the same as the first predetermined track. That is, the erasing trace can be correspondingly formed by moving the first template or continuously creating a plurality of templates.
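The per-touch-point template creation described above can be sketched as accumulating one template per sampled point of the track, their union forming the erase trace. The circular template shape and radius are assumptions for illustration:

```python
def track_templates(touch_points, radius=2):
    """One circular template per sampled touch point; their union forms the erase trace."""
    return [{"center": p, "radius": radius} for p in touch_points]

def covered(templates, cell):
    """True if the cell falls inside any template along the track."""
    cx, cy = cell
    return any((cx - t["center"][0]) ** 2 + (cy - t["center"][1]) ** 2 <= t["radius"] ** 2
               for t in templates)

trace = track_templates([(0, 0), (2, 1), (4, 2)])
assert covered(trace, (2, 1))        # on the track: erased
assert not covered(trace, (9, 9))    # far from the track: fog remains
```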
To further enhance the interaction effect, the specific fog may recover slowly after being erased. In an embodiment of the present invention, after step S108, the method may further include: gradually reducing the size of the first template to zero, so as to recover the remaining pixels of the specific fog and display them in the preset interaction area.
It should be noted that this recovery may be implemented as follows: while the size of the first template is gradually reduced to zero, the picture backplane is cropped with the shrinking first template, and the remaining pixels of the cropped picture backplane are displayed in the preset interaction area.
It should be noted that the size of the first template may be changed gradually by a timer, or by, for example, an animation function built into the cocos engine, but is not limited thereto.
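The timer-driven variant can be sketched as a simple linear shrink (an illustrative model only; the function name and linear schedule are assumptions, and an engine animation could drive the same value):

```python
def template_size(initial_size, elapsed, recover_time):
    """Size of the first template `elapsed` seconds into the recovery;
    it reaches zero when the mist has fully recondensed."""
    remaining = max(recover_time - elapsed, 0.0)
    return initial_size * remaining / recover_time
```

Evaluating this once per timer tick and applying the result to the stencil node reproduces the "erase trace slowly closing up" effect.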
Optionally, in order to make the erased region spread while the screen is long-pressed, in an embodiment of the present invention the method may further include: acquiring a first time length of the first touch operation, where the first template gradually grows and the area in which the fog is displayed gradually shrinks as the first time length increases, the first time length being the time for which the first touch operation acts on the preset interaction area. And/or, in order to make the erased region shrink after the touch is released, the method may further include: acquiring a second time length when the first touch operation ends and the touch point of the first touch operation leaves the preset interaction area, where the first template gradually shrinks so that the erased area in the fog gradually decreases as the second time length increases, the second time length being the time between the current moment and the moment the touch point left the preset interaction area.
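The two durations above can be combined into one radius function (a hedged sketch; the linear rates and cap are illustrative parameters, not values from the patent):

```python
def erased_radius(first_len, second_len, grow_rate=30.0, shrink_rate=60.0,
                  max_radius=90.0):
    """first_len: time the touch has acted on the area (grows the template);
    second_len: time since the touch point left the area (shrinks it)."""
    r = min(first_len * grow_rate, max_radius)   # spread during long press
    return max(r - second_len * shrink_rate, 0.0)  # shrink after release
```

While the press is held, `second_len` stays 0 and the radius spreads; once the touch is released, the radius collapses back toward zero.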
In an embodiment of the present invention, the method may further include: detecting a second touch operation in the preset interaction area; acquiring second position information of the second touch operation; creating, at the position corresponding to the second position information, a second template having a liquid particle effect according to a liquid particle pattern; and, when the touch point of the second touch operation leaves the preset interaction area, displaying the liquid particles simulated by the second template at the position corresponding to the second position information. Adding this liquid particle generation process further improves interactivity.
It should be noted that the step of detecting the second touch operation may be performed before, after, or simultaneously with step S102, and is not limited thereto. The liquid particles may be water droplets, oil droplets, etc. and are not limited thereto.
It should be noted that displaying the liquid particles generated by the simulation of the second template at the position corresponding to the second position information may be represented as: and displaying the second template and hiding the pixel content of the background picture in the preset interaction area at the position corresponding to the second position information.
For ease of unified management, after the liquid particles simulated by the second template are displayed at the position corresponding to the second position information, the method may further include: adding the generated liquid particles to a list. For example, one or more lists may be created to store the generated templates corresponding to the various forms of liquid particles or of the specific fog: a template is added to the list when it is first created, and the corresponding template is deleted from the list when its liquid particle or specific fog disappears in the preset interaction area or moves past the edge of the screen.
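The list management described above can be sketched as follows (an illustrative container, not the patent's implementation; the class and method names are assumptions):

```python
class EffectList:
    """Unified list of active templates (liquid particles or fog patches)."""

    def __init__(self):
        self.templates = []

    def add(self, template):
        """Include a template in the list when it is first created."""
        self.templates.append(template)

    def prune(self, has_disappeared):
        """Delete templates whose effect vanished or left the screen."""
        self.templates = [t for t in self.templates if not has_disappeared(t)]
```

Calling `prune` once per frame with a predicate such as "the drop's y coordinate is below the screen" keeps the list consistent with what is visible.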
In order to increase the interactivity of the user with the interface and the dynamic change of the interface, after displaying the liquid particles simulated by the second template at the position corresponding to the second position information, the method may further include: controlling the liquid particles to move in the direction of the terminal's gravity within the preset interaction area; and releasing the liquid particles when they move to the boundary of the preset interaction area. That is, the generated liquid particles flow downward under the influence of gravity.
Alternatively, controlling the liquid particles to move in the direction of gravity of the terminal within the preset interaction area may be represented as: reducing the coordinate of the second template in the longitudinal axis direction in the preset interaction area; adding an offset to the coordinates of the second template in the direction of the transverse axis in the preset interaction area; the direction of the transverse axis is a direction perpendicular to the gravity direction of the terminal in the plane where the preset interaction area is located, and the direction of the longitudinal axis is a direction perpendicular to the direction of the transverse axis in the plane where the preset interaction area is located.
It should be noted that the offset may be random. Adding a random offset in the transverse-axis direction while decreasing the coordinate in the longitudinal-axis direction simulates the liquid particle falling under gravity.
Alternatively, releasing the liquid particles when they move to the boundary of the preset interaction area may be represented by: and removing the second template from the list when the second template moves to the boundary of the preset interaction area. I.e. to release the liquid particles when they move beyond the screen.
In an embodiment of the present invention, after the first touch operation is detected in step S102, the method further includes: condensing the specific fog at a second position a predetermined distance from the first position; and acquiring a third time length of the first touch operation, where the fog condensed at the second position gradually fades in as the third time length increases, the third time length being the time for which the first touch operation acts on the preset interaction area; and/or acquiring a fourth time length when the first touch operation ends and its touch point leaves the preset interaction area, where the fog condensed at the second position gradually fades out as the fourth time length increases, the fourth time length being the time between the current moment and the moment the touch point left the preset interaction area. That is, the specific fog condenses around the first position, changes with the time for which the first touch operation acts on the preset interaction area, and gradually fades away after the touch point of the first touch operation leaves the preset interaction area.
It should be noted that the condensation of the particular mist at the second location, which is a predetermined distance from the first location, may be represented by: and creating a designated picture at the second position, wherein the area corresponding to the second position in the designated picture is transparent, and the other areas except the area in the designated picture are condensed with the specific fog.
In this manner, when a position on the screen is long-pressed, fog is generated on the periphery in addition to the erasing effect, and the fog generated on the periphery slowly fades out when the touch is released or the touch point leaves the screen.
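The fade-in/fade-out of the surrounding fog can be modeled with the two durations above (a hedged sketch; the fade times are illustrative parameters, not values from the patent):

```python
def surrounding_fog_alpha(third_len, fourth_len, fade_in=2.0, fade_out=2.0):
    """third_len: press duration, which fades the surrounding fog in;
    fourth_len: time since the touch point left the area, fading it out."""
    alpha = min(third_len / fade_in, 1.0)          # fade in while pressed
    return max(alpha - fourth_len / fade_out, 0.0)  # fade out after release
```

While the press is held, `fourth_len` is 0 and the alpha grows toward 1; after release it decays back to 0 and the picture around the press disappears.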
In an embodiment of the present invention, the method may further include: detecting a third touch operation in the preset interaction area, and acquiring third position information of the third touch operation; modifying the state of the control at the third position corresponding to the third position information.
It should be noted that the step of detecting the third touch operation may be executed before, after, or simultaneously with step S102, and is not limited thereto. The states of the control may include a clear state and a blurred state, but are not limited thereto. Modifying the state of the control at the third position corresponding to the third position information may be implemented as making the control blurred. The degree of blur can be adjusted according to the picture of the control; in the blurred state, the user cannot perceive the control.
Optionally, modifying the state of the control at the third position corresponding to the third position information may be achieved by: reducing transparency of a picture of the control.
It should be noted that, after modifying the state of the control at the third position corresponding to the third position information, the method further includes: restoring the state of the control to the initial state when the time since the touch point of the third touch operation left the preset interaction area exceeds a preset threshold. The initial state can be considered the clear state, i.e., the control is clearly displayed. This describes the restoration process of the control.
Optionally, restoring the state of the control to the initial state may be implemented by: and restoring the transparency of the picture of the control to the initial transparency of the picture.
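The blur-and-restore cycle of a control can be sketched as follows (an illustrative model only; the class name, the factor by which opacity drops, and the threshold are assumptions, and "transparency" is modeled as picture opacity):

```python
class FogControl:
    """Control that blurs when touched and restores after a delay."""

    CLEAR, BLURRED = "clear", "blurred"

    def __init__(self, opacity=255):
        self.initial_opacity = opacity
        self.opacity = opacity
        self.state = self.CLEAR

    def on_touched(self):
        """Third touch operation: blur by lowering the picture's opacity."""
        self.state = self.BLURRED
        self.opacity = self.initial_opacity // 4

    def on_left(self, seconds_since_leave, threshold=3.0):
        """Restore to the initial (clear) state once the touch point has
        been away from the interaction area longer than the threshold."""
        if seconds_since_leave > threshold:
            self.state = self.CLEAR
            self.opacity = self.initial_opacity
```

While blurred, the control would ignore further clicks; once restored it becomes perceptible and interactive again.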
It should be noted that the main body of the above steps may be a terminal, but is not limited thereto.
For a better understanding of the embodiments of the present invention, the following description is made in connection with a preferred embodiment in which the above-mentioned specific mist may be a water mist and the above-mentioned liquid particles may be water droplets.
The technical scheme provided by this preferred embodiment realizes a water mist representation similar to the natural phenomenon, improves the interactivity between the user and the UI interface, increases the dynamic change of the UI interface, makes the player feel personally on the scene, and creates an atmosphere consistent with the scene.
The preferred embodiment of the invention adopts a brand-new erasable water-mist-themed UI interaction effect and realizes a water mist representation similar to the natural phenomenon based on cocos2d-x engine technology. The erasable water mist UI interactive interface improves the interactivity between the user and the UI interface, increases the dynamic change of the UI interface, makes the player feel personally on the scene, and creates an atmosphere in keeping with the scene. Fig. 2 is a schematic view of a UI interface provided according to a preferred embodiment of the present invention, and fig. 3 is a schematic view of the UI interface after it is pressed.
Specifically, the technical scheme comprises the following steps:
(1) wiping and condensing of water mist
As time passes or the game progresses, a layer resembling water mist can condense in the UI interaction area by changing the transparency of the water mist background picture (equivalent to the picture backplane). The user can wipe off the condensed fog by sliding a finger across the screen to obtain a clearer view. The wiped portion leaves a distinct wiping mark, and the wiped-off mist recondenses as time passes. Fig. 4 is a schematic diagram of a clear track formed by wiping the water mist according to the preferred embodiment of the invention; as shown in fig. 4, the user drags the joystick to wipe off part of the water mist and form a clear track, which over time gradually recondenses into complete water mist starting from where it was first wiped. This function is mainly implemented with the ClippingNode clipping node: the erasing effect is achieved by setting the part to be clipped in the clipping node.
The water mist wiping and slow recovery can be realized as follows:
To simulate an effect that can be erased by sliding and slowly restored, the preferred embodiment employs a ClippingNode-based scheme. ClippingNode is a clipping node built into cocos2d-x. Fig. 5 is a schematic structural diagram of the ClippingNode scheme provided according to the preferred embodiment of the present invention; as shown in fig. 5, the core idea is to set up three layers, a stencil (template), a backplane, and a Layer, and use the stencil to take the pixel information at the corresponding position from the backplane and draw it onto the Layer. ClippingNode also supports inverted display (Inverted, i.e., the information on the backplane other than the stencil is displayed on the Layer) and a transparency threshold (AlphaThreshold: only pixels with alpha greater than the threshold are drawn on the Layer). In addition, in ClippingNode, child nodes are treated as the default backplane.
The erasing part in the ClippingNode scheme is realized with cocos UI nodes, whose position, size, and display time can be freely controlled, so more display effects can be produced.
The basic erasing effect is realized with ClippingNode: the picture to be erased that is preset in the cocos project (equivalent to the preset interaction area or the picture backplane) is set as the backplane node. When the user touches the screen, a template node (equivalent to the first template) is created at the corresponding position; the template clips the backplane node, and the clipped content is displayed on the Layer in the inverted display mode, realizing the erasing effect. As the user slides on the screen, a script can move the template or create multiple templates in succession so that they join into one erased track (equivalent to the erase track above). To realize the condensation effect of the mist, the template node only needs to shrink gradually over time until its size becomes 0, at which point the erase trace disappears and the mist condenses into a whole again. The size change can be driven by a timer or by the animation functions built into cocos. It has been found that, as long as the template node is not set too small, creating a template only every 2 or 3 frames is enough to achieve a visually continuous track, unless the user slides across the screen extremely fast.
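The inverted-display behavior described above can be modeled on pixel sets (a conceptual sketch only, not engine code; the set representation is an assumption):

```python
def inverted_clip(backplane_pixels, stencil_pixels):
    """Model of ClippingNode's inverted display: a backplane pixel is
    drawn on the Layer only where no stencil covers it, so each stencil
    reads as an erased (transparent) hole in the mist."""
    return backplane_pixels - stencil_pixels
```

In the engine, the same subtraction is performed by the GPU stencil test; here it is just set difference over pixel coordinates.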
The method comprises the following specific steps:
1) Steps for creating the ClippingNode node using the cocos2d-x engine:
a, acquiring the water mist background picture in the cocos project;
b, creating an empty ClippingNode node;
c, setting the node position to the position of the background;
d, setting the node clipping mode to inverted display;
e, adding the empty ClippingNode node to the display screen;
f, creating a template node (equivalent to the first template) and adding it into the ClippingNode;
g, setting the backplane, i.e., the part to be erased;
h, moving the nodes in the original project from their original parent node to the new ClippingNode node.
At this point, the ClippingNode creation work is complete.
2) Erasing steps:
a, creating a node with a special pattern from a picture under a specific path;
b, setting the size and position of the picture node according to the pressure and position of the finger on the screen;
c, adding the picture node to the template node (corresponding to the first template) in the ClippingNode;
d, using the animation functions built into cocos to control the change of the template;
e, since the node is dynamically created, its memory must be released after the animation finishes and the node disappears, in order to prevent memory leaks.
In the above preferred embodiment, if the erased areas need to be joined into a track, the position of the user's finger on the screen must be detected, and different processing performed for different operations: the erased area spreads during a long press, shrinks when the touch is released, and moves as the finger moves, or a new erased area is created at a new place.
(2) Finger pressing effect
This simulates the surrounding fog effect generated when a user presses hard on a misted surface in reality. Fig. 6 is a schematic diagram of the mist generated by a finger press according to the preferred embodiment of the present invention; as shown in fig. 6, a ring of distinct press-induced fog is generated during a long-press operation. The fog changes with the press duration, and after the user stops pressing, the surrounding fog fades away gradually; combined with the erasing and condensing effects above, the phenomenon of fog condensing toward the middle can be simulated. This part is accomplished mainly through the creation and control of dynamic sprites.
When the finger is long-pressed at one position, fog is generated on the periphery in addition to the erasing effect; the fading and size changes are mainly realized by dynamically creating, at the pressed position, a picture that is transparent in the middle and textured around it.
The main implementation logic is as follows:
1) judging whether the pressed part is in the UI interaction interface;
2) creating a picture at the press;
3) fading in the picture over time;
4) fading the picture out slowly when the touch is released;
5) enlarging the middle erased part of the watermark over time while the press is held;
6) in this scheme, the range grows by 0.01 per frame up to a maximum of 0.5;
7) resetting the watermark size when the touch is released.
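The per-frame growth in step 6) can be sketched directly (a minimal model; only the 0.01 step and 0.5 cap come from the text above, the function name is an assumption):

```python
def press_range(frames_held, step=0.01, cap=0.5):
    """Erased range after `frames_held` frames of pressing: it grows by
    `step` each frame and is clamped at `cap`; releasing the touch
    resets the count to 0 (step 7)."""
    return min(frames_held * step, cap)
```

Evaluated once per frame with the running frame count, this yields the slowly spreading erased circle under the finger.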
(3) Dynamic water drop effect
The dynamic water drop effect is added on top of the water mist effect: water drops are generated on the misted interface according to certain rules, and the generated drops slowly flow downward under the influence of gravity, wiping off part of the mist along their path. The generation rule can be formulated as needed; drops may appear randomly over time or be generated while the user presses and drags. Fig. 7 is a schematic diagram of the simulated water droplet flow provided in accordance with a preferred embodiment of the present invention. As shown in fig. 7, during the game, water drops randomly appear in the main operation area on the right and flow down the screen. This function builds on the erasing effect and uses an algorithm to dynamically generate water drops and simulate their flow.
The water drop effect adds a dynamic drop-generation rule and a simulation of the drop's flowing track on top of the water mist wiping effect. The generation rules mainly include: whenever a press lasts longer than a certain time (which may be a random number within a range), a water drop template is generated at the pressed position, with its size adjusted according to the press duration; a water drop is generated when the user releases the touch; and there is a minimum time interval between drop generations.
In addition, to simulate the effect of gravity and surface tension on the drops, the speed and direction of each drop vary randomly within a range while maintaining an overall downward trend, and a drop is deleted when its track moves beyond the display range of the screen. The water drop generation steps are as follows:
1) judging whether the position of the generated water drops is in the interaction area of the water mist UI;
2) if not in the UI interaction area, no water drops are generated;
3) creating a water drop template node (corresponding to the second template) by the water drop pattern under the specific path;
4) setting the size and the position of a water drop template;
5) setting the blend option of the water drop, so that the original background pattern is not displayed where the drop is;
6) and adding water drops into a list for unified management.
The water drop flowing rule is as follows:
1) acquiring a specific water drop template node from the list;
2) making a random offset in the transverse direction, and gradually reducing the vertical coordinate in the longitudinal direction to simulate the fall under gravity;
3) changing the position of the water drop according to the generated random number;
4) releasing the water drop object when it moves beyond the screen range.
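The flow rule above can be sketched as one per-frame step (an illustrative model, not the patent's code; the speed, jitter range, and coordinate convention with y decreasing downward are assumptions):

```python
import random

def step_droplet(position, fall_speed=3.0, max_jitter=1.5, rng=None):
    """Advance a droplet one frame: a random lateral offset plus a
    decreasing vertical coordinate simulate the fall under gravity;
    returns None once the droplet leaves the bottom of the screen,
    signalling that the drop object should be released."""
    rng = rng or random.Random()
    x, y = position
    x += rng.uniform(-max_jitter, max_jitter)  # random transverse offset
    y -= fall_speed                            # steady downward trend
    return None if y < 0 else (x, y)
```

A caller would iterate the droplet list each frame, replace each position with the stepped one, and remove entries for which the function returned None.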
(4) Dissipation and restoration effect of buttons
Unlike a general button-click effect, the button click in the water mist UI simulates dissipation and restoration. Fig. 8 is a schematic diagram of button dissipation provided according to a preferred embodiment of the present invention; as shown in fig. 8, when the user clicks a button in the UI interaction area, the button becomes blurred, as the 2nd button in the figure shows. The degree of blur can be adjusted according to the picture of the UI control; while blurred, the user cannot perceive the button's function and cannot click it again, until the button slowly returns to its original appearance and becomes perceptible and interactive once more. This function mixes multiple controls in a specific blend mode to achieve the expected behavior.
The dissipation and restoration effects of the button are obtained by blending two pictures in a certain blend mode, which is easier to control than a single button-click state.
1) Dissipation process
a, first keep the original state for a period of time; after that period, start the dissipation process and slowly reduce the transparency;
b, if the holding period is not needed, immediately begin the dissipation process;
2) Restoration process
a, keep for a period of time; after that period, begin restoration and slowly increase the transparency;
b, if the holding time is not needed, immediately start the restoration process.
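The hold/fade/restore timeline above can be modeled as a single opacity function of time (a hedged sketch; the phase durations are illustrative parameters, and "transparency" is modeled as opacity):

```python
def button_opacity(t, hold=1.0, fade=1.0, restore_hold=1.0, restore=1.0,
                   initial=255):
    """Opacity over one dissipate-then-restore cycle:
    hold -> fade to 0 -> hold at 0 -> fade back to the initial opacity."""
    if t < hold:                       # keep the original state
        return initial
    t -= hold
    if t < fade:                       # dissipation: slowly reduce
        return int(initial * (1.0 - t / fade))
    t -= fade
    if t < restore_hold:               # fully dissipated
        return 0
    t -= restore_hold
    return int(initial * min(t / restore, 1.0))  # restoration
```

Setting `hold` (or `restore_hold`) to 0 reproduces the "immediately begin" variants in steps 1b and 2b.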
It should be noted that the above effects mainly exist in the core interaction areas of the UI interface (areas like the joystick and the skill and attack buttons). However, these specially rendered UI areas cannot cover the whole interactive process, so the other secondary UI areas can simulate the display and interaction effects of the water mist UI with pictures and animations to achieve overall visual uniformity. Fig. 9(a) is a schematic diagram of a status bar icon with a water mist element provided according to a preferred embodiment of the present invention, and fig. 9(b) is a schematic diagram of the corresponding status bar icon when the player's state changes during the game; as shown in figs. 9(a) and 9(b), the status bar icon displaying game information in the upper right corner of the UI interface is designed with a water mist element, and when the state changes, a zooming and transparency-change animation simulates a blurring effect and serves as a warning.
It should be noted that the water mist UI scheme provided by the preferred embodiment of the present invention mainly creates a cold atmosphere by simulating natural phenomena under cold conditions. Strong interactivity and dynamic change are its biggest characteristics. The scheme is implemented with ClippingNode as its core, and can achieve the following effects:
1) the effects of mist wiping and condensation can be perfectly simulated.
2) The degree of each effect can be conveniently controlled through scripts, so that the use and performance cost of certain effects can be traded off in specific situations, adapting better to both high-end and low-end devices.
Compared with a draw() scheme or a RenderTexture scheme, this solution is logically clearer and simpler to implement.
Among these, the impact on efficiency mainly exists in two aspects:
(1) The size of the ClippingNode: because the ClippingNode needs to draw the background picture, the size of the background picture affects drawing efficiency. In practical use, the erasing effect is applied only to the core operation areas (the left joystick and the right buttons), which reduces the cost of drawing.
(2) The number of dynamic template nodes: to ensure the visual continuity of the erasing effect, larger nodes can be used to reduce the creation frequency. In addition, since the flowing water drop effect dynamically creates a larger number of nodes, the number of drops generated can be reduced appropriately by changing the generation rule. To reduce the time consumed by creation and destruction, a cache queue can be created for template nodes.
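The cache queue mentioned above can be sketched as a simple node pool (an illustrative model; the class name and factory-based interface are assumptions, not the patent's implementation):

```python
from collections import deque

class StencilPool:
    """Cache queue for template nodes: released nodes are queued for
    reuse instead of being destroyed and re-created each time."""

    def __init__(self, factory):
        self._factory = factory  # called only when the queue is empty
        self._free = deque()

    def acquire(self):
        """Reuse a cached node if available, otherwise create a new one."""
        return self._free.popleft() if self._free else self._factory()

    def release(self, node):
        """Return a node to the cache instead of destroying it."""
        self._free.append(node)
```

With such a pool, the per-frame template creation for erase tracks and water drops touches the allocator far less often.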
Fig. 10 is a block diagram of an information display apparatus according to an embodiment of the present invention, and is applied to a terminal for presenting a graphical user interface, as shown in fig. 10, where the information display apparatus includes:
the first detection module 1002 is configured to simulate condensation and display a specific fog in a preset interaction region of the graphical user interface, detect a first touch operation in the preset interaction region, and acquire first position information of the first touch operation;
a first creating module 1004, connected to the first detecting module 1002, for creating a first template according to the first location information;
a cutting module 1006, connected to the first creating module 1004, for obtaining the picture bottom plate of the specific fog body and cutting the picture bottom plate of the specific fog body according to the first template;
the first display module 1008 is connected to the cropping module 1006, and is configured to display the remaining pixels of the cropped picture backplane in the preset interaction area.
With this apparatus, when the specific fog is simulated as condensed in the interaction area of the graphical user interface, a first template is created based on the first position information of a first touch operation on the interaction area, the picture backplane of the specific fog is cropped according to the created first template, and the remaining pixels of the cropped picture backplane are displayed in the preset interaction area. This erases the specific fog at the position of the interaction area on which the first touch operation acts; that is, erasing the specific fog is realized by template cropping. Because the position on which the first touch operation acts and the size of the first template together determine the erasing position and erased area of the specific fog, more display effects can be produced and the interactivity of the interface is increased, solving the technical problem in the related art that the user interface is not highly interactive.
In an embodiment of the present invention, the apparatus may further include: the simulation module is used for simulating the specific condensed fog in the preset interaction area; the concrete can be represented as follows: the simulation module adjusts the transparency of the picture bottom plate of the specific fog-shaped body and displays the picture bottom plate in the preset interaction area.
It should be noted that, when the touch point moves on the screen along with the first touch operation, a corresponding erase track is formed in the interaction area, and therefore, in an embodiment of the present invention, the apparatus further includes: the second obtaining module is connected to the first detecting module 1002, and configured to obtain a plurality of touch points of the first touch operation, where the plurality of touch points form a first predetermined track in a preset interaction area. The first creating module 1004 may be further configured to create a plurality of first templates according to the position information of the plurality of touch points, where one touch point corresponds to one first template, and the shape of the first template is the same as that of the corresponding touch point; or creating a first template according to the first predetermined track, wherein the shape of the first template is the same as the first predetermined track. That is, the erasing trace can be correspondingly formed by moving the first template or continuously creating a plurality of templates.
In an embodiment of the present invention, the apparatus further includes: and a reducing module, connected to the first creating module 1004, configured to gradually reduce the size of the first template to zero, so as to perform recovery processing on the remaining pixels of the specific fog and display the remaining pixels in the preset interaction area.
The above-mentioned reducing module is further configured to, while the size of the first template is gradually reduced to zero, crop the picture backplane with the shrinking first template and display the remaining pixels of the cropped picture backplane in the preset interaction area.
The above apparatus may further include: a third obtaining module, connected to the first detection module 1002, configured to acquire a first time length of the first touch operation, where the first template gradually grows and the area in which the fog is displayed gradually shrinks as the first time length increases, the first time length being the time for which the first touch operation acts on the preset interaction area; and/or to acquire a second time length when the first touch operation ends and its touch point leaves the preset interaction area, where the first template gradually shrinks so that the erased area in the fog gradually decreases as the second time length increases, the second time length being the time between the current moment and the moment the touch point left the preset interaction area.
In an embodiment of the present invention, the apparatus may further include: the second detection module is used for detecting a second touch operation in the preset interaction area and acquiring second position information of the second touch operation; the second creating module is connected with the second detecting module and used for creating a second template with a liquid particle effect according to the liquid particle pattern at a position corresponding to the second position information; and the second display module is connected with the second creation module and used for displaying the liquid particles generated by the simulation of the second template on the position corresponding to the second position information under the condition that the touch point of the second touch operation leaves the preset interaction area.
It should be noted that the second display module is further configured to, at the position corresponding to the second position information, display the second template and hide the pixel content of the background picture in the preset interaction area.
The above-mentioned apparatus further includes: an adding module, connected to the second display module and configured to add the generated liquid particles to a list.
In order to increase the user's interactivity with the interface and the dynamic change of the interface, in an embodiment of the present invention the apparatus further includes: a control module, connected to the second display module and configured to control the liquid particles to move in the gravity direction of the terminal within the preset interaction area; and a release module, connected to the control module and configured to release the liquid particles when they move to the boundary of the preset interaction area.
It should be noted that the control module is further configured to decrease the coordinate of the second template in the longitudinal-axis direction within the preset interaction area and to add an offset to the coordinate of the second template in the transverse-axis direction within the preset interaction area, where the transverse-axis direction is the direction perpendicular to the gravity direction of the terminal in the plane of the preset interaction area, and the longitudinal-axis direction is the direction perpendicular to the transverse-axis direction in that plane. The release module is further configured to remove the second template from the list when the second template moves to the boundary of the preset interaction area, i.e. to release the liquid particles once they move off the screen.
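A per-frame sketch of this droplet motion (the fall speed, the jitter range, the bottom edge at y = 0, and all names are assumptions for illustration):

```python
import random

def step_droplets(droplets, fall_speed=2.0, jitter=1.0):
    """Advance each liquid-particle template one frame: decrease its
    longitudinal-axis coordinate (movement toward the terminal's
    gravity direction) and add a small offset on the transverse axis.
    Templates that cross the region boundary are dropped from the
    list, i.e. the liquid particles are released."""
    kept = []
    for (x, y) in droplets:
        y -= fall_speed                       # fall along the longitudinal axis
        x += random.uniform(-jitter, jitter)  # lateral offset on the transverse axis
        if y >= 0:                            # still inside the interaction area
            kept.append((x, y))
    return kept

droplets = [(5.0, 3.0), (7.0, 20.0)]
droplets = step_droplets(droplets)
droplets = step_droplets(droplets)
# After two frames the first droplet (y = 3 - 4 < 0) has been released.
assert len(droplets) == 1
```

The random transverse offset gives each droplet a slightly irregular trail, which reads as more natural than a straight fall.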
In an embodiment of the present invention, the apparatus further includes: a condensation module, connected to the first detection module 1002 and configured to condense the specific fog-shaped body at a second position a predetermined distance from the first position; and a first obtaining module, connected to the condensation module and configured to: obtain a third time length of the first touch operation, where the fog condensed at the second position gradually fades in as the third time length increases, the third time length being the time length for which the first touch operation acts on the preset interaction area; and/or obtain a fourth time length when the first touch operation ends and the touch point of the first touch operation leaves the preset interaction area, where the fog condensed at the second position gradually fades in as the fourth time length increases, the fourth time length being the time length between the current moment and the moment the touch point leaves the preset interaction area.
The condensation module is further configured to create a designated picture at the second position, where the region of the designated picture corresponding to the second position is transparent and the specific fog-shaped body is condensed in the remaining regions of the designated picture.
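The gradual fade-in can be sketched as a simple opacity ramp over the elapsed time length (the fade duration constant and the linear ramp are assumptions; the patent only specifies that the condensed fog fades in as the time length increases):

```python
def fog_alpha(elapsed, fade_duration=1.5):
    """Opacity of the fog condensed at the second position: it fades in
    gradually as the third/fourth time length grows, reaching full
    opacity after fade_duration seconds (an assumed constant)."""
    return min(1.0, max(0.0, elapsed / fade_duration))

assert fog_alpha(0.0) == 0.0   # just condensed: fully transparent
assert fog_alpha(0.75) == 0.5  # halfway through the fade-in
assert fog_alpha(5.0) == 1.0   # fully visible
```

Applying this alpha to the designated picture each frame produces the effect of fog re-condensing near the wiped area.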
In an embodiment of the present invention, the apparatus further includes: the third detection module is used for detecting a third touch operation in the preset interaction area and acquiring third position information of the third touch operation; and the modification module is connected with the third detection module and is used for modifying the state of the control at the third position corresponding to the third position information.
The third detection module may or may not be connected to the first detection module 1002; this is not limited here.
It should be noted that the modification module is further configured to reduce the transparency of the control's picture.
It should be noted that the above apparatus further includes a restoration module, connected to the third detection module and configured to restore the state of the control to its initial state when the time since the touch point of the third touch operation left the preset interaction area exceeds a preset threshold. The restoration module is further configured to restore the transparency of the control's picture to its initial transparency.
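This touch-and-restore behavior can be sketched as a tiny state holder (the transparency values, the threshold, and all names are assumptions for illustration):

```python
class ControlState:
    """Minimal sketch: the control's picture starts mostly transparent;
    a third touch operation reduces its transparency, and the state is
    restored to the initial one once the touch point has been away from
    the interaction area longer than a preset threshold."""
    RESTORE_THRESHOLD = 2.0  # seconds, assumed

    def __init__(self, initial_transparency=0.8):
        self.initial_transparency = initial_transparency
        self.transparency = initial_transparency

    def on_third_touch(self):
        self.transparency = 0.2  # reduced transparency: control stands out

    def on_leave(self, elapsed):
        # Restore only once the leave time exceeds the threshold.
        if elapsed > self.RESTORE_THRESHOLD:
            self.transparency = self.initial_transparency

c = ControlState()
c.on_third_touch()
assert c.transparency == 0.2
c.on_leave(1.0)   # below threshold: state unchanged
assert c.transparency == 0.2
c.on_leave(3.0)   # beyond threshold: restored to initial state
assert c.transparency == 0.8
```

Keeping the initial transparency on the object makes the restore step trivial regardless of how many touches occurred in between.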
It should be noted that the above-mentioned apparatus may be located in a terminal, but is not limited thereto.
It should be noted that the above modules may be implemented by software or by hardware. In the latter case, the following implementations are possible but not limiting: the modules are all located in the same processor, or the modules are distributed among different processors in any combination.
An embodiment of the present invention provides a storage medium including a stored program, wherein, when the program runs, a device in which the storage medium is located is controlled to execute any one of the information display methods described above.
An embodiment of the present invention provides a processor configured to run a program, wherein the program, when running, executes any one of the information display methods described above.
An embodiment of the present invention provides a terminal, including: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the information display methods described above.
The serial numbers of the above embodiments of the present invention are merely for description and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (24)

1. An information display method is applied to a terminal presenting a graphical user interface, and is characterized by comprising the following steps:
simulating condensation and displaying a specific fog-shaped body in a preset interaction area of the graphical user interface, detecting a first touch operation in the preset interaction area, and acquiring first position information of the first touch operation;
creating a first template according to the first position information;
obtaining the picture bottom plate of the specific fog-shaped body, and cutting the picture bottom plate of the specific fog-shaped body according to the first template;
displaying the residual pixels of the cut picture bottom plate in the preset interaction area;
and gradually reducing the size of the first template to zero so as to recover the residual pixels of the specific fog-shaped body and display them in the preset interaction area.
2. The method according to claim 1, wherein simulating condensation of the particular fog in the preset interaction area is achieved by:
and adjusting the transparency of the picture bottom plate of the specific fog-shaped body, and displaying in the preset interaction area.
3. The method according to claim 1, wherein after detecting the first touch operation in the preset interaction area, the method further comprises:
and acquiring a plurality of touch points of the first touch operation, wherein the plurality of touch points form a first preset track in the preset interaction area.
4. The method of claim 3, wherein creating the first template according to the first position information comprises:
creating a plurality of first templates according to the position information of the plurality of touch points, wherein one first template is created for each touch point and the shape of each first template is the same as that of the corresponding touch point; or
creating the first template according to the first preset track, wherein the shape of the first template is the same as the first preset track.
5. The method of claim 1, wherein gradually reducing the size of the first template to zero so as to perform the recovery processing on the remaining pixels of the picture bottom plate and display them in the preset interaction area comprises: in the process of the size of the first template being gradually reduced to zero, cutting the picture bottom plate with the first template at its current size, and displaying the residual pixels of the cut specific fog-shaped body in the preset interaction area.
6. The method of claim 1, further comprising:
acquiring a first time length of the first touch operation, wherein the first template is gradually increased and the area displayed by the fog is gradually reduced along with the increase of the first time length, and the first time length is the time length of the first touch operation acting on the preset interaction area; and/or
And when the first touch operation is finished and the touch point of the first touch operation leaves the preset interaction area, acquiring a second time length, gradually reducing the first template along with the increase of the second time length, and gradually reducing the area displayed by the fog body, wherein the second time length is the time length between the current time and the time when the touch point leaves the preset interaction area.
7. The method of claim 1, further comprising:
detecting a second touch operation in the preset interaction area;
acquiring second position information of the second touch operation;
creating a second template having a liquid particle effect according to the liquid particle pattern at a position corresponding to the second position information;
and displaying the liquid particles generated by the simulation of the second template on a position corresponding to the second position information under the condition that the touch point of the second touch operation leaves the preset interaction area.
8. The method of claim 7, wherein displaying the liquid particles generated by the second template simulation at the location corresponding to the second location information comprises:
and displaying the second template and hiding the pixel content of the background picture in the preset interaction area at the position corresponding to the second position information.
9. The method of claim 7, wherein after displaying the liquid particles generated by the second template simulation at locations corresponding to the second location information, the method further comprises:
controlling the liquid particles to move towards the gravity direction of the terminal in the preset interaction area;
releasing the liquid particles when the liquid particles move to the boundary of the preset interaction area.
10. The method of claim 9, wherein controlling the liquid particles to move in the direction of gravity of the terminal within the preset interaction region comprises:
reducing the coordinate of the second template in the longitudinal axis direction in the preset interaction area;
adding an offset to the coordinates of the second template in the direction of the transverse axis in the preset interaction area;
the direction of the transverse axis is a direction perpendicular to the gravity direction of the terminal in the plane where the preset interaction area is located, and the direction of the longitudinal axis is a direction perpendicular to the direction of the transverse axis in the plane where the preset interaction area is located.
11. The method of claim 10, wherein releasing the liquid particles when the liquid particles move to the boundary of the preset interaction region comprises:
and when the second template moves to the boundary of the preset interaction area, removing the second template from the list.
12. The method according to claim 1, wherein after detecting the first touch operation in the preset interaction area, the method further comprises:
condensing the specific fog-shaped body at a second position a predetermined distance from the first position;
acquiring a third time length of the first touch operation, and gradually fading in the fog condensed at the second position along with the increase of the third time length, wherein the third time length is the time length of the first touch operation acting on the preset interaction area; and/or
And when the first touch operation is finished and the touch point of the first touch operation leaves the preset interaction area, acquiring a fourth time length, and gradually fading in the fog condensed at the second position along with the increase of the fourth time length, wherein the fourth time length is the time length between the current time and the time when the touch point leaves the preset interaction area.
13. The method of claim 12, wherein condensing the specific fog-shaped body at the second position a predetermined distance from the first position comprises:
creating a designated picture at the second position, wherein the region of the designated picture corresponding to the second position is transparent, and the specific fog-shaped body is condensed in the remaining regions of the designated picture.
14. The method of claim 1, further comprising:
detecting a third touch operation in the preset interaction area, and acquiring third position information of the third touch operation;
modifying a state of a control at a third location corresponding to the third location information.
15. The method of claim 14, wherein after modifying the state of the control at the third position corresponding to the third position information, the method further comprises:
and under the condition that the time for the touch point of the third touch operation to leave the preset interaction area exceeds a preset threshold value, restoring the state of the control to the initial state.
16. The method of claim 14, wherein modifying the state of the control at the third position corresponding to the third position information comprises: reducing transparency of a picture of the control.
17. The method of claim 15, wherein restoring the state of the control to the initial state comprises: and restoring the transparency of the picture of the control to the initial transparency of the picture.
18. An information display device applied to a terminal presenting a graphical user interface, comprising:
the first detection module is used for simulating condensation and displaying a specific fog-shaped body in a preset interaction area of the graphical user interface, detecting a first touch operation in the preset interaction area and acquiring first position information of the first touch operation;
the first creating module is used for creating a first template according to the first position information;
the cutting module is used for acquiring the picture bottom plate of the specific fog-shaped body and cutting the picture bottom plate of the specific fog-shaped body according to the first template;
the first display module is used for displaying the residual pixels of the cut picture bottom plate in the preset interaction area;
and the reduction module is used for gradually reducing the size of the first template to zero so as to recover the remaining pixels of the specific fog-shaped body and display them in the preset interaction area.
19. The apparatus of claim 18, further comprising:
the second detection module is used for detecting a second touch operation in the preset interaction area and acquiring second position information of the second touch operation;
a second creating module for creating a second template having a liquid particle effect according to the liquid particle pattern at a position corresponding to the second position information;
and the second display module is used for displaying the liquid particles generated by the simulation of the second template on the position corresponding to the second position information under the condition that the touch point of the second touch operation leaves the preset interaction area.
20. The apparatus of claim 18, further comprising:
a condensation module, configured to condense the specific fog-shaped body at a second position a predetermined distance from the first position;
a first obtaining module, configured to: obtain a third time length of the first touch operation, where the fog condensed at the second position gradually fades in as the third time length increases, the third time length being the time length for which the first touch operation acts on the preset interaction area; and/or obtain a fourth time length when the first touch operation ends and the touch point of the first touch operation leaves the preset interaction area, where the fog condensed at the second position gradually fades in as the fourth time length increases, the fourth time length being the time length between the current moment and the moment the touch point leaves the preset interaction area.
21. The apparatus of claim 18, further comprising:
the third detection module is used for detecting a third touch operation in the preset interaction area and acquiring third position information of the third touch operation;
and the modification module is used for modifying the state of the control at the third position corresponding to the third position information.
22. A storage medium comprising a stored program, wherein an apparatus in which the storage medium is located is controlled to execute the information display method according to any one of claims 1 to 17 when the program is executed.
23. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the information display method according to any one of claims 1 to 17 when running.
24. A terminal, comprising: one or more processors, a memory, a display device, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the information display method of any of claims 1-17.
CN201810135433.6A 2018-02-09 2018-02-09 Information display method and device, storage medium, processor and terminal Active CN108334273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810135433.6A CN108334273B (en) 2018-02-09 2018-02-09 Information display method and device, storage medium, processor and terminal


Publications (2)

Publication Number Publication Date
CN108334273A CN108334273A (en) 2018-07-27
CN108334273B (en) 2020-08-25

Family

ID=62927476


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910307B (en) * 2019-11-29 2023-09-22 珠海豹趣科技有限公司 Image processing method, device, terminal and storage medium
CN111104021B (en) * 2019-12-19 2022-11-08 腾讯科技(深圳)有限公司 Control method and device of virtual prop, storage medium and electronic device
WO2021128240A1 (en) * 2019-12-27 2021-07-01 威创集团股份有限公司 Method for realizing transparent web page by embedding cef into cocos2dx
CN111461545B (en) * 2020-03-31 2023-11-10 北京深演智能科技股份有限公司 Method and device for determining machine access data
CN115545004A (en) * 2022-09-27 2022-12-30 北京有竹居网络技术有限公司 Navigation method and device and electronic equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
JPH01189651A (en) * 1988-01-25 1989-07-28 Mitsubishi Paper Mills Ltd Silver halide color photographic sensitive material
CN102968271A (en) * 2012-11-06 2013-03-13 广东欧珀移动通信有限公司 Unlocking method and mobile terminal
CN103970415A (en) * 2014-04-13 2014-08-06 数源科技股份有限公司 Method for achieving fade-in and fade-out effects based on Android
CN105786930A (en) * 2014-12-26 2016-07-20 北京奇虎科技有限公司 Touch interaction based search method and apparatus
CN106557154A (en) * 2015-09-29 2017-04-05 深圳市美贝壳科技有限公司 A kind of method for realizing finger touch area emergence transparent effect
CN106775342A (en) * 2015-11-25 2017-05-31 中兴通讯股份有限公司 Picture method of cutting out and device based on pressure sensitive technology


Non-Patent Citations (1)

Title
[Cocos2d-x 3.2]裁剪节点(ClippingNode)总结;逆风de蒲公英;《CSDN博客,https://blog.csdn.net/llb19911212/article/details/40152619》;20141016;第8-12页 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant