CN116974446A - Animation effect display method and device - Google Patents


Info

Publication number
CN116974446A
Authority
CN
China
Prior art keywords
animation effect
application
coordinate position
scaling
screen coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311196800.0A
Other languages
Chinese (zh)
Other versions
CN116974446B (en)
Inventor
马朝露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202311196800.0A priority Critical patent/CN116974446B/en
Publication of CN116974446A publication Critical patent/CN116974446A/en
Application granted granted Critical
Publication of CN116974446B publication Critical patent/CN116974446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object, using icons
    • G06F 3/0486 Drag-and-drop
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a display method and a display apparatus for an animation effect, applied to the field of terminal technologies. In the display method of an animation effect provided by the application, after an electronic device detects a drag operation following a long press on a first element, the electronic device displays a first animation effect in response to the drag operation, where the first animation effect is an animation effect gradually transitioning from the screen coordinate position of a first image frame to the position where the drag stays. Further, the electronic device generates a release animation effect in response to the user's operation of releasing the dragged first element, so that the animation effect of the whole dragging process is more natural, smooth and vivid, better matches user expectations, and significantly improves the user's visual experience.

Description

Animation effect display method and device
Technical Field
The present application relates to the field of electronic devices, and more particularly, to a method and apparatus for displaying an animation effect.
Background
With the rapid development of intelligent terminals, intelligent terminals have become indispensable devices in users' lives. Through the application programs installed on an intelligent terminal, the terminal can provide users with various experiences. When using an intelligent terminal, a user may encounter a scenario in which an object is long-pressed and then dragged; for example, the user long-presses a photo thumbnail in a gallery application and then drags the thumbnail. At present, the animation effect that the intelligent terminal provides for the dragging process is merely one in which the dragged content follows the movement of the finger, which appears stiff. Therefore, how to optimize the animation effect of the dragging process is a problem to be solved.
Disclosure of Invention
In view of the foregoing, the present application provides a method, an apparatus, a computer-readable storage medium and a computer program product for displaying an animation effect, which provide a more natural and smooth animation effect for a drag operation, thereby enriching the visual experience of the user.
In a first aspect, there is provided a display method of an animation effect, including:
displaying a first interface, the first interface comprising a first element;
detecting a drag operation after the first element is pressed for a long time;
displaying a first animation effect in response to the drag operation, the first animation effect being a drag animation effect from a first screen coordinate position to a second screen coordinate position, the first screen coordinate position being a screen coordinate position of a first image frame of the first animation effect, the first screen coordinate position being determined based on a size of the first element, the screen coordinate position of the first element, the size of the first image frame, and a preset contact point position, the second screen coordinate position being a corresponding screen coordinate position when the drag operation is stopped;
the first animation effect is generated according to one or more of the following parameters: the size of the first image frame, the first screen coordinate position, the second screen coordinate position, a target scaling parameter, and an animation curve parameter.
The above-described aspects may be performed by an electronic device or by a chip in an electronic device. Based on the above scheme, after detecting the drag operation following the long press on the first element, the electronic device displays, in response to the drag operation, a first animation effect (also referred to as a drag animation effect), which is an animation effect gradually transitioning from the screen coordinate position of a first image frame (also referred to as a floating shadow frame) to the position where the drag stays. Compared with an animation effect that directly follows the hand, the first animation effect of the embodiments of the application adds a transitional animation from the floating shadow frame to the contact position corresponding to the long-press operation, so that the dynamic effect of the whole dragging process is smoother and more natural, enriching the user's visual experience. Further, the screen coordinate position corresponding to the floating shadow frame is independent of the screen coordinate position at which the drag operation stays.
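For illustration only, the following Java sketch shows one plausible way such a first screen coordinate position could be computed from the four inputs listed above; the patent defines the actual calculation with reference to FIG. 7, and the class name, the fractional reading of the preset contact point, and the alignment assumption used here are hypothetical.

```java
/**
 * A minimal sketch (not the patented formula) of deriving the first screen coordinate
 * position from the four inputs listed in the claim. The assumption here is that a
 * preset contact point, given as width/height fractions, should coincide in the
 * original element and in the floating shadow frame.
 */
final class ShadowFramePosition {
    /** Returns {left, top} screen coordinates computed for the floating shadow frame. */
    static float[] firstScreenPosition(
            float elementLeft, float elementTop,        // screen coordinate position of the first element
            float elementWidth, float elementHeight,    // size of the first element
            float frameWidth, float frameHeight,        // size of the first image frame
            float presetContactX, float presetContactY) { // preset contact point as 0..1 fractions (assumption)
        // Point inside the original element that the preset contact point refers to.
        float anchorX = elementLeft + elementWidth * presetContactX;
        float anchorY = elementTop + elementHeight * presetContactY;
        // Place the frame so that the same fractional point of the frame lands on that anchor.
        float frameLeft = anchorX - frameWidth * presetContactX;
        float frameTop = anchorY - frameHeight * presetContactY;
        return new float[] {frameLeft, frameTop};
    }
}
```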
The display method of the animation effect is applied to scenarios in which a drag interface is called. For example, the drag interface refers to startDragAndDrop in the Android architecture. That is, the animation effect display method of the embodiments of the present application may be adopted in any application scenario that calls the startDragAndDrop interface to respond to a user's drag operation.
Optionally, in order to implement the animation effect display method of the embodiments of the present application, a view extension class and a drag function extension class are added to the native framework to generate the animation effects described in the embodiments of the present application.
In some possible implementations, the first animation effect includes: an animation effect from the first screen coordinate position to a third screen coordinate position, and an animation effect from the third screen coordinate position to the second screen coordinate position, wherein the third screen coordinate position is the contact position corresponding to the user's long press on the first element;
the animation effect from the third screen coordinate position to the second screen coordinate position comprises a displacement animation effect and a scaling animation effect, and the target scaling parameter corresponding to the scaling animation effect is determined based on a preset scaling rule.
That is, the first animation effect includes two stages of animation effects, respectively: a transitional animation effect generated in response to the long-press operation; and a hand-following animation effect in response to the drag operation after the long press. The transitional animation effect generated in response to the long-press operation refers to: an animation effect of moving from the screen coordinate position of the first floating shadow frame (or first screen coordinate position) to the contact position corresponding to the long-press operation (or third screen coordinate position). The hand-following animation effect in response to the drag operation refers to: an animation effect from the contact position corresponding to the long-press operation to the coordinate position where the drag operation stays, including a displacement animation and a scaling animation effect. The first animation effect obtained in this way is smoother, more natural and more vivid.
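As an illustration of the two stages, the following app-level Java sketch animates a stand-in View with standard Android animator APIs; in the patent the animation is rendered by the framework on a layer, so the class, durations and interpolator here are assumptions, not the patented implementation.

```java
import android.animation.AnimatorSet;
import android.animation.ObjectAnimator;
import android.view.View;
import android.view.animation.DecelerateInterpolator;

/** App-level approximation of the two-stage drag animation described above. */
final class DragAnimationSketch {
    static void play(View shadowFrame,
                     float contactX, float contactY,   // third screen coordinate position (long-press contact)
                     float stayX, float stayY,         // second screen coordinate position (drag stay position)
                     float targetScale) {              // target scaling parameter
        // Stage 1: transitional animation from the floating frame's position to the contact point.
        ObjectAnimator toContactX = ObjectAnimator.ofFloat(shadowFrame, View.X, contactX);
        ObjectAnimator toContactY = ObjectAnimator.ofFloat(shadowFrame, View.Y, contactY);
        AnimatorSet stage1 = new AnimatorSet();
        stage1.playTogether(toContactX, toContactY);

        // Stage 2: hand-following animation to the drag stay position, with displacement and scaling.
        ObjectAnimator followX = ObjectAnimator.ofFloat(shadowFrame, View.X, stayX);
        ObjectAnimator followY = ObjectAnimator.ofFloat(shadowFrame, View.Y, stayY);
        ObjectAnimator scaleX = ObjectAnimator.ofFloat(shadowFrame, View.SCALE_X, targetScale);
        ObjectAnimator scaleY = ObjectAnimator.ofFloat(shadowFrame, View.SCALE_Y, targetScale);
        AnimatorSet stage2 = new AnimatorSet();
        stage2.playTogether(followX, followY, scaleX, scaleY);

        AnimatorSet whole = new AnimatorSet();
        whole.playSequentially(stage1, stage2);
        whole.setInterpolator(new DecelerateInterpolator()); // stands in for the animation curve parameter
        whole.start();
    }
}
```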
The first element refers to the dragged object or element, specifically an element inherited from View. The type of the first element is not particularly limited in the embodiments of the present application. The first element may be one or more of the following elements: picture-class elements, text-class elements, file-list-class elements, icon-class elements, uniform resource locator (URL) elements, card-class elements, and other interface elements.
Different zoom rules may be provided for different types of dragged objects.
In some possible implementations, when the first element is a photograph thumbnail, a 1.1-fold scaling rule may be employed for the scaling process.
In some possible implementations, when the first element is a picture class element, the preset scaling rule includes: a scaling rule determined from the height and width of the first element or a scaling rule determined from the height and width of the first image frame.
In some possible implementations, for the case where the first element is a large picture, the preset scaling rule includes the following (see the sketch after these rules):
when the height of the first element is larger than or equal to a first threshold value, scaling according to a first scaling ratio, wherein the first scaling ratio is the first threshold value;
when the height of the first element is smaller than the first threshold and the width of the first element is larger than or equal to a second threshold, scaling according to a second scaling ratio, wherein the second scaling ratio is the second threshold;
when the height of the first element is less than the first threshold and the width of the first element is less than the second threshold, no scaling is performed.
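The rule above might be sketched as follows; the reading that the chosen scaling ratio equals the threshold is taken literally from the wording above, while the threshold values, method and class names are placeholders.

```java
/** Sketch of the preset scaling rule for a large-picture element, as worded above. */
final class PictureScaleRule {
    static float chooseScale(float elementHeight, float elementWidth,
                             float firstThreshold, float secondThreshold) {
        if (elementHeight >= firstThreshold) {
            return firstThreshold;      // first scaling ratio equals the first threshold
        } else if (elementWidth >= secondThreshold) {
            return secondThreshold;     // second scaling ratio equals the second threshold
        } else {
            return 1.0f;                // height and width below both thresholds: no scaling
        }
    }
}
```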
In some possible implementations, the first interface includes a first display area and a second display area, the first element is located in the first display area, the first display area corresponds to the first application, the second display area corresponds to the second application, and the drag operation includes an operation of dragging the first element from the first display area to the second display area, the first application being different from the second application. That is, embodiments of the present application are applicable to scenes dragged across applications.
In order to provide a complete drag effect flow, the embodiments of the present application also design release animation effects. Corresponding release animation effects are designed for different release positions, so that the release animation better matches the user's expectations.
In some possible implementations, the method further includes:
detecting an operation of releasing the first element at a first location, the first location being in the second display area;
in response to the first element being received by the second application, a first release animation effect is displayed, the first release animation effect being an animation effect in which the image frame gradually shrinks until it disappears.
Thus, for a release operation across applications (e.g., a release operation detected on the interface corresponding to the second application), if the dragged object is received by the second application, an animation effect may be generated in which the image frame gradually shrinks until it disappears, so that the user can intuitively see the process in which the dragged object is received by the second application, or the process in which the dragged object drops into the second application, enriching the user's visual experience.
In some possible implementations, the method further includes:
detecting an operation of releasing the first element at a second location, the second location being located in the second display area;
and in response to the first element not being received by the second application, displaying a second release animation effect, the second release animation effect being an animation effect in which the image frame gradually enlarges until it disappears.
Thus, for a release operation across applications (e.g., a release operation detected on the interface corresponding to the second application), if the dragged object is not received by the second application, an animation effect may be generated in which the image frame gradually enlarges until it disappears, so that the user can intuitively see that the dragged object is not received by the second application, enriching the user's visual experience.
In some possible implementations, the method further includes:
detecting an operation of releasing the first element at a third location, the third location being in the first display area;
in response to an operation to release the first element, a third release animation effect is displayed, the third release animation effect being an animation effect that returns from the third position to an original screen coordinate position of the first element.
Thus, for a release operation within the same application, an animation effect may be generated that returns to the original screen position of the first element, so as to enable the user to intuitively see the process of returning the dragged object to the original screen position, enriching the visual experience of the user.
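Taken together, the three release animations described above could be approximated at the application level as in the following Java sketch; the patent implements them in the framework, DragEvent.getResult() is used here merely as a stand-in for "received by the second application", and all durations and scale factors are illustrative.

```java
import android.view.DragEvent;
import android.view.View;

/** App-level approximation of the three release animations. */
final class ReleaseAnimationSketch {
    // Cases 1 and 2: release in the other application's display area.
    static void onDragEnded(View shadowFrame, DragEvent event) {
        if (event.getAction() != DragEvent.ACTION_DRAG_ENDED) return;
        if (event.getResult()) {
            // Received by the second application: shrink until it disappears.
            shadowFrame.animate().scaleX(0f).scaleY(0f).alpha(0f).setDuration(250).start();
        } else {
            // Not received: enlarge until it disappears.
            shadowFrame.animate().scaleX(1.5f).scaleY(1.5f).alpha(0f).setDuration(250).start();
        }
    }

    // Case 3: release inside the same display area, return to the element's original position.
    static void returnToOrigin(View shadowFrame, float originalX, float originalY) {
        shadowFrame.animate().x(originalX).y(originalY).setDuration(250).start();
    }
}
```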
In a second aspect, there is provided an electronic device comprising means for performing any of the methods of the first aspect. The electronic device may be a terminal device or a chip in the terminal device. The electronic device includes an input unit, a display unit, and a processing unit.
When the electronic device is a terminal device, the processing unit may be a processor, the input unit may be a communication interface, and the display unit may be a graphic processing module and a screen; the terminal device may further comprise a memory for storing computer program code which, when executed by the processor, causes the terminal device to perform the method in any of the implementations of the first aspect.
When the electronic device is a chip in the terminal device, the processing unit may be a logic processing unit inside the chip, the input unit may be an input interface, a pin, a circuit, or the like, and the display unit may be a graphics processing unit inside the chip; the chip may also include memory, which may be memory within the chip (e.g., registers, caches, etc.), or memory external to the chip (e.g., read-only memory, random access memory, etc.); the memory is for storing computer program code which, when executed by the processor, causes the chip to perform any of the methods of the first aspect.
In a third aspect, there is provided a computer readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform any one of the methods of the first aspect.
In a fourth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any of the methods of the first aspect.
Drawings
FIG. 1 is a schematic diagram of a hardware system of an electronic device according to an embodiment of the present application;
FIG. 2 is a diagram of an example software system of an electronic device in accordance with an embodiment of the present application;
FIG. 3A is an exemplary diagram of an interface for a drag animation effect provided by an embodiment of the present application;
FIG. 3B is another exemplary diagram of an interface for a drag animation effect provided by an embodiment of the present application;
FIG. 3C is a diagram of yet another example interface for a drag animation effect provided by an embodiment of the application;
FIG. 4 is an exemplary diagram of an interface for releasing animation effects provided by an embodiment of the present application;
FIG. 5 is a diagram of another example interface for releasing animation effects provided by an embodiment of the present application;
FIG. 6 is a diagram of yet another example interface for releasing animation effects provided by an embodiment of the present application;
FIG. 7 is an exemplary diagram of calculating screen coordinate positions of floating shadow frames provided by an embodiment of the present application;
FIG. 8 is a flowchart illustrating an exemplary method for presetting a zoom rule according to an embodiment of the present application;
FIG. 9 is a timing flow diagram illustrating an exemplary method for displaying an animation effect according to an embodiment of the present application;
FIG. 10 is another exemplary timing flow chart of the animation effect display method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
In the embodiments of the present application, unless otherwise indicated, the meaning of "plurality" may be two or more.
Currently, a mobile phone floats an image frame in response to a user's long press on certain content, but does not generate an animation effect. Then, when the user performs a drag operation on the content, the mobile phone displays on the screen a picture of the content moving with the hand. Further, when the user lifts the hand to release the content, the mobile phone responds to the release operation by making the hand-following image frame of the content disappear from the screen directly, without any corresponding disappearing animation effect. The whole dragging process (e.g., the dragging stage and the releasing stage) is therefore stiff and not natural enough, has no animation transition, and provides a poor experience.
In view of the above, the present application provides a display method of an animation effect, in which an electronic device displays a first interface, the first interface including a first element; after a drag operation following a long press on the first element is detected, a first animation effect (also referred to as a drag animation effect) is displayed in response to the drag operation, the first animation effect being an animation effect of gradually transitioning from the screen coordinate position of a first image frame to the position where the drag stays. Compared with an animation effect that directly follows the hand, the first animation effect of the embodiments of the application adds a transitional animation from the floating shadow frame to the contact position corresponding to the long-press operation, so that the dynamic effect of the whole dragging process is smoother, more natural and more vivid, enriching the user's visual experience. Further, the screen coordinate position corresponding to the floating shadow frame is independent of the screen coordinate position at which the drag operation stays. The screen coordinate position corresponding to the floating shadow frame is determined based on the size of the first element, the screen coordinate position of the first element, the size of the first image frame, and a preset contact point position.
Further, the electronic device generates a release animation effect in response to the user's operation of releasing the dragged first element, so that the animation effect of the whole dragging process is more natural, smooth and vivid, better matches user expectations, and significantly improves the user's visual experience.
The embodiments of the application apply to scenarios in which the electronic device responds to a user performing a drag operation on a certain element, content, or object. The animation effect display method of the embodiments of the application is applicable to any scenario involving a drag operation.
In some embodiments, the animation effect display method of the embodiments of the application is applied to scenarios in which a drag interface is called. For example, the drag interface refers to startDragAndDrop in the Android architecture. That is, the animation effect display method of the embodiments of the present application may be adopted in any application scenario that calls the startDragAndDrop interface to respond to a user's drag operation.
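For context, a typical app-side call that enters such a drag flow looks like the following sketch; the ClipData label, the local state and the flag choice are illustrative and not taken from the patent.

```java
import android.content.ClipData;
import android.view.View;

/** Illustrative app-side trigger for a drag handled via the startDragAndDrop interface. */
final class StartDragSketch {
    static boolean startDrag(View firstElement) {
        ClipData data = ClipData.newPlainText("label", "dragged content");
        View.DragShadowBuilder shadow = new View.DragShadowBuilder(firstElement);
        // Since API 24, startDragAndDrop replaces the deprecated startDrag.
        return firstElement.startDragAndDrop(
                data, shadow, /* myLocalState */ null,
                View.DRAG_FLAG_GLOBAL); // allow the drag to cross into another application's window
    }
}
```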
An animation effect (which may be simply referred to as a dynamic effect) is a dynamic picture formed by playing a plurality of image frames frame by frame (for example, the picture elements or contents included in the image frames change), so that the user intuitively experiences a certain real scene.
In some embodiments, the display method of animation effects of the embodiments of the present application is applied to a drag operation scene across applications (i.e., application programs). For example, in response to a user dragging a first element from a first application to a second application, the electronic device displays a corresponding drag effect.
The type of application program in the embodiment of the present application is not particularly limited. The application may be a third party application or a system application. For example, the first application and the second application are both system applications. For another example, the first application is a third party application and the second application is a system application. For another example, the first application is a system application and the second application is a third party application.
The embodiment of the present application does not specifically limit the dragged element or content. The dragged element refers to an element inherited from the View (View). Taking the example that the dragged element is a first element, the first element may be one or more of the following elements: picture class elements, text class elements, file list class elements, icon class elements, uniform resource locator (uniform resource locator, URL) elements, card class elements, and other interface elements.
Before describing a method for displaying an animation effect according to an embodiment of the present application, a hardware system and a software system according to an embodiment of the present application will be described with reference to fig. 1 and 2.
Referring to fig. 1, fig. 1 shows a schematic diagram of a hardware system of an electronic device according to an embodiment of the present application.
The apparatus 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the embodiments of the present application do not limit the specific type of the apparatus 100.
The apparatus 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 1 does not constitute a specific limitation on the apparatus 100. In other embodiments of the application, the apparatus 100 may include more or fewer components than those shown in FIG. 1, or the apparatus 100 may include a combination of some of the components shown in FIG. 1, or the apparatus 100 may include sub-components of some of the components shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices.
In some possible embodiments, the processor 110 is configured to detect a drag operation after long pressing of the first element; in response to the drag operation, invoking a display screen 194 to display a first animation effect, the first animation effect being a drag animation effect moving from a first screen coordinate position, which is a screen coordinate position of a first image frame of the first animation effect, to a second screen coordinate position, which is a screen coordinate position corresponding to when the drag operation is stopped, the first screen coordinate position being determined based on a size of the first element, the screen coordinate position of the first element, the size of the first image frame, and a preset contact point position; the first animation effect is generated from one or more of: the first screen coordinate position, the second screen coordinate position, a target scaling parameter, and an animation curve parameter.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces.
The connection relationships between the modules shown in fig. 1 are merely illustrative, and do not constitute a limitation on the connection relationships between the modules of the apparatus 100. Alternatively, the modules of the apparatus 100 may be combined by using a plurality of connection manners in the foregoing embodiments.
The device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 may be used to display images or video. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini light-emitting diode (Mini LED), a Micro light-emitting diode (Micro LED), a Micro OLED (Micro OLED), or a quantum dot LED (quantum dot light emitting diodes, QLED). In some embodiments, the apparatus 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In some embodiments, the display screen 194 may be used to display an animation effect, such as a lightning animation effect of a weather application.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a device comprising at least two parallel plates with conductive material, and when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the apparatus 100 detects the touch operation according to the pressure sensor 180A. The device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon; and executing the instruction of newly creating the short message when the touch operation with the touch operation intensity being larger than or equal to the first pressure threshold acts on the short message application icon.
The touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also referred to as a touch control screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor 180K may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the device 100 at a location different from that of the display screen 194.
The keys 190 include a power key and a volume key. The keys 190 may be mechanical keys or touch keys. The device 100 may receive signals input by the keys 190 and perform functions related to the signals input by the keys 190.
The motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The hardware system of the apparatus 100 is described in detail above, and the software system of the apparatus 100 is described below. The software system may employ a layered architecture, an event driven architecture, a microkernel architecture, a micro-service architecture, or a cloud architecture, and embodiments of the present application illustratively describe the software system of the apparatus 100.
As shown in fig. 2, the software system using the layered architecture is divided into several layers, each of which has a clear role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, the software system is divided, from top to bottom, into an application layer, an application framework layer, a system runtime layer, and a kernel layer (Linux kernel).
The application layer may include a plurality of applications. As shown in fig. 2, the application layer may include a first application, a second application, and the like.
The first application is a different application program than the second application. The types of the first application and the second application are not particularly limited in the embodiment of the application. Both may be third party applications, or both may be system applications, or one may be system applications, and one may be third party applications. For example, the first application is a gallery application and the second application is a notes application. For another example, the first application is a file class application and the second application is a mailbox application.
It will be appreciated that the applications included in the application layer may be third party applications or system applications. It will also be appreciated that other applications may be included in the application layer, for example, the application layer may also include settings, cameras, calendars, music, gallery, conversation, maps, navigation, WLAN, bluetooth, video, short messages (not shown in fig. 2).
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer may include some predefined functions.
As shown in FIG. 2, the application framework layer includes a View framework. The view framework includes: View, DragShadowBuilder, SurfaceControl, an extended view class, Session, and ViewRootImpl.
View is a basic element that constitutes an interface. View may contain rich controls that may respond to user operational events. When the system interacts with the user, the system transmits the event to the View module, and the View module makes a corresponding response operation. Stated another way, view is the basic building block of the user interface component responsible for image interface rendering and event handling. The user can operate the interface through the View module.
DragShadowBuilder is used to create a shadow, that is, the shadow generated when a View is dragged, i.e., a View that follows the finger's movements. For example, the native architecture by default generates a shadow with transparency that is otherwise the same as the original View.
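For reference, an application can customize this shadow by subclassing View.DragShadowBuilder as in the sketch below; the 1.1x factor only echoes the thumbnail scaling rule mentioned earlier and is not mandated by the patent.

```java
import android.graphics.Canvas;
import android.graphics.Point;
import android.view.View;

/** Illustrative custom drag shadow that draws the original View scaled up slightly. */
final class ScaledShadowBuilder extends View.DragShadowBuilder {
    private static final float SCALE = 1.1f;

    ScaledShadowBuilder(View view) {
        super(view);
    }

    @Override
    public void onProvideShadowMetrics(Point outShadowSize, Point outShadowTouchPoint) {
        View v = getView();
        int w = (int) (v.getWidth() * SCALE);
        int h = (int) (v.getHeight() * SCALE);
        outShadowSize.set(w, h);
        outShadowTouchPoint.set(w / 2, h / 2); // touch point at the shadow centre
    }

    @Override
    public void onDrawShadow(Canvas canvas) {
        canvas.scale(SCALE, SCALE);
        getView().draw(canvas); // draw the original View scaled up as the shadow
    }
}
```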
SurfaceControl is used to create a Surface. Session is used to manage window sessions.
ViewRootImpl is the top of the view hierarchy and can be understood as the manager of the root view of all views in a window. ViewRootImpl connects View and the WMS, and controls the drawing of the window.
The extended view class is a class newly added in the embodiments of the present application; that is, the native architecture does not include the extended view class. The extended view class is used to implement the drawing of the floating shadow frames of the embodiments of the present application. In some embodiments, the extended view class is used to determine the screen coordinate position of the floating shadow frame. In some embodiments, the extended view class is also used to determine the zoom size of the image frames in the drag animation effect based on the width and height of the mobile phone screen, the size of the floating shadow frame, or the size of the original View.
It is to be understood that the classes included in the View (View) framework shown in fig. 2 are exemplary descriptions, and embodiments of the present application are not limited thereto. In particular implementations, other classes or modules that are invoked when drawing an image frame may also be included in the View (View) framework.
Illustratively, as shown in FIG. 2, the application framework layer also includes a window manager service (WMS). The WMS may be used for window management, window animation management, surface management, and as a transfer station for the input system. The WMS may create and manage windows. The WMS is a service provided by the window manager. The window manager may also obtain the display screen size, determine whether there is a status bar, lock the screen, and capture screenshots.
The WMS may also be referred to as a window manager, a window management module, a window service module, or the like. As shown in fig. 2, the WMS includes a drag-and-drop controller (DragDropController), a drag state class (DragState), and a SurfaceControl.Transaction class.
DragDropController is used to control the method or step corresponding to executing the drag event.
DragState is used to save information during a drag, including but not limited to the drag event, whether the state is being dragged, etc.
The SurfaceControl.Transaction class is used to apply the parameters related to the animation effects (e.g., the drag animation effect and the release animation effects) of the embodiments of the application to a created layer.
In the embodiment of the present application, the view extension class in the application layer and the drag function extension class in the application framework layer are extension classes of the embodiment of the present application, and are used to implement the display method of the animation effect provided by the embodiment of the present application, and are not native classes in the android system architecture. In other words, in order to implement the display method of the animation effect according to the embodiment of the present application, the view extension class and the drag function extension class may be called, and the corresponding animation effect may be implemented in combination with the native class in the android architecture.
It is to be understood that the classes included in the WMS shown in fig. 2 are exemplary descriptions, and embodiments of the present application are not limited thereto. In particular implementations, other classes or modules may be included in the WMS that are invoked when drawing image frames.
The specific functions or roles of the respective classes included in the view frame and the respective classes included in the WMS will be described in detail later with reference to the timing diagrams in fig. 9 or 10.
It is understood that the application framework layer may also include a View System (View System). The view system is a user interface system of the android application. The view system is responsible for displaying the user interface of the application. A developer may use the view system to create a variety of user interfaces, such as buttons, text boxes, drop-down boxes, and the like.
Optionally, the application framework layer may also include a shader (Shader). A Shader is used for coloring during the drawing process. For example, the shaders include a bitmap shader (BitmapShader).
Optionally, the application framework layer (not shown) may also include an Android graphics shading language (AGSL) library, an open graphics library for embedded systems (OpenGL ES) framework, an activity manager, a window animation module, a resource module, a package manager service (PMS), and an input manager.
OpenGL ES is an Application Programming Interface (API) for embedded systems. In the android architecture, developers can create and operate two-dimensional graphics and three-dimensional graphics using OpenGL ES. An open graphics library framework for embedded systems may include multiple base classes for creating and manipulating graphics.
The activity manager may provide activity management services (activity manager service, AMS) that may be used for system component (e.g., activity, service, content provider, broadcast receiver) start-up, handoff, scheduling, and application process management and scheduling tasks.
The window animation module is used for adding corresponding animations for displaying, disappearing, hiding and the like of the window after the window is created, and displaying the whole animation process on a screen.
The resource module is used for loading animation resources, for example, loading image frames corresponding to the animation resources.
The input manager may provide input management services (input manager service, IMS). IMS may be used to manage system inputs such as touch screen inputs, key inputs, sensor inputs, etc. The IMS retrieves events from the input device node and distributes the events to the appropriate windows through interactions with the WMS.
Optionally, the application framework layer further includes a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, and phonebooks.
The view system includes visual controls, such as controls to display text and controls to display pictures. The view system may be used to build applications. The display interface may be composed of one or more views, for example, a display interface including a text notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide communication functions of the device 100, such as management of the call state (e.g., connected or hung up).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, and video files.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as a notification manager, is used for download completion notification and message alerting. The notification manager may also manage notifications that appear in the system top status bar in the form of charts or scroll bar text, such as notifications for applications running in the background. The notification manager may also manage notifications that appear on the screen in the form of dialog windows, such as prompting text messages in status bars, sounding prompts, vibrating electronic devices, and flashing lights.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing functions such as management of object life cycle, stack management, thread management, security and exception management, garbage collection and the like.
The system runtime layer may include a three-dimensional graphics processing library (e.g., the open graphics library for embedded systems (OpenGL ES)), a 2D graphics engine (e.g., the Skia graphics library (SGL)), a display composition (SurfaceFlinger) module, a media library, a browser kernel (WebKit), Android Runtime, a core library, and the like.
Three-dimensional graphics processing libraries may be used to implement three-dimensional graphics drawing, image rendering, compositing, and layer processing.
The two-dimensional graphics engine is a drawing engine for 2D drawing. SurfaceFlinger may also be referred to as a layer composition module. The SurfaceFlinger module is used for compositing the layers and sending the composited layers for display, so that the content in a window is displayed on the display screen. An animation effect corresponds to a layer instance in the SurfaceFlinger module.
The media library supports playback and recording of multiple audio formats, playback and recording of multiple video formats, and still image files. The media library may support a variety of audio and video coding formats, such as MPEG4, H.264, moving picture experts group audio layer III (MP3), advanced audio coding (AAC), adaptive multi-rate (AMR), joint photographic experts group (JPG), and portable network graphics (PNG). The browser kernel is used to call the system browser.
Android Runtime includes a core library and virtual machines. Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The kernel layer (or driver layer) is the lowest layer of the software stack. The driving layer is used for managing and controlling the hardware equipment. The driver layer may include a graphics processor driver (GPU driver), camera driver, display driver, binder driver, audio driver (not shown in fig. 2), sensor driver, and the like.
In the software architecture shown in fig. 2, the application layer and the system service layer may communicate across processes. The communication mode can adopt an inter-process communication mechanism (Binder), and can also rely on a Binder driver to realize communication.
It should be understood that the hierarchy shown in FIG. 2 does not constitute a specific limitation on the software system of apparatus 100. In other embodiments of the present application, the software system of apparatus 100 may include more or less than the hierarchical architecture shown in fig. 2, or each layer of the software system of apparatus 100 may include more or less than the constituent structures shown in fig. 2, and embodiments of the present application are not limited in this respect.
For example, based on the software architecture shown in fig. 2, when the electronic device displays the interface of an application (such as a first application), in response to detecting a drag operation following a long press on a first element, a first animation effect may be displayed on the screen, the first animation effect being an animation effect from the screen coordinate position of a first image frame to the screen coordinate position corresponding to where the drag operation stays. Further, the electronic device displays a release animation effect in response to the user's operation of releasing the first element.
For easy understanding, the following describes in detail the method for displaying the animation effect provided by the embodiment of the application with reference to the accompanying drawings by taking the electronic device as an example of a folding screen mobile phone.
It should be appreciated that the first element on which the long press operation or drag operation is performed may be any type of element. The interface example of the animation effect generated in the drag process is described below by taking a first application as a gallery application and a second application as a note application, and taking a picture thumbnail in the first application as an example.
In the embodiments of the application, the drag animation effect includes two stages of animation effects, respectively: a transitional animation effect generated in response to the long-press operation; and a hand-following animation effect in response to the drag operation after the long press. The transitional animation effect generated in response to the long-press operation refers to: an animation effect of moving from the screen coordinate position of the first floating shadow frame (or first screen coordinate position) to the contact position corresponding to the long-press operation (or third screen coordinate position). The hand-following animation effect in response to the drag operation refers to: an animation effect from the contact position corresponding to the long-press operation to the coordinate position where the drag operation stays, including a displacement animation and a scaling animation effect. The interfaces shown in (1) to (3) in fig. 3A are described in detail below.
Fig. 3A (1) shows an interface (corresponding to a first interface) in which a gallery application and a note application are simultaneously opened by a folding screen mobile phone (hereinafter simply referred to as a mobile phone). As shown in (1) of fig. 3A, the interface includes a first display area 11 and a second display area 12. It should be understood that (1) in fig. 3A shows an example of the first interface, and the embodiment of the present application is not limited thereto.
The first display area 11 is an interface corresponding to a gallery application. For example, currently displayed in the first display area 11 is a photo page of the gallery application. The photo page includes thumbnails of a plurality of photos, for example, thumbnail 101. It will be appreciated that other interface elements of the gallery application are also displayed in the first display area 11, such as a search bar, an album option, a time of day option, an authoring option, and so on. It is also understood that thumbnail 101 is one example of a first element, and embodiments of the present application are not limited thereto.
The second display area 12 is an interface corresponding to a note application, for example, the second display area 12 currently displays an input page of a note item created by the user. It will be appreciated that other interface elements of the note application in the edit state are also displayed in the second display area 12, such as a return control, a cancel input control, a restore cancel content control, a save control, a title, a time of creating the note, a sort option, a style option, a to-do option, an add option, a record option, and so forth.
In the interface shown in fig. 3A (1), the mobile phone generates a drag animation effect from the original position to the user's contact position in response to the user's long-press operation on the thumbnail 101 at the contact point A (or first position).
For example, after the mobile phone detects a long press operation of the user on the thumbnail 101 of the photograph at the contact a shown in fig. 3A (i.e., a dotted circle shown in the drawing), the interface shown in (2) in fig. 3A is displayed. In the interface shown in (2) of fig. 3A, the first display area 11 includes a floating shadow frame 1011.
The floating shadow frame 1011 is displayed above the thumbnail 101 (i.e., original View), and the floating shadow frame 1011 may be referred to as a floating shadow frame for the thumbnail 101. The floating shadow frame 1011 is the first image frame (which may be simply referred to as the first frame) that acts as a drag picture. The floating shadow frame 1011 may also be referred to as a first image frame, or a floating image frame.
Alternatively, the size of the floating shadow frame 1011 may be preset or given. In some embodiments, for the case where the first element is a photo thumbnail, the size of the floating shadow frame 1011 may be the same size as the thumbnail 101.
The drawing process of the floating shadow frame 1011 is not particularly limited in the embodiment of the present application. In some embodiments, the drawing process of the floating shadow frame 1011 includes: extracting pixels of the original View (i.e., the thumbnail 101), filling the frame with the extracted pixels, and at the same time adding some shadow effects (e.g., edge blurring, diffusion, etc.) to the edges of the image frame, thereby realizing a visual shadow effect. A specific way of adding the shadow effect includes: adding a single-layer or multi-layer shadow effect by calling a function for generating a shadow layer in the process of drawing the image frame. The related description of drawing the floating shadow frame will also be given later in connection with the flow in fig. 9.
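For illustration only, a minimal sketch of adding a single-layer shadow while drawing the image frame is given below; it assumes a software Canvas and the Android Paint API, and the class and method names are hypothetical rather than taken from the original text:
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
// Sketch only: fill the image frame with pixels extracted from the original View and
// add one shadow layer for a blurred, diffused edge. Names are illustrative.
class FloatingShadowDrawer {
    void drawFloatingShadowFrame(Canvas canvas, Bitmap originalViewPixels, float w, float h) {
        Paint shadowPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        shadowPaint.setColor(Color.WHITE);
        // Single-layer shadow: blur radius, x/y offset and a translucent shadow color.
        shadowPaint.setShadowLayer(12f, 0f, 4f, Color.argb(64, 0, 0, 0));
        RectF frame = new RectF(0f, 0f, w, h);
        canvas.drawRoundRect(frame, 8f, 8f, shadowPaint);
        // Fill the frame with the pixels extracted from the original View (e.g. thumbnail 101).
        canvas.drawBitmap(originalViewPixels, null, frame, null);
    }
}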
It should be noted that, in the embodiment of the present application, the coordinate position of the floating shadow frame 1011 on the screen is calculated from the coordinate position of the thumbnail 101 on the screen, the size of the thumbnail 101, the size of the floating shadow frame 1011, and the preset touch point position, and is independent of the touch point position (i.e., the contact a) where the user performs the long-press operation. The manner of determining the screen coordinate position of the floating shadow frame 1011 will be described in detail with reference to fig. 7.
The embodiment of the present application is not limited to a specific manner in which the user acts on the long press operation of the thumbnail 101. For example, the user performs a long press operation at the contact point a of the mobile phone screen by a finger. For another example, the user performs a long press operation at the contact point a of the mobile phone screen through the stylus. For another example, the user performs a long press operation at the contact point a of the mobile phone screen through the mouse.
In the embodiment of the present application, a displacement animation effect of transitioning from the floating shadow frame 1011 to the contact a is generated after the user performs the long-press operation at the contact a and before the drag operation is performed. This is described below in connection with fig. 3B.
As shown in (1) in fig. 3B, the second image frame 1012 is included in the first display area 11. The second image frame 1012 is one of the drag animation effects of the transition from the floating shadow frame 1011 to the contact a. Alternatively, the size of the second image frame 1012 may be the same as the size of the floating shadow frame 1011.
Further, as shown in (2) in fig. 3B, the third image frame 1013 is included in the first display region 11. The third image frame 1013 is one of the drag animation effects of the transition from the floating shadow frame 1011 to the contact a. Alternatively, the size of the third image frame 1013 may be the same as the size of the floating shadow frame 1011.
Alternatively, when the user long-presses the thumbnail 101 or further performs a drag operation, the thumbnail 101 may be grayed out to highlight the image frames (e.g., the second image frame 1012 and the third image frame 1013) corresponding to the first animation effect. It should be noted that, while the first element is being long-pressed or dragged by the user, whether the first element is displayed in gray may depend on the implementation of the application; it may or may not be displayed in gray, which is not particularly limited in the embodiment of the present application.
As can be seen from the interfaces shown in (1) in fig. 3A to (2) in fig. 3B, the finger contact position of the user remains the contact a throughout. That is, when the long-press operation has been performed at the contact a and the drag operation has not yet been performed, a transitional animation effect is generated so that the user visually perceives the process of the dragged photo thumbnail 101 moving to the contact a. The transitional animation effect may be regarded as the first-stage animation effect of the drag animation effect; in other words, the transitional animation effect generated after the long-press operation has been performed but before the drag operation is performed serves as the first-stage animation effect of the drag animation effect.
It will be appreciated that after the user performs a long press operation on the thumbnail 101 at the contact point a, the user will typically continue to hold the thumbnail 101 long to perform a drag operation in order to drag the thumbnail 101 to the target location. It will also be appreciated that the drag operation may occur in gallery applications, and may occur across application scenarios. After the drag operation is started, the animation effect of the second stage of the drag animation effect can be generated in real time.
The interface shown in fig. 3B described above may be understood as a transitional animation effect (or drag effect) generated in response to a long press operation.
Shown in fig. 3C is an interface for a user performing a drag operation. The mobile phone displays the interface shown in (1) in fig. 3C in response to a drag operation of the user moving the thumbnail 101 from the contact a to the contact b. As shown in (1) in fig. 3C, a fourth image frame 1014 is included in the first display area 11 of the interface. The fourth image frame 1014 is the image frame that the mobile phone displays on the interface when the user drags the thumbnail 101 to the contact b. The fourth image frame 1014 is one of the image frames of the follow-hand drag animation effect.
When the drag operation starts to be performed, a hand-following drag animation effect is generated based on the floating shadow frame, and the hand-following drag animation effect includes not only a displacement animation effect generated based on the drag trajectory, but also a zoom animation effect. Alternatively, different zoom rules may be employed based on different types of objects being dragged. In some embodiments, for the case where the dragged object is a photo thumbnail, scaling may be performed according to a scaling rule that is 1.1 times larger based on the photo thumbnail.
Illustratively, the size of the fourth image frame 1014 is 1.1 times the size of the thumbnail 101. Specifically, the width of the fourth image frame 1014 is 1.1 times the width of the thumbnail 101, and the height of the fourth image frame 1014 is 1.1 times the height of the thumbnail 101.
In some embodiments, the floating shadow frame 1011 is the same size as the photo thumbnail 101. Accordingly, the size of the fourth image frame 1014 may be 1.1 times the size of the floating shadow frame 1011. Specifically, the width of the fourth image frame 1014 is 1.1 times the width of the floating shadow frame 1011, and the height of the fourth image frame 1014 is 1.1 times the height of the floating shadow frame 1011.
Further, the mobile phone displays an interface as shown in (2) in fig. 3C in response to a drag operation of the user to move the thumbnail 101 from the contact b to the contact C. As shown in (2) in fig. 3C, a fifth image frame 1015 is included in the first display area 11 in the interface. The fifth image frame 1015 is the image frame that the handset displays on the interface when the user drags the thumbnail 101 to contact c.
It can be seen that the drag operation performed in the first display area 11 is shown in both (1) in fig. 3C and (2) in fig. 3C. Of course, the drag operation may occur between different applications, i.e., the user may perform the drag operation across applications.
The mobile phone responds to a drag operation of the user to move the thumbnail image 101 from the contact C to the contact d, and the interface displays an interface as shown in (3) in fig. 3C. As shown in (3) of fig. 3C, a sixth image frame 1016 is included in the second display area 12 in the interface. The sixth image frame 1016 is an image frame that the cell phone displays on the interface when the user drags the thumbnail 101 to the contact point d. The contact d shown in fig. 3C (3) is located at the interface of the second application.
The interface shown in fig. 3C described above may be understood as a follow-hand drag animation effect generated in response to a drag operation.
In some embodiments, the drag animation effect includes: first, a first image frame 1011 (i.e., a floating shadow frame) is generated on the thumbnail 101, and then an animation effect is generated from the position of the floating shadow frame 1011 to the contact a; further, along with the change of the dragging track, displacement animation passing through the contact a, the contact b, the contact c and the contact d successively is generated, so that the transition effect of the animation is smoother. In the process of generating the displacement animation, a scaling animation that is enlarged from the floating shadow frame 1011 to the target size (for example, 1.1 times) is superimposed at the same time.
In summary, with the interfaces shown in (1) to (3) in fig. 3A to 3C, the mobile phone floats a shadow frame (i.e., the aforementioned floating shadow frame 1011) above the thumbnail 101 in response to a drag operation by the user on the thumbnail 101, and generates a drag animation effect based on the floating shadow frame 1011. The drag animation effect includes at least the aforementioned floating shadow frame 1011, the second image frame 1012, the third image frame 1013, the fourth image frame 1014, the fifth image frame 1015, and the sixth image frame 1016.
It should be understood that in the interfaces shown in (1) to (3) in fig. 3A, a portion of an image frame of a drag animation effect is shown, and embodiments of the present application are not limited thereto.
It should also be understood that in the interfaces shown in fig. 3A (1) to 3C (3), the drag trajectory shown is also merely an exemplary description, and embodiments of the present application are not limited thereto.
After reaching the target drag position, the user may perform a release operation with respect to the thumbnail 101. The interface when the thumbnail 101 is released is described below with reference to fig. 4 to 6.
In some embodiments, when the operation of releasing the first element by the user occurs at the interface of the second application, or after the user moves the first element from the first application to the second application, the releasing operation is performed at the target location, where there may be a case where the first element is received by the second application, or there may be a case where the first element is not received by the second application. The following is a detailed description in connection with fig. 4 and 5, respectively.
The mobile phone displays the interface shown in fig. 4 (1) in response to an operation of the user releasing the thumbnail 101 at the contact e (corresponding to the first position). As shown in fig. 4 (1), the contact e is located in the second display area 12, and the seventh image frame 1017 is included in the second display area 12. After the user performs the release operation at the contact e, if the thumbnail 101 is received by the second application, the mobile phone generates an animation effect (or referred to as a first release animation effect) in which the image frame gradually shrinks until it disappears, so that the user can intuitively see that the thumbnail 101 has been received by the second application. Compared with the case where there is no animation effect after the user lifts the finger and releases, the first release animation effect provided by the embodiment of the application is more vivid. For example, another interface displayed after the user performs the release operation is shown in fig. 4 (2), in which the eighth image frame 1018 is included in the second display area 12. It can be seen that the eighth image frame 1018 is smaller in size than the seventh image frame 1017.
It should be appreciated that seventh image frame 1017 and eighth image frame 1018 illustrate only a portion of the image frames in the first release animation effect, and embodiments of the application are not limited thereto.
After the first release animation effect is played, the interface of the mobile phone displays an interface shown in (3) in fig. 4. As shown in (3) of fig. 4, the second display area 12 includes a thumbnail 101. This illustrates that the thumbnail 101 has been received by the notes application and is presented in the notes application's current editing interface.
Optionally, the first release animation effect is generated based on one or more of the following factors: animation curve parameters, scaling parameters, etc. The embodiment of the application does not limit the specific process of generating the first release animation effect.
FIG. 5 illustrates another example interface diagram of a user dragging a thumbnail 101 across applications. The mobile phone responds to an operation by which the user drags the thumbnail 101 from the gallery application (corresponding to the first display area 11) to the contact f (corresponding to the second position) of the first page of the note application (corresponding to the third display area 13), and the interface display is as shown in fig. 5 (1). As shown in (1) of fig. 5, the contact f is located in the third display area 13, and the third display area 13 includes a ninth image frame 1021.
It should be understood that the drag trajectory (dotted line portion) shown in (1) of fig. 5 is merely an exemplary description, and embodiments of the present application are not limited thereto.
Further, the mobile phone displays the interface shown in (2) in fig. 5 in response to the user's operation of releasing the ninth image frame 1021 at the contact f. After the user performs the release operation at the contact f, if the thumbnail 101 is not received by the second application, the mobile phone generates an animation effect (or referred to as a second release animation effect) in which the image frame is gradually enlarged until it disappears, so that the user can intuitively see that the thumbnail 101 has not been received by the second application. Compared with the case where there is no animation effect after the user lifts the finger and releases, the second release animation effect provided by the embodiment of the application is more vivid. For example, another interface displayed after the user performs the release operation is shown in fig. 5 (2), in which the tenth image frame 1022 is included in the third display area 13. It can be seen that the tenth image frame 1022 is larger in size than the ninth image frame 1021.
It should be understood that the ninth image frame 1021 and the tenth image frame 1022 illustrate only a portion of the image frames in the second release animation effect, and the embodiment of the present application is not limited thereto.
Optionally, the second release animation effect is generated based on one or more of the following factors: animation curve parameters, scaling parameters, etc. The embodiment of the application is not limited to a specific process of generating the second release animation effect.
When the second release animation effect is played, the interface of the mobile phone displays an interface as shown in (3) in fig. 5. As shown in (3) of fig. 5, the third display area 13 does not include the thumbnail 101, i.e., the thumbnail 101 is not received by the note application.
In some embodiments, the mobile phone displays a third release animation effect in response to a user's operation of releasing the first element in the first application, the third release animation effect being: an animation effect of returning from the latest contact position when the first element is released (or the contact position corresponding to the release operation) to the original screen coordinate position of the first element.
The generation manner of the third release animation effect is not particularly limited in the embodiment of the present application.
Optionally, the third release animation effect is generated as follows: a difference value between the original coordinate position of the first element and the hand-lift coordinate position is calculated, and the difference value is gradually reduced until it is 0.
Or, alternatively, the third release animation effect is a reverse animation effect of the drag animation effect.
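A rough sketch of the difference-reduction option is given below; it assumes a standard ValueAnimator, and the moveShadowFrameTo helper as well as the variable names are hypothetical:
import android.animation.ValueAnimator;
// Sketch: gradually reduce the difference between the lift position and the
// original position of the first element until it reaches 0.
ValueAnimator returnAnimator = ValueAnimator.ofFloat(0f, 1f);
returnAnimator.addUpdateListener(animation -> {
    float progress = animation.getAnimatedFraction();   // 0 -> 1
    float deltaX = (originViewX - liftX) * progress;     // applied part of the X difference
    float deltaY = (originViewY - liftY) * progress;     // applied part of the Y difference
    // The remaining gap to the original position shrinks to 0 as progress reaches 1.
    moveShadowFrameTo(liftX + deltaX, liftY + deltaY);    // hypothetical helper
});
returnAnimator.start();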
FIG. 6 shows an example diagram of an interface where a user releases a thumbnail 101 at the same application. The mobile phone displays an interface shown in (1) in fig. 6 in response to an operation by the user to drag the thumbnail image 101 from the original coordinate position to the contact point g (corresponding to the third position). As shown in fig. 6 (1), the contact g is located in the first display area 11, and the first display area 11 includes an eleventh image frame 111.
It should be understood that the drag trajectory (dotted line portion) shown in (1) of fig. 6 is merely an exemplary description, and embodiments of the present application are not limited thereto.
Further, the mobile phone displays an interface as shown in (1) in fig. 6 in response to the user's operation to release the eleventh image frame 111 at the contact g. If the user performs a release operation in the gallery application, the handset generates an animation effect (or referred to as a third release animation effect) in which the image frames return to the original screen position of the thumbnail 101, so as to enable the user to intuitively see the process of returning the thumbnail 101 to the original screen position.
For example, fig. 6 (2) shows another interface displayed after the user performs the release operation, and the twelfth image frame 112 is included in the first display area 11. It can be seen that the twelfth image frame 112 is the same size as the eleventh image frame 111, but the screen coordinate positions are different.
For another example, fig. 6 (3) shows another interface displayed after the user performs the release operation, in which the thirteenth image frame 113 is included in the first display area 11. It can be seen that the thirteenth image frame 113 has returned to the original screen position of the thumbnail 101. Further, after returning to the original screen position of the thumbnail 101, the thirteenth image frame 113 automatically disappears, that is, it is no longer displayed in the first display area 11, and the thumbnail 101 is displayed again in the first display area 11; at this point, playback of the third release animation effect ends. That is, for a release operation within the gallery application, the release animation effect generated by the mobile phone is an animation effect of returning from the contact position where the thumbnail is released to the original screen position of the thumbnail. Alternatively, the third release animation effect may be understood as the reverse of the drag animation effect produced when the thumbnail 101 is dragged from the original screen position to the contact g, that is, the plurality of image frames corresponding to that drag animation effect are played in reverse, so that the third release animation effect is obtained.
It will be appreciated that, for convenience of description, the application scenarios shown in fig. 3A to fig. 6 are only described by taking a folding-screen mobile phone as an example, and the embodiments of the present application are not limited thereto. In fact, other types of electronic devices (e.g., a bar-type mobile phone, a tablet) are also suitable. For example, the screen of a bar-type mobile phone is divided into an upper area and a lower area, which may respectively correspond to the interface of the first application and the interface of the second application.
It will also be appreciated that the foregoing description is merely by way of example in which the first element is a photograph thumbnail, and in fact, is applicable to other types of elements.
It will also be appreciated that the foregoing is described with the first application being a gallery application and the second application being a notes application, in fact, applicable to other types of applications. For example, the first application is a file application and the second application is a mailbox application.
As described above, in the embodiment of the present application, in response to the user pressing the first element for a long time, the mobile phone displays (or floats) an image frame, which may be referred to as a floating shadow frame (or a floating image frame), above the first element.
In some embodiments, the screen coordinate position of the floating shadow frame is determined according to one or more of the following factors: the size of the original image frame, the screen coordinate position of the original image frame, the size of the floating shadow frame, and the preset contact position.
The manner in which the coordinate position of the floating shadow frame on the screen (or first screen coordinate position) is determined is described below in connection with fig. 7. As shown in fig. 7, W1 represents the width of the original view, H1 represents the height of the original view, and (originViewX, originViewY) represents the coordinate position of the original view on the screen. W2 represents the width of the floating shadow frame, H2 represents the height of the floating shadow frame, and (viewPositionX, viewPositionY) represents the coordinate position of the floating shadow frame on the screen. thumbOffsetX and thumbOffsetY represent the preset contact position, where thumbOffsetY represents the distance between the preset contact and the upper edge of the floating shadow frame, and thumbOffsetX represents the distance between the preset contact and the left edge of the floating shadow frame.
In some embodiments, if the preset touch point position is not considered, the screen coordinate position of the floating shadow frame provided to the framework interface is the coordinate position corresponding to the upper left corner of the floating shadow frame, so that drawing starts from the upper left corner when the floating shadow frame is drawn. Correspondingly, (viewPositionX, viewPositionY) is calculated as follows:
;
Optionally, the screen coordinate position of the floating shadow frame provided to the framework interface is the center position, so that drawing starts from the center when the floating shadow frame is drawn. In this case the preset contact position needs to be considered. Correspondingly, (viewPositionX, viewPositionY) is calculated as follows:
;
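The formulas themselves are not reproduced in this text. Purely as a hedged illustration of how the quantities in fig. 7 could combine (assuming the floating shadow frame is centered over the original view; this is an assumption, not the patented calculation):
// Assumed reconstruction for illustration only; the original formulas are not shown above.
// Case 1: upper-left-corner convention, preset touch point not considered.
float viewPositionX = originViewX + (W1 - W2) / 2f;
float viewPositionY = originViewY + (H1 - H2) / 2f;
// Case 2: the reported position additionally takes the preset contact into account
// (thumbOffsetX = distance from the left edge, thumbOffsetY = distance from the top edge).
float viewPositionXWithTouch = viewPositionX + thumbOffsetX;
float viewPositionYWithTouch = viewPositionY + thumbOffsetY;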
For example, the original view is the thumbnail image 101 shown in (1) in fig. 3A described above. The floating shadow frame may be the floating shadow frame 1011 shown in (2) of fig. 3A described above. The coordinate position of the floating shadow frame on the screen can be calculated by the aforementioned formula.
It should be noted that in the process of generating the drag animation effect based on the floating shadow frame, the corresponding scaling may be performed using the corresponding scaling rule based on the style or type of the first element, so as to scale the first element or the floating shadow frame to the target size.
In some embodiments, if the first element is a photo thumbnail, the photo thumbnail may be enlarged to 1.1 times its original size when generating the drag animation effect. For example, the size of the fourth image frame 1014 shown in (1) in fig. 3C described above is 1.1 times the size of the thumbnail 101.
In some embodiments, if the first element is a picture large map (or a picture that is larger in size than a photo thumbnail), scaling may be performed according to a preset scaling rule based on the width and height of the picture. For example, scaling the floating shadow frame according to a preset scaling rule.
In some embodiments, if the first element is an irregular picture, scaling may be performed according to a preset scaling rule based on the width and height corresponding to the irregular picture. For example, scaling the floating shadow frame according to a preset scaling rule.
In some embodiments, if the first element is a text class element, scaling is performed according to a preset scaling rule based on a width and a height of a border of the text class element. For example, scaling the floating shadow frame according to a preset scaling rule.
Optionally, the first element is scaled according to a preset scaling rule based on the width and the height of the first element. The judgment logic is described below in connection with fig. 8. As shown in fig. 8, the judgment logic includes:
step 1, after the drag is started, judging whether the height of the first element is greater than or equal to a height threshold value (or a first threshold value).
The height threshold may be set based on actual requirements, which is not limited by the embodiment of the present application. Optionally, the height threshold is 25% of the screen height of the electronic device.
If the height of the first element is greater than or equal to the height threshold, step 2 is performed; if the height of the first element is less than the height threshold, step 3 is performed.
Step 2, scale the first element proportionally according to a first ratio (or first scaling ratio).
For example, the first ratio may correspond to 25% of the screen height of the electronic device, i.e., the first element is scaled proportionally so that its height is 25% of the screen height of the electronic device.
Step 3, judge whether the width of the first element is greater than or equal to a width threshold value (or a second threshold value).
The width threshold may be set based on actual requirements, which is not limited by the embodiment of the present application. Optionally, the width threshold is 80% of the screen width of the electronic device.
If the width of the first element is greater than or equal to the width threshold, then step 4 is performed; if the width of the first element is less than the width threshold, step 5 is performed.
Step 4, scale the first element proportionally according to a second ratio (or second scaling ratio).
For example, the second ratio may correspond to 80% of the screen width of the electronic device, i.e., the first element is scaled proportionally so that its width is 80% of the screen width of the electronic device.
Step 5, scaling the first element is not performed.
That is, in the case where the width of the first element is smaller than the width threshold, the dragging can be performed in accordance with the size of the first element without performing the scaling process on the size of the first element.
It should be appreciated that the method logic flow illustrated in fig. 8 is merely an example of a preset scaling rule, and embodiments of the present application are not limited in this regard.
In fig. 8, the "first element" as the judgment object may be replaced with "floating shadow frame corresponding to the first element". Alternatively, as an implementation, after the user performs the long press operation on the first element, a determination may be made as to whether to perform scaling based on the size of the generated floating shadow frame. For example, step 1 includes: after the dragging is started, judging whether the height of the floating shadow frame corresponding to the first element is larger than or equal to a height threshold value; the step 2 comprises the following steps: scaling the floating shadow frame by a first ratio, and the like, and other steps are not repeated here.
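A compact Java sketch of the judgment logic in fig. 8 is given below; the class and method names are illustrative, and the 25%/80% values are only the optional thresholds and ratios mentioned above:
// Illustrative sketch of the preset scaling rule in Fig. 8; names are hypothetical.
final class PresetScaleRule {
    /** Returns the scale factor to apply to the first element (or its floating shadow frame). */
    static float computeScale(float elementWidth, float elementHeight,
                              float screenWidth, float screenHeight) {
        float heightThreshold = screenHeight * 0.25f;  // optional value: 25% of screen height
        float widthThreshold = screenWidth * 0.80f;    // optional value: 80% of screen width
        if (elementHeight >= heightThreshold) {
            // Step 2: scale proportionally so the height becomes 25% of the screen height.
            return heightThreshold / elementHeight;
        }
        if (elementWidth >= widthThreshold) {
            // Step 4: scale proportionally so the width becomes 80% of the screen width.
            return widthThreshold / elementWidth;
        }
        // Step 5: no scaling; drag with the element's original size.
        return 1.0f;
    }
}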
In some embodiments, if the first element is a file list class element, scaling is performed based on predefined rules corresponding to the file list class element. For example, for the operation of moving a plurality of files in the file list by a user, the plurality of files can be combined into a window with a preset size for display; the drag operation for the window of the preset size may be understood as drag for a plurality of files selected by the user.
The first animation effect is a drag animation effect in which the screen coordinate position corresponding to the first floating shadow frame moves to the coordinate position where the finger stays (the coordinate position where the drag operation stays, or possibly the coordinate position of the long-press operation). The specific generation manner of the animation effect according to the embodiment of the present application is described in detail below.
In some embodiments, the first animation effect is generated from one or more of: the size of the first image frame, the first screen coordinate position, the second screen coordinate position, the target scaling parameter, and the animation parameter.
Optionally, the target scaling parameter is determined based on an initial scaling, a final scaling, and an animation curve parameter.
The initial scaling refers to an initial default scaling value, for example, the initial scaling defaults to 1.
The final scaling is determined based on the aforementioned preset scaling rules or predefined scaling. The final scaling may be determined differently for different types of first elements. As previously described, for the case where the first element is a photo thumbnail, the final scale may be 1.1 times; for a large picture, the final scaling may be determined with reference to the scaling rules described above. For brevity, no further description is provided herein.
The animation curve parameter is a parameter corresponding to the curve used when generating an animation effect. For example, the animation curve adopts a Bezier curve; accordingly, the animation curve parameter is a coordinate value of the Bezier curve.
Illustratively, the duration of the first animation effect is 250 milliseconds, and the corresponding animation curve adopts a Bezier curve. The Bezier curve is a smooth curve drawn based on the coordinates of four points (a start point, an end point, and two control points in between). For example, the coordinates of the control points of the Bezier curve are (x1, y1, x2, y2), where x1, y1 are the coordinates of the first control point and x2, y2 are the coordinates of the second control point.
For example, the coordinates of the Bezier curve are (0.4f, 0.0f, 0.2f, 1.0f), where f indicates floating-point data. When generating the first animation effect, the parameter value of the animation curve may be obtained based on the trend of the Bezier curve. It should be understood that the Bezier curve parameters set above are all adjustable; the above is only an example, and those skilled in the art can set reasonable parameter values based on actual requirements.
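As an illustrative sketch (assuming Android's standard ValueAnimator and PathInterpolator are used to realize this Bezier curve; the variable name is not taken from the original text), the 250 ms animation and its curve could be configured as follows:
import android.animation.ValueAnimator;
import android.view.animation.PathInterpolator;
// Sketch: a 250 ms animation whose curve is the cubic Bezier with control
// points (0.4, 0.0) and (0.2, 1.0) from the example above.
ValueAnimator dragAnimator = ValueAnimator.ofFloat(0f, 1f);
dragAnimator.setDuration(250);
dragAnimator.setInterpolator(new PathInterpolator(0.4f, 0.0f, 0.2f, 1.0f));
// animation.getAnimatedFraction(), read in an update listener, then follows this curve
// and supplies the animation curve parameter used in the formulas below.
dragAnimator.start();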
Illustratively, the target scaling parameter is determined in accordance with the following code:
currentScale=1.0f+(endScale-1.0f)*animation.getAnimatedFraction();
wherein currentScale represents the target scaling parameter; 1.0f is the initial scaling; endScale is the final scaling; the animation curve parameter may be the value of the curve interpolator, which is obtained by calling animation.getAnimatedFraction().
Further, when scaling an image frame (for example, the floating shadow frame or the image frame corresponding to the first element) with the target scaling parameter, the image frame may be scaled based on the preset touch point position, that is, the preset touch point position is used as the scaling pivot, and the width and the height of the image frame are scaled respectively. It can be appreciated that the scaling pivot may be the coordinate point of the upper left corner of the image frame, i.e., the image frame is scaled based on the upper-left coordinate point; the image frame may also be scaled about its center coordinate point, i.e., scaled based on the center point. The preset contact position may also be expressed differently; for example, thumbOffsetX may be named sDragShadowThumbOffsetX and thumbOffsetY may be named sDragShadowThumbOffsetY, so as to assign these parameters into the drag function extension class of the present application.
The calculation result of the target scaling parameter can be saved through a matrix. Illustratively, the scaling value is saved in accordance with the following code:
matrix.setScale(currentScale, currentScale, sDragShadowThumbOffsetX, sDragShadowThumbOffsetY);
wherein the first two parameters in the matrix represent the scaling of the X axis and the Y axis respectively; specifically, the first parameter scales the width (X-axis direction) of the image frame by currentScale, and the second parameter scales the height (Y-axis direction) of the image frame by currentScale. The last two parameters indicate that the image frame is scaled about the contact point (sDragShadowThumbOffsetX, sDragShadowThumbOffsetY). That is, if the image frame is scaled about the center point, sDragShadowThumbOffsetX and sDragShadowThumbOffsetY may also be saved in the matrix.
It will be appreciated that the position of the image frame on the screen will also change after the scaling process.
Illustratively, the displacement offset due to the scaling process may be obtained by:
scaleOffsetX= matrixValues[Matrix.MTRANS_X];
scaleOffsetY= matrixValues[Matrix.MTRANS_Y]。
wherein scaleOffsetX is the displacement offset in the X-axis direction of the image frame after being affected by the scaling process, and scaleOffsetY is the displacement offset in the Y-axis direction of the image frame after being affected by the scaling process.
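Putting these fragments together, a short sketch is given below; it assumes android.graphics.Matrix and adds the getValues() call that the excerpt above leaves implicit:
import android.graphics.Matrix;
// Sketch: obtain the displacement offsets introduced by scaling about the preset contact point.
Matrix matrix = new Matrix();
matrix.setScale(currentScale, currentScale,
        sDragShadowThumbOffsetX, sDragShadowThumbOffsetY);   // pivot = preset contact
float[] matrixValues = new float[9];
matrix.getValues(matrixValues);                              // populate the 3x3 matrix values
float scaleOffsetX = matrixValues[Matrix.MTRANS_X];          // translation on the X axis
float scaleOffsetY = matrixValues[Matrix.MTRANS_Y];          // translation on the Y axis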
Since scaling of the image frame is involved, the scaling process also affects the true position of the image frame (i.e., the corresponding image frame when the drag operation stays). Optionally, the second screen coordinate position is determined based on the first screen coordinate position, the animation curve parameter, and the actual screen position after the drag operation stops.
Illustratively, the second screen coordinate position is determined according to the following code:
currentpositionX=startPositionX+(realEndPositionX-startPositionX)*animation.getAnimatedFraction();
currentpositionY=startPositionY+(realEndPositionY-startPositionY)*animation.getAnimatedFraction();
wherein (currentpositionX, currentpositionY) represents the second screen coordinate position; (startPositionX, startPositionY) represents the first screen coordinate position, i.e., the previously calculated screen position corresponding to the floating shadow frame; (realEndPositionX, realEndPositionY) represents the actual screen position after the drag operation stops; the animation curve parameter may be the value of the curve interpolator, which is obtained by calling animation.getAnimatedFraction(). For the relevant description of the Bezier curve, reference is made to the foregoing, and no further description is given here.
Optionally, the position information of each image frame in the calculated first animation effect is applied to the created layer by calling sDragTransaction, so as to complete the drawing of the image frame. sDragTransaction can be understood as a SurfaceControl.Transaction; a SurfaceControl.Transaction can atomically set properties for a SurfaceControl.
Illustratively, the sDragTransaction () function is called by the following code:
sDragTransaction.setPosition(sDragSurfaceControl, scaleOffsetX + currentpositionX - sDragShadowThumbOffsetX, scaleOffsetY + currentpositionY - sDragShadowThumbOffsetY);
wherein sDragSurfaceControl is the SurfaceControl of the floating shadow frame and may be used to manage the surface of the floating shadow frame; subtracting sDragShadowThumbOffsetX from (scaleOffsetX + currentpositionX) and subtracting sDragShadowThumbOffsetY from (scaleOffsetY + currentpositionY) indicates that the image frame is drawn starting from its upper left corner rather than from the center point position. sDragShadowThumbOffsetX may correspond to thumbOffsetX used above in calculating the floating shadow frame, and sDragShadowThumbOffsetY may correspond to thumbOffsetY used above in calculating the floating shadow frame.
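Purely as an illustration of how the fragments above could be combined in one per-frame update (assuming the calls shown earlier in this section are available; this is not asserted to be the actual implementation):
// Sketch: one animation update step; scaleOffsetX/Y and currentpositionX/Y are
// recomputed each frame exactly as in the fragments quoted above.
dragAnimator.addUpdateListener(animation -> {
    float fraction = animation.getAnimatedFraction();
    float currentScale = 1.0f + (endScale - 1.0f) * fraction;
    float currentpositionX = startPositionX + (realEndPositionX - startPositionX) * fraction;
    float currentpositionY = startPositionY + (realEndPositionY - startPositionY) * fraction;
    matrix.setScale(currentScale, currentScale, sDragShadowThumbOffsetX, sDragShadowThumbOffsetY);
    matrix.getValues(matrixValues);
    // Draw from the upper left corner of the image frame rather than from the contact point.
    sDragTransaction.setPosition(sDragSurfaceControl,
            matrixValues[Matrix.MTRANS_X] + currentpositionX - sDragShadowThumbOffsetX,
            matrixValues[Matrix.MTRANS_Y] + currentpositionY - sDragShadowThumbOffsetY);
    sDragTransaction.apply();   // apply the transaction so the new position takes effect
});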
It will be appreciated that other animation parameters, such as transparency parameters, may be added when generating the animation effect according to the embodiments of the present application, which are not limited in particular.
To facilitate an understanding of the specific implementation of creating an animation effect of an embodiment of the present application, the following description is made in conjunction with the timing diagrams shown in fig. 9 and 10.
Referring to fig. 9, fig. 9 shows an exemplary interaction diagram for creating a drag animation effect by an application program after receiving a long-press drag operation of a user according to an embodiment of the present application.
It should be appreciated that the interaction procedure provided in fig. 9 may be applied in fig. 1 and fig. 2. An application (e.g., the gallery application) is installed in the device 100. Optionally, in the software architecture of the device 100 shown in fig. 2, the application, View, DragShadowBuilder class, surface controller (SurfaceControl), View extension class, Session, and ViewRootImpl class in fig. 9 may be located in the application layer of the software system shown in fig. 2; the drag controller (DragDropController), drag state (DragState), SurfaceControl.Transaction, and drag function extension class in fig. 9 are located in the framework layer (e.g., WMS) of the software system shown in fig. 2.
As shown in fig. 9, the specific steps for creating the animation effect by the application program in the embodiment of the present application are as follows:
Step 1, the application calls the drag interface.
Taking the interface shown in (1) in fig. 3A as an example, the mobile phone calls a drag interface in response to an operation of the user to press the thumbnail 101 for a long time.
For example, the drag interface is invoked by the following code:
startDragAndDrop(ClipData data, DragShadowBuilder shadowBuilder, Object myLocalState, int flags).
it should be appreciated that the above example is described by way of example only with respect to a user long press thumbnail 101, and embodiments of the present application are not limited thereto.
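A usage sketch is given below; it assumes the standard View.startDragAndDrop API and an ordinary long-click listener, and the view name and ClipData payload are illustrative:
import android.content.ClipData;
import android.view.View;
// Sketch: trigger the drag from a long press on the thumbnail view.
thumbnailView.setOnLongClickListener(v -> {
    ClipData data = ClipData.newPlainText("photo", "thumbnail-101");      // illustrative payload
    View.DragShadowBuilder shadowBuilder = new View.DragShadowBuilder(v); // builds the shadow
    return v.startDragAndDrop(data, shadowBuilder, null /* myLocalState */,
            View.DRAG_FLAG_GLOBAL);                                       // allow cross-app drag
});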
Step 2, the View calls DragShadowBuilder to obtain the shadow size and the preset contact position (i.e., the finger position relative to the shadow).
Optionally, the shadow size is provided by the application. The shadow size mentioned here may correspond to the size of the floating shadow frame in fig. 7 described above, namely W2 and H2.
Alternatively, the preset contact position may also be provided by the application. For example, the preset contact position can also be understood as the position of the center point relative to the width and height of the shadow. The preset contact position mentioned here may correspond to thumbOffsetX and thumbOffsetY in fig. 7 above.
In some embodiments, the shadow size may be obtained by calling the onProvideShadowMetrics function.
For example, the shadow size is obtained by the following code:
onProvideShadowMetrics(Point shadowSize,Point shadowTouchPoint);
wherein shadowSize represents the shadow size, and shadowTouchPoint represents the preset contact position.
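A minimal sketch of a custom DragShadowBuilder providing these two values is shown below; the class name and the concrete values are illustrative:
import android.graphics.Point;
import android.view.View;
// Sketch: report the shadow size (W2, H2) and the preset contact position.
class FloatingShadowBuilder extends View.DragShadowBuilder {
    FloatingShadowBuilder(View view) {
        super(view);
    }
    @Override
    public void onProvideShadowMetrics(Point shadowSize, Point shadowTouchPoint) {
        shadowSize.set(getView().getWidth(), getView().getHeight());  // shadow size
        shadowTouchPoint.set(shadowSize.x / 2, shadowSize.y / 2);      // preset contact position
    }
}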
Step 3, DragShadowBuilder creates a SurfaceControl according to the shadow size.
The SurfaceControl created here can be understood as a surface manager for managing a surface.
Step 4, the View draws the shadow content onto the surface.
Specifically, a canvas is obtained from the SurfaceControl created in step 3, and the shadow content is drawn onto the surface. The shadow content is the pixels or features extracted from the original view; it may also be understood as filling the content extracted from the original view onto the surface.
For example, the canvas from step 3 is obtained through the call interface onDrawShadow(Canvas canvas), thereby realizing the drawing of the shadow content.
Optionally, step 4 includes: and according to different element types, different drawing is performed.
In some embodiments, for the case where the first element is a picture (e.g., a photo thumbnail, a photo large picture), the pixel points of the picture may be extracted for rendering.
In some embodiments, for the case that the first element is a video, pixel point extraction may be performed on the content of the first frame or the cover of the video, so as to implement drawing of shadow content.
In some embodiments, for the case where the first element is text, the bitmap of the text may be passed into surface for rendering.
Illustratively, taking the case where the first element is a photo thumbnail as an example, after the pixel points of the picture are extracted and drawn, multiple layers (for example, 3 layers) of shadow or a single-layer shadow are drawn, so as to obtain a floating shadow frame with shadow effects such as blurring and diffusion.
It should be noted that, the floating shadow frame mentioned in the embodiment of the present application refers to the entire image frame, and does not refer to only the shadow displayed at the edge of the image. Taking the example in fig. 7 as an example, the floating shadow frame is an image frame overlaid on the original view, and does not refer to a filled area around the original view in fig. 7, and the content filled in the image frame can be obtained by extracting pixels of the original view.
Step 5, the View calculates the scaling ratio.
In some embodiments, the final zoom size used in constructing the drag animation effect is calculated from the screen width and height and the size of the floating shadow frame or the size of the original view. The preset scaling rules and the scaling modes based on different styles are described in the foregoing, and are not repeated here.
Illustratively, the final scale size is calculated by the following code:
calculateShadowTargetScale(int ShadowType,Point shadowSize/viewSize)。
it should be noted that, step 5 is a step newly added in the embodiment of the present application, and the android native method does not have this step.
Step 6, the View calculates the coordinate position of the floating shadow frame on the screen.
Specifically, the position of the floating shadow frame on the screen may be calculated based on the original View, such as the aforementioned (viewPositionX, viewPositionY), or the aforementioned surface coordinate position on the screen.
Illustratively, step 6 calculates the coordinate position of the floating shadow frame on the screen by:
getDragViewPosition(DragView,ShadowView,Point shadowSize,Point shadowTouchPoint)。
it should be noted that, step 6 is a step newly added in the embodiment of the present application, and the android native method does not have this step.
Step 7, view passes relevant information required for constructing the animation to a system service (System Server) process, such as Session.
That is, the purpose of step 7 is to pass the relevant parameters or information for the subsequent build drag animation effect to the system service process.
In some embodiments, the relevant information required to construct the animation includes one or more of the following: applying a window (window), dragging the surfaceControl, the scaling calculated in step 5, the coordinate position of the floating shadow frame on the screen calculated in step 6, and other information related to the floating shadow frame. Reference may be made to the foregoing description for a determination of various terms or concepts, which are not repeated here.
Illustratively, step 7 is implemented by the following code: performDragWithAnim (IWindow windows, int flags, surfaceControl surface, int touchSource, float viewPositionX, float viewPositionY, float viewTargetScale, boolean isNeedAlpha, float touchX, float touchY, float thumbCenterX, float thumbCenterY, clipData data).
It should be noted that, step 7 is a step newly added in the embodiment of the present application, and the android native method does not have this step.
Step 8, the Session calls DragDropController to execute the drag (performDrag).
Step 9, DragDropController constructs a drag state (DragState).
DragState is used to save some information during the drag. Alternatively, DragState is used to process events during the drag and prepares for the subsequent realization of the animation.
Step 10, DragDropController registers event reception (registerInput).
Since the finger is always moving during the drag, registration event reception is required so that the position of the finger relative to the screen can be always obtained when the finger moves.
Step 11, recording the initial position of the animation (i.e. the coordinate position of the floating shadow frame on the screen).
The purpose of step 11 is to save, in the WMS, the coordinate position on the screen of the floating shadow frame calculated in step 6, for subsequent use in generating the drag animation effect.
For example, the mobile phone generates a release animation effect based on the coordinate position of the floating shadow frame on the screen, in response to the operation of the user lifting the hand to release the first element.
Illustratively, the animation initial position is recorded by the following code:
setActTouchPoints(viewPositionX,viewPositionY)。
it should be noted that, step 11 is a step newly added in the embodiment of the present application, and the android native method does not have this step.
Step 12, set the coordinate position of the first frame on the screen.
The first frame here is the floating shadow frame described above, which is defined as the first image frame of the drag animation effect. In the subsequent steps, 'first frame' is used in place of 'floating shadow frame'.
Illustratively, the coordinate position of the first frame on the screen is set by the following code:
setPosition(viewPositionX-thumbCenterX,viewPositionY-thumbCenterY);
The details regarding viewPositionX and viewPositionY, as well as thumbCenterX and thumbCenterY, are described in detail above in connection with fig. 7 and are not repeated here for brevity.
Step 13, display the first frame of the drag animation effect, namely the floating shadow frame, through SurfaceControl.
Step 14, start the drag animation effect (or first animation effect) through DragState.
Step 14 can be understood as: starting to generate a displacement animation that gradually transitions from the first frame to the finger position, with a scaling animation superimposed, i.e., the follow-hand drag animation effect.
Illustratively, the drag animation effect is started by the following code:
StartDragAnimator(viewPositionX,viewPositionY,viewTargetScale,isNeedAlpha);
wherein viewTargetScale represents the target scaling value, and isNeedAlpha represents the transparency parameter in the animation effect.
Step 15, set some of the parameters required for the drag animation effect.
For example, some parameters required for the animation effect are transferred into the drag function extension class.
Optionally, the partial parameters include, but are not limited to: window information, preset contact position.
Illustratively, the part of the parameters required in the animation process are set by the following code:
setDragDropInfo(SurfaceControl,Transaction,ThumbOffsetX,ThumbOffsetY)。
and step 16, creating a drag animation effect based on the corresponding screen position when the drag operation stays, the zoom value and the coordinate position of the first frame on the screen.
Illustratively, a drag animation effect is created by:
createStartDragAnimator(viewPositionX,mCurrentX,viewPositionY,mCurrentY,scale);
wherein viewPositionX and viewPositionY are the coordinate positions of the first frame on the screen calculated in step 6; mCurrentX and mCurrentY are the screen positions corresponding to where the drag operation stays; scale is the scaling value calculated in step 5 above.
It should be noted that the purpose of introducing steps 15 and 16 is to decouple from the native framework, i.e., a drag function extension class (or other custom class) is introduced to generate the drag animation effect. Of course, in a specific implementation, the drag function extension class may not be introduced, that is, the native framework may be adaptively modified to generate the drag animation effect (or the first animation effect), which is not specifically limited.
It should be understood that the time-series interaction flow shown in fig. 9 is introduced for ease of understanding only, and is an exemplary description, and is not intended to limit the display method of the animation effect of the embodiment of the present application to the scene shown in fig. 9.
Fig. 10 shows a flow chart when the user lifts his hand to release the first element. It should be understood that fig. 9 and fig. 10 may be implemented in combination or may be implemented separately, which is not particularly limited. For example, after performing the method flow described in fig. 9, if a user's lift release operation is detected, the method flow illustrated in fig. 10 may be continued. For another example, after the drag drawing effect is generated by using other method flows, if the user's hand-up release operation is detected, the method flow illustrated in fig. 10 may be continuously performed. As shown in fig. 10, the method includes:
Step 17, the application passes the release event to ViewRootImpl.
That is, after detecting that the user has lifted the hand to release the first element, the application may pass the release event to the framework layer via ViewRootImpl.
Illustratively, a user's hand-up release event may be communicated by the following code:
dispatchDragEvent(DragEvent event)。
the application in step 17 may be the same application as the application in fig. 9, or may be a different application, which is not particularly limited.
Illustratively, the application in FIG. 9 is a gallery application, and the application in FIG. 10 is a gallery application; alternatively, the application in FIG. 9 is a gallery application and the application in FIG. 10 is a notes application.
In some embodiments, for a cross-application drag scenario, the release operation may be detected by the first application or the second application.
Step 18, ViewRootImpl returns a Boolean value.
The boolean value is used to indicate whether the dragged first element was received by the application. The value of the boolean value determines which release animation effect is to be generated later.
For example, in a scenario where a photo thumbnail is dragged to a notes application, if the notes application receives a photo thumbnail, then the transferred boolean value is 1; if the notes application did not receive a photo thumbnail, the transferred Boolean value is 0.
Illustratively, the hand-up release event may be handled by the following code:
handleDragEvent(DragEvent event).
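On the receiving side, this Boolean typically originates from the drop target's drag listener; a sketch under the standard View.OnDragListener API follows, where the view name and the insertImage helper are hypothetical:
import android.view.DragEvent;
// Sketch: the return value of ACTION_DROP is what indicates whether the first
// element was received, and thus which release animation effect is constructed.
noteEditorView.setOnDragListener((v, event) -> {
    switch (event.getAction()) {
        case DragEvent.ACTION_DRAG_STARTED:
        case DragEvent.ACTION_DRAG_ENTERED:
        case DragEvent.ACTION_DRAG_LOCATION:
            return true;                                   // keep receiving drag events
        case DragEvent.ACTION_DROP:
            return insertImage(event.getClipData());       // hypothetical helper; true = received
        default:
            return false;
    }
});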
Step 19, ViewRootImpl delivers the result of whether the first element is received to the Session.
Illustratively, ViewRootImpl may deliver the received result by the following code:
reportDragResult(mwindow,result)。
in step 20, the session passes the result of whether the first element was received to DragDropController.
For example, session may pass the received result through the following code:
reportDragResult(window,consumed)。
Step 21, DragDropController ends the drag state, for example, by calling the interface endDragLock().
Step 22, construct the corresponding release animation effect based on the judgment result.
Illustratively, if the judgment result is that the photo thumbnail is received by the notes application, an animation effect of shrinking before disappearing is generated; reference may be made to the first release animation effect shown in fig. 4.
If the judgment result is that the photo thumbnail is not received by the notes application, it is judged whether the application where the window is located when the release event is detected and the gallery application are the same application.
If the application where the window is located when the release event is detected is the gallery application, that is, the dragged photo thumbnail is released within the gallery application, an animation effect of returning to the original position is generated; reference may be made to the third release animation effect shown in fig. 6.
If the release operation is detected in a different application, for example, in the notes application, an animation effect of enlarging and then disappearing is generated at the position corresponding to the release operation; reference may be made to the second release animation effect shown in fig. 5 above.
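A condensed sketch of this branch logic is given below; the helper names are hypothetical and only mirror the three cases described above:
// Sketch of step 22: choose the release animation effect from the judgment result.
void buildReleaseAnimation(boolean consumed, boolean releasedInSourceApp) {
    if (consumed) {
        playShrinkThenDisappearAnimation();   // first release animation effect (Fig. 4)
    } else if (releasedInSourceApp) {
        playReturnToOriginAnimation();        // third release animation effect (Fig. 6)
    } else {
        playEnlargeThenDisappearAnimation();  // second release animation effect (Fig. 5)
    }
}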
It will be appreciated that embodiments of the present application are not limited to a specific implementation of generating the release animation effect in step 22. Illustratively, reference may be made to the related flow for constructing the drag animation effect in fig. 9; specifically, for example, the release animation effect may be constructed based on the drag function extension class.
In summary, the animation effect display method of the present application provides a drag animation effect during the drag process, where the first animation effect gradually transitions from the floating shadow frame to the position where the drag stays. Further, the electronic device generates a release animation effect in response to the operation of the user releasing the dragged first element, so that the animation effect in the dragging process is more natural, smooth and vivid, and conforms to user expectations, thereby significantly improving the visual experience of the user.
Embodiments of the present application provide a chip system including one or more processors configured to invoke from a memory and execute instructions stored in the memory, so that the method of the embodiments of the present application described above is performed. The chip system may be formed of a chip or may include a chip and other discrete devices.
The chip system may include an input circuit or interface for transmitting information or data, and an output circuit or interface for receiving information or data, among other things.
The application also provides a computer program product which, when executed by a processor, implements the method of any of the method embodiments of the application.
The computer program product may be stored in a memory and eventually converted to an executable object file that can be executed by a processor through preprocessing, compiling, assembling, and linking.
The application also provides a computer readable storage medium having stored thereon a computer program which when executed by a computer implements the method according to any of the method embodiments of the application. The computer program may be a high-level language program or an executable object program.
The computer readable storage medium may be volatile memory or nonvolatile memory, or may include both volatile memory and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes and technical effects of the apparatus and device described above may refer to corresponding processes and technical effects in the foregoing method embodiments, which are not described in detail herein.
In the several embodiments provided by the present application, the disclosed systems, devices, and methods may be implemented in other manners. For example, some features of the method embodiments described above may be omitted, or not performed. The above-described apparatus embodiments are merely illustrative, the division of units is merely a logical function division, and there may be additional divisions in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between the elements or the coupling between the elements may be direct or indirect, including electrical, mechanical, or other forms of connection.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent three cases: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The terms "first," "second," …, etc. appearing in embodiments of the present application are for descriptive purposes only and are merely for distinguishing between different objects, e.g., different "coordinates," etc., and are not to be construed as indicating or implying a relative importance or an implicit indication of the number of features indicated. Thus, features defining "first", "second", …, etc., may include one or more features, either explicitly or implicitly. In the description of embodiments of the application, "at least one (an item)" means one or more. The meaning of "plurality" is two or more. "at least one of (an) or the like" below means any combination of these items, including any combination of a single (an) or a plurality (an) of items.
In summary, the foregoing description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method of displaying an animation effect, comprising:
displaying a first interface, the first interface comprising a first element;
detecting a drag operation after a long press on the first element;
displaying a first animation effect in response to the drag operation, the first animation effect being a drag animation effect from a first screen coordinate position to a second screen coordinate position, the first screen coordinate position being the screen coordinate position of a first image frame of the first animation effect and being determined based on the size of the first element, the screen coordinate position of the first element, the size of the first image frame, and a preset touch point position, and the second screen coordinate position being the screen coordinate position corresponding to where the drag operation stops;
the first animation effect is generated according to one or more of the following parameters: the size of the first image frame, the first screen coordinate position, the second screen coordinate position, a target scaling parameter, and an animation curve parameter.
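As a non-authoritative illustration of the quantities recited in claim 1, the plain-Kotlin sketch below groups the listed parameters into a data class and shows one assumed way of deriving the first screen coordinate position from the size of the first element, the element's screen coordinate position, the size of the first image frame, and the preset touch point position. Claim 1 does not disclose a concrete formula, so firstFramePosition() and all type names here are hypothetical.

// The parameters named in claim 1 for generating the first animation effect.
data class Size(val width: Float, val height: Float)
data class ScreenPos(val x: Float, val y: Float)

data class AnimationParams(
    val firstFrameSize: Size,        // size of the first image frame
    val firstScreenPos: ScreenPos,   // first screen coordinate position (animation start)
    val secondScreenPos: ScreenPos,  // second screen coordinate position (where the drag stops)
    val targetScale: Float,          // target scaling parameter
    val curveName: String            // animation curve parameter (e.g. an easing identifier)
)

// Assumed rule: place the first image frame so that the preset touch point keeps
// the same relative offset inside the frame as it had inside the first element.
fun firstFramePosition(
    elementSize: Size,
    elementPos: ScreenPos,
    frameSize: Size,
    touchPoint: ScreenPos
): ScreenPos {
    val relX = (touchPoint.x - elementPos.x) / elementSize.width
    val relY = (touchPoint.y - elementPos.y) / elementSize.height
    return ScreenPos(
        touchPoint.x - relX * frameSize.width,
        touchPoint.y - relY * frameSize.height
    )
}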
2. The method of claim 1, wherein the first animation effect comprises: an animation effect from the first screen coordinate position to a third screen coordinate position, and an animation effect from the third screen coordinate position to the second screen coordinate position, wherein the third screen coordinate position is the contact point position corresponding to the user's long press on the first element;
the animation effect from the third screen coordinate position to the second screen coordinate position comprises a displacement animation effect and a scaling animation effect, and the target scaling parameter corresponding to the scaling animation effect is determined based on a preset scaling rule.
3. The method of claim 2, wherein when the first element is a picture-type element, the preset scaling rule comprises: a scaling rule determined from the height and width of the first element or a scaling rule determined from the height and width of the first image frame.
4. The method of claim 3, wherein the preset scaling rule comprises:
when the height of the first element is greater than or equal to a first threshold, scaling according to a first scaling ratio, wherein the first scaling ratio is the first threshold;
when the height of the first element is less than the first threshold and the width of the first element is greater than or equal to a second threshold, scaling according to a second scaling ratio, wherein the second scaling ratio is the second threshold;
when the height of the first element is less than the first threshold and the width of the first element is less than the second threshold, no scaling is performed.
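A minimal plain-Kotlin sketch of the scaling rule in claims 3 and 4 follows, assuming the thresholds act as caps on the dragged frame's height and width. The threshold values and the cap interpretation are assumptions; the claims only state that the scaling ratio equals the corresponding threshold and give no numeric values.

// Hypothetical thresholds; the patent does not disclose concrete values.
const val FIRST_THRESHOLD = 400f    // assumed cap on height
const val SECOND_THRESHOLD = 300f   // assumed cap on width

// Returns the scale factor applied to a picture-type element when the drag starts:
// cap by height first, then by width, otherwise leave the element unscaled.
fun targetScale(elementWidth: Float, elementHeight: Float): Float = when {
    elementHeight >= FIRST_THRESHOLD -> FIRST_THRESHOLD / elementHeight
    elementWidth >= SECOND_THRESHOLD -> SECOND_THRESHOLD / elementWidth
    else -> 1f
}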
5. The method of claim 1, wherein the first interface comprises a first display area and a second display area, the first element being located in the first display area, the first display area corresponding to a first application and the second display area corresponding to a second application, the drag operation comprising an operation of dragging the first element from the first display area to the second display area, and the first application being different from the second application.
6. The method of claim 5, wherein the method further comprises:
detecting an operation of releasing the first element at a first location, the first location being in the second display area;
in response to the first element being received by the second application, displaying a first release animation effect, the first release animation effect being an animation effect in which the image frames gradually shrink until disappearing.
7. The method of claim 5, wherein the method further comprises:
detecting an operation of releasing the first element at a second location, the second location being located in the second display area;
and in response to the first element not being received by the second application, displaying a second release animation effect, the second release animation effect being an animation effect in which the image frames gradually enlarge until disappearing.
8. The method of claim 5, wherein the method further comprises:
detecting an operation of releasing the first element at a third location, the third location being in the first display area;
in response to the operation of releasing the first element, displaying a third release animation effect, the third release animation effect being an animation effect in which the image frames return from the third position to the original screen coordinate position of the first element.
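The three release animation effects of claims 6 to 8 can be summarized by the following plain-Kotlin dispatch sketch; the enum values and flag names are illustrative, and the mapping simply restates the claim conditions.

// Which release animation to play, depending on where the first element is released
// and whether the second application accepts it (claims 6-8).
enum class ReleaseAnimation { SHRINK_TO_DISAPPEAR, ENLARGE_TO_DISAPPEAR, RETURN_TO_ORIGIN }

fun chooseReleaseAnimation(
    releasedInSecondArea: Boolean,   // release point lies in the second display area
    acceptedBySecondApp: Boolean     // the second application received the first element
): ReleaseAnimation = when {
    releasedInSecondArea && acceptedBySecondApp -> ReleaseAnimation.SHRINK_TO_DISAPPEAR
    releasedInSecondArea && !acceptedBySecondApp -> ReleaseAnimation.ENLARGE_TO_DISAPPEAR
    else -> ReleaseAnimation.RETURN_TO_ORIGIN    // released back in the first display area
}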
9. The method of claim 1, wherein the first element comprises one or more of the following elements: picture class elements, text class elements, file list class elements, uniform resource locator elements, card class elements, and icon class elements.
10. An electronic device comprising a processor and a memory, the processor and the memory being coupled, the memory being for storing a computer program that, when executed by the processor, causes the electronic device to perform the method of any one of claims 1 to 9.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes an electronic device to perform the method of any one of claims 1 to 9.
12. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the method of any of claims 1 to 9.
CN202311196800.0A 2023-09-18 2023-09-18 Animation effect display method and device Active CN116974446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311196800.0A CN116974446B (en) 2023-09-18 2023-09-18 Animation effect display method and device

Publications (2)

Publication Number Publication Date
CN116974446A true CN116974446A (en) 2023-10-31
CN116974446B CN116974446B (en) 2024-06-14

Family

ID=88475200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311196800.0A Active CN116974446B (en) 2023-09-18 2023-09-18 Animation effect display method and device

Country Status (1)

Country Link
CN (1) CN116974446B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150143285A1 (en) * 2012-10-09 2015-05-21 Zte Corporation Method for Controlling Position of Floating Window and Terminal
CN107066172A (en) * 2017-02-16 2017-08-18 北京小米移动软件有限公司 The document transmission method and device of mobile terminal
WO2020253282A1 (en) * 2019-06-21 2020-12-24 海信视像科技股份有限公司 Item starting method and apparatus, and display device
WO2022022495A1 (en) * 2020-07-29 2022-02-03 华为技术有限公司 Cross-device object dragging method and device
CN112130788A (en) * 2020-08-05 2020-12-25 华为技术有限公司 Content sharing method and device
CN114527901A (en) * 2020-10-31 2022-05-24 华为技术有限公司 File dragging method and electronic equipment
CN115268807A (en) * 2021-04-30 2022-11-01 华为技术有限公司 Cross-device content sharing method and electronic device
CN113419649A (en) * 2021-05-31 2021-09-21 广州三星通信技术研究有限公司 Method for operating electronic device and device thereof
CN115729431A (en) * 2021-08-31 2023-03-03 华为技术有限公司 Control content dragging method, electronic device and system
WO2023029983A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 Control content dragging method and system, and electronic device
CN116560865A (en) * 2022-01-27 2023-08-08 华为技术有限公司 Method and terminal for sharing information between applications

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
三秦宛香: "Huawei HarmonyOS 3.0 super transfer station feature: experience the new generation of smart operating system", page 51, Retrieved from the Internet <URL: 好看视频 https://haokan.baidu.com/v?pd=wisenatural&vid=5687713590486478095> *

Also Published As

Publication number Publication date
CN116974446B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
US9195362B2 (en) Method of rendering a user interface
US9196075B2 (en) Animation of computer-generated display components of user interfaces and content items
US8984448B2 (en) Method of rendering a user interface
EP3111318B1 (en) Cross-platform rendering engine
CN113805745B (en) Control method of suspension window and electronic equipment
US20230367464A1 (en) Multi-Application Interaction Method
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
WO2023040666A1 (en) Keyboard display method, foldable screen device, and computer-readable storage medium
CN114995929B (en) Popup window display method and device
WO2022247541A1 (en) Method and apparatus for application animation linking
CN115643485A (en) Shooting method and electronic equipment
CN115640083A (en) Screen refreshing method and equipment capable of improving dynamic performance
CN116974446B (en) Animation effect display method and device
CN113805746B (en) Method and device for displaying cursor
WO2023005751A1 (en) Rendering method and electronic device
CN113934340B (en) Terminal equipment and progress bar display method
WO2023072113A1 (en) Display method and electronic device
CN116700554B (en) Information display method, electronic device and readable storage medium
CN116048373B (en) Display method of suspension ball control, electronic equipment and storage medium
CN114866641B (en) Icon processing method, terminal equipment and storage medium
WO2021253922A1 (en) Font switching method and electronic device
CN116700914B (en) Task circulation method and electronic equipment
WO2024078114A1 (en) Window display method, electronic device, and computer-readable storage medium
WO2024027504A1 (en) Application display method and electronic device
WO2024055822A1 (en) Information display method, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant