CN114115665A - Page element processing method and device and computer readable storage medium - Google Patents

Page element processing method and device and computer readable storage medium Download PDF

Info

Publication number
CN114115665A
Authority
CN
China
Prior art keywords
page element
event
page
offset
scaling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111412083.1A
Other languages
Chinese (zh)
Inventor
林溢彬
卢道和
林挺
万纯
李为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202111412083.1A priority Critical patent/CN114115665A/en
Publication of CN114115665A publication Critical patent/CN114115665A/en
Priority to PCT/CN2022/097864 priority patent/WO2023092992A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04806Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a page element processing method, a page element processing device and a computer-readable storage medium. The method includes: obtaining an operation event for a page element; calculating, based on the operation coordinates corresponding to the operation event, the attribute parameters produced when the operation event processes the page element; performing Cascading Style Sheets Level 3 (CSS3) conversion processing on the attribute parameters based on the event type of the operation event to obtain converted attribute parameters; and performing deformation processing on the page element based on the converted attribute parameters.

Description

Page element processing method and device and computer readable storage medium
Technical Field
The embodiments of the application relate to data processing technology in the field of financial technology (Fintech), and relate to, but are not limited to, a page element processing method, a page element processing device and a computer-readable storage medium.
Background
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually shifting towards financial technology (Fintech); at the same time, the financial industry's requirements for security and real-time performance place higher demands on these technologies.
In the field of financial technology, the page element viewer mainly used with the Vue framework is the officially recommended v-viewer, a picture-viewing component that supports browsing pictures as thumbnails and as original images, in addition to operations such as rotating, zooming and flipping. It is built on Viewer.js, whose underlying core relies on the absolute and relative positioning of the Cascading Style Sheets Level 2 (CSS2) position property: the picture is displayed on a canvas, the drag effect is achieved by changing the picture's top and left offsets, and the zoom effect is achieved by changing the picture's width and height.
The v-viewer uses the rotation function (rotate) of Cascading Style Sheets Level 3 (CSS3) only at rendering time. At the bottom layer, Viewer.js handles various events such as mouse events, keyboard events and touch events, and the toolbar can be customized through configuration, but the attribute parameters cannot be converted through a custom algorithm.
In the related art, because CSS3 is considered relatively costly to implement and its calculations complicated, viewers are built with the position property, in which the three attributes of scaling, shifting and rotating are generally processed separately. When the page element is a small picture, the interval between these operations is so short that they appear to happen simultaneously, but in fact they do not: the attributes cannot be converted at the same time. When a large page element such as a large picture is processed, the interval between the operations becomes noticeable, and the viewer perceives visibly unsmooth operation, stuttering and frame dropping.
Disclosure of Invention
The embodiments of the application provide a page element processing method, a page element processing device and a computer-readable storage medium, to solve the problem that the conventional position-based viewer is prone to stuttering and frame dropping when viewing pictures.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a page element processing method, which comprises the following steps:
obtaining an operation event aiming at a page element;
calculating attribute parameters generated by processing the page elements by the operation events based on the operation coordinates corresponding to the operation events;
performing Cascading Style Sheets Level 3 (CSS3) conversion processing on the attribute parameters based on the event type of the operation event to obtain converted attribute parameters;
and carrying out deformation processing on the page elements based on the converted attribute parameters.
An embodiment of the present application provides a processing device for a page element, including:
a memory for storing executable instructions; a processor, when executing executable instructions stored in the memory, implements the method described above.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed, cause a processor to implement the above method.
The embodiment of the application has the following beneficial effects:
an operation event for a page element is obtained; the attribute parameters produced when the operation event processes the page element are calculated based on the operation coordinates corresponding to the operation event; Cascading Style Sheets Level 3 (CSS3) conversion processing is performed on the attribute parameters based on the event type of the operation event to obtain converted attribute parameters, that is, the attribute parameters are converted by a self-developed algorithm built on the CSS3 deformation function (transform); finally, the page element is deformed based on the converted attribute parameters, so that the new attribute parameters are applied to the transform attribute. It can be seen that the page element processing method provided by the application handles multiple attributes, such as the displacement function (translate), the scaling function (scale) and the rotation function (rotate), together in a single pass through the self-developed algorithm. This is substantially different from the position scheme, and using the CSS3 transform allows rendering to be optimized by the Graphics Processing Unit (GPU), avoiding stuttering and frame dropping.
Drawings
Fig. 1 is an alternative architecture diagram of a terminal provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for processing a page element according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for processing a page element in different event scenarios according to an embodiment of the present application;
FIG. 4 is an operation diagram in a drag scenario provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating operations in a zoom scenario provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating operations of multi-finger scrolling and multi-finger zooming provided by embodiments of the present application;
FIG. 7 is a schematic illustration of the operation of the rotation provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a second order Bezier curve provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a third-order Bezier curve provided in an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present application belong. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
An exemplary application of the page element processing device provided in the embodiments of the present application is described below. The device may be implemented as any terminal having an on-screen display function, such as a notebook computer, a tablet computer, a desktop computer, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device or a portable game device) or an intelligent robot, or it may be implemented as a server. Next, an exemplary application in which the processing device of the page element is implemented as a terminal is described.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal 100 according to an embodiment of the present application, where the terminal 100 shown in fig. 1 includes: at least one processor 110, at least one network interface 120, a user interface 130, and memory 150. The various components in terminal 100 are coupled together by a bus system 140. It is understood that the bus system 140 is used to enable connected communication between these components. The bus system 140 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 140 in fig. 1.
The processor 110 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 130 includes one or more output devices 131, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 130 also includes one or more input devices 132 including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 150 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 150 optionally includes one or more storage devices physically located remotely from processor 110. The memory 150 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 150 described in embodiments herein is intended to comprise any suitable type of memory. In some embodiments, memory 150 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 151 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 152 for communicating with other computing devices via one or more (wired or wireless) network interfaces 120, exemplary network interfaces 120 including Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
an input processing module 153 for detecting one or more user inputs or interactions from one of the one or more input devices 132 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in software. Fig. 1 illustrates a page element processing apparatus 154 stored in the memory 150; the page element processing apparatus 154 may be a page element processing apparatus in the terminal 100, which may be software in the form of programs and plug-ins and includes the following software modules: an acquiring module 1541 and a processing module 1542. These modules are logical and can therefore be combined arbitrarily or further split according to the functions implemented. The functions of the respective modules are explained below.
In other embodiments, the apparatus provided in this embodiment may be implemented in hardware. As an example, the apparatus may be a processor in the form of a hardware decoding processor programmed to execute the page element processing method provided in this embodiment; for instance, the processor in the form of a hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
The following describes the page element processing method provided by the embodiments of the present application, with reference to the exemplary application and implementation of the terminal 100. Referring to fig. 2, fig. 2 is an alternative flowchart of a page element processing method provided in an embodiment of the present application, which is described with reference to the steps shown in fig. 2.
in step S201, an operation event for a page element is obtained.
In the embodiment of the present application, the Document Object Model (DOM) is a Web standard of the World Wide Web Consortium (W3C). It defines a set of properties, methods and events for accessing Hypertext Markup Language (HTML) document objects. Here, the page element may be obtained by a DOM element acquisition method. Page elements include, but are not limited to, pictures and text on a page.
In some embodiments, the operation event includes an event generated by operating on the page element through an input module of the terminal, such as at least one of a pointing device (a display-system vertical and horizontal position indicator, e.g. a mouse) and a keyboard shortcut key. Of course, the operation event may also include an event generated in other ways, for example an operation event generated when an operation of an operation object is detected by a touch module of the terminal.
As for terminals, the method and the device are suitable not only for Personal Computer (PC) browsers but also for common mobile terminal devices such as smartphones and tablets, and notebooks are also adapted so that touchpad gestures can be recognized. The self-developed algorithm provided by the application achieves the effect of applying displacement, scaling and rotation together in a single pass: mouse events are processed on the PC side, touch events are processed on the mobile side, and gestures are recognized for the touchpad.
Here, the association between operation events corresponding to different terminal devices will be described:
events differ between devices and need to be processed separately; for example, the mobile terminal does not provide intuitive gesture judgment the way a touchpad does, so gestures are judged by simulation through different touch points to achieve the same effect.
For example, a zoom event can be triggered on the PC side through the mouse wheel, and the touchpad can easily judge the scrolling direction through the scroll event, whereas the mobile side needs to rely on gestures: the center point of the two fingers is taken as the origin, and the scrolling direction is obtained by judging the offsets on the x axis and the y axis.
For a drag event, the PC side differs from the mobile side: one listens for the mouse move (mousemove) event, the other for the touch-screen finger move (touchmove) event, but the principles are basically the same, namely calculating the coordinates of the two successive points and deriving an offset, which is the drag displacement.
Step S202, calculating attribute parameters generated by processing the page elements by the operation event based on the operation coordinates corresponding to the operation event.
In the embodiment of the application, different operation events are calculated differently, and the attribute parameters they produce when processing the page element also differ. Meanwhile, in the process of calculating, based on the operation coordinates corresponding to the operation event, the attribute parameters produced when the operation event processes the page element, the mutual influence among displacement, rotation angle and scaling ratio is fully taken into account.
Step S203, based on the event type of the operation event, performing Cascading Style Sheets Level 3 (CSS3) conversion processing on the attribute parameters to obtain the converted attribute parameters.
In the embodiment of the application, a self-developed algorithm is added on top of CSS3, which handles multiple attributes, such as the displacement function (translate), the scaling function (scale) and the rotation function (rotate), together in a single pass. Based on the event type of the operation event, the corresponding CSS3 conversion processing can then be performed on the attribute parameters to obtain the converted attribute parameters.
And step S204, based on the converted attribute parameters, performing deformation processing on the page elements.
In the embodiment of the application, the attribute parameters are calculated and converted into actual CSS3 attribute values through the CSS3 transform, and the deformation effects of dragging, rotating, zooming and the like are finally achieved by changing the CSS3 values.
Compared with the traditional position scheme, the self-developed drag, scroll and zoom algorithms can be implemented entirely with CSS3, and using the CSS3 transform allows the GPU to be used for rendering optimization, so picture browsing performs better. Meanwhile, stuttering when dragging and zooming a large picture can be avoided through the CSS3 transform and rotate.
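To make this final step concrete, the sketch below shows how converted attribute parameters might be written into a single CSS3 transform; the element reference `el` and the state names `xOffset`, `yOffset`, `scale` and `rotateDeg` are illustrative and not taken from the patent text.

```javascript
// Minimal sketch: apply accumulated offset, scale and rotation in one CSS3 transform.
// `el`, `xOffset`, `yOffset`, `scale` and `rotateDeg` are illustrative names.
function applyTransform(el, xOffset, yOffset, scale, rotateDeg) {
  // translate3d hints the browser to composite the element on the GPU, so dragging,
  // zooming and rotating avoid layout work (the point of the CSS3 transform scheme).
  el.style.transform =
    `translate3d(${xOffset}px, ${yOffset}px, 0) scale(${scale}) rotate(${rotateDeg}deg)`;
}
```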
It should be noted that, in the processing of a page element, step S203 performs CSS3 conversion processing on the attribute parameters based on the event type of the operation event to obtain the converted attribute parameters; as an alternative, a similar function can be achieved by setting the outer margins of the element, such as the element's left margin (margin-left) and top margin (margin-top).
According to the page element processing method, an operation event for a page element is obtained; the attribute parameters produced when the operation event processes the page element are calculated based on the operation coordinates corresponding to the operation event; CSS3 conversion processing is performed on the attribute parameters based on the event type of the operation event to obtain converted attribute parameters, that is, the attribute parameters are converted by a self-developed algorithm built on the CSS3 deformation function (transform); finally, the page element is deformed based on the converted attribute parameters, so that the new attribute parameters are applied to the transform attribute. It can be seen that the page element processing method provided by the application handles multiple attributes, such as the displacement function (translate), the scaling function (scale) and the rotation function (rotate), together in a single pass through the CSS3-based self-developed algorithm. This is substantially different from the position scheme, and using the CSS3 transform allows rendering to be optimized by the Graphics Processing Unit (GPU), avoiding stuttering and frame dropping.
Referring to fig. 3, the operation events in the embodiment of the present application include a drag event, a zoom event and a rotation event. Event processing is a focus of the application: functions such as dragging and zooming rely on the displacement parsed after the events are processed. Different event processing essentially records the coordinates of the DOM, finally converts them into actual CSS3 attribute values through calculation, and achieves the dragging, rotating and zooming effects by changing the CSS3 values.
The following will be described with respect to the drag event, the zoom event, and the rotation event, respectively, in conjunction with fig. 3:
in other embodiments of the present application, if the operation event includes a drag event, that is, the page element processing method provided in the present application is applied to a CSS3-based drag scenario, step S202 of calculating, based on the operation coordinates corresponding to the operation event, the attribute parameters produced when the operation event processes the page element may be implemented by the following steps:
a11, obtaining a first coordinate of an operation object of the page element before dragging on the page and a second coordinate of the operation object after dragging on the page;
here, the operation object includes, but is not limited to, a display system vertical and horizontal position indicator such as a mouse, a stylus, and a finger.
For example, if the operation object is a finger, at this time, a single-finger operation shown in a in fig. 4 or a multi-finger operation shown in b in fig. 4 may be performed, and thus the application supports rich drag operation gestures.
And A12, substituting the first coordinate, the second coordinate and the scaling ratio into the following formulas to calculate the first offset produced when the drag event processes the page element, the first offset being characterized as (xOffset1, yOffset1):
xOffset1 = parseInt(clientX - this.clickLeft) / this.scale;
yOffset1 = parseInt(clientY - this.clickTop) / this.scale;
where the first coordinate is characterized as P1(this.clickLeft, this.clickTop) and the second coordinate as P2(clientX, clientY).
In the embodiment of the present application, after the first offset is calculated, it is not applied directly to the deformation of the page element; the influence of the rotation angle on the displacement is taken into account. Further, step S203 performs CSS3 conversion processing on the attribute parameters based on the event type of the operation event to obtain the converted attribute parameters, which may be implemented by the following steps:
a21, obtaining a second offset of the page element before dragging;
a22, obtaining a first rotation angle of the page element during dragging;
a23, dividing the first rotation angle by a preset angle and taking the remainder to obtain a second rotation angle;
and A24, performing CSS3 conversion processing on the first offset based on the second offset and the mapping relation between offset and rotation angle to obtain a converted third offset.
Wherein the preset angle is 360 degrees, the second offset is characterized as (xOffset2, yOffset2), the first offset as (xOffset1, yOffset1), the third offset as (xOffset3, yOffset3), and the second rotation angle as rotateDeg. A24 performs the conversion processing on the first offset based on the second offset and the mapping relation between offset and rotation angle to obtain the converted third offset, where the following cases exist:
when rotateDeg is 0: xOffset3 = xOffset2 + xOffset1, yOffset3 = yOffset2 + yOffset1;
when rotateDeg is -90 or 270: xOffset3 = yOffset2 + xOffset1, yOffset3 = xOffset2 - yOffset1;
when rotateDeg is -180 or 180: xOffset3 = xOffset2 - xOffset1, yOffset3 = yOffset2 - yOffset1;
when rotateDeg is 90: xOffset3 = yOffset2 - xOffset1, yOffset3 = xOffset2 + yOffset1.
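As a minimal sketch, the four cases can be folded into one helper; it mirrors the formulas exactly as stated above, and the function and parameter names are illustrative.

```javascript
// Sketch: convert the raw drag offset (xOffset1, yOffset1) into the applied offset
// (xOffset3, yOffset3) using the accumulated offset (xOffset2, yOffset2) and the
// second rotation angle, following the four cases above.
function convertDragOffset(xOffset1, yOffset1, xOffset2, yOffset2, firstRotateDeg) {
  const rotateDeg = firstRotateDeg % 360; // second rotation angle (remainder of 360)
  switch (rotateDeg) {
    case 0:
      return { xOffset3: xOffset2 + xOffset1, yOffset3: yOffset2 + yOffset1 };
    case -90:
    case 270:
      return { xOffset3: yOffset2 + xOffset1, yOffset3: xOffset2 - yOffset1 };
    case -180:
    case 180:
      return { xOffset3: xOffset2 - xOffset1, yOffset3: yOffset2 - yOffset1 };
    case 90:
      return { xOffset3: yOffset2 - xOffset1, yOffset3: xOffset2 + yOffset1 };
    default:
      // Angles that are not multiples of 90 degrees are not covered by the listed cases.
      return { xOffset3: xOffset2 + xOffset1, yOffset3: yOffset2 + yOffset1 };
  }
}
```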
In the embodiment of the application, when a drag event occurs, the offset produced when the drag event processes the page element, that is, the displacement, is combined with the rotation angle and converted together with the scaling ratio to obtain the displacement finally applied to the CSS3 transform.
From the above, the principle for achieving the drag effect in the application is to continuously calculate and compare the coordinates of the two points before and after dragging, convert the result into a CSS3 displacement, and change the position of the page element, such as a picture, through the translate function. When dragging or zooming begins, the mouse drag move (mousemove) event or the touch-screen finger move (touchmove) event is triggered for the first time, and the coordinates of the mouse or finger on the page at that moment are recorded and stored as p1. As dragging continues, the mousemove or touchmove event keeps being triggered, and the coordinates obtained each time are recorded as p2; p1 stays fixed while p2 changes continuously as events fire. The final movement displacement, the unprocessed original displacement, is obtained by subtracting p1 from p2, and this displacement must be converted in combination with the rotation angle and the scaling ratio to obtain the displacement finally applied to the deformation function (transform).
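A minimal event-handling sketch of this p1/p2 bookkeeping is shown below; the handler and state names are illustrative, and the raw displacement would still be passed through the rotation and scaling conversion above.

```javascript
// Sketch: record p1 on the first mousemove/touchmove and keep updating p2,
// so the raw drag displacement is always p2 - p1. Names are illustrative.
let p1 = null;

function onPointerMove(event) {
  // Mouse events expose clientX/clientY directly; touch events expose them per touch point.
  const point = event.touches ? event.touches[0] : event;
  const p2 = { x: point.clientX, y: point.clientY };

  if (p1 === null) {
    p1 = p2; // first trigger: remember the starting coordinate
    return;
  }
  const rawX = p2.x - p1.x; // unprocessed original displacement
  const rawY = p2.y - p1.y;
  // rawX / rawY must still be converted with the rotation angle and the current
  // scale (see the formulas above) before being applied to the CSS3 transform.
}

document.addEventListener('mousemove', onPointerMove);
document.addEventListener('touchmove', onPointerMove);
```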
In other embodiments of the present application, if the operation event includes a zoom event, that is, the page element processing method provided in the present application is applied to a CSS3-based zoom scenario, step S202 of calculating, based on the operation coordinates corresponding to the operation event, the attribute parameters produced when the operation event processes the page element may be implemented by the following steps:
b11, obtaining the first scaling of the page element before scaling, the second scaling of the page element when scaling and the width and height of the page element before scaling;
b12, obtaining a fourth offset of the page element relative to the canvas of the page when zooming;
b13, obtaining a third coordinate of the operation object of the page element before zooming on the page;
and B14, calculating a fifth offset for scaling the page element with the third coordinate as the center based on the width, the height, the fourth offset, the first scaling ratio and the second scaling ratio.
The key to scaling the page element with the third coordinate as the center, and thus achieving the center scaling provided by the embodiment of the application, is that before the new scale ratio is applied to the transform attribute, the final horizontal offset xOffset and vertical offset yOffset of the transform, that is, the fifth offset, must be calculated. A simplified formula is: offset = view offset / current scale. The emphasis is on computing the pixel values that need to be offset in the horizontal and vertical directions to ensure that the picture is scaled about the center point.
Illustratively, in one achievable zoom scenario, the operation object is a finger; in this case the operation may be the multi-finger zoom operation shown in FIG. 5, with the gesture center point taken as the origin, ensuring that the picture is zoomed about that origin.
Further, to achieve a more precise center scaling effect, B14 calculates a fifth offset for scaling the page element centered on the third coordinate based on the width, height, fourth offset, first scaling and second scaling, which can be achieved by the following formula:
the fifth offset is characterized as (xOffset5, yOffset5), where:
xOffset5 = ((v - this.referScale) × (naturalWidth × k - this.scaleX) + this.referX × this.referScale) / v;
yOffset5 = ((v - this.referScale) × (naturalHeight × k - this.scaleY) + this.referY × this.referScale) / v;
where the first scaling ratio is characterized as this.referScale, the second scaling ratio as v, the fourth offset as (this.scaleX, this.scaleY), the width as naturalWidth, the height as naturalHeight, the third coordinate as (this.referX, this.referY), and k is a positive number; exemplarily, k is 0.5.
this.referScale is recorded only once; taking a mouse as the operation object as an example, unless the mouse coordinates change, the data triggered by the first wheel event is used as the reference for calculation. this.scaleX can be understood as the current relative displacement of the mouse in the horizontal direction of the page element, such as a picture, at the time of zooming (relative to the picture's left border); this.scaleY as the relative displacement in the vertical direction (relative to the picture's top border). this.referX and this.referY ensure that, after being dragged, the picture can still be enlarged with the current mouse position as the center.
From the above, the implementation principle of the zoom effect in the application is as follows: the origin given by the mouse or gesture is kept unchanged during zooming, and after the scale ratio changes, a new offset is recalculated from the existing offset. It should be noted that the application takes the coordinates of the mouse or the gesture center point before and after zooming as the origin and calculates the new offset while zooming, so that the picture is zoomed with that origin as the center instead of merely performing a plain zoom.
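A sketch of that recalculation is given below; the state fields mirror the names used in the formulas (this.referScale, this.referX, this.referY, this.scaleX, this.scaleY), and k defaults to the exemplary value 0.5.

```javascript
// Sketch: recompute the transform offset so the element stays centered on the zoom
// origin (mouse position or gesture center). Field names mirror the formulas above.
function computeCenterZoomOffset(state, newScale, naturalWidth, naturalHeight, k = 0.5) {
  const { referScale, referX, referY, scaleX, scaleY } = state;
  const xOffset5 =
    ((newScale - referScale) * (naturalWidth * k - scaleX) + referX * referScale) / newScale;
  const yOffset5 =
    ((newScale - referScale) * (naturalHeight * k - scaleY) + referY * referScale) / newScale;
  return { xOffset5, yOffset5 }; // applied to translate before the new scale takes effect
}
```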
In other embodiments of the present application, in a scenario supporting both scrolling and zooming events, the operation event for the page element obtained in step S201 can be implemented by the following steps C11 or C12:
And C11, operating the page element through one of the two input modules, namely the display-system vertical and horizontal position indicator (such as the mouse) and the keyboard shortcut keys, and generating an operation event. Here, the shortcut keys include, but are not limited to, the Ctrl/Shift keys.
And C12, operating the page element through both input modules, the display-system vertical and horizontal position indicator and the keyboard shortcut keys, and generating an operation event.
The operation event generated by operating the page element through one input module differs from that generated through both input modules; the operation events include a zoom event and a scroll event.
In example one, operating the mouse wheel alone represents a scrolling event; pressing Ctrl/Shift switches to a zooming event, after which operating the mouse wheel represents the zooming event. In this way, zoom and scroll events are supported simultaneously by distinguishing keys.
In example two, operating the mouse wheel alone represents a zooming event; pressing Ctrl/Shift switches to a scrolling event, after which operating the mouse wheel represents the scrolling event. In this way, zoom and scroll events are supported simultaneously by distinguishing keys.
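A sketch of this key-based switching for a single wheel handler follows (matching example one); `zoomAt`, `panBy` and `viewerEl` are hypothetical placeholders for the zoom and scroll handling described above.

```javascript
// Sketch: one wheel handler supports both scrolling and zooming; holding Ctrl or Shift
// switches modes. Here the wheel alone scrolls and Ctrl/Shift + wheel zooms (example one).
function onWheel(event) {
  event.preventDefault(); // keep the browser from scrolling/zooming the whole page
  if (event.ctrlKey || event.shiftKey) {
    zoomAt(event.deltaY < 0 ? 1 : -1, event.clientX, event.clientY); // zoom about the pointer
  } else {
    panBy(event.deltaX, event.deltaY); // horizontal and vertical scrolling
  }
}
// `zoomAt` and `panBy` are placeholders; preventDefault requires a non-passive listener:
// viewerEl.addEventListener('wheel', onWheel, { passive: false });
```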
In other embodiments of the present application, a drag boundary may also be set. If the user drags a page element, such as a picture, so that it is about to cross the boundary, the picture is not simply stopped at the boundary as in the conventional scheme; instead, a maximum and a minimum value are defined, and the coordinates of the dragged picture are guaranteed not to exceed them. Therefore, the application not only keeps the picture from ever leaving the browser boundary, but also supports scrolling through the mouse wheel and gestures, in both the vertical and horizontal directions.
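A minimal sketch of this clamping (names illustrative): rather than rejecting the drag at the boundary, the resulting coordinate is limited to configured extremes.

```javascript
// Sketch: clamp the dragged coordinate between a configured minimum and maximum
// so the picture never leaves the allowed area.
function clampOffset(value, min, max) {
  return Math.min(Math.max(value, min), max);
}
// e.g. xOffset = clampOffset(xOffset, minX, maxX); yOffset = clampOffset(yOffset, minY, maxY);
```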
In the position scheme of the related art, the usual approach provides only a zoom operation and does not support scrolling, because when scrolling needs to be supported together with rotation, the rotation angle uses the rotate attribute of CSS3; the two do not belong to the same scheme, their references and calculation methods differ, so they cannot be made compatible, and a picture cannot be scrolled after being enlarged, dragged and rotated, falling short of the desired effect. When the CSS3 scheme is used throughout, the same set of references is used and both functions can be realized at the same time. To the user, the mouse wheel generally represents either scrolling or zooming, but the wheel event can represent only one of them, so mouse scrolling is used either for the scroll event or only for the zoom event. If the user enlarges the picture beyond the viewport, the user has to drag the picture to see the hidden part, which is a poor experience. The application supports zooming and scrolling at the same time by adding a shortcut key, such as Ctrl/Shift, to switch between the scroll event and the zoom event.
The application can effectively reduce the stuttering of large pictures during dragging and zooming, and as the underlying CSS3 capability of browsers develops, the advantage of processing large pictures with the CSS3-based self-developed algorithm becomes increasingly prominent, bringing great convenience to e-commerce, office work and other fields that require detailed browsing of large pictures. In addition, rich operation gestures and shortcut keys give users a better operating experience and avoid the awkward situation in which gestures commonly used by touch-screen users are not supported.
In other embodiments of the present application, in a scenario supporting multi-finger scrolling and zooming events, the operation event for the page element obtained in step S201 can be implemented by the following steps D11 to D13, D11 to D12 and D14, or D11 to D12 and D15:
d11, receiving the multi-finger scrolling operation executed for the page element through the touch module;
d12, acquiring the horizontal displacement and the vertical displacement of each finger on the canvas of the page before and after the multi-finger scrolling;
d13, if the horizontal displacement corresponding to the multiple fingers is increased or decreased simultaneously, and the horizontal displacement is larger than the vertical displacement, generating a horizontal scrolling event aiming at the page element;
d14, if the horizontal displacement corresponding to the multiple fingers is increased or decreased simultaneously, and the horizontal displacement is smaller than the vertical displacement, generating a vertical scroll event aiming at the page element;
d15, if the horizontal displacements corresponding to the multiple fingers are not simultaneously increased and/or not simultaneously decreased, generating a zoom event for the page element.
In the embodiment of the application, since the touchpad and the touch screen have no scroll wheel, the scrolling operation and the zooming operation are distinguished by recognizing the direction of the two-finger operation, and the same applies to three-finger operation. In the embodiment of the application, the algorithm for distinguishing two-finger scrolling from two-finger zooming is to judge the displacement directions of the two fingers. For example, referring to a in fig. 6, if the two fingers move in the same direction, the gesture is scrolling; referring to b in fig. 6, if the two fingers move in opposite directions, the gesture is zooming.
In the embodiment of the present application, the wheel event of the mouse and the two-finger scrolling event of the touchpad are treated as the same event; what needs special handling is the two-finger zoom gesture, which is realized through steps D11 to D12 and D15. After the processing of D11 to D15, a two-finger scrolling operation on the touchpad or touch screen may cause scrolling in the horizontal and vertical directions at the same time; in this case the horizontal and vertical displacements in the scroll event must be examined: if the horizontal displacement is greater than the vertical displacement, scroll horizontally; if the horizontal displacement is smaller than the vertical displacement, scroll vertically.
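The displacement-direction check can be sketched as follows; the touch bookkeeping and names are illustrative, and the thresholds follow the rules in D13 to D15.

```javascript
// Sketch: classify a two-finger touchmove as horizontal scroll, vertical scroll or zoom
// from the per-finger displacement between the previous and current touch points.
function classifyTwoFingerGesture(prevTouches, currTouches) {
  const dx = currTouches.map((t, i) => t.clientX - prevTouches[i].clientX);
  const dy = currTouches.map((t, i) => t.clientY - prevTouches[i].clientY);

  const sameDirection = dx.every(d => d > 0) || dx.every(d => d < 0);
  if (sameDirection) {
    // Both fingers move the same way: scrolling; the dominant axis decides the direction.
    const horizontal = Math.max(...dx.map(Math.abs));
    const vertical = Math.max(...dy.map(Math.abs));
    return horizontal > vertical ? 'scroll-horizontal' : 'scroll-vertical';
  }
  // Fingers do not move in the same horizontal direction: treat as zooming.
  return 'zoom';
}
```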
In other embodiments of the present application, in a scenario supporting a multi-finger rotation event, the step S201 obtains an operation event for a page element, which may be implemented by the following steps E11 to E12, or E11 and E13:
e11, receiving the double-finger gesture operation executed for the page element through the touch module;
e12, if one finger moves around the other finger in the double-finger gesture operation and the arc length between the other finger and the horizontal axis becomes small, generating a clockwise rotation event for the page element;
e13, if one finger moves around the other finger in the two-finger gesture operation and the arc length between the other finger and the horizontal axis becomes large, a counterclockwise rotation event for the page element is generated.
Here, because PC screens are generally larger, toolbar buttons have a more obvious click area and the user can rotate the picture by clicking a rotate button; however, touch-screen devices and heavy touchpad users are more accustomed to gesture operations, for two reasons: first, touch-screen devices are generally small and buttons are inconvenient to click; second, gesture operations better match operating habits.
For example, referring to fig. 7, the rotation gesture is a two-finger gesture, and the coordinates of the two fingers are recorded on every event. The key to judging the rotation direction is to calculate the angles between the two fingers and the horizontal axis during the rotation: the rotation is counter-clockwise when the angle between the thumb and the horizontal axis increases or the angle between the index finger and the horizontal axis decreases, and clockwise when the angle between the thumb and the horizontal axis decreases or the angle between the index finger and the horizontal axis increases. The specific rotation angle is obtained by calculating the difference between the values before and after the rotation.
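A sketch of the direction and angle judgment follows; it compares the angle of the line joining the two touch points before and after the move, which is one way to realize the thumb/index-finger angle comparison described above (names are illustrative).

```javascript
// Sketch: rotation delta of a two-finger gesture, from the change in angle of the
// line between the two touch points relative to the horizontal axis.
function rotationDelta(prevTouches, currTouches) {
  const angle = (a, b) => Math.atan2(b.clientY - a.clientY, b.clientX - a.clientX);
  const before = angle(prevTouches[0], prevTouches[1]);
  const after = angle(currTouches[0], currTouches[1]);
  const deltaDeg = ((after - before) * 180) / Math.PI;
  // In screen coordinates (y axis pointing down) a positive delta is a clockwise
  // rotation and a negative delta is counter-clockwise.
  return deltaDeg;
}
```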
It should be noted that, in a multi-finger rotation scenario, in addition to the two-finger gesture operation, the application also supports three-finger gesture operation, and even four-finger and five-finger gesture operations, which are not specifically limited here; the determination of the rotation direction and the rotation angle is similar to that of the two-finger gesture operation. In this way, user gestures are processed in combination with js events, and gestures that are more comprehensive and better match user habits are supported.
In other embodiments of the present application, in a scene with a custom scaling, before performing deformation processing on a page element, the scaling can be customized by the following steps:
f11, obtaining the scaling range of the page element; the zooming scale range represents the fixed proportion of the size of the page element relative to the original size of the page element during zooming and zooming in or out each time;
f12, determining at least three reference points in the scaling range, wherein the at least three reference points comprise a zoom starting point, a zoom ending point and at least one zoom step point between the zoom starting point and the zoom ending point; the closer a zoom step point is to the zoom ending point, the larger the fixed proportion of each corresponding zoom-in or zoom-out.
In the embodiment of the present application, consider the process of enlarging a picture: if a fixed ratio, for example 10%, is used for each enlargement or reduction, then when the original size of the picture is large, many zoom steps are needed to enlarge the picture to its original size. The user can therefore define the enlargement and reduction curve, achieving, for example, the following effect: while the picture is below 40% of its original size, each step enlarges it by 10%; once it exceeds 40% of the original size, each step enlarges it by 30%; reduction works in the same way.
In the embodiment of the application, the custom scaling can be realized in combination with a Bezier curve (bezier), and the user can determine and configure a Bezier curve of the desired order according to requirements, so as to finely control the scaling ratio, provide a flexibly changing scaling ratio and improve the user's experience when zooming page elements.
The Bezier curve is also used in the animation attribute of CSS3, where the corresponding property is the Bezier curve function (cubic-bezier); it is mostly used to control animation tracks and the pace of element size changes, such as uniform motion or acceleration followed by deceleration. However, animation is not suitable for fine control of the scaling ratio: taking a picture enlarged from 10% to 20% as an example, animation can only adjust the speed of the enlargement over a certain period of time, enlarging at a constant speed or accelerating and then decelerating, which is mainly a visual effect and is not used to calculate the picture's scaling ratio. The application applies the Bezier principle to the calculation of the scaling ratio to realize a user-defined picture zoom curve.
Further, when the operation event includes a scaling event, the step S204 performs deformation processing on the page element based on the converted attribute parameters, including the following steps:
g11, obtaining a target proportion of the size of the page element relative to the original size of the page element during zooming;
g12, selecting a target reference point matched with the target proportion from at least three reference points;
and G13, zooming the page element based on the converted attribute parameters and the fixed proportion of the target reference point corresponding to each zooming-in or zooming-out.
In the embodiment of the present application, a Bezier curve of order two or higher is selected to further describe the zooming of the page element. Referring to fig. 8, the variable t ∈ [0, 1] in the formula indicates the position between P0 and P2; for example, t = 0.1 indicates the position 10% of the way along the curve (the red line in the figure). P0(x0, y0) and P2(x2, y2) represent the starting and ending points respectively, and P1(x1, y1) is the reference point for adjusting the curve shape. In practical use, if the scaling range is determined to be [10%, 500%], P0 can be set to (0, 10) and P2 to (0, 500); the middle point P1 is set according to the actual situation so that the scaling changes as expected; t can be stepped by 0.05 or 0.1 each time (freely configurable), and the change of the scaling ratio is then observed and adjusted appropriately.
Referring to fig. 8 and fig. 9, a second-order curve is determined by three points and a third-order curve by four points; the more reference points, the finer the control of the curve, and the order can be chosen according to the actual situation. In this way, the scaling ratio can be customized and flexibly adjusted.
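As an illustrative sketch (the concrete points P0, P1, P2 and the step size are example values, not prescribed by the text), a second-order Bezier curve can drive the zoom percentage so that steps grow larger as the scale approaches the end of the range:

```javascript
// Sketch: quadratic Bezier B(t) = (1-t)^2*P0 + 2(1-t)*t*P1 + t^2*P2; only the y
// component is used here, interpreted as the zoom percentage.
function bezierScale(t, p0, p1, p2) {
  const u = 1 - t;
  return u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y;
}

// Example values for a [10%, 500%] range; P1 is chosen so the curve rises slowly at
// first and faster near the end, giving small steps early and bigger steps later.
const P0 = { x: 0, y: 10 };   // zoom starting point
const P1 = { x: 0, y: 60 };   // reference point shaping the curve (illustrative)
const P2 = { x: 0, y: 500 };  // zoom ending point

// Stepping t by 0.05 or 0.1 per zoom action yields the next zoom percentage:
// bezierScale(0.1, P0, P1, P2) -> 23.9, bezierScale(0.5, P0, P1, P2) -> 157.5, ...
```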
In other embodiments of the present application, in a scenario of customizing an initial display scale of a page element, before obtaining an operation event for the page element in step S201, the initial display scale of the page element may also be customized through the following steps:
h11, obtaining the aspect ratio of the page elements;
h12, if the aspect ratio meets the condition of the custom aspect ratio, calculating the maximum display proportion of the page elements based on the custom aspect ratio and the size of the canvas of the page;
h13, displaying the page element on the page at the maximum display scale.
In some embodiments, the aspect ratio of the page element is characterized as sizeRatio, the original height of the picture as naturalHeight, the original width of the picture as naturalWidth, the width of the page canvas as width, the height of the page canvas as height, the maximum display scale as initialScale, the calculated width of the picture as sizeW, and the calculated height of the picture as sizeH, where sizeRatio = naturalHeight / naturalWidth. In the embodiment of the present application, the aspect ratio of the page element is also referred to as the height-to-width ratio.
If the aspect ratio meets the condition of the custom aspect ratio, the H12 calculates the maximum display scale of the page element based on the custom aspect ratio and the canvas size of the page, and can be implemented as follows:
First, in the case where sizeRatio is smaller than the custom aspect ratio, as example one, when naturalHeight > height, sizeW = (height - margin) / this.sizeRatio. Further, when sizeW > width, initialScale = width / naturalWidth; when sizeW < width, initialScale = sizeW / naturalWidth. Finally, the page element is displayed on the page at the calculated initialScale.
In the case where sizeRatio is smaller than the custom aspect ratio, as example two, when naturalWidth < width: if sizeH > height, initialScale = height / this.naturalHeight; when sizeH < height, initialScale = sizeH / this.naturalHeight. Finally, the page element is displayed on the page at the calculated initialScale.
In the second case, when sizeRatio is larger than the custom aspect ratio, initialScale = width × sizeRatio / naturalHeight.
In the related art, when a page element such as a picture is displayed, the picture is forcibly initialized to fit the screen size, and the user cannot decide how certain special pictures are displayed, such as vertical strip-shaped pictures like newspapers or some product plans; in some scenarios the user wants the initial display to fill the canvas at a larger size rather than appear as a thumbnail. The application optimizes the initial display of such special pictures, allows the user to override the default configuration, reserves an interface through which the user can define the initial display scale, and lets the user define the aspect ratio of strip-shaped pictures. Here, for the implementation of the above steps H11-H13, the default configuration is VerticalPic: { lengthWidthRatio: 3, initialShow: true }, that is, the default aspect-ratio threshold is 3 and a larger display size is adapted; the user can also override it by passing parameters when calling the init method for initial setting, for example VerticalPic: { lengthWidthRatio: 4, initialShow: true }. Of course, the default aspect ratio may also take other values, which the application does not specifically limit.
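The sketch below ties the cases above together; because the source formulas are partially garbled, the fitting logic is an approximation under the stated definitions (sizeRatio = naturalHeight / naturalWidth, default lengthWidthRatio 3), and the function and option names are illustrative.

```javascript
// Sketch: choose an initial display scale. Ordinary pictures are fitted inside the
// canvas; tall strip-shaped pictures (sizeRatio above the configured threshold) are
// fitted to the canvas width (width * sizeRatio / naturalHeight == width / naturalWidth),
// so they open large and the user scrolls vertically.
function initialScaleFor(naturalWidth, naturalHeight, width, height, opts = {}) {
  const { lengthWidthRatio = 3, margin = 0 } = opts; // default threshold, per VerticalPic config
  const sizeRatio = naturalHeight / naturalWidth;

  if (sizeRatio > lengthWidthRatio) {
    return (width * sizeRatio) / naturalHeight; // strip-shaped picture: fit the width
  }
  // Ordinary picture: approximate "fit inside the canvas" from the cases above.
  return Math.min(width / naturalWidth, (height - margin) / naturalHeight);
}
```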
Continuing with the exemplary structure in which the page element processing device 154 provided by the embodiment of the present application is implemented as software modules, in some embodiments, as shown in fig. 1, the software modules stored in the page element processing device 154 of the memory 150 may form a page element processing device in the terminal 100, including:
an obtaining module 1541, configured to obtain an operation event for a page element;
the processing module 1542 is configured to calculate, based on the operation coordinates corresponding to the operation event, the attribute parameters produced when the operation event processes the page element; perform Cascading Style Sheets Level 3 (CSS3) conversion processing on the attribute parameters based on the event type of the operation event to obtain converted attribute parameters; and perform deformation processing on the page element based on the converted attribute parameters.
In some embodiments, the operation event includes a drag event, and the processing module 1542 is further configured to obtain a first coordinate of the operation object of the page element before dragging on the page and a second coordinate of the operation object after dragging on the page;
substituting the first coordinate, the second coordinate and the scaling into the following calculation formula, and calculating a first offset generated by processing the page element by the drag event: the first offset is characterized as (xOffset1, yOffset1),
xOffset1=parseInt(clientX-this.clickLeft)/this.scale;
yOffset1 = parseInt(clientY - this.clickTop)/this.scale; wherein the first coordinate is characterized as P1(this.clickLeft, this.clickTop) and the second coordinate is characterized as P2(clientX, clientY).
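A minimal sketch of the drag-offset formula above, assuming clickLeft and clickTop were recorded when the drag started and scale is the current zoom factor; Math.trunc stands in for parseInt:

```typescript
// First offset produced by a drag, divided by the current scale so that the
// element follows the pointer 1:1 even when the canvas is zoomed.
function dragOffset(
  clientX: number, clientY: number,      // second coordinate P2 (current pointer)
  clickLeft: number, clickTop: number,   // first coordinate P1 (pointer at drag start)
  scale: number,                         // current scaling of the canvas
): { xOffset1: number; yOffset1: number } {
  return {
    xOffset1: Math.trunc(clientX - clickLeft) / scale, // parseInt(...) in the formula
    yOffset1: Math.trunc(clientY - clickTop) / scale,
  };
}
```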
In some embodiments, the processing module 1542 is further configured to obtain a second offset of the page element before dragging;
obtaining a first rotation angle of the page element during dragging;
dividing the first rotation angle by a preset angle and taking the remainder to obtain a second rotation angle;
and performing three-level conversion processing on the first offset based on the second offset and the mapping relation between the offset and the rotating angle to obtain a converted third offset.
In some embodiments, the preset angle is 360 degrees, the second offset is characterized as (xOffset2, yOffset2), the first offset is characterized as (xOffset1, yOffset1), the third offset is characterized as (xOffset3, yOffset3), and the second rotation angle is characterized as rotateDeg; the processing module 1542 is further configured to, when rotateDeg is 0: xOffset3 = xOffset2 + xOffset1, yOffset3 = yOffset2 + yOffset1;
when rotateDeg is -90 or 270: xOffset3 = yOffset2 + xOffset1, yOffset3 = xOffset2 - yOffset1;
when rotateDeg is -180 or 180: xOffset3 = xOffset2 - xOffset1, yOffset3 = yOffset2 - yOffset1;
when rotateDeg is 90: xOffset3 = yOffset2 - xOffset1, yOffset3 = xOffset2 + yOffset1.
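The mapping above can be sketched as follows; the sign conventions are copied from the text as stated, and the handling of angles other than multiples of 90 degrees is an assumption:

```typescript
// Combine the pre-drag offset (xOffset2, yOffset2) with the drag offset
// (xOffset1, yOffset1) according to the element's rotation, as mapped above.
function convertOffset(
  rotateDeg: number,
  xOffset1: number, yOffset1: number,
  xOffset2: number, yOffset2: number,
): { xOffset3: number; yOffset3: number } {
  switch (rotateDeg) {
    case 0:
      return { xOffset3: xOffset2 + xOffset1, yOffset3: yOffset2 + yOffset1 };
    case -90:
    case 270:
      return { xOffset3: yOffset2 + xOffset1, yOffset3: xOffset2 - yOffset1 };
    case -180:
    case 180:
      return { xOffset3: xOffset2 - xOffset1, yOffset3: yOffset2 - yOffset1 };
    case 90:
      return { xOffset3: yOffset2 - xOffset1, yOffset3: xOffset2 + yOffset1 };
    default:
      // Angles other than multiples of 90 degrees are not covered by the mapping above.
      return { xOffset3: xOffset2 + xOffset1, yOffset3: yOffset2 + yOffset1 };
  }
}
```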
In some embodiments, the operation event comprises a zoom event, the processing module 1542 further configured to obtain a first zoom ratio of the pre-zoom page element, a second zoom ratio of the page element at the time of zooming, and a width and a height of the pre-zoom page element;
obtaining a fourth offset of the page element relative to a canvas of the page during zooming;
obtaining a third coordinate of an operation object of the page element before zooming on the page;
and calculating a fifth offset for scaling the page element with the third coordinate as the center based on the width, the height, the fourth offset, the first scaling ratio and the second scaling ratio.
In some embodiments, the fifth offset is characterized as (xOffset5, yOffset5), and the processing module 1542 is further configured to calculate xOffset5 = ((v - this.referScale) × (naturalWidth × k - this.scaleX) + this.referX × this.referScale)/v; wherein the first scaling is characterized as this.referScale, the second scaling is characterized as v, the fourth offset is characterized as (this.scaleX, this.scaleY), the width is characterized as naturalWidth, the third coordinate is characterized as (this.referX, this.referY), and k is a positive number;
yOffset5 = ((v - this.referScale) × (naturalHeight × k - this.scaleY) + this.referY × this.referScale)/v; wherein the height is characterized as naturalHeight.
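A sketch of the point-centred zoom offset above; referScale, referX, referY, scaleX, scaleY and k are names reconstructed from the formulas, not a confirmed API:

```typescript
// Fifth offset: keeps the third coordinate (referX, referY) fixed on screen
// while the element's scaling changes from referScale to v.
function zoomOffset(
  referScale: number, v: number,                // first and second scaling
  scaleX: number, scaleY: number,               // fourth offset (element vs. canvas)
  referX: number, referY: number,               // third coordinate (zoom centre)
  naturalWidth: number, naturalHeight: number,  // element size before zooming
  k: number,                                    // positive factor from the formula
): { xOffset5: number; yOffset5: number } {
  const xOffset5 =
    ((v - referScale) * (naturalWidth * k - scaleX) + referX * referScale) / v;
  const yOffset5 =
    ((v - referScale) * (naturalHeight * k - scaleY) + referY * referScale) / v;
  return { xOffset5, yOffset5 };
}
```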
In some embodiments, the processing module 1542 is further configured to operate the page element through one input module of the vertical and horizontal position indicator of the display system and the shortcut keys of the keyboard, and generate an operation event; or,
operate the page element through the two input modules, namely the vertical and horizontal position indicator of the display system and the shortcut keys of the keyboard, and generate an operation event;
wherein the operation event generated by operating the page element through one input module is different from the operation event generated by operating the page element through the two input modules; and the operation events include a zoom event and a scroll event.
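One common way in a browser to distinguish the one-input-module case from the two-input-module case is the wheel event's ctrlKey flag, which is also set by a trackpad pinch; this mapping is an illustrative assumption, not the patented scheme itself:

```typescript
// Map a wheel event to a zoom event (pointer plus keyboard modifier) or a
// scroll event (pointer alone). Requires a non-passive wheel listener so that
// preventDefault can stop the browser's own page zoom.
type OperationEvent =
  | { type: "zoom"; delta: number }
  | { type: "scroll"; dx: number; dy: number };

function classifyWheel(e: WheelEvent): OperationEvent {
  if (e.ctrlKey) {
    e.preventDefault();
    return { type: "zoom", delta: -e.deltaY };
  }
  return { type: "scroll", dx: e.deltaX, dy: e.deltaY };
}
```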
In some embodiments, the processing module 1542 is further configured to receive, by the touch module, a multi-finger scrolling operation performed on the page element;
acquiring horizontal displacement and vertical displacement of each finger on a canvas of a page before and after the multi-finger scrolling;
if the horizontal displacements corresponding to the multiple fingers increase or decrease simultaneously and the horizontal displacement is larger than the vertical displacement, generating a horizontal scrolling event for the page element;
if the horizontal displacements corresponding to the multiple fingers increase or decrease simultaneously and the horizontal displacement is smaller than the vertical displacement, generating a vertical scrolling event for the page element;
if the horizontal displacements corresponding to the multiple fingers do not increase simultaneously and/or do not decrease simultaneously, generating a zoom event for the page element.
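A minimal sketch of the classification above; the displacement bookkeeping for the two fingers is an assumption:

```typescript
// Classify a two-finger move as horizontal scroll, vertical scroll, or zoom,
// based on whether the fingers' horizontal displacements share a direction.
type Gesture = "scroll-horizontal" | "scroll-vertical" | "zoom";

function classifyTwoFingerMove(
  dx: [number, number],   // horizontal displacement of each finger
  dy: [number, number],   // vertical displacement of each finger
): Gesture {
  const sameDirection = dx[0] * dx[1] > 0;   // both increase or both decrease
  if (!sameDirection) return "zoom";         // fingers move towards/away from each other
  const horizontal = Math.abs(dx[0]) + Math.abs(dx[1]);
  const vertical = Math.abs(dy[0]) + Math.abs(dy[1]);
  return horizontal > vertical ? "scroll-horizontal" : "scroll-vertical";
}
```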
In some embodiments, the processing module 1542 is further configured to receive, by the touch module, a double-finger gesture operation performed on the page element;
if, in the double-finger gesture operation, one finger moves by taking the other finger as a center and the arc length between the moving finger and the horizontal axis decreases, generating a clockwise rotation event for the page element;
if, in the double-finger gesture operation, one finger moves by taking the other finger as a center and the arc length between the moving finger and the horizontal axis increases, generating a counterclockwise rotation event for the page element.
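The stated rule can be transcribed directly; treating the arc length as proportional to the angle from the horizontal axis (the radius is constant while one finger orbits the other) is the only assumption added:

```typescript
// Per the description: the moving finger orbits the stationary finger; if the
// arc length between the moving finger and the horizontal axis shrinks, report
// clockwise, if it grows, counterclockwise.
type Rotation = "clockwise" | "counterclockwise" | "none";

function rotationDirection(
  center: { x: number; y: number },   // stationary finger
  before: { x: number; y: number },   // moving finger, previous position
  after: { x: number; y: number },    // moving finger, current position
): Rotation {
  const angleToAxis = (p: { x: number; y: number }) =>
    Math.abs(Math.atan2(p.y - center.y, p.x - center.x)); // angle from horizontal axis
  const a0 = angleToAxis(before);
  const a1 = angleToAxis(after);
  if (a1 === a0) return "none";
  return a1 < a0 ? "clockwise" : "counterclockwise";
}
```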
In some embodiments, the operation event includes a zoom event, and the processing module 1542 is further configured to obtain a scaling range of the page element; wherein the scaling range represents a fixed proportion, relative to the original size of the page element, by which the size of the page element changes each time the page element is enlarged or reduced during scaling;
determining at least three reference points within the scaling range; wherein the at least three reference points comprise a zoom starting point, a zoom ending point and at least one zoom step point located between the zoom starting point and the zoom ending point; the closer the at least one zoom step point is to the zoom ending point, the larger the corresponding fixed proportion of each zooming-in or zooming-out is;
obtaining a target proportion of the size of the page element relative to the original size of the page element during zooming;
selecting a target reference point matched with the target proportion from at least three reference points;
and zooming the page elements based on the converted attribute parameters and the fixed proportion of the target reference point corresponding to each zooming-in or zooming-out.
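An illustrative sketch of the stepped zoom: reference points partition the scaling range, and step points closer to the zoom ending point use a larger fixed proportion per step; the concrete break points and increments below are assumptions:

```typescript
// Reference points over the scaling range: the start, an intermediate step
// point, and the end. Steps closer to the end use a larger fixed increment
// (relative to the element's original size). The numbers are illustrative only.
const referencePoints = [
  { upTo: 2, step: 0.1 },    // zoom starting point .. 200%: 10% per step
  { upTo: 5, step: 0.5 },    // 200% .. 500%: 50% per step
  { upTo: 10, step: 1.0 },   // 500% .. zoom ending point (1000%): 100% per step
];

// Pick the fixed proportion matching the current target proportion, then apply it.
function nextScale(targetProportion: number, zoomIn: boolean): number {
  const point =
    referencePoints.find(p => targetProportion <= p.upTo) ??
    referencePoints[referencePoints.length - 1];
  return targetProportion + (zoomIn ? point.step : -point.step);
}
```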
In some embodiments, the processing module 1542 is further configured to obtain an aspect ratio of the page element;
if the length-width ratio meets the condition of the user-defined length-width ratio, calculating the maximum display proportion of the page elements based on the user-defined length-width ratio and the size of the canvas of the page;
the page elements are displayed on the page at the maximum display scale.
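A brief sketch of the gating check; customRatio corresponds to the user-defined lengthWidthRatio, and treating the aspect ratio as height divided by width is an assumption:

```typescript
// Only long strip-shaped pictures (aspect ratio at least the user-defined
// ratio) get the maximum-display-scale treatment; others keep the default fit.
function shouldUseMaxDisplayScale(
  naturalWidth: number,
  naturalHeight: number,
  customRatio: number,   // user-defined lengthWidthRatio, e.g. 3
): boolean {
  const aspectRatio = naturalHeight / naturalWidth;  // height-to-width, assumed
  return aspectRatio >= customRatio;
}
```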
The processing device of the page element obtains an operation event for the page element; calculates, based on the operation coordinates corresponding to the operation event, the attribute parameters generated by the operation event processing the page element; performs three-level conversion processing on the attribute parameters by using a cascading style sheet based on the event type of the operation event to obtain converted attribute parameters, namely converts the attribute parameters based on a self-developed algorithm corresponding to the deformation attribute (transform) of CSS3; and finally performs deformation processing on the page element based on the converted attribute parameters, that is, applies the new attribute parameters to the transform attribute. It can be seen that the processing method of the page element provided by the application, through the self-developed algorithm based on CSS3, designs a plurality of attributes such as a displacement function (translate), a scaling function (scale) and a rotation function (rotate) at one time from a microscopic angle, is substantially different from a position-based scheme, and can use the CSS3 transform to perform rendering optimization with a Graphics Processing Unit (GPU), thereby avoiding stutter and frame dropping.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a storage medium having stored therein executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, the method as shown in fig. 2.
The computer-readable storage medium provided by the application obtains an operation event for a page element; calculates, based on the operation coordinates corresponding to the operation event, the attribute parameters generated by the operation event processing the page element; performs three-level conversion processing on the attribute parameters by using a cascading style sheet based on the event type of the operation event to obtain converted attribute parameters, namely converts the attribute parameters based on a self-developed algorithm corresponding to the deformation attribute (transform) of CSS3; and finally performs deformation processing on the page element based on the converted attribute parameters, that is, applies the new attribute parameters to the transform attribute. It can be seen that the processing method of the page element provided by the application, through the self-developed algorithm based on CSS3, designs a plurality of attributes such as a displacement function (translate), a scaling function (scale) and a rotation function (rotate) at one time from a microscopic angle, is substantially different from a position-based scheme, and can use the CSS3 transform to perform rendering optimization with a Graphics Processing Unit (GPU), thereby avoiding stutter and frame dropping.
In some embodiments, the storage medium may be a computer-readable storage medium, such as a Ferroelectric Random Access Memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read Only Memory (CD-ROM), and the like; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). By way of example, executable instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (13)

1. A method for processing page elements is characterized by comprising the following steps:
obtaining an operation event aiming at a page element;
calculating attribute parameters generated by processing the page elements by the operation events based on the operation coordinates corresponding to the operation events;
performing three-level conversion processing on the attribute parameters by using a cascading style sheet based on the event type of the operation event to obtain converted attribute parameters;
and carrying out deformation processing on the page elements based on the converted attribute parameters.
2. The method according to claim 1, wherein the operation event includes a drag event, and the calculating the attribute parameter generated by the operation event for processing the page element based on the operation coordinate corresponding to the operation event includes:
obtaining a first coordinate of an operation object of the page element on a page before dragging and a second coordinate of the operation object on the page after dragging;
substituting the first coordinate, the second coordinate and the scaling into the following calculation formula to calculate a first offset generated by the drag event processing the page element: the first offset is characterized as (xOffset1, yOffset1),
xOffset1=parseInt(clientX-this.clickLeft)/this.scale;
yOffset1 = parseInt(clientY - this.clickTop)/this.scale; wherein the first coordinate is characterized as P1(this.clickLeft, this.clickTop) and the second coordinate is characterized as P2(clientX, clientY).
3. The method according to claim 2, wherein the performing three-level conversion processing on the attribute parameter based on the event type of the operation event to obtain a converted attribute parameter comprises:
obtaining a second offset of the page element before dragging;
obtaining a first rotation angle of the page element during dragging;
dividing the first rotation angle by a preset angle and taking the remainder to obtain a second rotation angle;
and performing three-level conversion processing on the first offset based on the second offset and the mapping relation between the offset and the rotation angle to obtain a converted third offset.
4. The method according to claim 3, wherein the preset angle is 360 degrees, the second offset is characterized as (xOffset2, yOffset2), the first offset is characterized as (xOffset1, yOffset1), the third offset is characterized as (xOffset3, yOffset3), and the second rotation angle is characterized as rotateDeg, and the performing three-level conversion processing of a cascading style sheet on the first offset based on the second offset and the mapping relationship between the offset and the rotation angle to obtain a converted third offset comprises:
when rotateDeg is 0: xOffset3 = xOffset2 + xOffset1, yOffset3 = yOffset2 + yOffset1;
when rotateDeg is -90 or 270: xOffset3 = yOffset2 + xOffset1, yOffset3 = xOffset2 - yOffset1;
when rotateDeg is -180 or 180: xOffset3 = xOffset2 - xOffset1, yOffset3 = yOffset2 - yOffset1;
when rotateDeg is 90: xOffset3 = yOffset2 - xOffset1, yOffset3 = xOffset2 + yOffset1.
5. The method according to claim 1, wherein the operation event includes a zoom event, and the calculating, based on the operation coordinates corresponding to the operation event, the attribute parameters generated by the operation event for processing the page element includes:
obtaining a first scaling of the page element before scaling, a second scaling of the page element during scaling, and a width and a height of the page element before scaling;
obtaining a fourth offset of the page element relative to a canvas of the page when zooming;
obtaining a third coordinate of an operation object of the page element on the page before zooming;
calculating a fifth offset for scaling the page element centered on the third coordinate based on the width, the height, the fourth offset, the first scaling and the second scaling.
6. The method of claim 5, wherein calculating a fifth offset for scaling the page element centered on the third coordinate based on the width, the height, the fourth offset, the first scaling and the second scaling comprises:
the fifth offset is characterized as (xOffset5, yOffset5), wherein
xOffset5 = ((v - this.referScale) × (naturalWidth × k - this.scaleX) + this.referX × this.referScale)/v; wherein the first scaling is characterized as this.referScale, the second scaling is characterized as v, the fourth offset is characterized as (this.scaleX, this.scaleY), the width is characterized as naturalWidth, the third coordinate is characterized as (this.referX, this.referY), and k is a positive number;
yOffset5 = ((v - this.referScale) × (naturalHeight × k - this.scaleY) + this.referY × this.referScale)/v; wherein the height is characterized as naturalHeight.
7. The method of claim 1, wherein obtaining the operation event for the page element comprises:
operating the page element through one input module of a vertical and horizontal position indicator of a display system and a shortcut key of a keyboard to generate the operation event; or,
operating the page element through two input modules, namely the vertical and horizontal position indicator of the display system and the shortcut key of the keyboard, to generate the operation event;
wherein the operation event generated by operating the page element through the one input module is different from the operation event generated by operating the page element through the two input modules; and the operation events include a zoom event and a scroll event.
8. The method of claim 1, wherein obtaining the operation event for the page element comprises:
receiving a multi-finger rolling operation executed aiming at the page element through a touch module;
acquiring horizontal displacement and vertical displacement of each finger on a canvas of a page before and after the multi-finger scrolling;
if the horizontal displacements corresponding to the multiple fingers increase or decrease simultaneously and the horizontal displacement is larger than the vertical displacement, generating a horizontal scrolling event for the page element;
if the horizontal displacements corresponding to the multiple fingers increase or decrease simultaneously and the horizontal displacement is smaller than the vertical displacement, generating a vertical scrolling event for the page element;
generating a zoom event for the page element if the horizontal displacements corresponding to the multiple fingers do not increase simultaneously and/or do not decrease simultaneously.
9. The method of claim 1, wherein obtaining the operation event for the page element comprises:
receiving a double-finger gesture operation executed aiming at the page element through a touch module;
if, in the double-finger gesture operation, one finger moves by taking the other finger as a center and the arc length between the moving finger and the horizontal axis is reduced, generating a clockwise rotation event for the page element;
and if, in the double-finger gesture operation, the finger moves by taking the other finger as a center and the arc length between the moving finger and the horizontal axis is increased, generating a counterclockwise rotation event for the page element.
10. The method of claim 1, further comprising:
obtaining a scaling range of the page element; wherein the scaling range represents a fixed proportion, relative to the original size of the page element, by which the size of the page element changes each time the page element is enlarged or reduced during scaling;
determining at least three reference points within the scaling range; wherein the at least three reference points comprise a zoom start point, a zoom end point, and at least one zoom step point located between the zoom start point and the zoom end point; the closer the at least one zoom step point is to the zoom end point, the larger the corresponding fixed proportion of each zooming-in or zooming-out is;
correspondingly, the operation event includes a scaling event, and the performing deformation processing on the page element based on the converted attribute parameter includes:
obtaining a target proportion of the size of the page element relative to the original size of the page element during zooming;
selecting a target reference point matched with the target proportion from the at least three reference points;
and zooming the page element based on the converted attribute parameters and the fixed proportion of each zooming-in or zooming-out corresponding to the target reference point.
11. The method of any of claims 1-10, wherein prior to obtaining the operational event for the page element, the method further comprises:
obtaining the aspect ratio of the page element;
if the aspect ratio meets the condition of the user-defined aspect ratio, calculating the maximum display proportion of the page element based on the user-defined aspect ratio and the size of the canvas of the page;
and displaying the page elements on the page at the maximum display scale.
12. An apparatus for processing page elements, comprising:
a memory for storing executable instructions; a processor for implementing the method of any one of claims 1 to 11 when executing executable instructions stored in the memory.
13. A computer-readable storage medium having stored thereon executable instructions for causing a processor, when executing, to implement the method of any one of claims 1 to 11.
CN202111412083.1A 2021-11-25 2021-11-25 Page element processing method and device and computer readable storage medium Pending CN114115665A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111412083.1A CN114115665A (en) 2021-11-25 2021-11-25 Page element processing method and device and computer readable storage medium
PCT/CN2022/097864 WO2023092992A1 (en) 2021-11-25 2022-06-09 Page element processing method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111412083.1A CN114115665A (en) 2021-11-25 2021-11-25 Page element processing method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114115665A true CN114115665A (en) 2022-03-01

Family

ID=80373069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111412083.1A Pending CN114115665A (en) 2021-11-25 2021-11-25 Page element processing method and device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN114115665A (en)
WO (1) WO2023092992A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023092992A1 (en) * 2021-11-25 2023-06-01 深圳前海微众银行股份有限公司 Page element processing method and device, and computer-readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311708B (en) * 2023-09-18 2024-04-05 中教畅享科技股份有限公司 Dynamic modification method and device for resource display page in 3D scene of webpage end

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103959221A (en) * 2011-11-18 2014-07-30 索尼爱立信移动通讯有限公司 Method and apparatus for performing a zooming action
CN111651107A (en) * 2020-06-03 2020-09-11 山东中创软件商用中间件股份有限公司 3D model front-end display method, device, equipment and medium
CN112965645B (en) * 2021-03-15 2022-07-29 中国平安财产保险股份有限公司 Page dragging method and device, computer equipment and storage medium
CN114115665A (en) * 2021-11-25 2022-03-01 深圳前海微众银行股份有限公司 Page element processing method and device and computer readable storage medium


Also Published As

Publication number Publication date
WO2023092992A1 (en) 2023-06-01

Similar Documents

Publication Publication Date Title
US20220070380A1 (en) Digital viewfinder user interface for multiple cameras
US10296166B2 (en) Device, method, and graphical user interface for navigating and displaying content in context
US9804761B2 (en) Gesture-based touch screen magnification
US8930852B2 (en) Touch screen folder control
AU2007100826C4 (en) Multimedia communication device with touch screen responsive to gestures for controlling, manipulating, and editing of media files
US8823749B2 (en) User interface methods providing continuous zoom functionality
KR101720849B1 (en) Touch screen hover input handling
US11947791B2 (en) Devices, methods, and systems for manipulating user interfaces
EP3564807A1 (en) Flexible display device control method and apparatus
US20120169598A1 (en) Multi-Touch Integrated Desktop Environment
WO2023092992A1 (en) Page element processing method and device, and computer-readable storage medium
KR20150095540A (en) User terminal device and method for displaying thereof
US20120169622A1 (en) Multi-Touch Integrated Desktop Environment
EP2661671B1 (en) Multi-touch integrated desktop environment
CN111796746B (en) Volume adjusting method, volume adjusting device and electronic equipment
CN110286827B (en) Element scaling control method, device, equipment and storage medium
EP2791773B1 (en) Remote display area including input lenses each depicting a region of a graphical user interface
AU2011253700A1 (en) Gestures for controlling, manipulating, and editing of media files using touch sensitive devices
CN115617226A (en) Icon management method and device
JP2020507174A (en) How to navigate the panel of displayed content
KR20130115037A (en) Method, device, and computer-readable recording medium for realizing touch input using mouse
CN108205407B (en) Display device, display method, and storage medium
CN114756792A (en) Page display method and device, electronic equipment and readable storage medium
CN115357167A (en) Screen capturing method and device and electronic equipment
AU2017210607A1 (en) Apparatus and method for controlling motion-based user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination