CN118606586A - Page processing method, device and equipment - Google Patents

Page processing method, device and equipment

Info

Publication number
CN118606586A
CN118606586A (application number CN202410781000.3A)
Authority
CN
China
Prior art keywords
page
data
loading
loading item
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410781000.3A
Other languages
Chinese (zh)
Inventor
刘正保
宋竟轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202410781000.3A priority Critical patent/CN118606586A/en
Publication of CN118606586A publication Critical patent/CN118606586A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F 16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiments of this specification disclose a page processing method, apparatus, and device. The page processing scheme comprises the following steps: when a specified operation by a user is monitored, determining whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item; if so, intercepting the specified operation and obtaining basic data from preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and jumping to the target service page according to the obtained basic data.

Description

Page processing method, device and equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for processing a page.
Background
In existing schemes, when a user needs to jump from an initial page to a final page, the user first has to jump from the initial page through several intermediate pages; the user data needed to reach the final page is then obtained step by step through those intermediate pages, for example by collecting it from the client or receiving user data issued by the server, and only then is the jump to the final page triggered from the last intermediate page according to the obtained user data.
Disclosure of Invention
In view of this, the embodiments of this specification provide a page processing method, apparatus, and device that simplify the page processing procedure by improving the overall page processing architecture, shorten the overall time consumed by page jumps, and complete the processing without the user perceiving it.
The embodiments of this specification adopt the following technical solutions.
The embodiments of this specification provide a page processing method, comprising:
when a specified operation by a user is monitored, determining whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item;
if so, intercepting the specified operation and obtaining basic data from preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and
jumping to the target service page according to the obtained basic data.
The embodiments of this specification also provide a page processing apparatus, comprising:
a determining module that, when a specified operation by a user is monitored, determines whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item;
a data acquisition module that, when the determining module determines that the page loading item is an acceleration loading item, intercepts the specified operation and obtains basic data from preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and
a jump module that jumps to the target service page according to the obtained basic data.
The embodiments of this specification also provide an electronic device for page processing, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to:
when a specified operation by a user is monitored, determine whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item;
if so, intercept the specified operation and obtain basic data from preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and
jump to the target service page according to the obtained basic data.
The at least one technical solution adopted by the embodiments of this specification can achieve the following beneficial effects:
by improving the page processing architecture, namely by adding an acceleration module, data is obtained through the acceleration module instead of by having the running container load intermediate pages to obtain it. This reduces the number of page jumps and the running time of the container, keeps the overall page processing simple and fast, and allows the processing to complete without the user perceiving it.
Drawings
In order to more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings described below are only some of the embodiments described in this specification, and that a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a conventional page processing scheme in an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of a page processing scheme according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of a page processing method according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of the aspect (tangent plane) implementation structure of the acceleration module in a page processing method according to an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a page processing apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions in this specification better understood by those skilled in the art, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings in these embodiments. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
In existing page processing schemes, more than one jump is often needed to reach the final page required by the user: a first jump is made from the initial page to the first of several intermediate pages, the data needed to decide the jump to the final page is then collected in turn from those intermediate pages, and the last jump is finally made from the last intermediate page to the final page.
For example, as shown in Fig. 1, a user wants to jump by clicking from an initial page in an App (application program) to a final target service page. Suppose the initial page provides a "health code" viewing function item and the final target service page is the health code generation page. The user first clicks the "health code" option in the initial page (identified as a small block in the figure); after receiving the click, the terminal makes a first jump to an intermediate page and obtains there the target city corresponding to the health code; once the target city is obtained, it makes a second jump, according to the city data, to the health code generation page corresponding to that city.
The user therefore needs more than one jump to get from the initial page to the final page, and the data needed for the jump to the final page has to be collected through several intermediate pages. The running container takes longer to complete multiple jumps, collecting data in the intermediate pages takes additional time, and the overall process from the user's operation to the final page is complex and slow, giving a poor user experience.
In the embodiments of this specification, the page may be a page running in the App itself, a page of an applet or an H5 page running inside the App (or the running page of the applet), or a page of an applet or an H5 page opened by jumping out of the App. An applet here may be a program that runs and displays its pages inside an App page, or a program that runs and displays independently after jumping out of the App; it can be used without downloading and installation and can provide dedicated services such as a health code, a citizen center, life payments, or off-App jumps.
Based on this, the embodiments of the present specification provide a new page processing scheme.
Fig. 2 is a schematic structural diagram of a page processing scheme according to an embodiment of the present disclosure.
As shown in the figure, the overall page processing architecture is improved by adding an acceleration module: the acceleration module is used to obtain the data required for the jump, replacing the original approach in which the running container loads intermediate pages to collect that data.
Specifically, when the user clicks a page loading item in the initial page, that is, the target service item corresponding to the target service page (identified as a small block in the figure), a jump to the final target service page (such as a target applet) is triggered. The terminal intelligence in the acceleration module can quickly intercept the triggered jump, judge whether the target item is a preset accelerated service item, and obtain, inside the acceleration module, the service data required to jump to the target service page corresponding to the target item. That service data has already been preloaded into the acceleration module by the terminal intelligence, so the jump to the target service page can then be made directly, in a single step, without passing through any intermediate page.
In addition, under a preset trigger condition the terminal intelligence can trigger preloading and cache the service data in the acceleration module to form the preloaded service data; after the jump, the target service page can also read the cached data from the acceleration module by calling a data acquisition interface provided by the acceleration module.
Therefore, by improving the page processing architecture and building a new, general page processing scheme around the acceleration module, the intermediate pages that would otherwise appear during the jump can be removed, the number of jumps and the time spent collecting data are reduced, and page loading is accelerated.
In the embodiments of this specification, terminal intelligence refers to artificial intelligence on the terminal side and is not specifically limited here.
The terminal in the embodiments of this specification may be a smart terminal such as a mobile phone, a camera, a sensor, or a robot, which is not specifically limited here.
With terminal intelligence, machine learning applications can be deployed directly on the terminal side, so that inference, training, and other intelligent processing can be performed without leaving the terminal. Compared with cloud intelligence this has a natural privacy advantage, and the processing can run without the user perceiving it.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 3 is a flowchart of a page processing method according to an embodiment of the present disclosure.
As shown in fig. 3, a page processing method provided in the embodiment of the present disclosure includes:
Step S102: determine whether a specified operation by the user is monitored; if so, execute step S104.
The specified operation is an operation on a page loading item that triggers a jump to the service page corresponding to that item; the page loading item is a service option provided in the initial page for loading a service page; and the service page is a page that appears during the service.
For example, the user wants to view his own health code through the health code function provided by the App.
In that case the user first runs an App that provides the health code viewing function. After the App starts, the corresponding page provides a service option for querying the health code; after the user clicks the "health code" option, the App loads and runs the health code generation page and displays the generated health code to the user.
Thus, the specified operation may be the click on the "health code" option, the page loading item may be the "health code" service option in the App page, and the service page may be the health code generation page.
It should be noted that during health code generation a page for obtaining the city the user is in may appear; that city-acquisition page can be regarded as an intermediate page of the service, and the health code generation page as the target service page that provides the health code service.
It should also be noted that if the specified operation is not monitored, the page processing may remain in the monitoring state or perform other processing, which is not specifically limited here.
Step S104: determine whether the page loading item triggered by the specified operation is an acceleration loading item; if so, execute step S106.
An acceleration loading item is a loading item for which, once the specified operation is performed on it, the page containing the loading item can jump directly to the target service page corresponding to that item; in other words, when an acceleration loading item is triggered, the jump goes straight to the corresponding target service page without passing through any intermediate page.
In a specific implementation, whether a page loading item in an App page is an acceleration loading item can be preset according to the actual application.
For example, take the page loading item to be the aforementioned "health code" service option.
Generally, on first use it may be difficult to determine the city of the health code from the existing user data alone. In that case the "health code" item (i.e. the page loading item) can be set as a non-acceleration loading item, and before the jump to the health code generation page (i.e. the target page) the original business flow is followed: the terminal first jumps to the intermediate page to obtain the target city of the health code the user wants to view.
In subsequent use, however, the city associated with the user is known; for example, the city of the health code the user wants to view can be determined accurately from existing user data such as positioning information from other services or LBS requests. The "health code" item can then be preset as an acceleration loading item, and the original business flow is no longer used before the jump to the health code generation page; that is, no intermediate page is visited to collect the city data. Instead, the page processing scheme provided by the embodiments of this specification obtains the data and performs the jump, going directly to the health code generation page.
It should be noted that when the page loading item is determined not to be an acceleration loading item, the jump to the target service page can still follow the original business flow; for example, if the original flow first jumps to several intermediate pages, those intermediate pages are visited first. This is not specifically limited here.
In a specific implementation, whether the page loading item is an acceleration loading item can be determined by terminal intelligence on the user side or by a server side (such as the cloud), which is not specifically limited here.
Step S106: intercept the specified operation and obtain basic data from the preloaded service data.
In a specific implementation, the specified operation can trigger the terminal intelligence, which quickly intercepts the page jump triggered by that operation and so prevents the operation from causing the running container to jump to an intermediate page to obtain data.
Once the interception is triggered, the decision data for the jump to the target service page (i.e. the basic data) can be obtained directly from the preloaded service data.
The preloaded service data is service data that has been loaded in advance; it contains at least the basic data, which is the service data required to jump from the current page to the target service page.
For example, when the App provides several service items, the basic data required by those items should be included in the preloaded service data; other service data needed for processing within the target service page can also be included, so that the target service page can read it directly from the preloaded data.
For example, when the App provides the health code viewing function, the basic data may include user information (such as identity information), the health code color, the code value, and the city the user is in; the preloaded service data may include this basic data and may also include other service data.
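As a non-limiting illustration only (not part of the disclosed embodiments), the following TypeScript sketch shows one possible shape for the preloaded service data in the health-code example; the interface and field names are assumptions introduced for this sketch.

```typescript
// Assumed shape of the preloaded service data for the health-code example; names are illustrative.
interface HealthCodeBaseData {
  userId: string;                          // user identity information
  codeColor: 'green' | 'yellow' | 'red';   // health code color
  codeValue: string;                       // the code value to render
  city: string;                            // the city the health code belongs to
}

interface PreloadedServiceData {
  base: HealthCodeBaseData;                // basic data required for the jump itself
  extra?: Record<string, unknown>;         // other service data the target page may use
}
```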
Step S108: jump to the target service page according to the obtained basic data.
Because the basic data is obtained from the preloaded service data rather than collected from several intermediate pages, the current page can jump directly to the target service page according to the obtained basic data, without passing through any intermediate page.
In steps S102 to S108, the specified operation on a page loading item is monitored; when it is monitored and the corresponding page loading item is determined to be an acceleration loading item, the basic data necessary for the jump to the target service page is obtained from the preloaded service data, and the target service page is then reached directly according to that data, without the running container jumping to any intermediate page.
Therefore, with the new page processing scheme formed by steps S102 to S108, the running container no longer needs to jump to intermediate pages to obtain data, and the target service page is reached with a single jump. This shortens the time spent obtaining data, reduces the number and duration of container jumps, effectively shortens the overall page processing time, and improves the user experience.
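As a hedged, non-limiting sketch of how steps S102 to S108 might look on the client, the TypeScript below strings the decision, the interception, the cache lookup, and the jump together; AccelerationModule, isAccelerationItem, jumpTo, and fallbackFlow are names assumed for this sketch rather than APIs disclosed in this specification.

```typescript
// Hedged sketch of the S102 to S108 flow; every name here is illustrative, not a disclosed API.
interface BaseData {
  targetUrl: string;                 // URL of the target service page
  payload: Record<string, unknown>;  // basic data needed to render the target page
}

class AccelerationModule {
  // Preloaded service data, cached in advance by the terminal-intelligence trigger.
  private preloadCache = new Map<string, BaseData>();

  constructor(
    private isAccelerationItem: (itemId: string) => boolean,
    private jumpTo: (url: string, data: BaseData) => void,
    private fallbackFlow: (itemId: string) => void,
  ) {}

  // Called by the preloading logic to populate the cache.
  setPreloadedData(itemId: string, data: BaseData): void {
    this.preloadCache.set(itemId, data);
  }

  // S102/S104: invoked when the specified operation (e.g. a click) on a page loading item is monitored.
  onSpecifiedOperation(itemId: string): void {
    if (!this.isAccelerationItem(itemId)) {
      // Not an acceleration loading item: follow the original flow through intermediate pages.
      this.fallbackFlow(itemId);
      return;
    }
    // S106: intercept the operation and read the basic data from the preloaded service data.
    const baseData = this.preloadCache.get(itemId);
    if (!baseData) {
      this.fallbackFlow(itemId); // cache miss: degrade gracefully to the original flow
      return;
    }
    // S108: jump directly to the target service page, skipping all intermediate pages.
    this.jumpTo(baseData.targetUrl, baseData);
  }
}
```

On a cache miss the sketch simply falls back to the original flow, which mirrors the behaviour described above for non-acceleration loading items.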
In some embodiments, a monitoring item may be set for a page loading item, and the monitoring item is used to determine whether the specified operation is monitored.
For example, each page loading item in the App can be monitored for being triggered, e.g. clicked or selected; if one is triggered, it can be determined that the specified operation is monitored.
In some embodiments, a buried point (tracking point) for monitoring may be set on the page loading item, through which it can be quickly determined whether the specified operation is monitored.
It should be noted that buried points may be set according to the application scenario, which is not limited here.
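Purely for illustration, a buried point for monitoring could be attached roughly as in the following TypeScript sketch; the element handle, the item identifier, and the callback are assumptions, not a disclosed API.

```typescript
// Illustrative buried (tracking) point on a page loading item; all names are assumptions.
function attachBuriedPoint(
  element: HTMLElement,
  itemId: string,
  onSpecifiedOperation: (itemId: string) => void,
): void {
  element.addEventListener('click', (event) => {
    // Stop the default navigation so the acceleration module can decide whether to intercept the jump.
    event.preventDefault();
    onSpecifiedOperation(itemId);
  });
}
```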
In some embodiments, each page loading item in the App can carry a data attribute indicating whether it is an acceleration loading item; by checking this data attribute it can be quickly identified whether the item is a page that needs accelerated loading.
For example, for the aforementioned "health code" service option a flag bit can be used as the data attribute: the flag is set to "0" on first use, meaning the item is a non-acceleration loading item, and is set to "1" once the city the user is in can be accurately identified from the existing user data, meaning the item is an acceleration loading item.
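A minimal sketch of such a flag-bit data attribute might look as follows in TypeScript; the attribute name data-accelerated is an assumption for this sketch.

```typescript
// Minimal sketch of the flag-bit data attribute; the attribute name 'data-accelerated' is assumed.
function isAccelerationItemByFlag(element: HTMLElement): boolean {
  // '0' = non-acceleration loading item (e.g. first use, target city unknown);
  // '1' = acceleration loading item (target city can be inferred from existing user data).
  return element.dataset.accelerated === '1';
}
```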
In some embodiments, whether a page loading item is an acceleration loading item may be identified using an identification feature of the user, which may be a feature of how the user uses the service corresponding to the page loading item.
For example, after the user is monitored clicking the "health code" service option, it may be determined from the geographic locations recorded at login that the user has not left the city recently (e.g. within 14 days); that geographic location can then serve as the identification feature.
Terminal intelligence can therefore be used to recognize the identification feature, quickly identify, after the specified operation is monitored, whether the page loading item is an acceleration loading item, and do so without the user perceiving it.
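For illustration only, the recent-location check described above could be sketched as follows; the 14-day window mirrors the example in the text, while the record shape and function name are assumptions.

```typescript
// Illustrative identification-feature check; data shape and function name are assumptions.
interface LocationRecord {
  city: string;
  timestamp: number; // milliseconds since epoch
}

function canAccelerateByLocationHistory(history: LocationRecord[], now: number): boolean {
  const fourteenDays = 14 * 24 * 60 * 60 * 1000;
  const recent = history.filter((r) => now - r.timestamp <= fourteenDays);
  // Accelerate only when every recent record is in the same city, so the target city is unambiguous.
  return recent.length > 0 && recent.every((r) => r.city === recent[0].city);
}
```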
In some embodiments, terminal intelligence on the client may be used to predict the trigger timing for loading the preloaded service data, for example for obtaining the client's LBS (Location Based Services) data, the server-side URL (Uniform Resource Locator) of the target page, and other service data.
In a specific implementation, terminal intelligence can recognize user behavior features and finish loading the service data in advance under a preset trigger condition, so that after entering the target service page (such as a target applet) no new request needs to be initiated: the required data can be read from the preloaded service data, which reduces the overall processing time.
In a specific implementation, the preset trigger condition can be set according to the actual application scenario.
For example, if other service data is needed for service processing after the jump to the target service page, the preset trigger condition may be set to the determination that the page loading item is an acceleration loading item.
For another example, when the page loading item is a frequently used service option, the preset trigger condition may be set to App start-up, so that loading happens before the service option is even clicked.
Preloading the data through terminal intelligence to form cached data means that the data needed for the jump decision can be obtained from the cache without visiting an intermediate page, which reduces the time spent obtaining data.
In some embodiments, the preloaded service data may be obtained from the client and/or the server that provides the service corresponding to the page loading item; after it is obtained, it is loaded into a cache to form the cached data.
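A possible preloading sketch, assuming hypothetical helpers getClientLbs and fetchTargetPageUrl for the client LBS data and the server-side target page URL, is shown below; neither helper is part of the disclosure.

```typescript
// Illustrative preloading sketch: under a preset trigger (e.g. App start or the acceleration
// decision), fetch LBS data from the client and the target page URL from the server, then cache them.
// getClientLbs and fetchTargetPageUrl are hypothetical helpers, not disclosed APIs.
declare function getClientLbs(): Promise<{ city: string }>;
declare function fetchTargetPageUrl(itemId: string): Promise<string>;

async function preloadServiceData(
  itemId: string,
  cache: Map<string, { city: string; targetUrl: string }>,
): Promise<void> {
  const lbs = await getClientLbs();                    // client-side location-based service data
  const targetUrl = await fetchTargetPageUrl(itemId);  // server-side URL of the target service page
  cache.set(itemId, { city: lbs.city, targetUrl });    // cached so the jump needs no new request
}
```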
In some embodiments, a data interface may be provided for the target service page, so that after the jump the target service page does not need to initiate a new request: the service on the target service page obtains the service data it needs through the data acquisition interface, which reduces the time spent obtaining data.
In a specific implementation, a JSAPI interface can be provided to the target service page as the data acquisition interface, so that applets, H5 pages, and the like can conveniently read the cached data, returned result data, and so on through it.
It should be noted that a JSAPI interface is a client bridge mechanism that allows front-end pages such as applets and H5 pages to call corresponding Native capabilities (such as payment, photographing, sharing, or popping up a floating layer) directly through specific JS methods; it is not described further here.
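The data acquisition interface could, for illustration, be registered on a client bridge roughly as follows; the bridge object and the method name getPreloadedData are assumptions, and the concrete JSAPI names of any real client are not specified in this document.

```typescript
// Hedged sketch of a data acquisition interface exposed to the target page (applet / H5).
// The bridge interface and the method name 'getPreloadedData' are assumptions for this sketch.
interface ClientBridge {
  register(name: string, handler: (itemId: string) => unknown): void;
}

function registerDataApi(bridge: ClientBridge, cache: Map<string, unknown>): void {
  bridge.register('getPreloadedData', (itemId: string) => {
    // The target service page calls this instead of issuing a new network request.
    return cache.get(itemId) ?? null;
  });
}
```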
In some embodiments, after the service goes online, its online operation can be managed and controlled and online processing performed, which improves the user experience.
In a specific implementation, the improved business flow can be subject to page-granularity control, gray (canary) release, rollback operations, and the like, which are not expanded on here.
In some embodiments, each function of the acceleration module in the foregoing embodiments may be implemented as an aspect (tangent plane), for example monitoring the specified operation, determining by terminal intelligence whether the page loading item is an acceleration loading item, intercepting the page jump corresponding to the specified operation, obtaining the basic data, preloading the service data, deciding and performing the jump according to the basic data, and so on.
In a specific implementation, the code corresponding to each function of the acceleration module can be extracted and packaged into an independent functional unit, which reduces the amount of rework on the business scheme and improves the stability, extensibility, and execution efficiency of the improved scheme.
For example, when the monitoring operation is made an aspect, the code corresponding to the monitoring function can be packaged into an independent functional unit that is triggered and executed by the specified operation, as in the sketch below.
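As a minimal sketch of packaging an intercepted jump as an independent functional unit, the original jump function could be wrapped as follows; all names are illustrative assumptions.

```typescript
// Minimal aspect-style (tangent plane) sketch: wrap the original jump so the interception logic
// lives in an independent functional unit. All names are illustrative assumptions.
type JumpFn = (itemId: string) => void;

function withAcceleration(
  originalJump: JumpFn,
  isAccelerationItem: (itemId: string) => boolean,
  acceleratedJump: JumpFn,
): JumpFn {
  return (itemId: string) => {
    if (isAccelerationItem(itemId)) {
      acceleratedJump(itemId);  // intercept: jump directly using the preloaded basic data
    } else {
      originalJump(itemId);     // fall through to the original business flow
    }
  };
}
```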
Fig. 4 is a schematic structural diagram of the acceleration module in which each function is implemented as an aspect (tangent plane).
As shown in the figure, the acceleration module added by the improvement can be divided into three parts: a pre part, a core part, and a post part.
The pre part performs the pre-processing of page processing, such as monitoring, preloading, data caching, identification of acceleration loading items, interception, and online emergency management, so the pre part may include one or more of the following processing functions: monitoring the specified operation, intercepting the jump, feature identification, management and control strategies, and so on.
In a specific implementation, the specified operation can directly trigger the pre part, which then completes the corresponding pre-processing.
The core part carries out the main page processing flow, such as data acquisition, jump decision, setting buried points, and terminal-intelligence learning, and thus replaces the core functions of the original business flow in which the running container obtains data from intermediate pages and decides the jump. The core part may therefore include one or more of the following processing functions: obtaining client data (such as LBS data), obtaining server data (such as the target page URL), caching data, buried points, jumping, terminal intelligence, and so on.
The post part interfaces with the target service page, for example by providing the data interface so that the target service page can obtain data and return service results, so the post part may include one or more of the following processing functions: reading cached data, returning service results, and so on.
By adopting the aspect (tangent plane) approach, the impact of the improvement on the original business scheme can be reduced, for example in terms of the rework required and the extensibility and stability of the improved scheme.
It should be noted that each function of the acceleration module can be made an aspect according to actual application requirements, and other functions used in practice can also be added to the corresponding aspects; this is not described further here. A structural sketch of the three parts is given below.
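Purely as an illustration of the three-part division described above, the parts could be modelled as the following TypeScript interfaces; the part boundaries and method names are assumptions for this sketch.

```typescript
// Illustrative modelling of the pre / core / post parts; boundaries and method names are assumptions.
interface PrePart {
  monitor(itemId: string): void;            // monitor the specified operation (e.g. via buried points)
  shouldIntercept(itemId: string): boolean; // feature identification and interception decision
}

interface CorePart {
  preload(itemId: string): Promise<void>;   // obtain client/server data and cache it
  decideAndJump(itemId: string): void;      // decide the jump from the basic data and perform it
}

interface PostPart {
  readCache(itemId: string): unknown;       // data interface the target service page reads from
  returnResult(result: unknown): void;      // hand the service result back
}

interface AccelerationAspect {
  pre: PrePart;
  core: CorePart;
  post: PostPart;
}
```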
Based on the same inventive concept, the embodiments of the present specification also provide an apparatus, an electronic device, and a non-volatile computer storage medium for page processing.
Fig. 5 is a schematic structural diagram of a page processing apparatus according to an embodiment of the present disclosure.
As shown in Fig. 5, the page processing apparatus 10 includes: a determining module 11 that, when a specified operation by a user is monitored, determines whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item; a data acquisition module 12 that, when the determining module 11 determines that the page loading item is an acceleration loading item, intercepts the specified operation and obtains basic data from the preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and a jump module 13 that jumps to the target service page according to the obtained basic data.
Optionally, monitoring the specified operation by the user includes:
determining, through a buried point, whether the specified operation by the user is monitored, where the buried point is a buried point set on the page loading item for the specified operation that triggers it.
Optionally, determining whether the page loading item triggered by the specified operation is an acceleration loading item includes:
identifying a user feature of the page loading item triggered by the specified operation, where the user feature represents historical data of the user using the service corresponding to the page loading item; and
determining, according to the identification result, whether the page loading item is an acceleration loading item.
Optionally, the page processing apparatus 10 further includes: a prediction module 14 that uses terminal intelligence to predict the trigger timing for loading the preloaded service data, and a loading module 15 that loads the preloaded service data when that trigger timing arrives.
Optionally, loading the preloaded service data includes:
obtaining the preloaded service data from the client and/or the server that provides the service corresponding to the page loading item; and
after the preloaded service data is obtained, loading it into a cache.
Optionally, intercepting the specified operation includes: intercepting the specified operation in an aspect (tangent plane) manner.
Optionally, the page processing apparatus 10 further includes: a data interface module 16 that provides a data interface to the target service page.
Optionally, the page processing apparatus 10 of any of the foregoing embodiments further includes: an online management and control module 17 that performs at least one of the following controls on the page processing after it goes online: page-granularity control, gray release, and rollback operation.
The embodiments of this specification also provide an electronic device for page processing, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to:
when a specified operation by a user is monitored, determine whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item;
if so, intercept the specified operation and obtain basic data from the preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and
jump to the target service page according to the obtained basic data.
The embodiments of this specification also provide a non-volatile computer storage medium for page processing, storing computer-executable instructions configured to:
when a specified operation by a user is monitored, determine whether the page loading item triggered by the specified operation is an acceleration loading item, where the specified operation is an operation on the page loading item that triggers a jump to the page corresponding to that item;
if so, intercept the specified operation and obtain basic data from the preloaded service data, where the basic data is the service data required to jump to the target service page corresponding to the page loading item; and
jump to the target service page according to the obtained basic data.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment focuses on differences from other embodiments. In particular, for the system, apparatus, device, non-volatile computer storage medium embodiments, since they correspond to the methods, the description is simpler, and the relevant points are found in the partial description of the method embodiments.
The systems, apparatuses, devices, and non-volatile computer storage media provided in the embodiments of the present disclosure correspond to the methods, and they also have similar beneficial technical effects as those of the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the corresponding systems, apparatuses, devices, and non-volatile computer storage media will not be described in detail herein.
In the 1990s, improvements to a technology could clearly be distinguished as improvements in hardware (for example, improvements to circuit structures such as diodes, transistors, and switches) or improvements in software (improvements to the method flow). With the development of technology, however, many improvements of method flows can now be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (such as a field programmable gate array, FPGA) is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of the controller include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for performing various functions can also be regarded as structures within the hardware component. Or the means for implementing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (18)

1. A page processing method, comprising:
when a specified operation by a user is monitored, determining whether the page loading item triggered by the specified operation is an acceleration loading item;
if the page loading item triggered by the specified operation is an acceleration loading item, intercepting the specified operation;
obtaining basic data from preloaded service data, wherein the basic data comprises the service data required to jump to the target service page corresponding to the page loading item; and
jumping to the target service page according to the obtained basic data.
2. The method of claim 1, wherein monitoring a specified operation by a user comprises:
determining, through a buried point, whether the specified operation by the user is monitored, wherein the buried point is a buried point set on the page loading item for the specified operation that triggers it.
3. The method of claim 1, wherein determining whether the page loading item triggered by the specified operation is an acceleration loading item comprises:
identifying a user feature of the page loading item triggered by the specified operation, wherein the user feature represents historical data of the user using the service corresponding to the page loading item and comprises how the user uses that service; and
determining, according to the identification result, whether the page loading item is an acceleration loading item.
4. The method of claim 1, wherein determining whether the page loading item triggered by the specified operation is an acceleration loading item comprises:
identifying a data attribute of the page loading item triggered by the specified operation, wherein the data attribute indicates whether a flag bit identifying an acceleration loading item is set for the page loading item; and
determining, according to the identification result, whether the page loading item is an acceleration loading item.
5. The method of claim 1, further comprising:
predicting, by terminal intelligence, the trigger timing for loading the preloaded service data; and
loading the preloaded service data when the trigger timing arrives.
6. The method of claim 5, wherein loading the preloaded service data comprises:
obtaining the preloaded service data from a client and/or a server that provides the service corresponding to the page loading item; and
after the preloaded service data is obtained, loading it into a cache.
7. The method of claim 5, wherein the trigger timing comprises start-up of the application program of the target service page, or the determination that the page loading item is an acceleration loading item.
8. The method of claim 1, further comprising: providing a data interface for the target service page.
9. The method of claim 1, wherein intercepting the specified operation comprises: intercepting the specified operation in an aspect (tangent plane) manner.
10. The method of claim 1, further comprising:
if the page loading item triggered by the specified operation is not an acceleration loading item, jumping to the target service page according to the original business flow, wherein the original business flow is a flow that, after the specified operation is received, jumps to one or more intermediate pages before jumping to the target service page, and the intermediate pages are used to collect the basic data.
11. The method of claim 1, wherein the basic data comprises at least the user information necessary for generating the target service page and the city the user is in.
12. The method of any one of claims 1 to 11, further comprising: performing at least one of the following controls on the page processing after it goes online: page-granularity control, gray release, and rollback operation.
13. A page processing apparatus, comprising:
a determining module that, when a specified operation by a user is monitored, determines whether the page loading item triggered by the specified operation is an acceleration loading item;
a specified-operation interception module that intercepts the specified operation when the determining module determines that the page loading item is an acceleration loading item;
a data acquisition module that obtains basic data from preloaded service data, wherein the basic data comprises the service data required to jump to the target service page corresponding to the page loading item; and
a jump module that jumps to the target service page according to the obtained basic data.
14. The apparatus of claim 13, wherein monitoring a specified operation by a user comprises:
determining, through a buried point, whether the specified operation by the user is monitored, wherein the buried point is a buried point set on the page loading item for the specified operation that triggers it.
15. The apparatus of claim 13, further comprising:
a prediction module that predicts, by terminal intelligence, the trigger timing for loading the preloaded service data; and
a loading module that loads the preloaded service data when the trigger timing arrives.
16. The apparatus of claim 13, further comprising:
a data interface module that provides a data interface for the target service page.
17. The apparatus of any one of claims 13 to 16, further comprising:
an online management and control module that performs at least one of the following controls on the page processing after it goes online: page-granularity control, gray release, and rollback operation.
18. An electronic device for page processing, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to:
when a specified operation by a user is monitored, determine whether the page loading item triggered by the specified operation is an acceleration loading item;
if the page loading item triggered by the specified operation is an acceleration loading item, intercept the specified operation;
obtain basic data from preloaded service data, wherein the basic data comprises the service data required to jump to the target service page corresponding to the page loading item; and
jump to the target service page according to the obtained basic data.
CN202410781000.3A 2020-07-28 2020-07-28 Page processing method, device and equipment Pending CN118606586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410781000.3A CN118606586A (en) 2020-07-28 2020-07-28 Page processing method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010736737.5A CN111783018B (en) 2020-07-28 2020-07-28 Page processing method, device and equipment
CN202410781000.3A CN118606586A (en) 2020-07-28 2020-07-28 Page processing method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010736737.5A Division CN111783018B (en) 2020-07-28 2020-07-28 Page processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN118606586A true CN118606586A (en) 2024-09-06

Family

ID=72766283

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010736737.5A Active CN111783018B (en) 2020-07-28 2020-07-28 Page processing method, device and equipment
CN202410781000.3A Pending CN118606586A (en) 2020-07-28 2020-07-28 Page processing method, device and equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010736737.5A Active CN111783018B (en) 2020-07-28 2020-07-28 Page processing method, device and equipment

Country Status (1)

Country Link
CN (2) CN111783018B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112698899A (en) * 2020-12-30 2021-04-23 北京光启元数字科技有限公司 Data transformation method, device, equipment and medium based on data visualization
CN113296859B (en) * 2021-04-28 2023-03-28 青岛海尔科技有限公司 Page loading method and device, storage medium and electronic device
CN113378087B (en) * 2021-06-22 2024-01-09 北京百度网讯科技有限公司 Page processing method, page processing device, electronic equipment and storage medium
CN114117285B (en) * 2022-01-27 2022-05-31 浙江口碑网络技术有限公司 Position information processing method and device based on H5 page and electronic equipment
CN118260753A (en) * 2023-07-31 2024-06-28 华为技术有限公司 Application program management and control method and electronic equipment

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7636786B2 (en) * 2003-06-19 2009-12-22 International Business Machines Corporation Facilitating access to a resource of an on-line service
US8522131B1 (en) * 2004-04-14 2013-08-27 Sprint Spectrum L.P. Intermediation system and method for enhanced rendering of data pages
US10289743B2 (en) * 2012-01-19 2019-05-14 Microsoft Technology Licensing, Llc Client-side minimal download and simulated page navigation features
CN104111944B (en) * 2013-04-19 2018-09-18 阿里巴巴集团控股有限公司 Page processing method and device and page generation method and device
US20170011133A1 (en) * 2014-03-31 2017-01-12 Open Garden Inc. System and method for improving webpage loading speeds
CN107943825A (en) * 2017-10-19 2018-04-20 阿里巴巴集团控股有限公司 Data processing method, device and the electronic equipment of page access
CN107844324B (en) * 2017-10-23 2021-11-02 北京京东尚科信息技术有限公司 Client page jump processing method and device
CN108549673A (en) * 2018-03-29 2018-09-18 优视科技有限公司 Pre-add support method, client, server and the network system of web page resources
CN108763541B (en) * 2018-05-31 2021-07-13 维沃移动通信有限公司 Page display method and terminal
CN111061978B (en) * 2018-10-16 2023-04-28 阿里巴巴集团控股有限公司 Page jump method and device
CN109213948B (en) * 2018-10-18 2020-12-04 网宿科技股份有限公司 Webpage loading method, intermediate server and webpage loading system
CN111367596B (en) * 2018-12-25 2023-06-23 阿里巴巴集团控股有限公司 Method and device for realizing business data processing and client
CN109840418B (en) * 2019-02-19 2021-01-01 Oppo广东移动通信有限公司 Jump control method and device for application program, storage medium and terminal
CN111273987A (en) * 2020-01-20 2020-06-12 北京点众科技股份有限公司 Method and equipment for displaying information by client, terminal and storage medium

Also Published As

Publication number Publication date
CN111783018B (en) 2024-07-05
CN111783018A (en) 2020-10-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination