CN117421499A - Front-end processing method, front-end processing device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN117421499A
CN117421499A
Authority
CN
China
Prior art keywords
data
network request
cache
request
end processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311353656.7A
Other languages
Chinese (zh)
Inventor
王龙 (Wang Long)
周辉 (Zhou Hui)
张帆 (Zhang Fan)
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202311353656.7A priority Critical patent/CN117421499A/en
Publication of CN117421499A publication Critical patent/CN117421499A/en
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574 Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a front-end processing method, a front-end processing device, terminal equipment and a storage medium. In response to a loading instruction of a front-end page, the network request corresponding to the loading instruction is intercepted by a service worker thread. If the target data corresponding to the network request does not exist in the pre-stored cache data, an access destination domain name corresponding to the network request is determined based on an adaptive adjustment strategy, and a query request is initiated to at least one back-end node to acquire the target data, which is then rendered and displayed. By intercepting the network request through the service worker thread, optimizing requests whose target data is absent from the pre-stored cache based on the adaptive adjustment strategy, and acquiring the target data from the back-end nodes for rendering and display, network requests are processed in a timely manner, the data response speed is improved, and the user experience is improved.

Description

Front-end processing method, front-end processing device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a front end processing method, a front end processing device, a terminal device, and a storage medium.
Background
In the front-end page display process, many HTTP requests need to be sent to the back end simultaneously during initial page loading. Owing to network problems or slow back-end service responses, some of these requests end up in a pending or timed-out state, which easily causes a front-end white screen and a poor user experience.
In the prior art, a buffer queue is used: a sequential request queue is built from Promises via reduce, which can effectively lower the number of concurrent network requests.
In designing and implementing the present application, the inventors found at least the following problems: the buffer-queue approach requires every request to return successfully, and when one request fails, the subsequent requests are not sent. The approach is therefore inefficient, its reliability and fault tolerance are low, and it cannot overcome the page white screen caused by response timeouts under highly concurrent front-page requests.
Therefore, a solution is needed that responds quickly to concurrent network requests so as to avoid the front-end white-screen problem.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a front-end processing method, a front-end processing device, terminal equipment and a storage medium, which aim to realize quick response of concurrent network requests so as to avoid the problem of front-end white screen.
In order to achieve the above object, the present invention provides a front-end processing method, including:
responding to a loading instruction of a front-end page, and intercepting a network request corresponding to the loading instruction through a service work thread;
if the target data corresponding to the network request does not exist in the prestored cache data, determining an access destination domain name corresponding to the network request based on an adaptive adjustment strategy, and initiating a query request to at least one back-end node to acquire the target data corresponding to the network request, and rendering and displaying the target data.
Optionally, after the step of intercepting, by the service work thread, a network request corresponding to the load instruction in response to the load instruction of the front-end page, the method further includes:
and if the target data corresponding to the network request exists in the prestored cache data, calling the target data from the cache data and rendering and displaying the target data.
Optionally, before the step of intercepting, by the service work thread, the network request corresponding to the load instruction, the method further includes:
judging whether the browser version of the front-end page supports the service work thread or not;
and if the browser version to which the front-end page belongs supports the service working thread, starting the service working thread.
Optionally, if the target data corresponding to the network request exists in the pre-stored cache data, before the step of calling the target data from the cache data and performing rendering and displaying, the method further includes:
acquiring historical behavior data corresponding to the front-end page;
analyzing the historical behavior data to obtain user preference information;
and storing the data related to the user preference information to obtain the cache data.
Optionally, the step of storing the data related to the user preference information to obtain the cached data includes:
storing the data related to the user preference information by taking a single page as granularity to obtain the cache data; and/or
and storing the data related to the user preference information by taking the data in the page as granularity to obtain the cache data.
Optionally, the step of storing the data related to the user preference information with the data in the page as granularity to obtain the cached data includes:
setting a cache configuration of at least one data type in the full data based on a pre-trained data change prediction model, and executing at least one of data storage, data change and data deletion based on the cache configuration; and/or
and inquiring the data related to the user preference information at intervals of preset time, judging whether a changed data identifier exists, and if the changed data identifier exists, executing at least one of data storage, data change and data deletion according to the data identifier.
Optionally, the step of setting a cache configuration of at least one data type in the full data further includes, based on a pre-trained data change prediction model:
acquiring pre-acquired sample data, wherein the sample data comprises at least one of a data type, a change data amount, a change time and a service type;
processing the sample data to obtain various multidimensional data sets;
and performing model training based on the multi-dimensional data sets to obtain the data change prediction model.
Optionally, the step of storing the data related to the user preference information to obtain the cached data further includes:
the full data is preloaded in response to a starting instruction of an initial page corresponding to the full data;
responding to the received data request instruction for the initial page, and selecting corresponding request data from the loaded full data according to the data request instruction;
and sending the request data to a main thread of the browser for data rendering and displaying.
Optionally, the target data includes preference data and/or non-preference data, and the step of calling the target data from the cache data and performing rendering and displaying if the target data corresponding to the network request exists in the pre-stored cache data includes:
if the preference data in the target data exist in the prestored cache data, calling the preference data from the cache data and rendering and displaying; and/or
and if the non-preference data in the target data have corresponding initialization data, calling the initialization data and rendering and displaying.
Optionally, the step of determining the access destination domain name corresponding to the network request based on the adaptive adjustment policy includes:
according to the original access destination domain name of the network request, initiating the network request to at least one back-end node, and, if the access of the network request fails, changing the original access destination domain name and re-initiating the network request according to the changed domain name; and/or
and initiating a speed measurement network request to each back-end node to acquire a speed measurement result of each back-end node, determining the priority of each back-end node according to the speed measurement result, and determining an access destination domain name corresponding to the network request based on the priority of each back-end node.
Optionally, the step of initiating a query request to at least one backend node includes:
carrying out batch processing and/or encapsulation processing on the network request to obtain a processed network request;
and sending the processed network request to at least one corresponding back-end node.
In addition, to achieve the above object, the present invention also provides a front-end processing apparatus including:
the interception module is used for responding to a loading instruction of the front-end page and intercepting a network request corresponding to the loading instruction through a service working thread;
the initiation module is used for determining an access destination domain name corresponding to the network request based on an adaptive adjustment strategy if the target data corresponding to the network request does not exist in the prestored cache data, and is used for initiating a query request to at least one back-end node to acquire the target data corresponding to the network request and rendering and displaying the target data.
In addition, to achieve the above object, the present invention also provides a terminal device including a memory, a processor, and a front-end processing program stored on the memory and executable on the processor, the front-end processing program implementing the steps of the front-end processing method as described above when executed by the processor.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a front-end processing program which, when executed by a processor, implements the steps of the front-end processing method as described above.
According to the front-end processing method, device, terminal equipment and storage medium provided by the embodiments of the invention, in response to a loading instruction of a front-end page, the network request corresponding to the loading instruction is intercepted by a service worker thread. If the target data corresponding to the network request does not exist in the pre-stored cache data, an access destination domain name corresponding to the network request is determined based on an adaptive adjustment strategy, and a query request is initiated to at least one back-end node to acquire the target data, which is then rendered and displayed. The network request is intercepted by the service worker thread; requests whose target data is absent from the pre-stored cache are optimized based on the adaptive adjustment strategy; and the target data is acquired from the back-end nodes, rendered and displayed. Network requests are thus processed in a timely manner, the data response speed is improved, and the user experience is improved.
Drawings
FIG. 1 is a schematic diagram of functional modules of a terminal device to which a front-end processing apparatus of the present invention belongs;
FIG. 2 is a flow chart of an exemplary embodiment of a front-end processing method according to the present invention;
FIG. 3 is a flow chart of another exemplary embodiment of a front-end processing method according to the present invention;
FIG. 4 is a schematic diagram of a cache data comparison principle in an embodiment of the present invention;
FIG. 5 is a schematic flow chart of step S03 in the embodiment of FIG. 3;
FIG. 6 is a schematic diagram of an example of code initialization in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a regression prediction flow according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a specific flow of step S30 in the embodiment of FIG. 2;
FIG. 9 is a schematic diagram of an example network request in an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solution of the embodiments of the present invention is: in response to a loading instruction of a front-end page, intercepting the network request corresponding to the loading instruction through a service worker thread; if the target data corresponding to the network request does not exist in the pre-stored cache data, determining an access destination domain name corresponding to the network request based on an adaptive adjustment strategy, and initiating a query request to at least one back-end node to acquire the target data and render and display it. Network requests are thereby processed in a timely manner, the data response speed is improved, and the user experience is improved.
Technical terms related to the embodiment of the invention:
service workbench: the worker thread is serviced.
In front-end page display, many HTTP requests usually need to be sent to the back end during initial page loading. Owing to network problems or slow back-end service responses (for example, ClickHouse has a large latency for some request types, such as user portraits), some of these requests end up in a pending or timed-out state, which easily causes a front-end white screen and a poor user experience.
In the prior art, a buffer queue is used: a sequential request queue is built from Promises via reduce, which can effectively lower the number of concurrent network requests. However, this approach requires every request to return successfully; when one request fails, the subsequent requests are not sent, so it is inefficient. For example, where 100 network requests would otherwise be issued concurrently, the buffer-queue approach sends the 100 requests one after another, and the page must block until all 100 have responded before it can render. At 0.1 s per request, the 100 requests take roughly 10 seconds, during which the page shows a white screen. Whether each request ends in a success or a failure state, the existing scheme improves the response efficiency of service requests only to a limited extent, and it still cannot solve the front-end white screen caused by a few high-latency requests.
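The buffer-queue approach described above can be sketched as follows. This is a minimal illustration of a sequential Promise queue built with reduce, not code from the patent; the request names and delays are invented for the demonstration.

```javascript
// Prior-art style buffer queue: request factories are chained one after
// another with reduce, so each request waits for the previous response.
function runSequentially(requestFactories) {
  const results = [];
  return requestFactories
    .reduce(
      (chain, factory) => chain.then(() => factory()).then((r) => results.push(r)),
      Promise.resolve()
    )
    .then(() => results);
}

// Simulated network requests: each resolves with its id after a short delay.
const makeRequest = (id, delayMs) => () =>
  new Promise((resolve) => setTimeout(() => resolve(id), delayMs));

runSequentially([makeRequest('a', 10), makeRequest('b', 10), makeRequest('c', 10)])
  .then((results) => console.log(results.join(','))); // prints "a,b,c"
```

Note that if any factory rejects, the chain rejects and the later factories are never invoked, which is exactly the fault-tolerance weakness the paragraph identifies.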
The invention provides a concurrent request processing method based on the Service Worker, aiming to solve the technical problem of a page white screen caused by response timeouts under highly concurrent front-end page requests. It performs interception, response optimization and request optimization for concurrent network requests, thereby achieving fast responses to concurrent network requests and improving the user experience.
Specifically, referring to fig. 1, fig. 1 is a schematic functional block diagram of a terminal device to which a front-end processing apparatus of the present invention belongs. The front-end processing apparatus may be a device independent of the terminal device that is capable of front-end processing, and it may be carried on the terminal device in the form of hardware or software. The terminal device may be an intelligent mobile terminal with a data processing function, such as a mobile phone or a tablet computer, or may be a fixed terminal device or a server with a data processing function.
In this embodiment, the terminal device to which the front-end processing apparatus belongs at least includes an output module 110, a processor 120, a memory 130, and a communication module 140.
The memory 130 stores an operating system and a front-end processing program, and the front-end processing device may store information such as a load instruction of a front-end page, a network request corresponding to the load instruction, prestored cache data, target data, an access destination domain name corresponding to the network request, and the like in the memory 130; the output module 110 may be a display screen or the like. The communication module 140 may include a WIFI module, a mobile communication module, a bluetooth module, and the like, and communicates with an external device or a server through the communication module 140.
Wherein the front-end processing program in the memory 130 when executed by the processor performs the steps of:
responding to a loading instruction of a front-end page, and intercepting a network request corresponding to the loading instruction through a service work thread;
if the target data corresponding to the network request does not exist in the prestored cache data, determining an access destination domain name corresponding to the network request based on an adaptive adjustment strategy, and initiating a query request to at least one back-end node to acquire the target data corresponding to the network request, and rendering and displaying the target data.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
and if the target data corresponding to the network request exists in the prestored cache data, calling the target data from the cache data and rendering and displaying the target data.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
judging whether the browser version of the front-end page supports the service work thread or not;
and if the browser version to which the front-end page belongs supports the service working thread, starting the service working thread.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
acquiring historical behavior data corresponding to the front-end page;
analyzing the historical behavior data to obtain user preference information;
and storing the data related to the user preference information to obtain the cache data.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
storing the data related to the user preference information by taking a single page as granularity to obtain the cache data; and/or
and storing the data related to the user preference information by taking the data in the page as granularity to obtain the cache data.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
setting a cache configuration of at least one data type in the full data based on a pre-trained data change prediction model, and executing at least one of data storage, data change and data deletion based on the cache configuration; and/or
and inquiring the data related to the user preference information at intervals of preset time, judging whether a changed data identifier exists, and if the changed data identifier exists, executing at least one of data storage, data change and data deletion according to the data identifier.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
acquiring pre-acquired sample data, wherein the sample data comprises at least one of a data type, a change data amount, a change time and a service type;
processing the sample data to obtain various multidimensional data sets;
and performing model training based on the multi-dimensional data sets to obtain the data change prediction model.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
the full data is preloaded in response to a starting instruction of an initial page corresponding to the full data;
responding to the received data request instruction for the initial page, and selecting corresponding request data from the loaded full data according to the data request instruction;
and sending the request data to a main thread of the browser for data rendering and displaying.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
if the preference data in the target data exist in the prestored cache data, calling the preference data from the cache data and rendering and displaying; and/or
and if the non-preference data in the target data have corresponding initialization data, calling the initialization data and rendering and displaying.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
according to the original access destination domain name of the network request, initiating the network request to at least one back-end node, and, if the access of the network request fails, changing the original access destination domain name and re-initiating the network request according to the changed domain name; and/or
and initiating a speed measurement network request to each back-end node to acquire a speed measurement result of each back-end node, determining the priority of each back-end node according to the speed measurement result, and determining an access destination domain name corresponding to the network request based on the priority of each back-end node.
Further, the front-end processing program in the memory 130, when executed by the processor, further performs the steps of:
carrying out batch processing and/or encapsulation processing on the network request to obtain a processed network request;
and sending the processed network request to at least one corresponding back-end node.
According to the scheme, the network request corresponding to the loading instruction is intercepted by the service working thread in response to the loading instruction of the front-end page; if the target data corresponding to the network request does not exist in the prestored cache data, determining an access destination domain name corresponding to the network request based on an adaptive adjustment strategy, and initiating a query request to at least one back-end node to acquire the target data corresponding to the network request, and rendering and displaying the target data, so that the timely processing of the network request is realized, the data response speed is improved, and the use experience of a user is improved.
The method embodiment of the invention is proposed based on the above-mentioned terminal equipment architecture but not limited to the above-mentioned architecture.
The main execution body of the method of the present embodiment may be a front-end processing device or a terminal device, and the front-end processing device is exemplified in the present embodiment.
Referring to fig. 2, fig. 2 is a flowchart of an exemplary embodiment of a front-end processing method according to the present invention. The front-end processing method comprises the following steps:
step S10, responding to a loading instruction of a front-end page, and intercepting a network request corresponding to the loading instruction through a service work thread;
the front-end processing method in the embodiment of the invention is suitable for high concurrency access scenes of the large data of operators to products in the external industry, has adaptability in the industries of finance field, public service, cultural tourism, transportation and the like, and is described in terms of application in an intelligent financial platform.
In the front-end page display of the intelligent financial platform, many HTTP requests usually need to be sent to the back end simultaneously during initial page loading, which places a heavy processing load on the browser.
Optionally, before the step of intercepting, by the service work thread, the network request corresponding to the load instruction, the method further includes:
judging whether the browser version of the front-end page supports the service work thread or not;
and if the browser version to which the front-end page belongs supports the service working thread, starting the service working thread.
Specifically, Chrome on mobile and iOS Safari 11.3 and above support the Service Worker, and desktop browsers such as Firefox, Google Chrome and Opera support it as well, so the scheme can be used on both mobile and desktop. When the page is opened, the front end loads the page's static files (such as JS, CSS, images and font files), and these static resource files contain JS scripts. After a JS script is loaded on the page, it judges whether the browser supports the feature via ('serviceWorker' in navigator); if not, the Service Worker is not started, and if so, the Service Worker is started.
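The support check and conditional start described above can be sketched as follows. The detection is factored into a pure function that takes a navigator-like object so it can be exercised outside a browser; navigator.serviceWorker.register is the standard browser API, and the script path '/sw.js' is an illustrative assumption.

```javascript
// Feature detection: does this (navigator-like) object expose the
// Service Worker API?
function supportsServiceWorker(nav) {
  return typeof nav === 'object' && nav !== null && 'serviceWorker' in nav;
}

// Conditional start: register the worker script only when supported.
function startServiceWorker(nav) {
  if (!supportsServiceWorker(nav)) return Promise.resolve(null); // unsupported: skip
  return nav.serviceWorker.register('/sw.js'); // start the service worker thread
}

// In a real page this would be called as startServiceWorker(navigator).
console.log(supportsServiceWorker({ serviceWorker: {} })); // true
console.log(supportsServiceWorker({}));                    // false
```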
Optionally, the Service Worker may be understood as a proxy server between the client and the server. It is a browser thread that is independent of the JavaScript main thread (a page may have multiple such independent threads), is responsible for triggering computation through network events of the registered domains, and resides in an independent process. Resource-intensive operations can therefore run without blocking the main thread. On the basis of the Web Worker, the Service Worker adds offline caching capability, implements an interception-and-response function as a proxy between the Web application (server) and the browser, and can access the browser's cache and IndexedDB.
Optionally, when it is determined that the browser supports the Service Worker, the Service Worker may be used to intercept the HTTP request. In the embodiment of the invention, intercepting HTTP/HTTPS requests with the Service Worker provides support for the processing in the subsequent steps: the Service Worker is a service that acts as a proxy between the browser and the network or cache, and it is registered in the main script file, referencing a dedicated Service Worker script file through the navigator.
Optionally, starting the Service Worker has some cost; for example, start-up is slow and consumes resources, which can lower the running efficiency of some browser pages. In the embodiment of the invention, the resource requests in the browser run in parallel with the start-up of the Service Worker: since the Service Worker is an independent thread separate from the browser's main thread, its start-up proceeds in parallel with the main thread's resource requests, and the Service Worker then handles the requests, which reduces the impact of its start-up to a certain extent.
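A minimal sketch of the interception flow around steps S10 and S20: respond from the pre-stored cache when the target data is present, otherwise initiate the network request and store the result. The cache and the network call are injected as parameters so the logic runs outside a browser; in a real Service Worker this logic would sit inside a 'fetch' event handler via event.respondWith. All names here are illustrative.

```javascript
// Cache-first handling of an intercepted request.
async function handleRequest(cacheKey, cache, fetchFn) {
  const cached = await cache.get(cacheKey);
  if (cached !== undefined) return cached; // target data exists: respond from cache
  const fresh = await fetchFn(cacheKey);   // otherwise initiate the network request
  await cache.set(cacheKey, fresh);        // store the response for later loads
  return fresh;
}

// In a real Service Worker script (browser only):
// self.addEventListener('fetch', (event) => {
//   event.respondWith(/* cache comparison + fetch, as above */);
// });
```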
Step S20, if the target data corresponding to the network request does not exist in the pre-stored cache data, determining an access destination domain name corresponding to the network request based on an adaptive adjustment policy, where the access destination domain name is used to initiate a query request to at least one back-end node, so as to obtain the target data corresponding to the network request, and render and display the target data.
Optionally, after the Service Worker of the service thread intercepts an HTTP request and compares it against the cache according to the front-end query parameters, if it is determined that the target data corresponding to the network request does not exist in the pre-stored cache data, the network request can be initiated normally. During initiation, if the server side has multiple external Web service agents at the back end, a speed-test network request can first be sent to each of them, and a server with a fast response can then be selected according to the speed-test results to receive the actual network request. If the network request fails, the access destination domain name can be changed and the request re-initiated with the changed domain name, so that the target data returned by the server side is obtained, rendered and displayed.
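The adaptive adjustment strategy in this paragraph (speed-testing the back-end nodes, ordering them by measured latency, and changing the destination domain on failure) can be sketched as below. The probe and send functions are injected and the domain names in the comments are invented; this is an assumption-laden illustration, not the patent's implementation.

```javascript
// Probe every back-end node and return the domains ordered by measured
// latency, fastest first.
async function rankNodesBySpeed(domains, probe) {
  const results = await Promise.all(
    domains.map(async (d) => ({ domain: d, latencyMs: await probe(d) }))
  );
  return results.sort((a, b) => a.latencyMs - b.latencyMs).map((r) => r.domain);
}

// Try the domains in priority order; on failure, change the destination
// domain and re-initiate the request with the next one.
async function requestWithFailover(orderedDomains, sendRequest) {
  let lastError;
  for (const domain of orderedDomains) {
    try {
      return await sendRequest(domain); // first successful response wins
    } catch (err) {
      lastError = err;                  // access failed: fall through to next domain
    }
  }
  throw lastError; // every back-end node failed
}
```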
In this embodiment, in response to a loading instruction of a front-end page, the network request corresponding to the loading instruction is intercepted by a service working thread; if the target data corresponding to the network request does not exist in the pre-stored cache data, an access destination domain name corresponding to the network request is determined based on an adaptive adjustment strategy, and a query request is initiated to at least one back-end node to acquire the target data corresponding to the network request, which is then rendered and displayed. By intercepting the network request through a service working thread, optimizing, based on the adaptive adjustment strategy, the network requests whose target data is not in the pre-stored cache data, and acquiring, rendering, and displaying the target data through the back-end node, timely processing of network requests is achieved, the data response speed is improved, and the use experience of the user is improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating another exemplary embodiment of a front-end processing method according to the present invention. Based on the embodiment shown in fig. 2, in this embodiment, the front-end processing method further includes:
step S30, if the target data corresponding to the network request exists in the pre-stored cache data, the target data is called from the cache data and rendered and displayed.
Further, after the HTTP request is intercepted by the Service Worker of the service thread, the cache can be compared and queried, and the request responded to, according to the front-end query parameters.
Optionally, in order to reduce the pressure of concurrent requests on the server, the data query service is cached in the embodiment of the present invention, and the user preference information is obtained mainly by analyzing historical behavior data. The acquisition of user preference is similar to the data-mining mode of existing recommendation systems, and related recommendation-system algorithms, such as collaborative filtering, may also be applied to the embodiment of the present invention. The network request may be sent through a Service Worker to obtain the full data related to the user preference.
Optionally, during use of the front-end page of the intelligent financial platform, the query condition generally changes, and the response data is visually presented as a thermodynamic diagram, histogram, pie chart, and the like. A new query request must be initiated to the ClickHouse database each time the query condition changes, so the pressure on back-end resources is high, there is redundancy among the multiple requests, and the data is not effectively reused.
For example: the user selects active-user data of the Chengdu region; the query request is intercepted through the Service Worker, and the local IndexedDB is screened and queried based on the screening condition "active users of the Chengdu region". The JS script of the browser judges whether the data cached by the browser contains that data; when it does, the data cached by the local browser (such as IndexedDB data) is returned directly to the main thread of the browser through postMessage and rendered in the front-end page, and when it does not, the network request is initiated normally, and after a normal response, the responded data is stored into IndexedDB.
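The hit/miss decision in the example above can be sketched with a plain object standing in for the IndexedDB store (real IndexedDB access is asynchronous; names here are illustrative assumptions):

```javascript
// Minimal sketch of the cache-hit decision. `store` is a plain object standing
// in for the browser's IndexedDB cache, keyed by a screening-condition key.
function lookupCache(store, filterKey) {
  if (Object.prototype.hasOwnProperty.call(store, filterKey)) {
    // hit: the cached data would be posted back to the main thread via postMessage
    return { hit: true, data: store[filterKey] };
  }
  // miss: the network request is initiated normally
  return { hit: false, data: null };
}

// After a normal response, the data is stored for later reuse.
function storeResponse(store, filterKey, responseData) {
  store[filterKey] = responseData;
  return store;
}
```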
Optionally, if the target data corresponding to the network request exists in the pre-stored cache data, before the step of calling the target data from the cache data and performing rendering and displaying, the method further includes:
acquiring historical behavior data corresponding to the front-end page;
analyzing the historical behavior data to obtain user preference information;
and storing the data related to the user preference information to obtain the cache data.
It should be noted that, in the embodiment of the present invention, step numbers such as S20 and S30 are adopted, and the purpose of the present invention is to more clearly and briefly describe the corresponding content, and not to constitute a substantial limitation on the sequence, and those skilled in the art may execute S30 first and then execute S20 when implementing the present invention, which are all within the protection scope of the present application.
Referring to fig. 4, fig. 4 is a schematic diagram of the comparison of cache data in the embodiment of the present invention. As shown in fig. 4, a front-end page may contain multiple network requests, where each network request is generally composed of an API interface URL plus request parameters. In the IndexedDB cache data, the hash value corresponding to the interface URL is used as the storage key (e.g., 5279d70779d9de9b2c4efd7e8a32ba19), and the hashes of the different request parameters are stored, in the form of a JSON array, in the value corresponding to that key. The browser queries whether the hash value (e.g., id2) of the query parameters of the current network request is included in the IndexedDB cache data; when it is, the corresponding cached data is returned to the front-end page for rendering.
Optionally, the cache data may be generally selected or determined based on the user preference information, so that the historical behavior data corresponding to the front-end page may be obtained first, and then the historical behavior data may be analyzed to obtain the user preference information.
Optionally, the server side is responsible for providing the user preference information, which is obtained by analyzing historical behavior data collected while the user browses pages, and the user preference information may be obtained from the server side with a periodic request, for example once every half month. The user preference information is a characteristic portrait generated by big-data behavior analysis of the user's click events and historical orders during the user's use of the App; the user portrait reflects the user's preferences and can be associated with a data type, a data request interface, and a data display mode. By combining the ID of the current browser user with the user characteristic portrait, the browser can learn which pages, data types, and data styles the user prefers to query, so that the corresponding data is requested and cached in advance, silently.
Optionally, in the embodiment of the present invention, the acquisition of the user preference is similar to the data mining mode of the existing recommendation system, and the related algorithm of the recommendation system may also be applied to the embodiment, for example, through a collaborative filtering algorithm. The network request may be sent through a Service Worker to obtain the full amount of data related to the user preference.
Alternatively, in the embodiment of the present invention, the full data related to the user's preference may be cached at two granularities: 1) with a single page as the granularity, i.e., caching the response data of the multiple network requests of the whole page judged to be a user-preference page, for example caching the user information table, the user payment records, and other data items in the user detail page (possibly corresponding to multiple network request interfaces); 2) with the data in the page as the granularity, i.e., caching the response data of the API interface corresponding to the data item judged to be a user preference, for example caching the user payment records in the user detail page (corresponding to the payment-record query interface).
According to the scheme, the historical behavior data corresponding to the front-end page are obtained; analyzing the historical behavior data to obtain user preference information; and storing the data related to the user preference information to obtain the cache data, so that the user preference data is stored in advance, and the network request of the user can be responded in time later, thereby improving the data response speed and improving the use experience of the user.
Referring to fig. 5, fig. 5 is a schematic flowchart of step S03 in the embodiment of fig. 3. The present embodiment is based on any one of the above embodiments, and in the present embodiment, the step S03 includes:
Step S031, storing the data related to the user preference information by taking a single page as granularity, so as to obtain the cache data;
specifically, taking a single page as granularity refers to caching response data of multiple network requests of the whole page of the judged user preference page, for example, caching multiple data (possibly corresponding to multiple network request interfaces) of a user information table, a user payment record and the like in the user detail page.
Optionally, when the page is used as the data caching granularity, all response data of the network requests related to the page are cached in advance through the Service Worker, and a cache validity period is set. Within the validity period, the network requests can be answered with the locally cached data; only when the user manually deletes the cache or manually performs an operation such as a page refresh is the network request sent to the server so that the latest data is requested and cached.
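The validity-period rule can be sketched as a small predicate; the field names (`cachedAtMs`, `ttlMs`) are assumptions for illustration:

```javascript
// Sketch of the validity-period check: within the validity period the request is
// answered from the local cache; a manual refresh, a missing entry, or an
// expired entry forces a normal network request.
function shouldUseCache(entry, nowMs, manualRefresh) {
  if (manualRefresh) return false;               // user forced a refresh
  if (!entry) return false;                      // nothing cached yet
  return nowMs - entry.cachedAtMs < entry.ttlMs; // still within the validity period
}
```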
And step S032, storing the data related to the user preference information by taking the data in the page as granularity, and obtaining the cache data.
Optionally, the data in the page is taken as granularity, which refers to the response data caching of the API interface corresponding to the judged user preference data item, for example, the user payment record (corresponding to the payment record query interface) in the user detail page is cached.
Optionally, the target data includes preference data and/or non-preference data, and the step of calling the target data from the cache data and performing rendering and displaying if the target data corresponding to the network request exists in the pre-stored cache data includes:
if the preference data in the target data exists in the pre-stored cache data, calling the preference data from the cache data and rendering and displaying it; and/or,
and if the non-preference data in the target data have corresponding initialization data, calling the initialization data and rendering and displaying.
Specifically, when the API is used as the caching granularity, the full data related to the user preference cached in advance by the Service Worker may cover only part of the data of a page: for example, 8 data items are needed for rendering the page and 7 of them are cached. After the Service Worker intercepts the user request, most of the data the user is interested in is rendered and displayed by looking up the local cache, and the network requests corresponding to the other data are responded to normally as soon as possible. Further, in some cases the user hopes that no white screen appears when the page is opened, yet the page can normally be rendered and displayed only after all responses are obtained, during which time the page shows a white screen or a loading-state page. Therefore, the optimal condition is to cache and respond to all the data in the same page; when the preference data is intercepted with the API as the granularity, initialization data set in the script can be adopted for the non-preference data, so that the page can be rendered as soon as possible. Referring to fig. 6, fig. 6 is a schematic diagram of an example of code initialization in an embodiment of the present invention; as shown in fig. 6, by performing initialization assignment on the presentation data variables in the code, the problem of the page white screen caused by non-preference data can be solved.
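The initialization assignment of fig. 6 can be sketched as follows; the variable and field names are illustrative assumptions, not the figure's actual code:

```javascript
// Presentation variables are given empty defaults, so the page renders an empty
// table or chart immediately instead of white-screening while non-preference
// data is still in flight.
const pageState = {
  userInfoTable: [],                        // empty table, not a blank screen
  paymentRecords: [],
  chartSeries: { labels: [], values: [] },  // empty chart axes
};

// Overwrite the placeholder once the real response arrives.
function applyResponse(state, key, data) {
  state[key] = data;
  return state;
}
```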
Optionally, the step of storing the data related to the user preference information with the data in the page as granularity to obtain the cached data includes:
setting a cache configuration of at least one data type in the full data based on a pre-trained data change prediction model, and executing at least one of data storage, data change and data deletion based on the cache configuration; and/or,
and inquiring the data related to the user preference information at intervals of preset time, judging whether a changed data identifier exists, and if the changed data identifier exists, executing at least one of data storage, data change and data deletion according to the data identifier.
Specifically, in the embodiment of the invention, in order to keep the data displayed on the page consistent with the actual data of the server as much as possible and ensure the real-time performance of the application data, two modes are adopted to handle the effective caching time of the data: 1. based on the data change history, the change time of the data of the different interfaces is predicted, and different cache effective times are set for the data of the different interfaces through preset values; 2. the user preference data is requested at regular intervals, for example every half hour. Besides the data the user prefers to view, the user preference data includes an identifier of whether each piece of data has changed, and the server side sets this identifier accordingly when performing update or insert operations on the corresponding data table.
Optionally, different types of data are given different cache times. In the prior art, the cache is mostly set according to the experience of a developer, or a fixed cache time is set, and the setting is sometimes inaccurate: setting it too large causes the technical problem that some data is not updated in time, so the front end cannot obtain the latest data promptly; setting the effective time too small causes frequent cache updates, increases network requests and traffic, defeats the purpose of the cache, and keeps cache utilization low. The prior art does not set the cache time based on the change characteristics of the data (for example, position data and call data are frequently updated, while user attribute information changes at low frequency). After the server predicts the data change frequency, a corresponding cache time threshold can be set for each type of data; the browser acquires the time thresholds and stores them locally. Each time the browser is opened, it checks whether the cache time threshold for each kind of data has been exceeded; if so, the cached data of that specific type is deleted and re-acquired from the server.
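The per-type expiry sweep on browser start can be sketched like this; the entry shape and threshold map are assumptions for illustration:

```javascript
// Sketch of per-type cache expiry: the server hands down a time threshold per
// data type (based on its predicted change frequency); entries older than the
// threshold for their type are dropped so they will be re-fetched.
function purgeExpired(cacheEntries, thresholdsMsByType, nowMs) {
  const kept = {};
  for (const [key, entry] of Object.entries(cacheEntries)) {
    const threshold = thresholdsMsByType[entry.type];
    if (threshold !== undefined && nowMs - entry.cachedAtMs >= threshold) {
      continue; // expired for its type: delete so it is re-acquired from the server
    }
    kept[key] = entry;
  }
  return kept;
}
```

Frequently changing types (e.g., position data) would get a short threshold, low-frequency types (e.g., user attributes) a long one.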
Optionally, the step of setting a cache configuration of at least one data type in the full data further includes, based on a pre-trained data change prediction model:
Acquiring pre-acquired sample data, wherein the sample data comprises at least one of a data type, a change data amount, a change time and a service type;
processing the sample data to obtain various multidimensional data sets;
and performing model training based on the multi-dimensional data sets to obtain the data change prediction model.
Referring to fig. 7, fig. 7 is a schematic diagram of a regression prediction flow in an embodiment of the present invention. As shown in fig. 7, when data changes, data such as the data type, the amount of changed data, the change time, and the service type corresponding to each piece of data are collected through a script and stored in a database; after the collected data is processed, the Xgboost algorithm is applied to predict the time interval until the next change of each data set. Based on whether the data was newly added in the current period, whether the data changed in the current period, and the time interval until the next change predicted by the Xgboost algorithm, the cache configuration of each piece of data is set. The data sets correspond to interfaces: different front-end module requests (URL + request parameters) correspond to different data sets. For example, in /api/v2/user_data?id=1034, /api/v2/user_data is the request path and id=1034 is the request parameter (id equal to 1034); this request corresponds to one data set described in the scheme, and due to different query conditions, the data sets may contain, be contained in, or partially overlap one another.
Alternatively, the data collection may be periodic (for example, every 5 or 10 days): the data types, amounts of changed data (number of data pieces), change times, and service types (application types) are collected through scripts, the hash value of each data set is calculated from the data content using the digest algorithm SHA1, and the data is stored in the database. The collected data is then processed, which mainly comprises updating, for each version, the flags of whether each piece of data is newly added or changed. For each piece of data: if the data did not exist at the previous time point, it is newly added, the value of "whether newly added" is 1, and the value of "whether changed" is 1; if the same data existed at the previous time point (the data relative path and the data name are the same), the value of "whether newly added" is 0, and the hash value of this version of the data is compared with the hash value of the previous version: if they are the same, the data is considered unchanged and the value of "whether changed" is 0; otherwise, the value is 1.
Optionally, the data added or modified at each historical time point is screened out, and for each piece of data the following are calculated: the number of versions in which the data was modified, the time since its last modification, and the time interval until its next modification (if a certain piece of data is not modified after a certain historical version, it is assumed in the calculation to be modified in the latest version).
Optionally, based on the collected data, an Xgboost algorithm is applied to predict the time interval for its next data change.
Optionally, after the above procedure, a piece of multidimensional data can be obtained for each data set, including: the data type, the amount of changed data (number of data pieces), the change time, the service type (application type), and the data change time interval. The 1st to 4th indexes are the X variables of the model and the 5th index is the Y variable; they are used to train the Xgboost model, and the model effect is optimized by tuning.
Optionally, a trained model is applied to give a predicted value of the time interval of the next change of each dataset in the current period, and the data caching configuration (e.g., caching time) is set based on the predicted change period of each data.
Optionally, the step of storing the data related to the user preference information to obtain the cached data further includes:
the full data is preloaded in response to a starting instruction of an initial page corresponding to the full data;
responding to the received data request instruction for the initial page, and selecting corresponding request data from the loaded full data according to the data request instruction;
And sending the request data to a main thread of the browser for data rendering and displaying.
Specifically, after the user preference information is obtained, it can be cached for a period of time. When the user opens a page, a network request is sent through the Service Worker thread, so that the full data related to the user preference is obtained and cached; the Service Worker communicates with the UI layer through the postMessage method, and the UI layer receives the response data and renders the DOM elements in real time to display the data. This step makes full use of the Service Worker's ability to run independently to implement a data preloading function: when the user opens the page later, as long as the previously cached preference data is still within its validity period, the cached data can be sent directly to the main thread of the browser for rendering. For example, for the financial panorama platform in the embodiment of the invention, the data sources the user likes to view daily can be acquired first; when the user opens a page, the Service Worker, once activated, initiates a data request to the website server, a process entirely imperceptible to the user. When the user needs to view a page of interest, the background data has already been acquired and cached through the Service Worker, so after the user clicks to view the page, the cached data of interest is returned; the main thread of the browser does not need to wait for a data-request response as in the prior art and can render and display the data immediately. This shortens the rendering delay, reduces the probability of a white screen on the client, greatly improves the user's experience of the Web application, and also reduces the concurrency pressure on the back-end server. Since the data used for page rendering, acquired from the data cached in advance by the Service Worker, is the data of interest relevant to the user, the page data the user is interested in can be quickly displayed to the user, further improving the user experience.
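The preload-then-answer flow can be sketched with a small cache class; the class and field names are illustrative assumptions, and the real flow would use asynchronous `fetch` and `postMessage` inside a Service Worker:

```javascript
// Sketch of the preloading function: the full data related to the user's
// preferences is fetched once in the background, then later requests from the
// initial page are answered from the preloaded set while it remains valid.
class PreloadCache {
  constructor() {
    this.fullData = null;
    this.loadedAtMs = 0;
    this.ttlMs = 0;
  }

  preload(fullData, nowMs, ttlMs) { // triggered when the initial page starts
    this.fullData = fullData;
    this.loadedAtMs = nowMs;
    this.ttlMs = ttlMs;
  }

  answer(key, nowMs) {              // a data request instruction from the page
    const valid = this.fullData && nowMs - this.loadedAtMs < this.ttlMs;
    if (valid && key in this.fullData) {
      return this.fullData[key];    // would be posted to the main thread for rendering
    }
    return null;                    // fall back to a normal network request
  }
}
```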
According to the above scheme, the data related to the user preference information is stored with a single page as the granularity to obtain the cache data; and/or the data related to the user preference information is stored with the data in the page as the granularity to obtain the cache data. By performing, when the page is opened, one full request covering most scenarios, caching its response data, and storing it into IndexedDB, the data can be reused under the various subsequent query conditions; by configuring the data cache through a pre-trained data change prediction model, the setting of the cache time of the data is refined, effective management of the cache time is achieved, and network resources are saved; by preloading the cache data, the data response speed can be further improved and the use experience of the user improved.
Referring to fig. 8, fig. 8 is a specific flowchart of step S30 in the embodiment of fig. 2. The present embodiment is based on any one of the above embodiments, in which the step S30 includes at least one of the following:
step S301, initiating the network request to at least one back-end node according to the original access destination domain name of the network request, and, in the case that the access of the network request fails, changing the original access destination domain name and re-initiating the network request according to the changed domain name;
Specifically, after the Service Worker of the service thread intercepts an HTTP request and compares it against the cache according to the front-end query parameters, if it is determined that the target data corresponding to the network request does not exist in the pre-stored cache data, the network request can be initiated normally: the network request is initiated to at least one back-end node according to its original access destination domain name, and if the network request fails, the access destination domain name can be changed and the network request re-initiated according to the changed access destination domain name, so that the target data responded by the server side is obtained, rendered, and displayed.
Step S302, a speed measurement network request is initiated to each back-end node to obtain a speed measurement result of each back-end node, the priority of each back-end node is determined according to the speed measurement result, and the access destination domain name corresponding to the network request is determined based on the priority of each back-end node.
Optionally, when multiple external Web Service agents exist at the back end of the server, one of the Web Service agents may be preferentially selected to receive the network request. The preferential selection may work as follows: when the Service Worker is started on a page, speed-measurement network requests are initiated to the multiple external Web Service agents, the time interval until the 200 OK response of each server is recorded as the speed-measurement result, and the server with the fastest response speed is preferentially selected to receive the subsequent data network requests. That is, a load-sharing function can be realized at the front-end level (client) through the Service Worker by intercepting and modifying the access domain name: the Service Worker initiates the first request at page start, and by modifying the destination domain name of the intercepted request, the query request can be initiated to multiple back-end nodes (each corresponding to a node in the ClickHouse distributed cluster).
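The speed-measurement selection can be sketched as follows; `probe` is an injected stand-in (an assumption) for the actual timed 200 OK round trip:

```javascript
// Sketch of speed-test prioritization: probe each candidate backend once,
// record the measured response interval, and rank fastest-first.
function rankBackends(backends, probe) {
  return backends
    .map((domain) => ({ domain, intervalMs: probe(domain) }))
    .sort((a, b) => a.intervalMs - b.intervalMs); // fastest first = highest priority
}

// Subsequent data requests go to the highest-priority backend.
function pickBackend(backends, probe) {
  return rankBackends(backends, probe)[0].domain;
}
```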
Referring to fig. 9, fig. 9 is a schematic diagram of an example network request in an embodiment of the present invention. As shown in fig. 9, there are 5 requests in total: requests 1-2 may be sent to the first server, request 3 to the second server, and requests 4-5 to the third server. This is equivalent to implementing load sharing without requiring the back-end server to perform the load distribution; the distribution of the requests is carried out directly through JS script processing at the front end, which effectively reduces the data-query pressure on the back-end server. Because the Service Worker selects, through speed measurement, the server with the faster response speed to receive each request, the requested data can be obtained quickly and the request efficiency is improved.
Optionally, the step of initiating a query request to at least one backend node includes:
carrying out batch processing and/or encapsulation processing on the network request to obtain a processed network request;
and sending the processed network request to at least one corresponding back-end node.
Specifically, in the embodiment of the present invention, concurrent requests may also be batched, for example: 100 requests are sent in 10 batches, reducing the concurrency; or multiple network requests are encapsulated into one network request, reducing the number of network requests. This realizes the optimization of the network request mode.
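Both optimizations can be sketched as below; the envelope shape and `/api/batch` URL are illustrative assumptions:

```javascript
// Split many concurrent requests into fixed-size batches,
// e.g. 100 requests sent as 10 batches of 10.
function splitIntoBatches(requests, batchSize) {
  const batches = [];
  for (let i = 0; i < requests.length; i += batchSize) {
    batches.push(requests.slice(i, i + batchSize));
  }
  return batches;
}

// Encapsulate several requests into one envelope request
// (one network round trip carrying the individual requests in its body).
function encapsulate(requests) {
  return { url: '/api/batch', body: { requests } };
}
```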
In addition, an embodiment of the present invention further provides a front-end processing apparatus, where the front-end processing apparatus includes:
the interception module is used for responding to a loading instruction of the front-end page and intercepting a network request corresponding to the loading instruction through a service working thread;
the calling module is used for calling the target data and rendering and displaying if the target data corresponding to the network request exists in the prestored cache data;
the initiation module is used for determining an access destination domain name corresponding to the network request based on an adaptive adjustment strategy if the target data corresponding to the network request does not exist in the prestored cache data, and is used for initiating a query request to at least one back-end node to acquire the target data corresponding to the network request and rendering and displaying the target data.
In order to solve the technical problem that highly concurrent requests from a front-end page cause response timeouts and thereby a blank page, the embodiment of the invention provides a front-end processing method based on multithreaded concurrent request processing, which performs interception, response optimization, and request optimization for concurrent network requests, so that quick response to concurrent network requests is realized, the access performance of the front end of the existing platform is effectively improved, and the user experience is improved. For front-end pages of data applications, compared with the prior-art mode of uniform settings that do not distinguish data types, the method in the embodiment of the invention sets a longer cache time (such as 6 months) for data with a low change frequency (such as user attribute data like age, sex, and activity area) and a shorter cache time for data with a high change frequency, which can effectively reduce unnecessary network data transmission, thereby greatly saving related costs and having high practical application value.
The principle and implementation process of front-end processing are implemented in this embodiment, please refer to the above embodiments, and no further description is given here.
In addition, the embodiment of the invention also provides a terminal device, which comprises a memory, a processor and a front-end processing program stored on the memory and capable of running on the processor, wherein the front-end processing program realizes the steps of the front-end processing method when being executed by the processor.
Because the front-end processing program is executed by the processor and adopts all the technical schemes of all the embodiments, the front-end processing program at least has all the beneficial effects brought by all the technical schemes of all the embodiments and is not described in detail herein.
In addition, the embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a front-end processing program, and the front-end processing program realizes the steps of the front-end processing method when being executed by a processor.
Because the front-end processing program is executed by the processor and adopts all the technical schemes of all the embodiments, the front-end processing program at least has all the beneficial effects brought by all the technical schemes of all the embodiments and is not described in detail herein.
Compared with the prior art, the front-end processing method, device, terminal equipment, and storage medium provided by the embodiment of the invention intercept, in response to a loading instruction of a front-end page, the network request corresponding to the loading instruction through a service working thread; if the target data corresponding to the network request does not exist in the pre-stored cache data, an access destination domain name corresponding to the network request is determined based on an adaptive adjustment strategy, and a query request is initiated to at least one back-end node to acquire the target data corresponding to the network request, which is then rendered and displayed. By intercepting the network request through a service working thread, optimizing, based on the adaptive adjustment strategy, the network requests whose target data is not in the pre-stored cache data, and acquiring, rendering, and displaying the target data through the back-end node, timely processing of network requests is achieved, the data response speed is improved, and the use experience of the user is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or system that includes the element.
The numbering of the foregoing embodiments of the present application is for description only and does not indicate the relative merits of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device, or the like) to perform the method of each embodiment of the present application.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the disclosure herein, or its direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.
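The speed-measurement branch of the adaptive adjustment policy described in the embodiments (probe each back-end node, determine node priority from the measured results, and choose the access destination accordingly) can be sketched as follows. This is an illustrative sketch only; `pickDestination` and `probe` are hypothetical names, and the patent does not specify this API.

```javascript
// Probe every candidate back-end node concurrently, rank the nodes by
// measured round-trip time, and return their domains in priority order
// (fastest first). `probe(node)` is assumed to resolve to a latency in ms.
async function pickDestination(nodes, probe) {
  const results = await Promise.all(
    nodes.map(async (node) => ({ node, rtt: await probe(node) }))
  );
  results.sort((a, b) => a.rtt - b.rtt); // lower latency = higher priority
  return results.map((r) => r.node);     // domains ordered by priority
}
```

The failover branch of the same policy (retry the request against a changed domain name after an access failure) would then walk this priority list until one node responds.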

Claims (14)

1. A front-end processing method, characterized in that the front-end processing method comprises the steps of:
in response to a loading instruction of a front-end page, intercepting, through a service worker thread, a network request corresponding to the loading instruction;
if target data corresponding to the network request does not exist in pre-stored cache data, determining, based on an adaptive adjustment policy, an access destination domain name corresponding to the network request, initiating a query request to at least one back-end node to acquire the target data corresponding to the network request, and rendering and displaying the target data.
2. The front-end processing method according to claim 1, wherein after the step of intercepting, through the service worker thread, the network request corresponding to the loading instruction, the method further comprises:
if the target data corresponding to the network request exists in the pre-stored cache data, calling the target data from the cache data and rendering and displaying the target data.
3. The front-end processing method according to claim 1, wherein before the step of intercepting, through the service worker thread, the network request corresponding to the loading instruction, the method further comprises:
judging whether the browser version to which the front-end page belongs supports the service worker thread;
and if the browser version to which the front-end page belongs supports the service worker thread, starting the service worker thread.
4. The front-end processing method according to claim 2, wherein before the step of, if the target data corresponding to the network request exists in the pre-stored cache data, calling the target data from the cache data and rendering and displaying the target data, the method further comprises:
acquiring historical behavior data corresponding to the front-end page;
analyzing the historical behavior data to obtain user preference information;
and storing the data related to the user preference information to obtain the cache data.
5. The front-end processing method of claim 4, wherein the step of storing the data related to the user preference information to obtain the cached data comprises:
storing the data related to the user preference information by taking a single page as granularity to obtain the cache data; and/or
and storing the data related to the user preference information by taking the data in the page as granularity to obtain the cache data.
6. The front-end processing method according to claim 5, wherein the step of storing the data related to the user preference information by taking the data in the page as granularity to obtain the cache data comprises:
setting a cache configuration of at least one data type in the full data based on a pre-trained data change prediction model, and executing at least one of data storage, data change and data deletion based on the cache configuration; and/or
querying the data related to the user preference information at preset time intervals, judging whether a changed data identifier exists, and if the changed data identifier exists, executing at least one of data storage, data change and data deletion according to the data identifier.
7. The front-end processing method according to claim 6, wherein before the step of setting, based on the pre-trained data change prediction model, a cache configuration of at least one data type in the full data, the method further comprises:
acquiring pre-collected sample data, wherein the sample data comprises at least one of a data type, a changed data amount, a change time and a service type;
processing the sample data to obtain a plurality of multi-dimensional data sets;
and performing model training based on the multi-dimensional data sets to obtain the data change prediction model.
8. The front-end processing method according to claim 6, wherein the step of storing the data related to the user preference information to obtain the cache data further comprises:
preloading the full data in response to a start instruction of an initial page corresponding to the full data;
in response to a received data request instruction for the initial page, selecting corresponding request data from the loaded full data according to the data request instruction;
and sending the request data to a main thread of the browser for data rendering and displaying.
9. The front-end processing method according to claim 2, wherein the target data includes preference data and/or non-preference data, and the step of calling the target data from the cache data and rendering and displaying if the target data corresponding to the network request exists in the pre-stored cache data includes:
if the preference data in the target data exists in the pre-stored cache data, calling the preference data from the cache data and rendering and displaying the preference data; and/or
if the non-preference data in the target data has corresponding initialization data, calling the initialization data and rendering and displaying the initialization data.
10. The front-end processing method of claim 1, wherein the step of determining the access destination domain name corresponding to the network request based on an adaptive adjustment policy comprises:
initiating the network request to at least one back-end node according to the original access destination domain name of the network request, and, if the access of the network request fails, changing the original access destination domain name and re-initiating the network request according to the changed domain name; and/or
initiating a speed-measurement network request to each back-end node to acquire a speed-measurement result of each back-end node, determining the priority of each back-end node according to the speed-measurement results, and determining the access destination domain name corresponding to the network request based on the priorities of the back-end nodes.
11. The front-end processing method of claim 1, wherein the step of initiating a query request to at least one back-end node comprises:
carrying out batch processing and/or encapsulation processing on the network request to obtain a processed network request;
and sending the processed network request to the corresponding at least one back-end node.
12. A front-end processing apparatus, the front-end processing apparatus comprising:
an interception module, configured to respond to a loading instruction of a front-end page and intercept, through a service worker thread, a network request corresponding to the loading instruction;
an initiation module, configured to, if target data corresponding to the network request does not exist in pre-stored cache data, determine an access destination domain name corresponding to the network request based on an adaptive adjustment policy, initiate a query request to at least one back-end node to acquire the target data corresponding to the network request, and render and display the target data.
13. A terminal device, characterized in that it comprises a memory, a processor and a front-end processing program stored on the memory and executable on the processor, which front-end processing program, when executed by the processor, implements the steps of the front-end processing method according to any of claims 1-11.
14. A computer readable storage medium, wherein a front-end processing program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the front-end processing method according to any of claims 1-11.
CN202311353656.7A 2023-10-18 2023-10-18 Front-end processing method, front-end processing device, terminal equipment and storage medium Pending CN117421499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311353656.7A CN117421499A (en) 2023-10-18 2023-10-18 Front-end processing method, front-end processing device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117421499A true CN117421499A (en) 2024-01-19

Family

ID=89527684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311353656.7A Pending CN117421499A (en) 2023-10-18 2023-10-18 Front-end processing method, front-end processing device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117421499A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117914942A (en) * 2024-03-20 2024-04-19 广东银基信息安全技术有限公司 Data request caching method and device, intelligent terminal and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination