CN113419784A - Page resource caching method, device, equipment and medium - Google Patents

Page resource caching method, device, equipment and medium

Info

Publication number
CN113419784A
CN113419784A (application number CN202110722084.XA)
Authority
CN
China
Prior art keywords
page
address
predicted
access
user
Prior art date
Legal status
Pending
Application number
CN202110722084.XA
Other languages
Chinese (zh)
Inventor
余鸿飞
Current Assignee
Weikun Shanghai Technology Service Co Ltd
Original Assignee
Weikun Shanghai Technology Service Co Ltd
Priority date
Filing date
Publication date
Application filed by Weikun Shanghai Technology Service Co Ltd filed Critical Weikun Shanghai Technology Service Co Ltd
Priority to CN202110722084.XA priority Critical patent/CN113419784A/en
Priority to PCT/CN2021/109049 priority patent/WO2023272858A1/en
Publication of CN113419784A publication Critical patent/CN113419784A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F 16/9574 Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and discloses a method, an apparatus, a device and a medium for caching page resources. The method comprises the following steps: in response to a page loading completion signal, acquiring a target user portrait according to a user identifier to be predicted; using a first preset page prediction model, predicting the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address; searching a local cache for the predicted value of the (i+1)th access page address to obtain a cache lookup result; when the cache lookup result is not cached, acquiring the page resource from the server according to the predicted value of the (i+1)th access page address, to obtain the page resource to be cached; and storing the page resource to be cached in the local cache. In this way, the page resources that the user is likely to access next are automatically cached in the local cache without the user noticing, which increases the loading speed of pages that have not yet been opened.

Description

Page resource caching method, device, equipment and medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a medium for caching page resources.
Background
When a user visits a page, the client first downloads the page's static resources, such as CSS (Cascading Style Sheets), JS (JavaScript), HTML (HyperText Markup Language) and image files; only when all static resources have been downloaded can the whole page be rendered normally. This loading mode has the following problem: if the page's static resources are large or the user's current network conditions are poor, downloading the static resources takes a long time, and the user often sees a lengthy loading animation or a blank screen while waiting for the page to load, which degrades the user experience. To mitigate slow loading, the client's cache function is used: when a user visits a page, the client caches the page's static resources locally, and when the user opens the page again, the client reads the static resource files directly from the local cache, which speeds up loading. However, caching the static resources of already visited pages relies on the user having opened the page beforehand; the cache policy does nothing for pages the user has not yet opened, so it cannot increase the loading speed of unopened pages.
Disclosure of Invention
The main purpose of the present application is to provide a page resource caching method, apparatus, device and medium, aiming to solve the technical problem in the prior art that caching the static resources of already visited pages with the client's cache function cannot increase the loading speed of pages that have not yet been opened.
In order to achieve the above object, the present application provides a method for caching page resources, where the method includes:
acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries a user identifier to be predicted and the real value of the ith access page address;
responding to the page loading completion signal, and acquiring a target user portrait according to the user identifier to be predicted;
using a first preset page prediction model, predicting the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address;
searching a local cache for the predicted value of the (i+1)th access page address;
when no cache result is found, acquiring the page resource from a server according to the predicted value of the (i+1)th access page address, to obtain the page resource to be cached;
and storing the page resource to be cached in the local cache.
Further, the step of obtaining a target user portrait according to the user identifier to be predicted includes:
calling a page prediction interface, inputting the user identifier to be predicted into a user portrait model, and acquiring the target user portrait output by the user portrait model, wherein the user portrait model acquires the user data to be profiled according to the user identifier to be predicted and performs user profiling on that data to obtain the target user portrait.
Further, the step of searching a local cache for the predicted value of the (i+1)th access page address includes:
searching the local cache for the predicted value of the (i+1)th access page address;
when the page address is not found, determining that no cache result is found in the local cache;
when the page address is found, acquiring a version identifier from the server according to the predicted value of the (i+1)th access page address to obtain a version identifier to be cached, and taking the version identifier of the page resource corresponding to the predicted value of the (i+1)th access page address in the local cache as the local cache version identifier;
comparing the version identifier to be cached with the local cache version identifier;
when the version identifier to be cached is the same as the local cache version identifier, determining that a cache result is found in the local cache;
and when the version identifier to be cached is different from the local cache version identifier, determining that no cache result is found in the local cache.
Further, the step of acquiring the page resource from the server according to the predicted value of the (i+1)th access page address to obtain the page resource to be cached includes:
generating a page resource acquisition request according to the predicted value of the (i+1)th access page address, and sending the page resource acquisition request to the server;
and acquiring the page resource sent by the server according to the page resource acquisition request as the page resource to be cached.
Further, after the step of obtaining the page load completion signal of the ith access page, the method further includes:
acquiring the real page address of the (i-1)th visited page according to the user identifier to be predicted and the real value of the ith access page address, to obtain the real value of the (i-1)th access page address;
acquiring a user portrait to be predicted according to the user identifier to be predicted;
training a second preset page prediction model according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted;
and updating the first preset page prediction model according to the trained second preset page prediction model.
Further, the step of training a second preset page prediction model according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted includes:
taking the real value of the ith access page address as the calibration value of the ith access page;
inputting the user portrait to be predicted and the real value of the (i-1)th access page address into the second preset page prediction model to predict the page address of the ith access, so as to obtain the predicted value of the ith access page address;
and training the second preset page prediction model according to the predicted value of the ith access page address and the calibration value of the ith access page.
Further, the step of updating the first preset page prediction model according to the trained second preset page prediction model includes:
and updating the model parameters of the first preset page prediction model according to the model parameters of the second preset page prediction model at a preset model parameter update time, wherein the first preset page prediction model and the second preset page prediction model have the same model structure.
The present application further provides a device for caching page resources, the device includes:
the page loading completion signal acquisition module is used for acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries the user identifier to be predicted and the real value of the address of the ith access page;
the target user portrait acquisition module is used for responding to the page loading completion signal and acquiring a target user portrait according to the user identification to be predicted;
the page address prediction module is used for predicting, with a first preset page prediction model, the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address;
the cache searching module is used for searching a local cache for the predicted value of the (i+1)th access page address;
the to-be-cached page resource determining module is used for acquiring the page resource from the server according to the predicted value of the (i+1)th access page address when no cache result is found, so as to obtain the page resource to be cached;
and the storage module is used for storing the page resource to be cached to the local cache.
The present application further proposes a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above methods when executing the computer program.
The present application also proposes a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any of the above.
According to the page resource caching method, apparatus, device and medium, a page loading completion signal of the ith access page is first acquired, the signal carrying a user identifier to be predicted and the real value of the ith access page address. In response to the signal, a target user portrait is acquired according to the user identifier to be predicted, and a first preset page prediction model predicts the page address of the (i+1)th access from the real value of the ith access page address and the target user portrait, yielding the predicted value of the (i+1)th access page address. The predicted value is then looked up in a local cache; when no cache result is found, the page resource is acquired from the server according to the predicted value of the (i+1)th access page address, giving the page resource to be cached, which is finally stored in the local cache. In this way, the page resources that the user is likely to access next are automatically cached locally without the user noticing, and when the user actually visits the predicted page address, the page resources are loaded from the local cache, reducing page loading time, increasing the loading speed of unopened pages and improving the user experience.
Drawings
Fig. 1 is a schematic flowchart of a page resource caching method according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a structure of a cache device of a page resource according to an embodiment of the present application;
fig. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, an embodiment of the present application provides a page resource caching method, where the method includes:
S1: acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries a user identifier to be predicted and the real value of the ith access page address;
S2: responding to the page loading completion signal, and acquiring a target user portrait according to the user identifier to be predicted;
S3: using a first preset page prediction model, predicting the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address;
S4: searching a local cache for the predicted value of the (i+1)th access page address;
S5: when no cache result is found, acquiring the page resource from a server according to the predicted value of the (i+1)th access page address, to obtain the page resource to be cached;
S6: and storing the page resource to be cached in the local cache.
In this embodiment, a page loading completion signal of the ith access page is first acquired, the signal carrying a user identifier to be predicted and the real value of the ith access page address. In response to the signal, a target user portrait is acquired according to the user identifier to be predicted, and a first preset page prediction model predicts the page address of the (i+1)th access from the real value of the ith access page address and the target user portrait, yielding the predicted value of the (i+1)th access page address. The predicted value is then looked up in the local cache; when no cache result is found, the page resource is acquired from the server according to the predicted value of the (i+1)th access page address, giving the page resource to be cached, which is finally stored in the local cache. As a result, the page resources that the user is likely to access next are automatically cached locally without the user noticing, and when the user actually visits the predicted page address, the page resources are loaded from the local cache, reducing page loading time, increasing the loading speed of unopened pages and improving the user experience.
For S1, the client loads the ith access page for the user identifier to be predicted, generates a page loading completion signal when the ith access page has finished loading, takes the page address of the ith access page as the real value of the ith access page address, and carries the user identifier to be predicted and the real value of the ith access page address as parameters of the page loading completion signal.
The page loading completion signal is a signal generated when the ith access page finishes loading.
The client may be a mobile-device client, a desktop client or a browser.
Optionally, the ith access page refers to the page currently being browsed by the user. The page is a Web (World Wide Web) page.
The user identifier to be predicted is the user identifier of the user accessing the ith access page. The user identifier may be a user name, a user ID or anything else that uniquely identifies a user.
The real value of the ith access page address is the URL (Uniform Resource Locator) of the page actually accessed for the ith time by the user corresponding to the user identifier to be predicted.
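For illustration only, a minimal client-side sketch of how such a load-completion signal might be represented and emitted; the type name, event name and function below are hypothetical and not part of the application.

```typescript
// Hypothetical payload carried by the page-loading-completion signal.
interface PageLoadedSignal {
  userId: string;         // user identifier to be predicted
  currentPageUrl: string; // real value of the ith access page address
  visitIndex: number;     // i, the ordinal of this visit in the session
}

// Emit the signal once the current page has finished loading.
function emitPageLoadedSignal(userId: string, visitIndex: number): void {
  window.addEventListener("load", () => {
    const signal: PageLoadedSignal = {
      userId,
      currentPageUrl: window.location.href,
      visitIndex,
    };
    // Hand the signal to whatever component performs the prediction.
    document.dispatchEvent(new CustomEvent("page-loaded", { detail: signal }));
  });
}
```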
For S2, when the page loading completion signal is received, in response to the signal, the user portrait corresponding to the user identifier to be predicted is obtained as the target user portrait.
Optionally, when the page loading completion signal is received, a page prediction interface is called in response to the signal, and the user portrait corresponding to the user identifier to be predicted is obtained as the target user portrait; this decouples the user portrait from the client and helps improve the stability of the client service.
Optionally, the step of obtaining the user portrait corresponding to the user identifier to be predicted as the target user portrait includes: acquiring a user portrait library; searching the user portrait library for the user identifier to be predicted, and taking the user portrait corresponding to the found identifier as the target user portrait.
The user portrait library contains user identifiers and user portraits, each user identifier corresponding to one user portrait.
A user portrait includes, but is not limited to: gender, age, education background, location, device information, user category labels and browsing history.
When the application is applied to the financial industry, the user portrait further includes: user risk level, user security level and historical investment records (product type, amount).
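As an illustration of the lookup just described, a minimal sketch of fetching a user portrait from a portrait library keyed by user identifier; the field names and the in-memory Map are assumptions for illustration, not the application's actual data model.

```typescript
// Hypothetical user portrait record, roughly following the fields listed above.
interface UserPortrait {
  gender?: string;
  age?: number;
  education?: string;
  location?: string;
  deviceInfo?: string;
  categoryLabels: string[];
  browsingHistory: string[]; // previously visited page URLs
  riskLevel?: string;        // financial-industry extension
}

// A portrait library: one portrait per user identifier.
const portraitLibrary = new Map<string, UserPortrait>();

// Look up the target user portrait for the user identifier to be predicted.
function getTargetUserPortrait(userId: string): UserPortrait | undefined {
  return portraitLibrary.get(userId);
}
```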
For S3, the real value of the ith access page address and the target user portrait are input into the first preset page prediction model to predict the page address of the (i+1)th access, and the predicted page address is taken as the predicted value of the (i+1)th access page address.
Optionally, the page prediction interface is called, the real value of the ith access page address and the target user portrait are input into the first preset page prediction model to predict the page address of the (i+1)th access, and the predicted page address is taken as the predicted value of the (i+1)th access page address; this decouples the first preset page prediction model from the client and helps improve the stability of the client service.
For example, the real value of the ith access page address is the address of the page user m actually visited on the ith access, and the predicted value of the (i+1)th access page address is the address of the page user m is predicted to visit on the (i+1)th access; this example is not intended to be limiting.
The target user portrait is used as an input to the first preset page prediction model because users with identical or similar portraits tend to visit similar next pages. For example, a user with high risk tolerance browsing an investment list page is likely to select a fund product with greater volatility, so before the user actually opens that fund product's page, its page resources are cached in advance in the client's local cache; this example is not intended to be limiting.
The first preset page prediction model is a model trained on the basis of an AI (artificial intelligence) deep learning model.
For S4, the client's local cache is searched according to the predicted value of the (i+1)th access page address; when the corresponding page resource is found in the local cache, the cache lookup result is determined to be cached, and when it is not found, the cache lookup result is determined to be not cached.
Page resources are the static resources of a page, such as CSS (Cascading Style Sheets), JS (JavaScript), HTML (HyperText Markup Language) and image files.
For S5, when no cache result is found, the page resource corresponding to the predicted value of the (i+1)th access page address is not stored in the client's local cache; the page resource is therefore obtained from the server according to the predicted value, and all obtained page resources are taken as the page resources to be cached.
For S6, the page resource to be cached is stored in the client's local cache, so that when the user visits the page corresponding to the predicted value of the (i+1)th access page address, the page resource is loaded from the local cache, reducing page loading time, increasing the loading speed of the unopened page and improving the user experience.
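Putting S3 to S6 together, a minimal client-side sketch of the predict-lookup-fetch-store flow using the browser Cache API; the PagePredictor interface is an assumed stand-in for the first preset page prediction model (for example, a remote prediction service reached through the page prediction interface), and the cache name is illustrative.

```typescript
// Simplified stand-in for the portrait shape sketched in the earlier example.
type UserPortrait = Record<string, unknown>;

// Assumed interface standing in for the first preset page prediction model.
interface PagePredictor {
  predictNextPage(currentUrl: string, portrait: UserPortrait): Promise<string>;
}

// S3-S6: predict the next page, check the local cache, fetch and store on a miss.
async function prefetchPredictedPage(
  predictor: PagePredictor,
  currentUrl: string,
  portrait: UserPortrait
): Promise<void> {
  // S3: predicted value of the (i+1)th access page address.
  const predictedUrl = await predictor.predictNextPage(currentUrl, portrait);

  // S4: look up the predicted address in the local cache.
  const cache = await caches.open("page-resource-cache");
  const cached = await cache.match(predictedUrl);

  if (!cached) {
    // S5: not cached, so fetch the page resource from the server.
    const response = await fetch(predictedUrl);
    if (response.ok) {
      // S6: store the page resource to be cached in the local cache.
      await cache.put(predictedUrl, response.clone());
    }
  }
}
```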
In an embodiment, the step of obtaining the target user portrait according to the user identifier to be predicted includes:
S21: calling a page prediction interface, inputting the user identifier to be predicted into a user portrait model, and acquiring the target user portrait output by the user portrait model, wherein the user portrait model acquires the user data to be profiled according to the user identifier to be predicted and performs user profiling on that data to obtain the target user portrait.
In this embodiment, the page prediction interface is called, the user identifier to be predicted is input into the user portrait model, and the target user portrait output by the model is obtained; the model acquires the user data to be profiled according to the identifier and builds the portrait from that data. The portrait can therefore be generated in real time, which improves the accuracy of the determined target user portrait and, in turn, the accuracy of predicting the page address of the next access; decoupling the user portrait from the client through the page prediction interface also helps improve the stability of the client service.
For S21, the page prediction interface is called and the user identifier to be predicted is input into the user portrait model; on receiving the identifier, the model acquires the user data to be profiled, performs user profiling on that data in real time, and the resulting data is taken as the target user portrait.
The user data to be profiled includes, but is not limited to: basic user information, user category labels and historical browsing records. The basic user information includes, but is not limited to: gender, age, education background, location and device information. A user category label is a label that classifies the user. A historical browsing record is a record of the pages the user has browsed.
The user portrait model is a model obtained through neural network training.
In an embodiment, the step of searching the local cache for the predicted value of the (i+1)th access page address includes:
S41: searching the local cache for the predicted value of the (i+1)th access page address;
S42: when the page address is not found, determining that no cache result is found in the local cache;
S43: when the page address is found, acquiring a version identifier from the server according to the predicted value of the (i+1)th access page address to obtain a version identifier to be cached, and taking the version identifier of the page resource corresponding to the predicted value of the (i+1)th access page address in the local cache as the local cache version identifier;
S44: comparing the version identifier to be cached with the local cache version identifier;
S45: when the version identifier to be cached is the same as the local cache version identifier, determining that a cache result is found in the local cache;
S46: and when the version identifier to be cached is different from the local cache version identifier, determining that no cache result is found in the local cache.
This embodiment compares the version identifier to be cached with the local cache version identifier, determines that the cache lookup result is cached when the two identifiers are the same, and determines that the cache lookup result is not cached when they differ, which improves the accuracy of the cache lookup result.
For S42, when the page address is not found, no page resource for the predicted value of the (i+1)th access page address is cached in the client's local cache, so it can be determined that no cache result is found in the local cache.
For S43, when the page address is found, a page resource for the predicted value of the (i+1)th access page address is cached in the client's local cache. A version identifier is then acquired from the server according to the predicted value and taken as the version identifier to be cached, and the version identifier of the page resource corresponding to the predicted value in the client's local cache is taken as the local cache version identifier. In other words, the version identifier to be cached identifies the latest version of the page resource at the server, and the local cache version identifier identifies the locally cached page resource.
For S45, when the version identifier to be cached is the same as the local cache version identifier, the latest page resource at the server is the same as the locally cached page resource, so it is determined that a cache result is found in the local cache.
For S46, when the version identifier to be cached differs from the local cache version identifier, the latest page resource at the server differs from the locally cached page resource, so it is determined that no cache result is found in the local cache.
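A minimal sketch of the version check in S41-S46, assuming the server exposes the resource version through an ETag-style header retrieved with a HEAD request; the header name and cache name are assumptions for illustration, not requirements of the application.

```typescript
// Returns true when the locally cached resource is up to date ("cache result found"),
// false when it is missing or stale ("no cache result found").
async function isCacheUsable(predictedUrl: string): Promise<boolean> {
  const cache = await caches.open("page-resource-cache");
  const cachedResponse = await cache.match(predictedUrl);

  // S42: page address not found locally.
  if (!cachedResponse) {
    return false;
  }

  // S43: fetch the server-side version identifier (assumed to be an ETag header).
  const head = await fetch(predictedUrl, { method: "HEAD" });
  const serverVersion = head.headers.get("ETag");          // version identifier to be cached
  const localVersion = cachedResponse.headers.get("ETag"); // local cache version identifier

  // S44-S46: compare the two version identifiers.
  return serverVersion !== null && serverVersion === localVersion;
}
```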
In an embodiment, the step of acquiring the page resource from the server according to the predicted value of the (i+1)th access page address to obtain the page resource to be cached includes:
S51: generating a page resource acquisition request according to the predicted value of the (i+1)th access page address, and sending the page resource acquisition request to the server;
S52: and acquiring the page resource sent by the server in response to the page resource acquisition request, as the page resource to be cached.
In this embodiment, the page resource acquisition request is generated according to the predicted value of the (i+1)th access page address, and the page resource sent by the server in response to that request is obtained and used as the page resource to be cached; the latest version of the page resource at the server is thus obtained, laying the foundation for caching the page resource in the client's local cache in advance.
For S51, a page resource acquisition request is generated according to the predicted value of the (i+1)th access page address; that is, when the request is generated, the predicted value is encapsulated as a parameter of the request.
For S52, on receiving the page resource acquisition request, the server first parses the predicted value of the (i+1)th access page address from the request, then searches a page resource library for the parsed page address, and sends the page resource found in the library for that address to the client that issued the request.
The page resource library comprises: page addresses and page resources, wherein each page address corresponds to one page resource.
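For illustration, a minimal sketch of the server side of S51-S52: the request carries the predicted page address as a parameter and the server answers from a page resource library. The route, query-parameter name, port and in-memory map are assumptions, and the library entry is simplified to a single string rather than a full bundle of static resources.

```typescript
import { createServer } from "node:http";

// Page resource library: each page address maps to one page resource (simplified here).
const pageResourceLibrary = new Map<string, string>([
  ["/fund/volatile-product", "<html><!-- prebuilt fund product page --></html>"],
]);

// S52 (server side): parse the predicted page address from the request and return
// the corresponding page resource from the library.
createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const predictedAddress = url.searchParams.get("predictedAddress") ?? "";
  const resource = pageResourceLibrary.get(predictedAddress);

  if (resource) {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(resource);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```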
In an embodiment, after the step of obtaining the page load completion signal of the ith access page, the method further includes:
S71: acquiring the real page address of the (i-1)th visited page according to the user identifier to be predicted and the real value of the ith access page address, to obtain the real value of the (i-1)th access page address;
S72: acquiring a user portrait to be predicted according to the user identifier to be predicted;
S73: training a second preset page prediction model according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted;
S74: and updating the first preset page prediction model according to the trained second preset page prediction model.
In this embodiment, when the page loading completion signal of the ith access page is acquired, the second preset page prediction model is trained and the first preset page prediction model is updated from the trained second preset page prediction model, so that training data do not need to be collected manually and the accuracy with which the first preset page prediction model predicts the next page address improves quickly; separating the model being trained from the model serving predictions also improves the response efficiency of the first preset page prediction model.
For S71, the real page address of the (i-1)th visited page is obtained from the client's local cache according to the user identifier to be predicted and the real value of the ith access page address, and the obtained page address is taken as the real value of the (i-1)th access page address.
For S72, the user identifier to be predicted is input into the user portrait model, and the user portrait to be predicted returned by the user portrait model according to that identifier is obtained.
Optionally, a page training interface is called, the user identifier to be predicted is input into the user portrait model, and the user portrait to be predicted returned by the user portrait model is obtained, so that the user portrait is decoupled from the client, which helps improve the stability of the client service.
Optionally, the step of acquiring the user portrait to be predicted according to the user identifier to be predicted includes: acquiring the user portrait library; searching the user portrait library for the user identifier to be predicted, and taking the user portrait corresponding to the found identifier as the user portrait to be predicted.
For S73, the real value of the (i-1)th access page address and the user portrait to be predicted are input into the second preset page prediction model to predict the page address of the ith access, and the second preset page prediction model is trained according to the predicted page address and the real value of the ith access page address.
Optionally, the page training interface is called, the real value of the (i-1)th access page address and the user portrait to be predicted are input into the second preset page prediction model to predict the page address of the ith access, and the second preset page prediction model is trained according to the predicted page address and the real value of the ith access page address, so that the second preset page prediction model is decoupled from the client, which helps improve the stability of the client service.
The first preset page prediction model and the second preset page prediction model have the same model structure.
For S74, the model parameters of the first preset page prediction model are updated according to the model parameters of the trained second preset page prediction model.
It can be understood that, in another embodiment, the first preset page prediction model may also be trained directly according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted, which is not limited here.
In an embodiment, the step of training a second preset page prediction model according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted includes:
S731: taking the real value of the ith access page address as the calibration value of the ith access page;
S732: inputting the user portrait to be predicted and the real value of the (i-1)th access page address into the second preset page prediction model to predict the page address of the ith access, so as to obtain the predicted value of the ith access page address;
S733: and training the second preset page prediction model according to the predicted value of the ith access page address and the calibration value of the ith access page.
In this embodiment, the user portrait to be predicted and the real value of the (i-1)th access page address are input into the second preset page prediction model to predict the page address of the ith access, and the model is trained from the predicted value of the ith access page address and the calibration value of the ith access page, so that training data do not need to be collected manually and the accuracy with which the first preset page prediction model predicts the next page address improves quickly.
For S731, since the real value of the ith access page address is the page address the user actually visited, it is taken as the calibration value of the ith access page.
For S732, the user portrait to be predicted and the real value of the (i-1)th access page address are input into the second preset page prediction model to predict the page address of the ith access, and the predicted page address is taken as the predicted value of the ith access page address.
For S733, the predicted value of the ith access page address and the calibration value of the ith access page are fed into a cross-entropy loss function to compute a target loss value, and the parameters of the second preset page prediction model are updated according to the target loss value.
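To make the loss computation concrete, a minimal sketch of the cross-entropy loss over a candidate set of page addresses; it assumes the model outputs a probability distribution over known page addresses, and the gradient update itself is left to whatever training framework the second preset page prediction model uses.

```typescript
// Cross-entropy between the model's predicted distribution over candidate page
// addresses and the calibration value (the address actually visited on the ith access).
function crossEntropyLoss(
  predictedProbs: Map<string, number>, // page address -> predicted probability
  actualAddress: string                // calibration value of the ith access page
): number {
  const p = predictedProbs.get(actualAddress) ?? 1e-12; // avoid log(0)
  return -Math.log(p);
}

// Example: the model puts 0.7 on the page the user actually visited.
const probs = new Map<string, number>([
  ["/fund/volatile-product", 0.7],
  ["/fund/stable-product", 0.3],
]);
const targetLoss = crossEntropyLoss(probs, "/fund/volatile-product"); // ~0.357
```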
In an embodiment, the step of updating the first preset page prediction model according to the trained second preset page prediction model includes:
S741: updating the model parameters of the first preset page prediction model according to the model parameters of the second preset page prediction model at a preset model parameter update time, wherein the first preset page prediction model and the second preset page prediction model have the same model structure.
In this embodiment, the model parameters of the first preset page prediction model are updated at a preset model parameter update time, which avoids frequent updates degrading the first preset page prediction model's ability to serve predictions.
For S741, the preset model parameter update time may be, but is not limited to, 3 o'clock every day.
The model parameters are extracted from the second preset page prediction model to obtain a model parameter matrix to be updated, and the model parameters of the first preset page prediction model are updated according to this matrix; the updated first preset page prediction model is then used to predict the page address of the next access. Because only the model parameters are updated, the amount of data to transfer is reduced and update efficiency is improved.
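A minimal sketch of the scheduled parameter copy between the two models, assuming the parameters can be exposed as a flat array of numbers; the PredictionModel interface and the fixed-interval timer are illustrative assumptions rather than the application's actual update mechanism.

```typescript
// Both models share the same structure, so parameters can be copied one-to-one.
interface PredictionModel {
  getParameters(): number[];
  setParameters(params: number[]): void;
}

// Copy the trained parameters of the second (training) model into the first
// (serving) model at a preset update time, e.g. once every 24 hours.
function scheduleParameterSync(
  servingModel: PredictionModel,
  trainingModel: PredictionModel,
  intervalMs: number = 24 * 60 * 60 * 1000
): void {
  setInterval(() => {
    servingModel.setParameters(trainingModel.getParameters());
  }, intervalMs);
}
```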
Referring to fig. 2, the present application further provides a page resource caching apparatus, where the apparatus includes:
a page loading completion signal obtaining module 100, configured to obtain a page loading completion signal of an ith access page, where the page loading completion signal carries a user identifier to be predicted and a true value of an address of the ith access page;
a target user portrait acquisition module 200, configured to respond to the page loading completion signal and acquire a target user portrait according to the user identifier to be predicted;
a page address prediction module 300, configured to predict, with a first preset page prediction model, the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, so as to obtain the predicted value of the (i+1)th access page address;
a cache lookup module 400, configured to search a local cache for the predicted value of the (i+1)th access page address;
a to-be-cached page resource determining module 500, configured to, when no cache result is found, obtain the page resource from a server according to the predicted value of the (i+1)th access page address, so as to obtain the page resource to be cached;
a storing module 600, configured to store the page resource to be cached in the local cache.
In this embodiment, a page loading completion signal of the ith access page is first acquired, the signal carrying a user identifier to be predicted and the real value of the ith access page address. In response to the signal, a target user portrait is acquired according to the user identifier to be predicted, and a first preset page prediction model predicts the page address of the (i+1)th access from the real value of the ith access page address and the target user portrait, yielding the predicted value of the (i+1)th access page address. The predicted value is then looked up in the local cache; when no cache result is found, the page resource is acquired from the server according to the predicted value of the (i+1)th access page address, giving the page resource to be cached, which is finally stored in the local cache. As a result, the page resources that the user is likely to access next are automatically cached locally without the user noticing, and when the user actually visits the predicted page address, the page resources are loaded from the local cache, reducing page loading time, increasing the loading speed of unopened pages and improving the user experience.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device is used to store data such as those used by the page resource caching method. The network interface of the computer device is used to communicate with an external terminal through a network connection. When executed by the processor, the computer program implements a page resource caching method comprising the following steps: acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries a user identifier to be predicted and the real value of the ith access page address; responding to the page loading completion signal, and acquiring a target user portrait according to the user identifier to be predicted; using a first preset page prediction model, predicting the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address; searching a local cache for the predicted value of the (i+1)th access page address; when no cache result is found, acquiring the page resource from a server according to the predicted value of the (i+1)th access page address, to obtain the page resource to be cached; and storing the page resource to be cached in the local cache.
In this embodiment, a page loading completion signal of the ith access page is first acquired, the signal carrying a user identifier to be predicted and the real value of the ith access page address. In response to the signal, a target user portrait is acquired according to the user identifier to be predicted, and a first preset page prediction model predicts the page address of the (i+1)th access from the real value of the ith access page address and the target user portrait, yielding the predicted value of the (i+1)th access page address. The predicted value is then looked up in the local cache; when no cache result is found, the page resource is acquired from the server according to the predicted value of the (i+1)th access page address, giving the page resource to be cached, which is finally stored in the local cache. As a result, the page resources that the user is likely to access next are automatically cached locally without the user noticing, and when the user actually visits the predicted page address, the page resources are loaded from the local cache, reducing page loading time, increasing the loading speed of unopened pages and improving the user experience.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements a page resource caching method comprising: acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries a user identifier to be predicted and the real value of the ith access page address; responding to the page loading completion signal, and acquiring a target user portrait according to the user identifier to be predicted; using a first preset page prediction model, predicting the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address; searching a local cache for the predicted value of the (i+1)th access page address; when no cache result is found, acquiring the page resource from a server according to the predicted value of the (i+1)th access page address, to obtain the page resource to be cached; and storing the page resource to be cached in the local cache.
In the executed page resource caching method, a page loading completion signal of the ith access page is first acquired, the signal carrying a user identifier to be predicted and the real value of the ith access page address. In response to the signal, a target user portrait is acquired according to the user identifier to be predicted, and a first preset page prediction model predicts the page address of the (i+1)th access from the real value of the ith access page address and the target user portrait, yielding the predicted value of the (i+1)th access page address. The predicted value is then looked up in a local cache; when no cache result is found, the page resource is acquired from the server according to the predicted value of the (i+1)th access page address, giving the page resource to be cached, which is finally stored in the local cache. As a result, the page resources that the user is likely to access next are automatically cached locally without the user noticing, and when the user actually visits the predicted page address, the page resources are loaded from the local cache, reducing page loading time, increasing the loading speed of unopened pages and improving the user experience.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database or other medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (SSRSDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM) and Rambus Dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article or method that comprises a list of elements does not only include those elements but may include other elements not expressly listed or inherent to such process, apparatus, article or method. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in a process, apparatus, article or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A caching method for page resources is characterized by comprising the following steps:
acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries a user identifier to be predicted and the real value of the ith access page address;
responding to the page loading completion signal, and acquiring a target user portrait according to the user identifier to be predicted;
using a first preset page prediction model, predicting the page address of the (i+1)th access according to the real value of the ith access page address and the target user portrait, to obtain the predicted value of the (i+1)th access page address;
searching a local cache for the predicted value of the (i+1)th access page address;
when no cache result is found, acquiring the page resource from a server according to the predicted value of the (i+1)th access page address, to obtain the page resource to be cached;
and storing the page resource to be cached in the local cache.
2. The method for caching page resources according to claim 1, wherein the step of obtaining a target user portrait according to the user identifier to be predicted comprises:
calling a page prediction interface, inputting the user identifier to be predicted into a user portrait model, and acquiring the target user portrait output by the user portrait model, wherein the user portrait model acquires the user data to be profiled according to the user identifier to be predicted and performs user profiling on that data to obtain the target user portrait.
3. The method for caching page resources according to claim 1, wherein the step of searching a local cache for the predicted value of the (i+1)th access page address comprises:
searching the local cache for the predicted value of the (i+1)th access page address;
when the page address is not found, determining that no cache result is found in the local cache;
when the page address is found, acquiring a version identifier from the server according to the predicted value of the (i+1)th access page address to obtain a version identifier to be cached, and taking the version identifier of the page resource corresponding to the predicted value of the (i+1)th access page address in the local cache as the local cache version identifier;
comparing the version identifier to be cached with the local cache version identifier;
when the version identifier to be cached is the same as the local cache version identifier, determining that a cache result is found in the local cache;
and when the version identifier to be cached is different from the local cache version identifier, determining that no cache result is found in the local cache.
4. The method for caching page resources according to claim 1, wherein the step of acquiring the page resource from the server according to the predicted value of the (i+1)th access page address to obtain the page resource to be cached comprises:
generating a page resource acquisition request according to the predicted value of the (i+1)th access page address, and sending the page resource acquisition request to the server;
and acquiring the page resource sent by the server according to the page resource acquisition request as the page resource to be cached.
5. The method for caching page resources according to claim 1, wherein after the step of obtaining the page load completion signal of the ith access page, the method further comprises:
acquiring the real page address of the (i-1)th visited page according to the user identifier to be predicted and the real value of the ith access page address, to obtain the real value of the (i-1)th access page address;
acquiring a user portrait to be predicted according to the user identifier to be predicted;
training a second preset page prediction model according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted;
and updating the first preset page prediction model according to the trained second preset page prediction model.
6. The method for caching page resources according to claim 5, wherein the step of training a second preset page prediction model according to the real value of the (i-1)th access page address, the real value of the ith access page address and the user portrait to be predicted comprises:
taking the real value of the ith access page address as the calibration value of the ith access page;
inputting the user portrait to be predicted and the real value of the (i-1)th access page address into the second preset page prediction model to predict the page address of the ith access, so as to obtain the predicted value of the ith access page address;
and training the second preset page prediction model according to the predicted value of the ith access page address and the calibration value of the ith access page.
7. The method for caching page resources according to claim 5, wherein the step of updating the first preset page prediction model according to the trained second preset page prediction model comprises:
updating the model parameters of the first preset page prediction model according to the model parameters of the second preset page prediction model at a preset model parameter update time, wherein the first preset page prediction model and the second preset page prediction model have the same model structure.
8. An apparatus for caching page resources, the apparatus comprising:
the page loading completion signal acquisition module is used for acquiring a page loading completion signal of the ith access page, wherein the page loading completion signal carries the user identifier to be predicted and the ith access page address real value;
the target user portrait acquisition module is used for responding to the page loading completion signal and acquiring a target user portrait according to the user identifier to be predicted;
the page address prediction module is used for predicting the page address of the (i+1)th access according to the ith access page address real value and the target user portrait by adopting a first preset page prediction model, so as to obtain an (i+1)th access page address predicted value;
the cache searching module is used for searching the (i+1)th access page address predicted value in a local cache;
the to-be-cached page resource determining module is used for acquiring the page resource from the server according to the (i+1)th access page address predicted value when no cache result is found, so as to obtain the page resource to be cached;
and the storage module is used for storing the page resource to be cached to the local cache.
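One way to read claim 8 is as a single client-side component whose methods correspond to the listed modules. The class below wires together the earlier sketches; its structure and names are illustrative assumptions, not the claimed apparatus.

```typescript
// Hypothetical page resource caching device combining the modules of claim 8.
class PageResourceCachingDevice {
  constructor(private readonly firstModel: PagePredictionModel) {}

  // Page loading completion signal acquisition module + target user portrait acquisition module.
  async onPageLoaded(userIdToBePredicted: string, ithAddressRealValue: string): Promise<void> {
    const targetUserPortrait = userPortraits.get(userIdToBePredicted);
    if (!targetUserPortrait) return;

    // Page address prediction module: apply the first preset page prediction model.
    const predictedAddress = this.firstModel.predict(targetUserPortrait, ithAddressRealValue);

    // Cache searching module + to-be-cached page resource determining module.
    if (!(await hasFreshCacheResult(predictedAddress))) {
      const resource = await fetchPageResourceToCache(predictedAddress);
      const versionId = await fetchVersionId(predictedAddress);

      // Storage module: store the page resource to the local cache.
      localCache.set(predictedAddress, { versionId, resource });
    }
  }
}
```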
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110722084.XA CN113419784A (en) 2021-06-28 2021-06-28 Page resource caching method, device, equipment and medium
PCT/CN2021/109049 WO2023272858A1 (en) 2021-06-28 2021-07-28 Page resource caching method and apparatus, device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110722084.XA CN113419784A (en) 2021-06-28 2021-06-28 Page resource caching method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113419784A (en) 2021-09-21

Family

ID=77717808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110722084.XA Pending CN113419784A (en) 2021-06-28 2021-06-28 Page resource caching method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN113419784A (en)
WO (1) WO2023272858A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116383537B (en) * 2023-05-23 2023-09-08 飞狐信息技术(天津)有限公司 Page data preloading method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9756108B2 (en) * 2012-05-29 2017-09-05 Google Inc. Preloading resources of a web page
CN108280125A (en) * 2017-12-12 2018-07-13 腾讯科技(深圳)有限公司 Method, apparatus, storage medium and the electronic device that the page is shown
CN112073405B (en) * 2020-09-03 2024-02-06 中国平安财产保险股份有限公司 Webpage data loading method and device, computer equipment and storage medium
CN112612982A (en) * 2021-01-05 2021-04-06 上海哔哩哔哩科技有限公司 Webpage preloading method and device and computer equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107864173A (en) * 2017-06-26 2018-03-30 平安普惠企业管理有限公司 Terminal page caching method, system and readable storage medium storing program for executing
CN111639289A (en) * 2020-05-13 2020-09-08 北京三快在线科技有限公司 Webpage loading method and device
CN112905939A (en) * 2021-02-25 2021-06-04 平安普惠企业管理有限公司 HTML5 page resource loading method, device, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117193907A (en) * 2023-08-25 2023-12-08 中移互联网有限公司 Page processing method and device

Also Published As

Publication number Publication date
WO2023272858A1 (en) 2023-01-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-09-21