CN110929192A - Front-end request processing method and device - Google Patents

Front-end request processing method and device

Info

Publication number
CN110929192A
Authority
CN
China
Prior art keywords
requests
request
page
page data
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911103187.7A
Other languages
Chinese (zh)
Inventor
Lei Tao (雷涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Urban Network Neighbor Information Technology Co Ltd
Beijing City Network Neighbor Technology Co Ltd
Original Assignee
Beijing City Network Neighbor Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing City Network Neighbor Technology Co Ltd
Priority to CN201911103187.7A
Publication of CN110929192A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/957 - Browsing optimisation, e.g. caching or content distillation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/546 - Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the invention provides a method and a device for processing front-end requests. The method comprises the following steps: after receiving static resource data, a front-end page generates different types of front-end requests according to the static resource data, wherein the different types of front-end requests at least comprise: a page data request and a buried point request; creating a front-end request queue, and adding all page data requests and all buried point requests into the front-end request queue in sequence, wherein all the page data requests are in front of the buried point requests; and sending all page data requests in the front-end request queue to a server, and starting to send the buried point requests to the server only after receiving the page display data returned by the server for all the page data requests. The invention can preferentially ensure normal display of the page under poor network conditions.

Description

Front-end request processing method and device
Technical Field
The present invention relates to the field of internet front-end technology, and in particular, to a method and an apparatus for processing front-end requests.
Background
In internet front-end technology, when a front-end page is loaded and displayed, static resource data is generally acquired first, and front-end requests are then initiated according to the static resource data. All of the initiated front-end requests are sent to the background server in order to obtain from it the data to be displayed on the front-end page, as well as other statistical and analytical data.
At present, when front-end requests are sent to the background server, they are usually all sent in parallel, so as to increase the loading speed of the front-end page.
However, under poor network conditions, for example when the network bandwidth is small and the network speed is slow, sending too many front-end requests in parallel causes congestion, which slows the loading of the front-end page.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide a front-end request processing method and apparatus that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides a method for processing a front-end request, where the method includes:
after receiving static resource data, a front-end page generates different types of front-end requests according to the static resource data, wherein the different types of front-end requests at least comprise: a page data request and a buried point request;
creating a front-end request queue, and adding all page data requests and all embedded point requests into the front-end request queue in sequence, wherein all the page data requests are in front of the embedded point requests;
and sending all page data requests in the front-end request queue to a server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
Optionally, the static resource data includes: first data written in a hypertext markup language, second data written in cascading style sheets, and third data written in an interpreted scripting language.
Optionally, a state identifier corresponding to each front-end request is further stored in the front-end request queue, where the state identifier includes: a first identifier indicating that the front-end request has finished executing and a second identifier indicating that the front-end request has not been executed.
Optionally, the step of sending all the page data requests in the front-end request queue to a server, and after receiving page display data returned by the server according to all the page data requests, starting to send the embedded point request to the server includes:
setting the state identifiers corresponding to all page data requests in the front-end request queue as the second identifiers;
sending the request contents of all page data requests to the server in a parallel sending mode so as to enable the server to return page display data according to the page data requests;
receiving the page display data, and setting a state identifier corresponding to a page data request corresponding to the received page display data as the first identifier;
and when the state identifiers corresponding to all the page data requests are the first identifiers, sending the embedded point requests to the server, and setting the state identifiers corresponding to the embedded point requests as the second identifiers.
Optionally, when the state identifiers corresponding to all the page data requests are the first identifiers, the method further includes:
and deleting all page data requests from the front-end request queue.
In a second aspect, an embodiment of the present invention further provides a device for processing a front-end request, where the device includes:
a generating module, configured to generate different types of front-end requests according to static resource data after a front-end page receives the static resource data, where the different types of front-end requests at least include: a page data request and a buried point request;
the queue module is used for creating a front-end request queue and sequentially adding all page data requests and all embedded point requests into the front-end request queue, wherein all the page data requests are in front of the embedded point requests;
and the processing module is used for sending all the page data requests in the front-end request queue to a server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
Optionally, a state identifier corresponding to each front-end request is further stored in the front-end request queue, where the state identifier includes: a first identifier indicating that the front-end request has finished executing and a second identifier indicating that the front-end request has not been executed.
Optionally, the processing module includes:
a first state unit, configured to set state identifiers corresponding to all page data requests in the front-end request queue as the second identifier;
the first sending unit is used for sending the request contents of all the page data requests to the server in a parallel sending mode so as to enable the server to return page display data according to all the page data requests;
the second state unit is used for receiving the page display data and setting a state identifier corresponding to a page data request corresponding to the received page display data as the first identifier;
and the second sending unit is used for sending the embedded point request to the server and setting the state identifier corresponding to the embedded point request as the second identifier when the state identifiers corresponding to all the page data requests are the first identifiers.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps in the method for processing a front-end request as described above when executing the computer program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps in the method for processing the front-end request described above.
In the embodiment of the invention, after the different types of front-end requests are generated according to the static resource data, a front-end request queue is created and the front-end requests are added to it for unified management. The page data requests related to the page display data are placed at the front of the queue, which guarantees that all page data requests are sent preferentially; meanwhile, the buried point requests are sent only after the page display data have been received, so that they cannot interfere with page display. Normal display of the page is therefore ensured preferentially under poor network conditions.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic diagram illustrating a method for processing a front-end request according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a step of sending a page data request and a buried point request according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a step of sending a page data request and a buried point request according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a device for processing a front-end request according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a processing module according to an embodiment of the invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for processing a front-end request, where the method includes:
step 101: after the front-end page receives the static resource data, generating different types of front-end requests according to the static resource data, wherein the different types of front-end requests at least comprise: a page data request and a buried point request;
it should be noted that when the front-end page performs page loading, it is preferred to acquire static resource data. The static resource data carries various types of front-end requests. The page data request in the front-end request is used for acquiring data such as pictures and characters required by page display. The buried point request is used to collect user information, jump information, etc., and does not relate to data required for page display. Preferably, the static resource data includes: first data written by HyperText Markup Language (HTML), second data written by Cascading Style Sheets (CSS), and third data written by interpreted scripting Language (JavaScript).
Step 102: creating a front-end request queue, and adding all page data requests and all embedded point requests into the front-end request queue in sequence, wherein all the page data requests are in front of the embedded point requests;
it should be noted that the front-end request queue follows the first-in-first-out principle, i.e. the front-end request at the front position in the queue will be executed first.
Step 103: and sending all page data requests in the front-end request queue to the server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
It should be noted that the server stores the data required for loading the front-end page. After the server receives a page data request, it returns the corresponding page display data according to the received request. In order to prevent the buried point requests from being executed at the same time as the page data requests and interfering with them, the sending or execution of the buried point requests may be started only after all the page data requests have been executed.
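As a rough sketch of step 103, assuming a browser environment with fetch() and the hypothetical type field and queue from the sketch above (the patent does not mandate this API), the two phases can be written as:

// Phase 1: send every page data request in parallel and wait for the page
// display data; Phase 2: only then start sending the buried point requests.
async function sendQueuedRequests(queue) {
  const pageData = queue.filter(r => r.type === 'pageData');
  const buriedPoints = queue.filter(r => r.type === 'buriedPoint');

  await Promise.allSettled(pageData.map(r => fetch(r.url)));      // page display data first
  await Promise.allSettled(buriedPoints.map(r => fetch(r.url)));  // buried points afterwards
}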
In the embodiment of the invention, after the different types of front-end requests are generated according to the static resource data, a front-end request queue is created and the front-end requests are added to it for unified management. The page data requests related to the page display data are placed at the front of the queue, which guarantees that all page data requests are sent preferentially; meanwhile, the buried point requests are sent only after the page display data have been received, so that they cannot interfere with page display. Normal display of the page is therefore ensured preferentially under poor network conditions.
In order to facilitate checking of the execution state of the front-end requests, on the basis of the foregoing embodiment of the present invention, in the embodiment of the present invention, a state identifier corresponding to each front-end request is further stored in the front-end request queue, where the state identifier includes: a first identifier indicating that the front-end request has finished executing and a second identifier indicating that the front-end request has not been executed.
It should be noted that the state identifier corresponding to each front-end request may be updated once every predetermined time, or whenever a change in the execution state of the front-end request is detected.
As shown in fig. 2, on the basis of the foregoing embodiments of the present invention, in the embodiments of the present invention, the step of sending all page data requests in the front-end request queue to the server, and after receiving page display data returned by the server according to all page data requests, starting to send a buried point request to the server includes:
step 201: setting state identifications corresponding to all page data requests in the front-end request queue as second identifications;
it should be noted that, since none of the front-end requests is executed, the status flags thereof are the second flag, that is, the front-end request does not start to be executed.
Step 202: sending the request contents of all page data requests to a server in a parallel sending mode so that the server returns page display data according to the page data requests;
it should be noted that the execution efficiency of the front-end request can be improved by the form of parallel transmission.
Step 203: receiving page display data, and setting a state identifier corresponding to a page data request corresponding to the received page display data as a first identifier;
it should be noted that after receiving the page display data, it indicates that the corresponding page data request has been executed, so that the corresponding status flag is changed to the first flag, i.e. the corresponding front-end request has been executed and ended. Of course, if the received data indicates that the execution of the page data request fails, the state identifier is also changed to the first identifier.
Step 204: and when the state identifiers corresponding to all the page data requests are the first identifiers, sending the embedded point requests to the server, and setting the state identifiers corresponding to the embedded point requests as the second identifiers.
It should be noted that, when the state identifiers corresponding to all the page data requests are the first identifiers, the method further includes: all page data requests are removed from the front-end request queue.
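The bookkeeping of steps 201 to 204, including the deletion just mentioned, could look roughly like the sketch below. Encoding the second identifier as status 0 and the first identifier as status 1, the function name and the use of fetch() are assumptions made for illustration, not requirements of the patent:

async function sendPageDataThenBuriedPoints(pageDataRequests, buriedPointRequests, queue) {
  pageDataRequests.forEach(r => { r.status = 0; });               // step 201: second identifier
  await Promise.all(pageDataRequests.map(async r => {             // step 202: parallel sending
    await fetch(r.url).catch(() => {});                           // success or failure both end execution
    r.status = 1;                                                 // step 203: first identifier
  }));

  if (pageDataRequests.every(r => r.status === 1)) {              // step 204
    pageDataRequests.forEach(r => queue.splice(queue.indexOf(r), 1)); // delete them from the queue
    buriedPointRequests.forEach(r => { r.status = 0; });
    await Promise.all(buriedPointRequests.map(r => fetch(r.url).catch(() => {})));
  }
}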
As shown in fig. 3, in order to interleave the page data requests and the buried point requests, after all the page data requests have started executing, one buried point request may start executing each time one page data request finishes, so that the number of front-end requests executing at the same time does not increase (a JavaScript sketch of this loop follows the steps below). Specifically, the method comprises the following steps:
Step 301: initiate requests: after the static resource data is received, generate the various types of front-end requests.
Step 302: enter the request queue: add the generated front-end requests to the request queue, with all page data requests in front of the buried point requests.
Step 303: check whether the length of the request queue is greater than 0; if not, end; if yes, go to step 304.
Step 304: set index to 0.
Step 305: take the front-end request at the corresponding index in the request queue; each front-end request in the queue has its own index, the index of the first front-end request is 0, and the index increases by 1 for each subsequent request.
Step 306: check the execution state of the extracted front-end request to determine whether it is already being executed; if so, execute step 307, otherwise execute step 308.
Step 307: add 1 to index and continue with step 305.
Step 308: check whether index is greater than an upper limit value, where the upper limit value is the number of all page data requests; if yes, end; if not, go to step 309.
Step 309: execute the extracted front-end request; if its execution has not yet finished, continue with step 307.
Step 310: the front-end request finishes executing, i.e. it either succeeds or fails.
Step 311: delete the finished front-end request from the request queue, and return to step 303.
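A JavaScript sketch of this loop is given below. It assumes each queued request is an object like the one shown further on ({ url, status, key }), with status 0 meaning not executing and 1 meaning executing, and it uses fetch() as a stand-in for whatever transport the front end actually employs; the function names are hypothetical:

function runRequestQueue(queue, pageDataCount) {
  const upperLimit = pageDataCount;                        // number of all page data requests

  function scan() {                                        // steps 303 to 309
    if (queue.length === 0) return;                        // step 303: queue empty, end
    for (let index = 0; index < queue.length; index++) {   // steps 304, 305 and 307
      const req = queue[index];                            // step 305: take the request at this index
      if (req.status === 1) continue;                      // step 306: already executing, move on (step 307)
      if (index > upperLimit) return;                      // step 308: past the upper limit, stop
      startRequest(req);                                   // step 309
    }
  }

  function startRequest(req) {
    req.status = 1;
    fetch(req.url).finally(() => {                         // step 310: finished, whether success or failure
      queue.splice(queue.indexOf(req), 1);                 // step 311: delete the finished request
      scan();                                              // and return to step 303
    });
  }

  scan();
}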
In order to conveniently view the execution state of each front-end request and to distinguish the front-end requests from one another, a front-end request in the embodiment of the present invention may take the following form:
{
  url: "https://www.58.com",
  status: 1,
  key: 0.9164581064115993
}
where url is the URL of the request, status is the execution status of the current request (1 means executing, 0 means not executing), and key is a random number, for example a random number generated by JavaScript, which can be used as the unique identifier of the request.
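A request descriptor of this form can be produced, for example, as follows; the helper name is hypothetical, and Math.random() yields the kind of fractional key shown above:

function createFrontEndRequest(url) {
  return {
    url,                    // URL of the request
    status: 0,              // 0 = not executing, 1 = executing
    key: Math.random()      // random number used as the request's unique identifier
  };
}

const request = createFrontEndRequest('https://www.58.com');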
The foregoing describes a method for processing a front-end request provided by an embodiment of the present invention, and a processing apparatus for a front-end request provided by an embodiment of the present invention is described below with reference to the accompanying drawings.
Referring to fig. 4 and fig. 5, an embodiment of the present invention further provides a front-end request processing apparatus, where the apparatus includes:
a generating module 41, configured to generate different types of front-end requests according to the static resource data after the front-end page receives the static resource data, where the different types of front-end requests at least include: a page data request and a buried point request;
the queue module 42 is configured to create a front-end request queue, and add all page data requests and all embedded point requests to the front-end request queue in sequence, where all the page data requests are in front of the embedded point requests;
and the processing module 43 is configured to send all the page data requests in the front-end request queue to the server, and start sending the embedded point request to the server after receiving the page display data returned by the server according to all the page data requests.
It should be noted that the static resource data includes: first data written in a hypertext markup language, second data written in cascading style sheets, and third data written in an interpreted scripting language. The front-end request queue also stores a state identifier corresponding to each front-end request, wherein the state identifier comprises: a first identifier indicating that the front-end request has finished executing and a second identifier indicating that the front-end request has not been executed.
Wherein, the processing module 43 includes:
a first status unit 431, configured to set status identifiers corresponding to all page data requests in the front-end request queue as second identifiers;
a first sending unit 432, configured to send request contents of all page data requests to the server in a parallel sending manner, so that the server returns page display data according to all page data requests;
a second status unit 433, configured to receive the page display data, and set a status identifier corresponding to a page data request corresponding to the received page display data as a first identifier;
a second sending unit 434, configured to send the embedded point request to the server when the state identifiers corresponding to all the page data requests are the first identifiers, and set the state identifier corresponding to the embedded point request as the second identifier.
The device also includes: and the deleting module is used for deleting all the page data requests from the front-end request queue when the state identifiers corresponding to all the page data requests are the first identifiers.
The processing device for the front-end request provided in the embodiment of the present invention can implement each process implemented by the processing method for the front-end request in the method embodiments of fig. 1 to fig. 3, and is not described here again to avoid repetition.
In the embodiment of the invention, after the different types of front-end requests are generated according to the static resource data, a front-end request queue is created and the front-end requests are added to it for unified management. The page data requests related to the page display data are placed at the front of the queue, which guarantees that all page data requests are sent preferentially; meanwhile, the buried point requests are sent only after the page display data have been received, so that they cannot interfere with page display. Normal display of the page is therefore ensured preferentially under poor network conditions.
On the other hand, the embodiment of the present invention further provides an electronic device, which includes a memory, a processor, a bus, and a computer program stored on the memory and executable on the processor, where the processor implements the steps in the method for processing the front-end request when executing the program.
For example, fig. 6 shows a schematic physical structure diagram of an electronic device.
As shown in fig. 6, the electronic device may include: a processor (processor) 610, a communication interface (Communications Interface) 620, a memory (memory) 630 and a communication bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may call logic instructions in the memory 630 to perform the following method:
after the front-end page receives the static resource data, generating different types of front-end requests according to the static resource data, wherein the different types of front-end requests at least comprise: a page data request and a buried point request;
creating a front-end request queue, and adding all page data requests and all embedded point requests into the front-end request queue in sequence, wherein all the page data requests are in front of the embedded point requests;
and sending all page data requests in the front-end request queue to the server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
In addition, the logic instructions in the memory 630 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In still another aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method for processing a front-end request provided in the foregoing embodiments, the method including, for example:
after the front-end page receives the static resource data, generating different types of front-end requests according to the static resource data, wherein the different types of front-end requests at least comprise: a page data request and a buried point request;
creating a front-end request queue, and adding all page data requests and all embedded point requests into the front-end request queue in sequence, wherein all the page data requests are in front of the embedded point requests;
and sending all page data requests in the front-end request queue to the server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for processing a front-end request, the method comprising:
after receiving static resource data, a front-end page generates different types of front-end requests according to the static resource data, wherein the different types of front-end requests at least comprise: a page data request and a buried point request;
creating a front-end request queue, and adding all the page data requests and all the embedded point requests into the front-end request queue in sequence, wherein all the page data requests are in front of the embedded point requests;
and sending all page data requests in the front-end request queue to a server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
2. The method of claim 1, wherein the static resource data comprises: first data written in a hypertext markup language, second data written in cascading style sheets, and third data written in an interpreted scripting language.
3. The method of claim 1, wherein a state identifier corresponding to each front-end request is further stored in the front-end request queue, wherein the state identifier comprises: a first identifier indicating that the front-end request has finished executing and a second identifier indicating that the front-end request has not been executed.
4. The method of claim 3, wherein the step of sending all page data requests in the front-end request queue to a server and starting sending the embedded point request to the server after receiving page display data returned by the server according to all page data requests comprises:
setting the state identifiers corresponding to all page data requests in the front-end request queue as the second identifiers;
sending the request contents of all page data requests to the server in a parallel sending mode so as to enable the server to return page display data according to the page data requests;
receiving the page display data, and setting a state identifier corresponding to a page data request corresponding to the received page display data as the first identifier;
and when the state identifiers corresponding to all the page data requests are the first identifiers, sending the embedded point requests to the server, and setting the state identifiers corresponding to the embedded point requests as the second identifiers.
5. The method according to claim 4, wherein when the state identifiers corresponding to all page data requests are the first identifiers, the method further comprises:
and deleting all page data requests from the front-end request queue.
6. An apparatus for processing a front-end request, the apparatus comprising:
a generating module, configured to generate different types of front-end requests according to static resource data after a front-end page receives the static resource data, where the different types of front-end requests at least include: a page data request and a buried point request;
the queue module is used for creating a front-end request queue and sequentially adding all page data requests and all embedded point requests into the front-end request queue, wherein all the page data requests are in front of the embedded point requests;
and the processing module is used for sending all the page data requests in the front-end request queue to a server, and starting to send the embedded point request to the server after receiving page display data returned by the server according to all the page data requests.
7. The apparatus of claim 6, wherein a state identifier corresponding to each front-end request is further stored in the front-end request queue, wherein the state identifier comprises: a first identifier indicating that the front-end request has finished executing and a second identifier indicating that the front-end request has not been executed.
8. The apparatus of claim 7, wherein the processing module comprises:
a first state unit, configured to set state identifiers corresponding to all page data requests in the front-end request queue as the second identifier;
the first sending unit is used for sending the request contents of all the page data requests to the server in a parallel sending mode so as to enable the server to return page display data according to all the page data requests;
the second state unit is used for receiving the page display data and setting a state identifier corresponding to a page data request corresponding to the received page display data as the first identifier;
and the second sending unit is used for sending the embedded point request to the server and setting the state identifier corresponding to the embedded point request as the second identifier when the state identifiers corresponding to all the page data requests are the first identifiers.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the computer program, when executed by the processor, implements the steps of the method of processing a front-end request according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of processing a front-end request according to any one of claims 1 to 5.
CN201911103187.7A 2019-11-12 2019-11-12 Front-end request processing method and device Pending CN110929192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911103187.7A CN110929192A (en) 2019-11-12 2019-11-12 Front-end request processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911103187.7A CN110929192A (en) 2019-11-12 2019-11-12 Front-end request processing method and device

Publications (1)

Publication Number Publication Date
CN110929192A true CN110929192A (en) 2020-03-27

Family

ID=69852733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911103187.7A Pending CN110929192A (en) 2019-11-12 2019-11-12 Front-end request processing method and device

Country Status (1)

Country Link
CN (1) CN110929192A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7146233B2 (en) * 2000-02-11 2006-12-05 Sun Microsystems, Inc. Request queue management
US8316080B2 (en) * 2003-01-17 2012-11-20 International Business Machines Corporation Internationalization of a message service infrastructure
CN102984275A (en) * 2012-12-14 2013-03-20 北京奇虎科技有限公司 Method and browser for web downloading
CN106970872A (en) * 2016-11-10 2017-07-21 阿里巴巴集团控股有限公司 Information buries point methods and device
CN108156006A (en) * 2016-12-05 2018-06-12 阿里巴巴集团控股有限公司 One kind buries point data report method, device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OLIVIA: "埋点" ("Event Tracking"), https://segmentfault.com/a/1190000015863478 *

Similar Documents

Publication Publication Date Title
US10198410B2 (en) Method, device and mobile terminal for restoring page
CN108366058B (en) Method, device, equipment and storage medium for preventing traffic hijacking of advertisement operator
US20190109920A1 (en) Browser resource pre-pulling method, terminal and storage medium
CN107239701B (en) Method and device for identifying malicious website
WO2016011879A1 (en) Web page display method and apparatus
CN113703893B (en) Page rendering method, device, terminal and storage medium
CN111737443B (en) Answer text processing method and device and key text determining method
CN105095220B (en) A kind of browser implementation method, terminal and virtualization agent device
CN111177601A (en) Page rendering processing method, device and equipment and readable storage medium
CN110750244A (en) Code synchronization method and device, electronic equipment and storage medium
CN105119944B (en) Application starting method and related device
CN108494728B (en) Method, device, equipment and medium for creating blacklist library for preventing traffic hijacking
CN111367922A (en) Data updating method and related equipment
CN109885347B (en) Method, device, terminal, system and storage medium for acquiring configuration data
CN113704647A (en) Method and device for jumping multiple types of pages and electronic equipment
CN111597107A (en) Information output method and device and electronic equipment
CN106682014B (en) Game display data generation method and device
EP3869330A1 (en) Method and apparatus for lazy loading of js script
EP3866031A1 (en) Webpage loading method, intermediate server, and webpage loading system
CN105610596B (en) Resource directory management method and network terminal
CN110929192A (en) Front-end request processing method and device
CN102999580A (en) Code input frame element processing method and browser
CN108009226B (en) Promotion content display implementation method based on intelligent terminal application and intelligent terminal
US20140237133A1 (en) Page download control method, system and program for ie core browser
WO2017197889A1 (en) Keyword link method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200327