WO2013163926A1 - Method and system for processing user requests for accessing web pages - Google Patents

Method and system for processing user requests for accessing web pages

Info

Publication number
WO2013163926A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
request
current request
processing unit
proxy server
Application number
PCT/CN2013/074441
Other languages
French (fr)
Chinese (zh)
Inventor
刘华
Original Assignee
北京奇虎科技有限公司
奇智软件(北京)有限公司
Application filed by 北京奇虎科技有限公司 and 奇智软件(北京)有限公司
Publication of WO2013163926A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/951 Indexing; Web crawling techniques

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a method and system for processing a request for a user to access a webpage.
  • a browser is a tool for obtaining web content from a website. In general, it must have the ability to parse the various elements on the web page. After parsing is complete, the positioning of each element of the page is calculated. Then, the browser calls the platform-based API to complete the drawing of the page, and the various elements on the final page will be displayed in front of the user.
  • This process is a very smooth process on the PC, but on mobile terminals such as mobile phones, due to the limitations of mobile communication technology, it is currently impossible for mobile terminals to reach the transmission speed that Ethernet can achieve.
  • the hardware processing capability of some mobile terminals is limited, while rendering, typesetting, and drawing webpages consume a large amount of computing resources; for these mobile terminals with limited processing capability, the time and power consumed are comparatively large. To solve these problems, techniques based on server-side rendering and typesetting came into being.
  • This technology encapsulates the time-consuming and resource-intensive operations on the server side, and browsers using this technology are typically designed with a C/S (client/proxy server) architecture.
  • Under this architecture, the user's request to access a webpage is sent by the browser client to the proxy server; the proxy server sends the request for the webpage to the web server, and after obtaining the webpage resources it can compress and process the high-traffic resources and then send the compressed, processed data to the client, which only needs to perform simple operations on the data to display the webpage content.
  • This thin-client mode lowers the requirements placed on mobile terminals while still achieving a good user experience even when the network used by the mobile terminal user is slow and the processing capability of the mobile device is limited.
  • The server-side rendering and typesetting mode is therefore now widespread in browsers used on mobile terminals such as mobile phones, and the technology is developing toward an even lighter client and a heavier server.
  • For example, the server is responsible for compressing high-traffic resource files, parsing pages, computing page element positions, and laying out pages, and for converting the typeset structures into binary data that the client can parse, so that the client only needs to parse the binary data and then perform the corresponding drawing and display.
  • the present invention has been made in order to provide a method and system for processing a user's request to access a web page that overcomes the above problems or at least partially solves or alleviates the above problems.
  • a method for processing a request for a user to access a webpage in which at least two processes are pre-launched in the same proxy server, and at least two processing units are created in each process, the method comprising:
  • when current requests of multiple users to access webpages are received, assigning a process to the current request of each user according to the attribute information of each user and the status information of each process;
  • assigning a processing unit to the current request within the assigned process;
  • sending, by the assigned processing unit, a request to the web server corresponding to the current request to obtain the webpage content, so as to return it to the client for presentation.
  • a system for processing a request for a user to access a webpage in which at least two processes are pre-launched in the same proxy server, and at least two processing units are created in each process, the system comprising:
  • a process allocation module configured to allocate a process for the current request of each user according to attribute information of each user and status information of each process when receiving a current request of multiple users to access a webpage;
  • a processing unit allocation module configured to allocate a processing unit to the current request in the allocated process
  • a webpage content processing module configured to send, by using the allocated processing unit, a request to the web server corresponding to the current request to obtain webpage content, so as to return to the client for presentation.
  • a computer program comprising computer readable code which, when run on a server, causes the server to perform the method of processing a request for a user to access a webpage according to any one of claims 1-11.
  • a computer readable medium storing the computer program according to claim 23 is provided.
  • multiple processes can be started in the same proxy server in advance, and multiple processing units can be created in each process, so that processes and processing units do not have to be created anew when a user request arrives.
  • the processing time actually perceived by the user can therefore be shortened, and the processing requirements of large numbers of concurrent users can be met.
  • at the same time, the multi-process approach can make full use of the multi-core resources of the server and improve processing efficiency, which also improves the response speed of the server.
  • Figure 1 shows a flow chart of a method in accordance with an embodiment of the present invention
  • FIG. 2 shows a flow chart of another method in accordance with an embodiment of the present invention
  • Figure 3 shows a schematic diagram of a first system in accordance with an embodiment of the present invention
  • Figure 4 shows a schematic diagram of a second system in accordance with an embodiment of the present invention
  • Figure 5 shows a schematic diagram of a third system in accordance with an embodiment of the present invention.
  • Figure 6 is a schematic block diagram showing a server for performing the method according to the present invention.
  • Figure 7 schematically illustrates a memory unit for holding or carrying program code that implements the method in accordance with the present invention.
  • the embodiment of the present invention is an improvement based on server-side webpage rendering and typesetting technology; the premise remains that the time-consuming and resource-intensive operations are encapsulated on the server side.
  • the server performs the work that requires large amounts of computation, such as webpage resource compression, page parsing, positioning, and rendering and typesetting.
  • through this series of operations, the webpage is converted into a page more suitable for display on a mobile terminal device, and the page is sent to the mobile terminal device in the form of binary data to be parsed by the device, thereby reducing the demands on the network when the mobile terminal device accesses webpages, as well as the amount of computation the mobile terminal must perform when processing webpage resources.
  • the embodiment of the present invention provides a corresponding solution.
  • multiple processes may be pre-launched in the proxy server, and multiple processing units are started in each process;
  • these processing units play a role equivalent to a window in an ordinary browser (as opposed to a C/S-architecture browser); for example, when a user clicks a link, an ordinary web browser creates a window and finally displays the page in that window. However, they are not exactly equivalent.
  • specifically, such a processing unit can process the user's request to access a webpage, for example by sending the request to the web server and then performing a series of operations such as parsing, rendering, and typesetting on the received webpage data.
  • the difference is that, since the processing unit is located on the proxy server side, it does not need a display interface; the real display interface is created by the client installed on the user's computer. Therefore, the processing unit sends the results of processing the webpage data to the client through a proprietary protocol, and the client draws the results and displays them on the user interface.
  • in other words, the processing units started in the embodiment of the present invention serve the same purpose as the processing units used to handle user requests under the traditional C/S architecture, but they are created at a different time; under the traditional C/S architecture, the processing unit is generally created after the user request is received, so the time spent creating the window becomes part of the webpage processing time perceptible to the user. Although creating a window is generally fast, it may still lengthen the overall processing time when many users are concurrent.
  • in the embodiment of the present invention, the processing units are pre-created; after a user request is received, it only needs to be assigned to a designated processing unit according to a certain policy, without creating a separate processing unit each time a user request is received, so the response speed can be improved as a whole and the processing time perceived by the user can be shortened to some extent.
  • in addition, since a specific processing unit must be created within a process, and a process is allowed to create multiple processing units, multiple user requests can be handled within one process; the embodiment of the present invention therefore adopts this approach of creating multiple processing units in a process. At the same time, the number of processing units a process may create is, after all, limited; if only one process were started on a proxy server, the number of user requests that could be accommodated would still be limited. On the other hand, proxy servers generally use multi-core technology, that is, a proxy server has multiple cores, and different processes can run simultaneously on different cores, which can greatly increase processing speed.
  • therefore, a method of creating multiple processes on the same proxy server is also used, so that the number of user requests a single proxy server can accommodate is increased while the multi-core resources of the proxy server are fully utilized.
  • of course, the processes on the proxy server are also started before any user request arrives.
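  • As a hedged illustration of this pre-launching idea, the Python sketch below starts a fixed number of worker processes ahead of time, each of which creates its own pool of processing-unit slots before any request arrives; the counts NUM_PROCESSES and UNITS_PER_PROCESS and all function names are assumptions for illustration, not the patent's actual implementation.

```python
# Minimal sketch: pre-launch worker processes, each holding a fixed number of
# "processing unit" slots, before any user request arrives. Names are hypothetical.
import multiprocessing as mp

NUM_PROCESSES = 4      # e.g. one per CPU core (assumed value)
UNITS_PER_PROCESS = 8  # processing units created inside each process (assumed value)

def process_main(proc_id: int, task_queue: mp.Queue) -> None:
    """Entry point of one pre-launched process: create its processing units
    up front, then wait for requests handed to it by the allocator."""
    units = [{"unit_id": u, "busy": False} for u in range(UNITS_PER_PROCESS)]
    while True:
        request = task_queue.get()          # blocks until a request is assigned
        if request is None:                 # shutdown sentinel
            break
        unit = next(u for u in units if not u["busy"])
        unit["busy"] = True
        # ... fetch, parse, render and typeset the page here ...
        unit["busy"] = False

def prelaunch():
    """Start all processes (and their units) ahead of time."""
    queues, procs = [], []
    for pid in range(NUM_PROCESSES):
        q = mp.Queue()
        p = mp.Process(target=process_main, args=(pid, q), daemon=True)
        p.start()
        queues.append(q)
        procs.append(p)
    return queues, procs

if __name__ == "__main__":
    queues, procs = prelaunch()
    queues[0].put({"user_id": "client-123", "url": "http://example.com"})
    for q in queues:
        q.put(None)        # ask every process to exit
    for p in procs:
        p.join()
```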
  • referring to Figure 1, the method for processing requests of users to access webpages provided by the embodiment of the present invention includes the following steps:
  • S101: when current requests of multiple users to access webpages are received, assigning a process to the current request of each user according to the attribute information of each user and the status information of each process;
  • S102: assigning a processing unit to the current request within the assigned process;
  • the browser client installed on the user's computer first receives the user's current request to access a webpage (for example, the user clicks a link, or enters a URL in the address bar and confirms), and the client then sends the current request to the server side. Since the proxy server has already started multiple processes in advance, after receiving the user request it can assign the user's current request to one of those processes. Of course, it is the processing unit that actually processes the user request, and multiple processing units have been created in each process, so the current request must also be assigned to a specific processing unit for processing.
  • when assigning a process and a processing unit to each user's current request according to the attribute information of each user and the status information of each process, various approaches are possible; for example, the current request may simply be assigned to the process with the most idle processing units, and any idle processing unit within that process then selected.
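  • A minimal sketch of this simplest policy (pick the process with the most idle processing units, then take any idle unit inside it) could look as follows; the process_status mapping is an assumed in-memory view of each process's idle units.

```python
# Sketch of the simplest allocation policy described above: pick the process
# with the most idle processing units, then pick any idle unit inside it.
def allocate(process_status: dict) -> tuple:
    """process_status maps process_id -> set of idle unit ids (assumed view)."""
    proc_id = max(process_status, key=lambda p: len(process_status[p]))
    idle_units = process_status[proc_id]
    if not idle_units:
        raise RuntimeError("no idle processing unit available")
    unit_id = idle_units.pop()          # reserve one idle unit for the request
    return proc_id, unit_id

status = {0: {1, 3}, 1: {0, 2, 5}, 2: set()}
print(allocate(status))                 # -> (1, <some idle unit of process 1>)
```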
  • alternatively, in order to ensure the reuse of resources, different requests from the same user may be allocated to the same process as far as possible, which can further save processing time.
  • for example, some websites (such as certain shopping websites) require login information such as the user's account and password; in general, when a user visits different webpages under the same website, a single login is enough. For example, after the user logs in on the homepage of a website, the login information remains valid when accessing all pages of that site.
  • in the embodiment of the present invention, however, because requests from multiple users are processed on the same proxy server, achieving this effect requires the processing units corresponding to these webpages to be in the same process. This is because, after the user's first login request is processed in a process, some information can be saved in the browser cookie, for example the login information for the website; later, when the same user visits other webpages of the same website, the saved information can be taken out of the cookie, thereby ensuring the continuity of the login state.
  • the cookie may also store data about webpages the user has visited; for a webpage the user has already visited, when access is initiated again, the response can be returned to the user directly from the saved information instead of re-initiating the request to the web server, and so on.
  • such cookies are stored persistently, but if a request that already has a history is handled by a different processing unit of the same process, or by a different process, the information must be fetched from persistent storage, which is less efficient than having the request handled directly by the processing unit that holds the history.
  • therefore, in the embodiment of the present invention, when assigning a process to a user request, requests from the same user are allocated to the same process, and even to the same processing unit, as far as possible.
  • to this end, in a specific implementation, it may first be determined whether the current request is a request from a new user, and a process is then assigned to the current request according to the result of that determination.
  • in a specific implementation, referring to Figure 2, the following procedure may also be used:
  • S201: determining whether the current request is a request from a new user; if so, proceed to step S202; otherwise, proceed to step S206;
  • in order to determine whether the current request is from a new user, during historical processing, while a process is assigned to a user request, the correspondence between the user attribute information of that request and the process assigned to it can also be recorded; in this way, a historical allocation list can be maintained on the proxy server side, recording which process has been assigned to each user.
  • when a current request is received, the user attribute information corresponding to the request can likewise be obtained first, and it is then determined whether that user attribute information appears in the historical allocation list; if it does, the user has visited other webpages before, that is, the current request is not a request from a new user; otherwise, if it does not appear in the historical allocation list, the current request is a request from a new user.
  • it should be noted that the data recorded in the historical allocation record is not permanent but is deleted after a certain time; for example, a record can be removed from the historical allocation record after a preset period of time has elapsed since it was generated. In other words, the requests of one user correspond to one large session: once the session is established, it is not interrupted even if requests from other users are interleaved, but the session still has a timeout. If, after a request from a user is received, no further request from that user arrives for a long time, the user presumably no longer needs to access other webpages, and the session can accordingly be ended.
  • it can be seen that the "new user" here is not a user who has newly joined the network, or who has newly installed the browser of the C/S architecture, and so on; rather, it is determined by whether the user has issued an access request within a certain period of time.
  • in a specific implementation, the user attribute information may be represented by the ID of the browser client.
  • when the browser client is installed, it carries an ID that uniquely identifies it, and the client includes this ID when sending the user's access request to the server; different users can therefore be distinguished by this ID.
  • of course, if the user has registered a browser account and is currently logged in, the user attribute information may also be obtained from the user's login information; alternatively, the user's IP address may be used as the user attribute information, and so on.
  • the status information about each process may include the number of idle processing units included in the process, and the like.
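  • The following sketch shows one way such a historical allocation record and the new-user check could be kept, assuming the browser client ID is the user attribute and assuming a hypothetical SESSION_TIMEOUT for record expiry; it is an illustration only.

```python
# Sketch of a historical allocation record keyed by the browser client ID.
# Entries expire after SESSION_TIMEOUT, modelling the session timeout described
# above; the timeout value and class/field names are assumptions.
import time

SESSION_TIMEOUT = 600.0   # seconds (assumed value)

class HistoryAllocationRecord:
    def __init__(self):
        # client_id -> (process_id, unit_id, last_seen_timestamp)
        self._entries = {}

    def is_new_user(self, client_id: str) -> bool:
        """A user is 'new' if no request from it was seen within the timeout."""
        entry = self._entries.get(client_id)
        if entry is None or time.time() - entry[2] > SESSION_TIMEOUT:
            self._entries.pop(client_id, None)   # drop the stale record
            return True
        return False

    def record(self, client_id: str, process_id: int, unit_id: int) -> None:
        self._entries[client_id] = (process_id, unit_id, time.time())

    def last_allocation(self, client_id: str):
        entry = self._entries.get(client_id)
        return None if entry is None else entry[:2]

history = HistoryAllocationRecord()
print(history.is_new_user("client-123"))      # True: never seen before
history.record("client-123", process_id=1, unit_id=4)
print(history.is_new_user("client-123"))      # False within the timeout
print(history.last_allocation("client-123"))  # (1, 4)
```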
  • S202: determining whether there is an idle processing unit in the processes; if so, proceed to step S203; if no process has an idle processing unit, proceed to step S204;
  • S203: selecting the process with the most idle processing units, assigning it to the current request, and selecting an idle processing unit from it to assign to the current request;
  • S204: determining whether any process contains a processing unit whose processing time has timed out; if so, proceed to step S205; otherwise, wait until a processing unit becomes idle or the processing time of a processing unit times out;
  • if the processing time of a processing unit has timed out, that processing unit may have failed, and the corresponding window on the client side may long since have been closed by the user; its task can therefore be ended and the unit assigned to other requests.
  • S205: assigning the processing unit that has timed out for the longest and its corresponding process to the current request;
  • S206: obtaining the process in which the user's historical requests were handled, and assigning that process to the current request; of course, before the process in which the user's historical requests were handled is assigned to the current request, it may first be determined whether that process has an idle processing unit; if it does, it is assigned to the current request; otherwise, one may wait until a processing unit finishes its current work, or assign another process to the current request.
  • S209: selecting an idle processing unit in the process in which the user's historical requests were handled and assigning it to the current request.
  • steps S207 and S208 are optional; that is, in practical applications, after step S206 the flow may go directly to step S209 and assign any idle processing unit in the process where the user's historical requests were handled to the current request. However, if step S207 is taken, requests from the same user can, as far as possible, be allocated to the same processing unit of the same process; in this way, the reuse of resources can be further ensured and processing efficiency is improved.
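  • Putting the flow of Figure 2 together, a sketch of the dispatch logic for steps S201-S209 might read as below; it reuses the hypothetical HistoryAllocationRecord and process_status structures from the earlier sketches and assumes a helper timed_out_units() that lists units whose processing time has expired, longest first.

```python
# Sketch of the dispatch flow of Figure 2 (S201-S209). `history` and
# `process_status` are the hypothetical structures from the earlier sketches;
# `timed_out_units()` is an assumed helper returning (process_id, unit_id)
# pairs whose processing time has expired, longest timeout first.
def dispatch(request, history, process_status, timed_out_units):
    client_id = request["client_id"]

    if not history.is_new_user(client_id):                    # S201 -> S206
        proc_id, unit_id = history.last_allocation(client_id)
        idle = process_status.get(proc_id, set())
        chosen = unit_id if unit_id in idle else (idle.pop() if idle else None)
        if chosen is not None:                                 # S207 - S209
            idle.discard(chosen)
            history.record(client_id, proc_id, chosen)
            return proc_id, chosen
        # otherwise fall through and treat it like a fresh allocation

    if any(process_status.values()):                           # S202 -> S203
        proc_id = max(process_status, key=lambda p: len(process_status[p]))
        unit_id = process_status[proc_id].pop()
    else:                                                      # S204 -> S205
        stale = timed_out_units()
        if not stale:
            raise RuntimeError("wait until a unit is idle or times out")
        proc_id, unit_id = stale[0]      # end its task and reuse the unit

    history.record(client_id, proc_id, unit_id)
    return proc_id, unit_id
```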
  • in addition, in the embodiment of the present invention, a server cluster can also be used on the proxy server side; that is, multiple proxy servers can be deployed on the server side, and each proxy server has all of the foregoing functions, for example, starting multiple processes in advance, initializing multiple processing units in each process, and handling cases such as continuous access by the same user.
  • the hardware configuration of each proxy server can be different.
  • when a server cluster is used, there is also the problem of allocating the user's current request among the different proxy servers; in this case, a server dedicated to distribution can be deployed on the server side, or a distribution module can be added to one of the proxy servers.
  • the user's current request can first reach the distribution server or distribution module, which then allocates a proxy server for the current request.
  • multiple strategies can be used when assigning a proxy server; for example, the performance parameters of each proxy server can be obtained in advance, the processing capability of each proxy server can be determined from its performance parameters, and requests can then be assigned to the proxy servers according to their respective processing capabilities. Alternatively, the total number of requests being processed by each proxy server can be monitored, or reported by each proxy server, and then combined with the respective processing capabilities for more efficient allocation.
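  • One way such capacity- and load-aware selection could be expressed is sketched below, assuming each proxy server reports a rough capacity score derived from its performance parameters together with the number of requests currently in flight.

```python
# Sketch of capacity- and load-aware proxy server selection. `servers` is an
# assumed list of dicts, each reporting a capacity score (derived from its
# performance parameters) and the number of requests currently in flight.
def pick_proxy(servers):
    """Choose the server with the most spare capacity relative to its size."""
    def spare_ratio(s):
        return (s["capacity"] - s["in_flight"]) / s["capacity"]
    candidates = [s for s in servers if s["capacity"] > 0]
    return max(candidates, key=spare_ratio)

servers = [
    {"name": "proxy-a", "capacity": 100, "in_flight": 80},
    {"name": "proxy-b", "capacity": 200, "in_flight": 90},
    {"name": "proxy-c", "capacity": 50,  "in_flight": 10},
]
print(pick_proxy(servers)["name"])   # proxy-c: highest spare-capacity ratio
```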
  • when the distribution server or distribution module allocates requests among the various proxy servers, this can be implemented in a DNS-like manner. However, in actual applications a proxy server may also fail; in this case, in order to avoid affecting the processing speed of user requests, it should be possible to stop assigning user requests to the failed proxy server in a timely manner.
  • with the DNS approach, the DNS configuration must be modified first, and DNS servers have caches whose updates take time; therefore, after the DNS configuration is modified, forwarding does not stop immediately, and only after the DNS caches at all levels have been updated will new requests stop being forwarded to the failed server.
  • therefore, in the embodiment of the present invention, the distribution may be performed without using DNS; instead, the distribution server or distribution module may directly perform real-time or quasi-real-time heartbeat monitoring of each proxy server (the monitoring interval can be configured, at second-level granularity). So-called heartbeat monitoring means monitoring proxy servers for failure: if a proxy server fails, it will be unable to return a response when the distribution server or distribution module sends it a heartbeat probe, so a proxy server going online or offline can be detected and handled.
  • specifically, proxy servers whose heartbeat information can be monitored normally can be added to a list of available proxy servers, and when a proxy server is assigned to the current request it is selected from this list; this ensures that no requests are assigned to a failed proxy server.
  • correspondingly, a proxy server whose heartbeat information can no longer be monitored can be deleted from the list of available proxy servers.
  • later, when the distribution server or distribution module again detects the heartbeat information of the failed server, that server rejoins the list of available proxy servers, and when a user request arrives it can once more be assigned to that proxy server.
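  • A minimal sketch of this heartbeat-driven maintenance of the available-server list is given below, assuming a hypothetical probe() callable that returns True when a proxy server answers within the configured interval.

```python
# Sketch of heartbeat monitoring: servers that answer the probe stay on the
# available list, servers that do not are removed, and they are re-added as
# soon as their heartbeat is seen again. `probe` is an assumed callable.
import time

def heartbeat_loop(all_servers, probe, interval=1.0, rounds=3):
    available = set()
    for _ in range(rounds):                  # in practice this would run forever
        for server in all_servers:
            if probe(server):
                available.add(server)        # (re-)join the available list
            else:
                available.discard(server)    # stop assigning requests to it
        print("available:", sorted(available))
        time.sleep(interval)
    return available

if __name__ == "__main__":
    # Toy probe: pretend proxy-b is down.
    heartbeat_loop(["proxy-a", "proxy-b", "proxy-c"],
                   probe=lambda s: s != "proxy-b",
                   interval=0.1)
```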
  • S103: sending, by the allocated processing unit, a request to the web server corresponding to the current request to obtain the webpage content, so as to return it to the client for presentation.
  • after receiving the allocated user request, the processing unit can parse the request, construct a webpage access request to the web server to obtain the webpage resources, and, after receiving the webpage resources, perform parsing, rendering, and typesetting, then convert the result into binary data and return it to the client so that the client can draw and display the webpage.
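  • The per-request pipeline inside a processing unit could be sketched as follows; the fetch, layout, and binary-encoding steps are stand-ins (the patent does not specify the wire format), so the JSON-over-bytes encoding used here is purely illustrative.

```python
# Illustrative per-request pipeline of one processing unit: fetch the page,
# parse/lay it out, then encode the result as compact binary data for the
# client. The layout model and the byte encoding are assumptions; the patent
# only says the typeset structure is converted to client-parsable binary data.
import json
import urllib.request

def handle_request(url: str) -> bytes:
    with urllib.request.urlopen(url, timeout=10) as resp:   # fetch the page
        html = resp.read().decode("utf-8", errors="replace")

    # Stand-in for parsing, positioning and typesetting: record a few
    # pre-computed facts the client can draw without re-parsing the HTML.
    layout = {
        "url": url,
        "length": len(html),
        "title": html.split("<title>")[1].split("</title>")[0]
                 if "<title>" in html else "",
    }
    return json.dumps(layout).encode("utf-8")   # "binary data" for the client

if __name__ == "__main__":
    payload = handle_request("http://example.com")
    print(len(payload), "bytes ->", payload[:60])
```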
  • it should be noted that the present invention is not limited to the case where the processing unit performs operations such as parsing, rendering, and typesetting; the solution provided by the embodiment of the present invention can still be applied when the operations required of the processing unit change (for example, when the proxy server side becomes heavier or lighter).
  • after completing the processing of a request, the processing unit does not close; instead, it releases the resources acquired while processing the request (including the connection resources for the link established with the web server, the storage resources used to cache various resources, computing resources, and so on), and then waits for the arrival of other requests, repeating this cycle.
  • of course, the processing unit may not have to release the results of the page request.
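  • The idle/serve/release cycle of a processing unit might be sketched as follows, with the connection and cache objects standing in for the link, storage, and computing resources mentioned above.

```python
# Sketch of a processing unit's lifecycle: it is never torn down after a
# request; it releases the per-request resources and goes back to waiting.
# The resource objects here are illustrative stand-ins.
import queue

def unit_loop(requests: "queue.Queue"):
    while True:
        req = requests.get()          # wait for the next assigned request
        if req is None:               # shutdown sentinel
            break
        connection, cache = open_connection(req), {}
        try:
            process(req, connection, cache)       # parse / render / typeset
        finally:
            connection.close()        # release the link to the web server
            cache.clear()             # release storage used for cached resources

def open_connection(req):
    class _Conn:                      # trivial stand-in for a real connection
        def close(self): pass
    return _Conn()

def process(req, connection, cache):
    cache["page"] = f"rendered {req['url']}"

if __name__ == "__main__":
    q = queue.Queue()
    q.put({"url": "http://example.com"})
    q.put(None)
    unit_loop(q)
```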
  • corresponding to the above method, the embodiment of the present invention further provides a system for processing requests of users to access webpages, in which at least two processes are pre-launched in the same proxy server and at least two processing units are created in each process. Referring to Figure 3, the system includes:
  • the process allocation module 301 is configured to allocate a process for the current request of each user according to the attribute information of each user and the status information of each process when receiving the current request of the plurality of users to access the webpage;
  • a processing unit allocation module 302 configured to allocate a processing unit to the current request in the allocated process
  • the webpage content processing module 303 is configured to send, by using the allocated processing unit, a request to the webpage server corresponding to the current request to obtain webpage content, so as to return to the client for display.
  • the process allocation module 301 may include:
  • the determining sub-module 3011 is configured to determine, according to attribute information of each user, whether the current request is a request of a new user;
  • the process allocation sub-module 3012 is configured to allocate a process for the current request according to the allocation history of the user and the status information of each process if the current request is not a request of the new user.
  • the judging submodule 3011 may include:
  • a user identifier obtaining sub-module configured to obtain attribute information of a user corresponding to the current request
  • a comparison sub-module configured to determine that the current request is a request of a new user if the attribute information of the user corresponding to the current request does not appear in the historical allocation record;
  • wherein the historical allocation record is used to record the correspondence between the attribute information of the user corresponding to each historically processed request and the process assigned to that request.
  • the process allocation sub-module 3012 can specifically be configured to:
  • if the current request is not a request of a new user, and the process corresponding to the user's attribute information in the historical allocation record contains an idle processing unit, assign that process to the current request.
  • the historical allocation record may also record the correspondence between the attribute information of the user corresponding to a user request and the processing unit allocated to that request; in this case, the processing unit allocation module 302 may specifically be configured to:
  • assign the processing unit corresponding to the user's attribute information in the historical allocation record to the current request.
  • the system can also include:
  • a new-user request allocation module, configured to assign the process with the most idle processing units to the current request if the current request is a request of a new user.
  • a timeout processing module, configured to: if there is no idle processing unit in any process, determine whether there is a processing unit whose processing time has timed out, and if so, assign the process in which that processing unit is located to the current request;
  • in this case, the processing unit allocation module 302 can be specifically configured to:
  • end the current task of the processing unit whose processing time has timed out, and assign that processing unit to the current request.
  • in addition, a server cluster may be used; that is, there may be at least two proxy servers.
  • the system may further include:
  • the proxy server allocation module 304 is configured to allocate a proxy server for the current request.
  • the proxy server allocation module 304 can be deployed in a separate distribution server, or in any one of the proxy servers of the cluster.
  • the proxy server assignment module 304 can include:
  • a heartbeat monitoring sub-module, configured to perform real-time heartbeat monitoring of each proxy server, and to add proxy servers whose heartbeat information can be monitored normally to the list of available proxy servers;
  • a proxy server allocation sub-module for assigning a proxy server to the current request from the list of available proxy servers.
  • it can also include:
  • a list update module, configured to delete a proxy server whose heartbeat information cannot be monitored from the list of available proxy servers, and to re-add the proxy server to the list of available proxy servers when its heartbeat information is detected again.
  • the above system provided by the embodiment of the present invention can start multiple processes in the same proxy server in advance and create multiple processing units in each process, so that processes and processing units do not have to be created anew when a user request arrives;
  • thus the processing time actually perceived by the user can be shortened, and the processing requirements of large numbers of concurrent users can be met.
  • at the same time, the multi-process approach can make full use of the multi-core resources of the server and improve processing efficiency, which also improves the responsiveness of the server.
  • the various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that, in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components of the system according to embodiments of the present invention.
  • the invention can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • for example, Figure 6 shows a server, such as an application server, that can implement the method according to the present invention.
  • the server conventionally includes a processor 610 and a computer program product or computer readable medium in the form of a memory 620.
  • the memory 620 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 620 has a memory space 630 for program code 631 for performing any of the method steps described above.
  • storage space 630 for program code may include various program code 631 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as described with reference to Figure 7.
  • the storage unit may have storage segments, storage space, and the like arranged similarly to the memory 620 in the server of Figure 6.
  • the program code can be compressed, for example, in an appropriate form.
  • the storage unit includes computer readable code 631, i.e., code that can be read by a processor such as the processor 610; when this code is run by the server, it causes the server to perform the various steps of the methods described above.
  • "One embodiment," or "an embodiment," or "one or more embodiments as used herein means that the particular features, structures, or characteristics described in connection with the embodiments are included in at least one embodiment of the invention.
  • the phrase "in one embodiment" herein does not necessarily refer to the same embodiment.
  • any reference signs placed between parentheses shall not be construed as a limitation.
  • the word "comprising" does not exclude the presence of elements or steps not listed in a claim.
  • the word “a” or “an” preceding a component does not exclude the presence of a plurality of such elements.
  • the invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by the same hardware item.
  • the use of the words first, second, and third does not indicate any order. These words can be interpreted as names.

Abstract

Disclosed in the present invention are a method and a system for processing user requests for accessing web pages. At least two processes are started in advance in the same proxy server, and at least two processing units are created in each process. The method comprises: when current requests of multiple users to access webpages are received, allocating a process to the current request of each user according to the attribute information of each user and the state information of each process; allocating a processing unit to the current request within the allocated process; and sending, by the allocated processing unit, a request to the web server corresponding to the current request to obtain the webpage content, so as to return the webpage content to the client for presentation. The response speed of the proxy server can be improved under the condition of concurrency of a large number of users.

Description

Method and system for processing a request for a user to access a webpage

TECHNICAL FIELD

The present invention relates to the field of computer technologies, and in particular to a method and system for processing a request for a user to access a webpage.

BACKGROUND

A browser is a software tool used to obtain webpage content from a website. In general, it must be able to parse the various elements on a webpage; after parsing is complete, the position of each element on the page is calculated. The browser then calls platform-based APIs to complete the drawing of the page, and only then are the various elements of the page displayed to the user. This process is very smooth on a PC, but on mobile terminals such as mobile phones, owing to the limitations of mobile communication technology, the network transmission speed currently available to a mobile terminal cannot reach that of Ethernet. In addition, the hardware processing capability of some mobile terminals is limited, while rendering, typesetting, and drawing webpages consume a large amount of computing resources; for such mobile terminals with limited processing capability, the time and power consumed are comparatively large. To solve these problems, techniques based on server-side rendering and typesetting came into being.

This technology encapsulates the time-consuming and resource-intensive operations on the server side, and browsers using it are typically designed with a C/S (client/proxy server) architecture. Under this architecture, the user's request to access a webpage is sent by the browser client to the proxy server; the proxy server sends the request for the webpage to the web server, and after obtaining the webpage resources it can compress and process the high-traffic resources and then send the compressed, processed data to the client, which only needs to perform simple operations on the data to display the webpage content. This thin-client mode lowers the requirements placed on mobile terminals while still achieving a good user experience even when the network used by the mobile terminal user is slow and the processing capability of the mobile device is limited. The server-side rendering and typesetting mode is therefore now widespread in browsers used on mobile terminals such as mobile phones, and the technology is developing toward an even lighter client and a heavier server: for example, the server is responsible for compressing high-traffic resource files, parsing pages, computing page element positions, and laying out pages, and for converting the typeset structures into binary data that the client can parse, so that the client only needs to parse the binary data and then perform the corresponding drawing and display. This direction of development helps reduce the requirements on mobile terminal hardware performance, but its drawback is that the work the server must perform increases greatly, and all requests from users of the client are sent to the server for processing; therefore, when many user requests arrive concurrently within the same period of time, the response speed of the server may drop, which in turn lowers the response speed of the browser on the user's mobile terminal.
SUMMARY OF THE INVENTION

In view of the above problems, the present invention is proposed in order to provide a method and system for processing requests of users to access webpages that overcome, or at least partially solve or alleviate, the above problems.

According to one aspect of the present invention, a method for processing requests of users to access webpages is provided, in which at least two processes are started in advance in the same proxy server and at least two processing units are created in each process, the method comprising:

when current requests of multiple users to access webpages are received, assigning a process to the current request of each user according to the attribute information of each user and the status information of each process;

assigning a processing unit to the current request within the assigned process;

sending, by the assigned processing unit, a request to the web server corresponding to the current request to obtain the webpage content, so as to return it to the client for presentation.

According to another aspect of the present invention, a system for processing requests of users to access webpages is provided, in which at least two processes are started in advance in the same proxy server and at least two processing units are created in each process, the system comprising:

a process allocation module, configured to assign a process to the current request of each user according to the attribute information of each user and the status information of each process when current requests of multiple users to access webpages are received;

a processing unit allocation module, configured to assign a processing unit to the current request within the assigned process;

a webpage content processing module, configured to send, by the assigned processing unit, a request to the web server corresponding to the current request to obtain the webpage content, so as to return it to the client for presentation.

According to yet another aspect of the present invention, a computer program is provided, comprising computer readable code which, when run on a server, causes the server to perform the method for processing requests of users to access webpages according to any one of claims 1-11. According to a further aspect of the present invention, a computer readable medium is provided, in which the computer program according to claim 23 is stored.
The beneficial effects of the invention are as follows:

With the present invention, multiple processes can be started in the same proxy server in advance and multiple processing units can be created in each process, so that processes and processing units do not have to be created anew when a user request arrives; therefore, the processing time actually perceived by the user can be shortened, and the processing requirements of large numbers of concurrent users can be met. At the same time, the multi-process approach can make full use of the multi-core resources of the proxy server and improve processing efficiency, which also improves the response speed of the server.

The above description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the present invention may become more apparent, specific embodiments of the invention are set forth below.
BRIEF DESCRIPTION OF THE DRAWINGS

Various other advantages and benefits will become clear to those of ordinary skill in the art from reading the following detailed description of the preferred embodiments. The drawings are provided only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the invention. Throughout the drawings, the same reference signs denote the same components. In the drawings:

Figure 1 shows a flow chart of a method according to an embodiment of the present invention;

Figure 2 shows a flow chart of another method according to an embodiment of the present invention;

Figure 3 shows a schematic diagram of a first system according to an embodiment of the present invention;

Figure 4 shows a schematic diagram of a second system according to an embodiment of the present invention;

Figure 5 shows a schematic diagram of a third system according to an embodiment of the present invention;

Figure 6 schematically shows a block diagram of a server for performing the method according to the present invention; and

Figure 7 schematically shows a storage unit for holding or carrying program code that implements the method according to the present invention.
DETAILED DESCRIPTION

The present invention is further described below in conjunction with the drawings and specific embodiments.

It should first be noted that the embodiments of the present invention are improvements based on server-side webpage rendering and typesetting technology. The premise remains that the time-consuming and resource-intensive operations are encapsulated on the server side: the server performs the work that requires large amounts of computation, such as webpage resource compression, page parsing, positioning, and rendering and typesetting. Through this series of operations, the webpage is converted into a page more suitable for display on a mobile terminal device, and the page is sent to the mobile terminal device in the form of binary data to be parsed by the device; this reduces the demands on the network when a mobile terminal device accesses webpages, as well as the amount of computation the mobile terminal must perform when processing webpage resources. However, as mobile terminal devices become more diverse and widespread, the number of users accessing web servers with mobile terminals is growing rapidly, which places new demands on traditional server-side rendering and typesetting techniques: for example, if a large number of user requests arrive concurrently within the same period of time, the response speed of the server may drop, server jobs may be delayed, or some server resources may even stop responding, which in turn lowers the response speed of the browser.

To avoid such situations, the embodiment of the present invention provides a corresponding solution. In this solution, multiple processes can be started in advance in the proxy server, and multiple processing units are started in each process. These processing units play a role equivalent to a window in an ordinary browser (as opposed to a C/S-architecture browser); for example, when a user clicks a link, an ordinary web browser creates a window and finally displays the page in that window. However, they are not exactly equivalent. Specifically, such a processing unit can process the user's request to access a webpage, including, for example, sending the request to the web server and then performing a series of operations such as parsing, rendering, and typesetting on the received webpage data. The difference is that, because the processing unit is located on the proxy server side, it does not need a display interface; the real display interface is created by the client installed on the user's computer. The processing unit therefore sends the results of processing the webpage data to the client through a proprietary protocol, and the client draws the results and displays them on the user interface.

In other words, the processing units started in the embodiment of the present invention serve the same purpose as the processing units used to handle user requests under the traditional C/S architecture, but they are created at a different time. Under the traditional C/S architecture, the processing unit is generally created after the user request is received, so the time spent creating the window becomes part of the webpage processing time perceptible to the user; although creating a window is generally fast, it may still lengthen the overall processing time when large numbers of users are concurrent. In the embodiment of the present invention, the processing units are created in advance; after a user request is received, it only needs to be assigned to a designated processing unit according to a certain policy, without creating a separate processing unit for every incoming request. The response speed can therefore be improved overall, and the processing time perceived by the user can be shortened to some extent.

In addition, since a specific processing unit must be created within a process, and a process is allowed to create multiple processing units, multiple user requests can be handled within one process; the embodiment of the present invention therefore adopts this approach of creating multiple processing units in a process. At the same time, the number of processing units a process may create is, after all, limited; if only one process were started on a proxy server, the number of user requests that could be accommodated would still be rather limited. On the other hand, proxy servers generally use multi-core technology, that is, a proxy server has multiple cores, and different processes can run simultaneously on different cores, which can greatly increase processing speed. Therefore, the embodiment of the present invention also creates multiple processes on the same proxy server, which increases the number of user requests a single proxy server can accommodate while making full use of the proxy server's multi-core resources. Of course, the processes on the proxy server are also started before any user request arrives.
在以上所述的前提下, 参见图 1 , 本发明实施例提供的处理用户访问网 页的请求的方法包括以下步骤:  On the premise of the foregoing, referring to FIG. 1 , a method for processing a request for a user to access a webpage according to an embodiment of the present invention includes the following steps:
S101 : 在接收到多个用户访问网页的当前请求时, 根据各个用户的属性 信息以及各个进程的状态信息, 为所述各个用户的当前请求分配进程;  S101: When receiving a current request for a plurality of users to access a webpage, assigning a process to the current request of each user according to attribute information of each user and status information of each process;
S102: 在所述分配的进程中为所述当前请求分配处理单元;  S102: Allocating a processing unit to the current request in the allocated process;
用户计算机上安装的浏览器客户端会首先接收到用户访问某网页的当 前请求(例如用户当前点击了某链接, 或者当前在地址栏中输入了某网址并 执行了确认操作等) , 然后客户端将该当前请求发送到服务器端。 代理服务 器端由于已经预先启动了多个进程, 因此, 在接收到用户请求之后, 就可以 将用户的当前请求分配到其中一个进程中。 当然, 具体能够对用户请求进行 处理的是处理单元, 而一个进程中还创建了多个处理单元, 因此, 还需要将 当前请求分配到一个具体的处理单元中进行处理。  The browser client installed on the user's computer will first receive the current request of the user to access a webpage (for example, the user currently clicks on a link, or currently enters a URL in the address bar and performs a confirmation operation, etc.), and then the client Send the current request to the server. Since the proxy server has started multiple processes in advance, after receiving the user request, the user's current request can be assigned to one of the processes. Of course, it is the processing unit that can specifically process the user request, and multiple processing units are also created in one process. Therefore, the current request needs to be allocated to a specific processing unit for processing.
具体在根据各个用户的属性信息以及各个进程的状态信息, 为各个用户 的当前请求分配进程及处理单元时, 可以有多种方式, 例如, 直接将当前请 求分配到空闲处理单元最多的进程中, 然后在该进程中任选一个空闲的处理 窗口等等。 或者, 在本发明实施例中, 为了保证资源的重复利用, 可以尽量 将同一用户的不同请求分配到同一进程中, 这样可以进一步节省处理时间。 例如, 有些网站 (如某些购物网站等) 需要用户的账户及密码等登录信息, 一般情况下,用户在访问同一网站下的不同网页时,只登录一次即可,例如, 用户在某网站的首页上登录之后, 在访问该网站的所有网页时, 登录信息都 是有效的。 但在本发明实施例中, 由于同一台代理服务器上会处理多个用户 的请求,要实现这种效果,其前提是这些网页对应的处理单元在同一进程中。 这是因为, 当在某进程中处理用户的首次登陆请求后, 可以在浏览器的Specifically, when the process and the processing unit are allocated for each user's current request according to the attribute information of each user and the status information of each process, there may be multiple ways, for example, directly assigning the current request to the process with the largest number of idle processing units. Then select an idle processing window and so on in the process. Or, in the embodiment of the present invention, in order to ensure the reuse of resources, different requests of the same user may be allocated to the same process as much as possible, which may further save processing time. For example, some websites (such as some shopping websites, etc.) require login information such as a user's account and password. Generally, when a user visits a different webpage under the same website, only one login is possible, for example, the user is on a certain website. After logging in on the homepage, the login information is valid when accessing all pages of the site. However, in the embodiment of the present invention, since the request of multiple users is processed on the same proxy server, the effect is achieved, that is, the processing units corresponding to the web pages are in the same process. This is because, when the user's first login request is processed in a process, it can be in the browser.
Cookie中进行保存一些信息, 例如, 在网站中的登录信息, 后续当同一用户 访问同一网站中的其他网页时, 就可以从 Cookie中取出保存的这些信息, 从而保证登录状态的连续性。 或者, Cookie中还可以保存用户访问过的网页 的数据信息等, 针对用户访问过的一个网页, 当再次发起访问时, 可以直接 根据保存的信息返回给用户, 而不必再重新向网页服务器发起请求, 等等。 这种 Cookie会进行持久化存储, 但是如果是同一进程的不同处理单元或者 不同的进程处理"已经有历史记录,,的请求时, 需要从持久化存储获取一次信 息, 这样不如直接由历史处理单元处理时高效。 因此, 在本发明实施例中, 为了实现前述目的, 在为用户请求分配进程时, 可以进来将同一用户的请求 分配到同一进程, 甚至还可以尽量分配到同一处理单元。 为了该目的, 具体 实现时, 可以首先判断当请请求是否为新用户的请求, 然后根据判断的结果 向当前请求进行进程的分配。 具体实现时, 参见图 2 , 还可以釆用如下方式 来处理: The cookie stores some information, for example, the login information in the website, and when the same user visits other web pages on the same website, the saved information can be taken out from the cookie, thereby ensuring the continuity of the login status. Alternatively, the cookie may also save the data information of the webpage that the user has visited, and for a webpage that the user has visited, when the access is initiated again, the user may directly return the information according to the saved information, instead of re-initiating the request to the web server. , and many more. This kind of cookie will be stored persistently, but if it is a different processing unit of the same process or a different process processing "has a history, the request needs to get information from the persistent storage, it is better to be directly from the history processing unit. Therefore, in the embodiment of the present invention, in order to achieve the foregoing object, when the user is requested to allocate a process, the request of the same user may be allocated to the same process, and even the same processing unit may be allocated as much as possible. The purpose, in the specific implementation, may first determine whether the request is a new user's request, and then according to the result of the judgment to the current request for the process of the allocation. Specific implementation, see Figure 2, you can also use the following way to deal with:
S201 : 判断当前请求是否为新用户的请求; 如果是, 则进入步骤 S202, 否则进入步骤 S206;  S201: determining whether the current request is a request of a new user; if yes, proceeding to step S202, otherwise proceeding to step S206;
为了能够判断当前请求是否为新用户的请求, 可以在历史处理的过程 中, 在为用户请求分配进程的同时, 还可以记录下用户请求对应的用户属性 信息与为该请求分配的进程之间的对应关系, 这样, 在代理服务器端就可以 维护一个历史分配列表,在该列表中记录了为各个用户分配的进程分别是哪 个。 当接收到一个当前请求时, 同样可以先获取到该请求对应的用户属性信 息, 然后判断该用户属性信息是否出现在历史分配列表中, 如果是, 则证明 该用户之前访问过其他的网页, 也就是说, 该当前请求不是一个新用户的请 求; 否则, 如果未出现在历史分配列表中, 则证明当前请求是一个新用户的 请求。  In order to be able to determine whether the current request is a request of a new user, in the process of historical processing, while the user is requested to allocate the process, the user attribute information corresponding to the user request and the process assigned to the request may also be recorded. Correspondence relationship, in this way, a proxy history list can be maintained on the proxy server side, in which the process assigned to each user is recorded. When receiving a current request, the user attribute information corresponding to the request may also be obtained first, and then it is determined whether the user attribute information appears in the history allocation list, and if so, it is proved that the user has visited other web pages before, That is, the current request is not a request of a new user; otherwise, if it does not appear in the history allocation list, it proves that the current request is a request of a new user.
It should be noted that the entries in the history allocation record are not kept permanently; they are deleted after a certain time. For example, a given entry may be removed from the history allocation record once a preset period has elapsed since it was created. In other words, the requests of one user correspond to one large session: once that session is established it is not interrupted even if requests from other users are interleaved, but the session still has a timeout. If, after a request from a user has been received, no further request from that user arrives for a long time, the user presumably no longer needs to access other pages, and the session can end accordingly. It can be seen that a "new user" here does not mean a user who has just joined the network or who has just installed a browser with this C/S architecture, but simply a user who has not issued an access request within a certain period of time.
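The expiring history allocation record described above might, for instance, be kept as a simple map from user to process with a timestamp per entry; the sketch below assumes an in-memory dictionary and an arbitrary timeout value, both of which are illustrative rather than prescribed by the embodiment.

```python
import time

# Sketch (assumption): a history allocation record whose entries expire after a
# preset session timeout. The timeout value and names are illustrative only.
SESSION_TIMEOUT = 300.0   # seconds

class HistoryAllocation:
    def __init__(self, timeout=SESSION_TIMEOUT):
        self.timeout = timeout
        self._entries = {}                 # user_id -> (process_id, last_seen)

    def record(self, user_id, process_id):
        self._entries[user_id] = (process_id, time.monotonic())

    def lookup(self, user_id):
        """Return the process previously allocated to this user, or None if the
        user counts as 'new' (no entry, or the entry has expired)."""
        entry = self._entries.get(user_id)
        if entry is None:
            return None
        process_id, last_seen = entry
        if time.monotonic() - last_seen > self.timeout:
            del self._entries[user_id]     # session timed out: treat as new user
            return None
        return process_id
```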
In a specific implementation, the user attribute information may be represented by the ID of the browser client. In practical applications the browser client is installed with an ID that uniquely identifies it, and the client carries this ID whenever it sends a user's access request to the server, so different users can be distinguished by this ID. Of course, if the user has registered a browser account and is currently logged in, the user attribute information may also be derived from the user's login information; alternatively, the user's IP address may be used as the user attribute information, and so on. The status information of each process may include, for example, the number of idle processing units that the process contains.
S202: determine whether any process contains an idle processing unit; if so, go to step S203; if no process contains an idle processing unit, go to step S204.

S203: select the process with the most idle processing units, allocate it to the current request, and select one idle processing unit in it for the current request.

S204: determine whether any process contains a processing unit whose processing time has timed out; if so, go to step S205; otherwise, wait until a processing unit becomes idle or until the processing time of some processing unit times out.

If the processing time of a processing unit has timed out, that processing unit has probably failed, and the corresponding window on the client side may long since have been closed by the user; the unit can therefore be terminated and allocated to another request.

S205: allocate the processing unit that has been timed out the longest, together with the process it belongs to, to the current request.

S206: obtain the process in which the user's historical requests were handled and allocate that process to the current request. Of course, before the process that handled the user's historical requests is allocated to the current request, it may first be checked whether that process has an idle processing unit; if it does, the process is allocated to the current request, otherwise the system may wait until a processing unit finishes its current work, or allocate another process to the current request.

S207: determine whether the historical processing unit that handled the user's historical requests is idle; if it is idle, go to step S208, otherwise go to step S209.

S208: allocate that historical processing unit to the current request.

S209: select an idle processing unit in the process that handled the user's historical requests and allocate it to the current request.

Steps S207 and S208 are optional; that is, in practical applications step S206 may be followed directly by step S209, in which any idle processing unit of the process that handled the user's historical requests is allocated to the current request. If step S207 is performed, however, requests from the same user can as far as possible be allocated to the same processing unit of the same process, which further ensures the reuse of resources and improves processing efficiency.
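For readability, the decision flow of steps S201 to S209 can be summarised in the following Python sketch; it is a simplified rendering under stated assumptions (a dictionary of processes, a pruned user-to-process history map, and a fixed timeout) and omits the optional preference for the user's previous processing unit (steps S207 and S208) except as a comment.

```python
# Sketch (assumption) of the allocation flow of Figure 2 (S201-S209).
# Process/unit bookkeeping is simplified; all names are illustrative.
class Unit:
    def __init__(self):
        self.busy = False
        self.started = 0.0                 # when the current task began

class Process:
    def __init__(self, n_units):
        self.units = [Unit() for _ in range(n_units)]

    def idle_units(self):
        return [u for u in self.units if not u.busy]

def allocate(user_id, processes, history, now, timeout=30.0):
    """processes: dict process_id -> Process; history: dict user_id -> process_id
    (expired entries assumed already pruned, cf. the expiry sketch above).
    Returns (process, unit), or None meaning 'keep waiting'."""
    prev = history.get(user_id)                                   # S201
    if prev is not None:                                          # known user
        proc = processes[prev]
        idle = proc.idle_units()
        if idle:
            return proc, idle[0]          # S206/S209 (S207/S208 would prefer the
                                          # exact unit that served this user before)
    candidates = [(len(p.idle_units()), pid) for pid, p in processes.items()]
    n_idle, pid = max(candidates, key=lambda c: c[0])             # S202
    if n_idle > 0:
        proc = processes[pid]
        return proc, proc.idle_units()[0]                         # S203
    timed_out = [(now - u.started, p, u)                          # S204
                 for p in processes.values() for u in p.units
                 if u.busy and now - u.started > timeout]
    if timed_out:
        _, proc, unit = max(timed_out, key=lambda t: t[0])        # S205: end its
        return proc, unit                 # current task, then reuse the unit
    return None                                                   # wait
```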
It can be seen that the above processing not only increases the throughput of the proxy server but also preserves, as far as possible, the continuity of one user's accesses, which further improves processing and response speed. Of course, in practical applications some windows may fail while running; if a user request were assigned to such a window, the request could not be processed in time. Therefore, in an embodiment of the present invention, the state of each processing unit can be monitored. When a processing unit is found to have failed, then, so as not to disturb the normal operation of the other processing units in the same process, the process containing the faulty processing unit is first marked so that no new requests are assigned to it; once all tasks being handled by the processing units of that process have finished, the process is shut down, restarted, and its processing units are created again.
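A minimal sketch of this mark-drain-restart behaviour is given below; it assumes the simplified Process and Unit bookkeeping from the earlier sketch and an externally supplied restart_fn, so it is illustrative only.

```python
import threading, time

# Sketch (assumption): when a faulty unit is found, mark its process so no new
# requests are routed to it, let in-flight tasks drain, then restart the process.
def drain_and_restart(proc, restart_fn, poll_interval=1.0):
    proc.accepting = False                         # mark: skip in allocation
    def _drain():
        while any(u.busy for u in proc.units):     # wait for running tasks to finish
            time.sleep(poll_interval)
        restart_fn(proc)                           # shut down, restart, recreate units
        proc.accepting = True                      # eligible for new requests again
    threading.Thread(target=_drain, daemon=True).start()
```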
In addition, in order to further improve throughput and response speed, a server cluster may be used on the proxy side; that is, multiple proxy servers may be deployed on the server side, each of which has all of the functions described above. For example, each may start several processes in advance, initialize several processing units in each process, and also handle the continuity of accesses from the same user, and so on. Of course, the hardware configurations of the individual proxy servers may differ.
When a server cluster is used, the question also arises of how to distribute users' current requests among the different proxy servers. In that case a server dedicated to distribution may be deployed on the server side, or a distribution module may be added to one of the proxy servers. When a user's current request sent by a client reaches the server side, it may first reach this distribution server or distribution module, which then allocates a proxy server to the current request. Various strategies may be used to allocate the proxy server. For example, the performance parameters of each proxy server may be obtained in advance, the processing capability of each may be determined from those parameters, and requests may then be distributed to the proxy servers on the basis of their respective processing capabilities. Alternatively, the total number of requests being handled by each proxy server may be monitored, or each proxy server may report the total number of requests it is currently handling, and this may be combined with the respective processing capabilities to achieve a more effective distribution.
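One possible (illustrative) selection rule combines a pre-measured capacity score with the number of requests each proxy reports as in flight, as in the following sketch; the field names are assumptions, not terms used by the embodiment.

```python
# Sketch (assumption): pick the proxy with the most spare capacity, where
# "capacity" is a pre-measured performance score and "in_flight" is the number
# of requests the proxy reports as currently being handled.
def pick_proxy(proxies):
    """proxies: list of dicts like {"name": ..., "capacity": ..., "in_flight": ...}"""
    return max(proxies, key=lambda p: p["capacity"] - p["in_flight"])

proxies = [
    {"name": "proxy-a", "capacity": 200, "in_flight": 150},
    {"name": "proxy-b", "capacity": 120, "in_flight": 40},
]
assert pick_proxy(proxies)["name"] == "proxy-b"   # more spare capacity wins
```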
When the distribution server or distribution module distributes requests among the proxy servers, a DNS-like scheme could be used. In practice, however, a proxy server may also fail, and in order not to slow down the processing of user requests it should be possible to stop assigning user requests to the failed proxy server promptly. Distribution by DNS cannot meet this requirement: after a single server fails, a traditional DNS service does not stop forwarding new requests to the failed server until the DNS configuration has been modified, and because DNS servers cache records and cache updates take time, forwarding does not stop immediately after the configuration change either; only after the DNS caches at all levels have been refreshed does forwarding of new requests to the failed server stop. For this reason, in an embodiment of the present invention, distribution may be performed without DNS: the distribution server or distribution module performs real-time or near-real-time heartbeat monitoring of each proxy server (the monitoring interval is configurable, on the order of seconds). Heartbeat monitoring here means fault monitoring of the proxy servers: if a proxy server has failed, it will be unable to return a response when the distribution server or distribution module sends it a heartbeat probe, so both the coming online and the going offline of a proxy server can be detected and handled. At the same time, proxy servers whose heartbeat information can be monitored normally are added to a list of available proxy servers, and when a proxy server is allocated to a current request it is selected from that list, which guarantees that a failed proxy server cannot be selected. If the heartbeat information of a proxy server that is already in the list of available proxy servers can no longer be monitored, it is removed from the list; once the failed server recovers and the distribution server or distribution module again monitors its heartbeat information, the server rejoins the list of available proxy servers and user requests can be assigned to it again.
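A heartbeat loop of this kind could be sketched as follows; probe() stands in for the actual heartbeat exchange, and the example run shows a proxy dropping out of the available list when it stops answering.

```python
import time

# Sketch (assumption): a heartbeat loop that keeps the list of available proxies
# up to date. `probe(name)` is a stand-in for the real heartbeat check.
def heartbeat_loop(proxy_names, probe, available, interval=1.0, rounds=None):
    """available: a set shared with the request-distribution code."""
    n = 0
    while rounds is None or n < rounds:
        for name in proxy_names:
            if probe(name):
                available.add(name)        # (re)joins the available list
            else:
                available.discard(name)    # stop routing new requests to it
        n += 1
        time.sleep(interval)

# Example: proxy-b stops answering, so it drops out of the available list.
available = set()
heartbeat_loop(["proxy-a", "proxy-b"],
               probe=lambda name: name != "proxy-b",
               available=available, interval=0.0, rounds=1)
assert available == {"proxy-a"}
```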
After a current request has been assigned to a proxy server, the allocation of a process and a processing unit can proceed as described above and is not repeated here. It can be seen that with this server-cluster arrangement two levels of scheduling are achieved: one at the physical level, between proxy servers, and one at the process level. This multi-level scheduling mechanism improves overall processing capability and response speed.
S103: through the allocated processing unit, send a request to the web server corresponding to the current request to obtain the web page content, so that it can be returned to the client for display.

After a processing unit has been allocated to a request, the processing unit can parse the request and then construct a web page access request to the web server to obtain the web page resources. Once the web page resources have been obtained, they can be parsed, rendered and laid out, after which they are converted into binary data and returned to the client, which draws and displays the web page. Of course, the present invention is not limited to the case in which the processing unit performs parsing, rendering, layout and similar processing; the solution provided by the embodiments of the present invention can also be used when the operations the processing unit has to perform change (for example when the proxy server becomes heavier or lighter).
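As a rough, non-authoritative sketch of what a single processing unit does for one request, the following code fetches a page, produces a layout result, and compresses it for the client; fetch() and render() are simplified stand-ins for the real parsing, rendering and typesetting pipeline.

```python
import json, zlib

# Sketch (assumption) of one processing unit handling one request: fetch the
# page, render it on the proxy side, return compact binary data to the client.
def handle_request(url):
    html = fetch(url)                    # request the page from the web server
    layout = render(html)                # parse, render and lay out the page
    return zlib.compress(json.dumps(layout).encode())   # payload for the client

def fetch(url):
    return "<html><body>%s</body></html>" % url     # stand-in for the HTTP fetch

def render(html):
    return {"elements": 1, "size": len(html)}        # stand-in for the layout result
```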
It should be noted that after a processing unit has finished handling a request it is not shut down; instead it releases the resources requested while handling that request (including the connection resources used to establish the connection to the web server, the storage resources used to cache the various resources, computing resources, and so on) and then waits for further requests, repeating this cycle. Of course, so that different requests from the same user can reuse some resources as described above, the processing unit need not release the results of page requests.

Corresponding to the method for processing users' requests to access web pages provided by the embodiments of the present invention, an embodiment of the present invention also provides a system for processing users' requests to access web pages. In this system at least two processes are started in advance in the proxy server and at least two processing units are created in each process. Referring to Figure 3, the system comprises:
a process allocation module 301, configured to allocate, when current requests from multiple users to access web pages are received, a process to each user's current request according to the attribute information of each user and the status information of each process;

a processing unit allocation module 302, configured to allocate a processing unit to the current request within the allocated process; and

a web page content processing module 303, configured to send, through the allocated processing unit, a request to the web server corresponding to the current request to obtain the web page content, so that it can be returned to the client for display.

In order to ensure the continuity of a user's accesses and further improve response speed, different requests from the same user may be allocated to the same process.
In a specific implementation, in order to allocate different requests from the same user to the same process as described above, and referring to Figure 4, the process allocation module 301 may comprise:

a judging sub-module 3011, configured to judge, according to the attribute information of each user, whether the current request is a request from a new user; and

a process allocation sub-module 3012, configured to allocate, if the current request is not a request from a new user, a process to the current request according to that user's allocation history and the status information of each process.

The judging sub-module 3011 may comprise:

a user identification acquisition sub-module, configured to obtain the attribute information of the user corresponding to the current request; and

a comparison sub-module, configured to determine that the current request is a request from a new user if the attribute information of the user corresponding to the current request does not appear in the history allocation record, where the history allocation record is used to record, during historical processing, the correspondence between the attribute information of the user of a request and the process allocated to that request.

The process allocation sub-module 3012 may specifically be configured to: allocate, if the current request is not a request from a new user and the process corresponding to that user's attribute information in the history allocation record contains an idle processing unit, that process to the current request.

In order to cope with special designs on some websites, the history allocation record may also record the correspondence between the attribute information of the user of a request and the processing unit allocated to that request; in that case the processing unit allocation module 302 may specifically be configured to: allocate the processing unit corresponding to that user's attribute information in the history allocation record to the current request.

In addition, the system may further comprise:

a new user request allocation module, configured to allocate, if the current request is a request from a new user, the process that currently has the most idle processing units to the current request; and

a timeout handling module, configured to judge, if no process contains an idle processing unit, whether there is a processing unit whose processing time has timed out, and if so, to allocate the process containing that processing unit to the current request;

accordingly, the processing unit allocation module 302 may specifically be configured to: end the current task of the processing unit whose processing time has timed out and allocate that processing unit to the current request.
In order to further improve the throughput and response speed of the server, a server cluster may be used; that is, there are at least two proxy servers. In that case, referring to Figure 5, the system may further comprise:

a proxy server allocation module 304, configured to allocate a proxy server to the current request.

It should be noted that Figure 5 only shows the logical relationship between the modules; in the physical structure, as described above, the proxy server allocation module 304 may be deployed in a separate distribution server or in any one of the proxy servers of the cluster.

In order to be able to stop distributing new requests to a failed proxy server promptly, the proxy server allocation module 304 may comprise:

a heartbeat monitoring sub-module, configured to perform real-time heartbeat monitoring of each proxy server and to add proxy servers whose heartbeat information can be monitored normally to a list of available proxy servers; and

a proxy server allocation sub-module, configured to allocate a proxy server to the current request from the list of available proxy servers.

In addition, the system may further comprise:

a list updating module, configured to remove a proxy server whose heartbeat information can no longer be monitored from the list of available proxy servers, and to add it back to the list of available proxy servers when its heartbeat information is monitored again.

In summary, with the above system provided by the embodiments of the present invention, multiple processes can be started in advance in the same proxy server and multiple processing units can be created in each process, so there is no need to create processes and processing units again when a user request arrives. The processing time actually perceived by the user can therefore be shortened, and the processing demands of large numbers of concurrent users can be met. At the same time, the multi-process approach can make full use of the multi-core resources of the proxy server, improving processing efficiency, which also improves the response speed of the server.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for carrying out part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

For example, Figure 6 shows a server, such as an application server, that can carry out the method according to the present invention. The server conventionally comprises a processor 610 and a computer program product or computer-readable medium in the form of a memory 620. The memory 620 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk or a ROM. The memory 620 has a storage space 630 for program code 631 for performing any of the method steps described above. For example, the storage space 630 for program code may comprise individual program codes 631 each for implementing one of the various steps of the above methods. These program codes may be read from or written into one or more computer program products. Such computer program products comprise program code carriers such as hard disks, compact discs (CDs), memory cards or floppy disks, and are usually portable or fixed storage units as described with reference to Figure 7. The storage unit may have storage segments, storage space and the like arranged similarly to the memory 620 in the server of Figure 6. The program code may, for example, be compressed in an appropriate form. Usually the storage unit comprises computer-readable code 631', i.e. code that can be read by a processor such as the processor 610, which, when run by the server, causes the server to perform the steps of the methods described above.

References in this specification to "one embodiment", "an embodiment" or "one or more embodiments" mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Also, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.

Numerous specific details are set forth in the description provided herein. It is understood, however, that embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.

It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any order; these words may be interpreted as names.

Furthermore, it should be noted that the language used in this specification has been chosen mainly for the purposes of readability and instruction, not to explain or limit the subject matter of the present invention. Many modifications and variations will therefore be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the present invention, the disclosure made here is illustrative rather than restrictive, the scope of the present invention being defined by the appended claims.

Claims

1. A method for processing users' requests to access web pages, wherein at least two processes are started in advance in the same proxy server and at least two processing units are created in each process, the method comprising:

when current requests from multiple users to access web pages are received, allocating a process to each user's current request according to the attribute information of each user and the status information of each process;

allocating a processing unit to the current request within the allocated process; and

sending, through the allocated processing unit, a request to the web server corresponding to the current request to obtain web page content, so that it can be returned to the client for display.

2. The method according to claim 1, wherein different requests from the same user are allocated to the same process.

3. The method according to claim 1, wherein the allocating, when current requests from multiple users to access web pages are received, a process to each user's current request according to the attribute information of each user and the status information of each process comprises:

judging, according to the attribute information of each user, whether the current request is a request from a new user; and

if the current request is not a request from a new user, allocating a process to the current request according to that user's allocation history and the status information of each process.

4. The method according to claim 3, wherein the judging, according to the attribute information of each user, whether the current request is a request from a new user comprises:

obtaining the attribute information of the user corresponding to the current request; and

determining that the current request is a request from a new user if the attribute information of the user corresponding to the current request does not appear in a history allocation record, wherein the history allocation record is used to record, during historical processing, the correspondence between the attribute information of the user of a request and the process allocated to that user's request.

5. The method according to claim 4, wherein the allocating, if the current request is not a request from a new user, a process to the current request according to that user's allocation history and the status information of each process comprises:

if the current request is not a request from a new user and the process corresponding to that user's attribute information in the history allocation record contains an idle processing unit, allocating that process to the current request.
6. The method according to claim 4, wherein the history allocation record further records the correspondence between the attribute information of the user of a request and the processing unit allocated to that request, and the allocating a processing unit to the request within the allocated process comprises:

allocating the processing unit corresponding to that user's attribute information in the history allocation record to the current request.

7. The method according to claim 3, further comprising:

if the current request is a request from a new user, allocating the process that currently has the most idle processing units to the current request.

8. The method according to claim 7, further comprising:

if no process contains an idle processing unit, judging whether there is a processing unit whose processing time has timed out, and if so, allocating the process containing that processing unit to the current request;

wherein the allocating a processing unit to the current request within the allocated process comprises: ending the current task of the processing unit whose processing time has timed out and allocating that processing unit to the current request.

9. The method according to any one of claims 1 to 8, wherein there are at least two proxy servers, the method further comprising:

allocating a proxy server to each user's current request.

10. The method according to claim 9, wherein the allocating a proxy server to each user's current request comprises:

performing real-time heartbeat monitoring of each proxy server, and adding proxy servers whose heartbeat information can be monitored normally to a list of available proxy servers; and

allocating a proxy server to the current request from the list of available proxy servers.

11. The method according to claim 10, further comprising:

removing a proxy server whose heartbeat information can no longer be monitored from the list of available proxy servers; and adding a proxy server back to the list of available proxy servers when its heartbeat information is monitored again.
12. A system for processing users' requests to access web pages, wherein at least two processes are started in advance in the same proxy server and at least two processing units are created in each process, the system comprising:

a process allocation module, configured to allocate, when current requests from multiple users to access web pages are received, a process to each user's current request according to the attribute information of each user and the status information of each process;

a processing unit allocation module, configured to allocate a processing unit to the current request within the allocated process; and

a web page content processing module, configured to send, through the allocated processing unit, a request to the web server corresponding to the current request to obtain web page content, so that it can be returned to the client for display.

13. The system according to claim 12, wherein different requests from the same user are allocated to the same process.

14. The system according to claim 12, wherein the process allocation module comprises:

a judging sub-module, configured to judge, according to the attribute information of each user, whether the current request is a request from a new user; and

a process allocation sub-module, configured to allocate, if the current request is not a request from a new user, a process to the current request according to that user's allocation history and the status information of each process.

15. The system according to claim 14, wherein the judging sub-module comprises:

a user identification acquisition sub-module, configured to obtain the attribute information of the user corresponding to the current request; and

a comparison sub-module, configured to determine that the current request is a request from a new user if the attribute information of the user corresponding to the current request does not appear in a history allocation record, wherein the history allocation record is used to record, during historical processing, the correspondence between the attribute information of the user of a request and the process allocated to that user's request.

16. The system according to claim 15, wherein the process allocation sub-module is specifically configured to:

if the current request is not a request from a new user and the process corresponding to that user's attribute information in the history allocation record contains an idle processing unit, allocate that process to the current request.

17. The system according to claim 15, wherein the history allocation record further records the correspondence between the attribute information of the user of a request and the processing unit allocated to that request, and

the processing unit allocation module is specifically configured to:

allocate the processing unit corresponding to that user's attribute information in the history allocation record to the current request.
18. The system according to claim 14, further comprising:

a new user request allocation module, configured to allocate, if the current request is a request from a new user, the process that currently has the most idle processing units to the current request.

19. The system according to claim 18, further comprising:

a timeout handling module, configured to judge, if no process contains an idle processing unit, whether there is a processing unit whose processing time has timed out, and if so, to allocate the process containing that processing unit to the current request;

wherein the processing unit allocation module is specifically configured to:

end the current task of the processing unit whose processing time has timed out and allocate that processing unit to the current request.

20. The system according to any one of claims 12 to 19, wherein there are at least two proxy servers, the system further comprising:

a proxy server allocation module, configured to allocate a proxy server to each user's current request.

21. The system according to claim 20, wherein the proxy server allocation module comprises:

a heartbeat monitoring sub-module, configured to perform real-time heartbeat monitoring of each proxy server and to add proxy servers whose heartbeat information can be monitored normally to a list of available proxy servers; and

a proxy server allocation sub-module, configured to allocate a proxy server to the current request from the list of available proxy servers.

22. The system according to claim 21, further comprising:

a list updating module, configured to remove a proxy server whose heartbeat information can no longer be monitored from the list of available proxy servers, and to add it back to the list of available proxy servers when its heartbeat information is monitored again.

23. A computer program comprising computer-readable code which, when run on a server, causes the server to perform the method for processing users' requests to access web pages according to any one of claims 1 to 11.

24. A computer-readable medium storing the computer program according to claim 23.
PCT/CN2013/074441 2012-05-02 2013-04-19 Method and system for processing user requests of accessing to web pages WO2013163926A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210135416.5A CN102708173B (en) 2012-05-02 2012-05-02 Method and system for processing user requests of accessing to web pages
CN201210135416.5 2012-05-02

Publications (1)

Publication Number Publication Date
WO2013163926A1 true WO2013163926A1 (en) 2013-11-07

Family

ID=46900939

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/074441 WO2013163926A1 (en) 2012-05-02 2013-04-19 Method and system for processing user requests of accessing to web pages

Country Status (2)

Country Link
CN (1) CN102708173B (en)
WO (1) WO2013163926A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708173B (en) * 2012-05-02 2014-08-13 北京奇虎科技有限公司 Method and system for processing user requests of accessing to web pages
CN102799636B (en) * 2012-06-26 2015-11-25 北京奇虎科技有限公司 The method and system of mobile terminal display web page
CN102981705B (en) * 2012-11-09 2018-04-27 北京奇虎科技有限公司 Server-side browser implementation method and server
US8539080B1 (en) * 2012-12-18 2013-09-17 Microsoft Corporation Application intelligent request management based on server health and client information
CN105518644B (en) * 2013-08-09 2020-08-07 杨绍峰 Method for processing and displaying social data on map in real time
CN104426985B (en) * 2013-09-06 2019-11-26 腾讯科技(深圳)有限公司 Show the method, apparatus and system of webpage
CN105045651B (en) * 2015-06-26 2019-04-05 广州华多网络科技有限公司 Transaction processing system and method
CN105607951A (en) * 2015-12-17 2016-05-25 北京奇虎科技有限公司 Method and device for processing data request and obtaining server information
CN105610906A (en) * 2015-12-18 2016-05-25 北京奇虎科技有限公司 Request forwarding method, device and system
CN108462731B (en) * 2017-02-20 2021-04-09 阿里巴巴集团控股有限公司 Data proxy method and device and electronic equipment
CN109729062B (en) * 2018-05-14 2022-01-25 网联清算有限公司 Online method of encryption server and proxy server
CN109522472A (en) * 2018-09-30 2019-03-26 中国农业大学烟台研究院 A kind of user's intention estimation method
CN111355693B (en) * 2018-12-24 2023-10-31 北京奇虎科技有限公司 Proxy service realization method, device, electronic equipment and storage medium
CN111741014B (en) * 2020-07-21 2020-12-22 平安国际智慧城市科技股份有限公司 Message sending method, device, server and storage medium
CN113094128B (en) * 2021-03-01 2024-01-30 北京水滴科技集团有限公司 Network information interaction method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020120516A1 (en) * 2001-02-26 2002-08-29 Nec Corporation Mobile marketing method, mobile marketing system, mobile marketing server, and associated user terminal, analysis terminal, and program
US20070233957A1 (en) * 2006-03-28 2007-10-04 Etai Lev-Ran Method and apparatus for local access authorization of cached resources
US20110304634A1 (en) * 2010-06-10 2011-12-15 Julian Michael Urbach Allocation of gpu resources across multiple clients
CN102708173A (en) * 2012-05-02 2012-10-03 奇智软件(北京)有限公司 Method and system for processing user requests of accessing to web pages
CN102799636A (en) * 2012-06-26 2012-11-28 北京奇虎科技有限公司 Method and system for displaying webpage by mobile terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6240454B1 (en) * 1996-09-09 2001-05-29 Avaya Technology Corp. Dynamic reconfiguration of network servers
CN100353362C (en) * 2000-11-27 2007-12-05 大众汽车有限公司 Method for loading, storing and presenting web pages
CN102346767B (en) * 2011-09-19 2013-04-17 北京金和软件股份有限公司 Database connection method based on double connection pools

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020120516A1 (en) * 2001-02-26 2002-08-29 Nec Corporation Mobile marketing method, mobile marketing system, mobile marketing server, and associated user terminal, analysis terminal, and program
US20070233957A1 (en) * 2006-03-28 2007-10-04 Etai Lev-Ran Method and apparatus for local access authorization of cached resources
US20110304634A1 (en) * 2010-06-10 2011-12-15 Julian Michael Urbach Allocation of gpu resources across multiple clients
CN102708173A (en) * 2012-05-02 2012-10-03 奇智软件(北京)有限公司 Method and system for processing user requests of accessing to web pages
CN102799636A (en) * 2012-06-26 2012-11-28 北京奇虎科技有限公司 Method and system for displaying webpage by mobile terminal

Also Published As

Publication number Publication date
CN102708173B (en) 2014-08-13
CN102708173A (en) 2012-10-03

Similar Documents

Publication Publication Date Title
WO2013163926A1 (en) Method and system for processing user requests of accessing to web pages
US11336583B2 (en) Background processes in update load balancers of an auto scaling group
US9251040B2 (en) Remote debugging in a cloud computing environment
US8453225B2 (en) Systems and methods for intercepting and automatically filling in forms by the appliance for single-sign on
US9600313B2 (en) Systems and methods for SR-IOV pass-thru via an intermediary device
US9934065B1 (en) Servicing I/O requests in an I/O adapter device
US8392562B2 (en) Systems and methods for managing preferred client connectivity to servers via multi-core system
US8812714B2 (en) Systems and methods for application fluency policies
US10341426B2 (en) Managing load balancers associated with auto-scaling groups
US20110153953A1 (en) Systems and methods for managing large cache services in a multi-core system
US20110154443A1 (en) Systems and methods for aaa-traffic management information sharing across cores in a multi-core system
US10038640B2 (en) Managing state for updates to load balancers of an auto scaling group
US9059941B1 (en) Providing router information according to a programmatic interface
US9787521B1 (en) Concurrent loading of session-based information
US10963324B2 (en) Predictive microservice systems and methods
CN109068153A (en) Video broadcasting method, device and computer readable storage medium
WO2015149486A1 (en) Page push method, device and server, and centralized network management controller
CN108112268B (en) Managing load balancers associated with auto-extension groups
CN109561054A (en) A kind of data transmission method, controller and access device
CN111800511B (en) Synchronous login state processing method, system, equipment and readable storage medium
CN104077381B (en) Web access requests processing method and distribution method
CN102984179A (en) Cloud-computing operating system oriented method for cross-domain access to Web services
US9497262B2 (en) Systems and methods for sampling management across multiple cores for HTML injection
CN104063461B (en) Handle the method and system that user accesses the request of webpage
US20140047014A1 (en) Network access system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13784172

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13784172

Country of ref document: EP

Kind code of ref document: A1