CN100429896C - A network server structure and its service providing process - Google Patents

A network server structure and its service providing process

Info

Publication number
CN100429896C
CN100429896C, CNB2003101086247A, CN200310108624A
Authority
CN
China
Prior art keywords
content
background server
content distribution
user
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2003101086247A
Other languages
Chinese (zh)
Other versions
CN1545262A (en)
Inventor
陈惠芳 (Chen Huifang)
赵问道 (Zhao Wendao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CNB2003101086247A priority Critical patent/CN100429896C/en
Publication of CN1545262A publication Critical patent/CN1545262A/en
Application granted granted Critical
Publication of CN100429896C publication Critical patent/CN100429896C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The present invention relates to a network server structure and its service providing process, and belongs to the field of network technology. Existing network server structures suffer from defects such as potential bottlenecks, high cost, and the need to modify the kernel. The network server of the present invention is composed of a load balancer, content distribution processors, a background server group and a backup memory, connected together by a fast network; the load balancer performs layer-4 switching, and the content distribution processors perform layer-7 switching based on the request content. To improve reliability and reduce network response delay, the content server structure of the present invention uses two identically configured content distribution processors. The service providing process of the network server comprises three sub-processes: connection establishment, connection switching and service provision. The present invention offers high reliability, good scalability, low cost and good load balancing.

Description

Service providing method for a network server
Technical field
The invention belongs to the field of network technology, and in particular relates to a service providing method for a network server.
Background technology
A network server is made up of a front-end load balancer, a server pool and a backup memory. The load balancer is the key component of the whole network server: it controls the reception and distribution of all network tasks. Positioned at the very front of the network server, it receives every request sent by users and forwards those requests to different background servers according to a specific allocation algorithm, ensuring that each background server bears an even share of the service requests. It is the background servers that actually process user requests; the server pool is formed by connecting a number of background servers over a fast network. The backup memory provides shared storage to the background servers, which guarantees content consistency and service consistency across the servers.
According to the algorithm the front-end load balancer uses to allocate requests, network servers can be divided into two classes: those based on layer-4 switching and those based on layer-7 switching. In a network server based on layer-4 switching, when a new user request arrives, the load balancer selects a suitable target background server according to a specific layer-4 switching algorithm. Because the user's packets do not reach the application layer in this mechanism, the switch does not know the requested content when it selects the target background server, so every background server must hold all of the service content. A network server based on layer-4 switching therefore cannot achieve high performance in system throughput, request response delay or content-cache hit rate, although its load-balancing performance is good. In a network server based on layer-7 switching, by contrast, the load balancer must consider the type and content of the request and then select a suitable target background server according to a specific layer-7 switching algorithm. Since the requested content is known when the target background server is selected, consistency of the request content can be guaranteed. Running the layer-7 switching algorithm on the load balancer itself is a fairly simple approach, but compared with a server based on layer-4 switching, the added overhead of executing the switching algorithm means the load balancer can become a potential bottleneck of the network server; the greatest defect of this approach is that it limits the scalability of the network server.
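The contrast between the two switching classes can be sketched as follows. This is an illustrative sketch only: the server names, the hash-based layer-4 algorithm and the URL-prefix content map are assumptions for the example, not details from the patent.

```python
import hashlib

SERVERS = ["backend-1", "backend-2", "backend-3"]  # hypothetical server pool

def l4_dispatch(client_ip: str, client_port: int) -> str:
    """Layer-4 switching: choose a server from the connection tuple alone.
    The request content is invisible at this layer, so every background
    server must hold all service content."""
    key = f"{client_ip}:{client_port}".encode()
    return SERVERS[int(hashlib.md5(key).hexdigest(), 16) % len(SERVERS)]

# Hypothetical map from URL prefixes to the server specializing in that content.
CONTENT_MAP = {"/video/": "backend-1", "/images/": "backend-2"}

def l7_dispatch(request_path: str) -> str:
    """Layer-7 switching: the request content is visible, so the dispatcher
    can route each request to the server that holds (and caches) it."""
    for prefix, server in CONTENT_MAP.items():
        if request_path.startswith(prefix):
            return server
    return SERVERS[0]  # fall back to a default server for unmapped paths
```

The layer-4 dispatcher is cheap and balances connections evenly but cannot exploit content locality; the layer-7 dispatcher improves cache hit rates at the cost of parsing each request, which is the overhead that can turn a centralized load balancer into a bottleneck.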
To overcome these defects, many researchers have proposed improvements to network servers based on layer-7 switching. Aron et al., for example, proposed a distributed content server architecture. In that structure, after the front-end node receives a request, it allocates the request to a background server according to a specific layer-4 switching algorithm; a dispatcher component module residing on the background servers then parses the content of the request and submits it to a scheduler in the local area network. After obtaining the request content, the scheduler selects the optimal target background server according to a specific layer-7 switching algorithm and returns this information to the dispatcher component module. If the scheduler has selected a different background service node, the current server switches the request to the optimal background server through a connection handoff protocol. Because execution of the layer-7 switching algorithm is separated from the front-end node, the front-end node is no longer a potential bottleneck of the network server. However, this model requires every background server to be able to process incoming requests and carry out the connection handoff procedure, which is impractical when the background servers are built from many different kinds of specialized components. What a real network needs is a scalable, modular network server, which means the background servers should not require extensive kernel modifications.
Summary of the invention
The object of the present invention is to overcome the defects of existing network server structures, such as potential bottlenecks, high cost and the need to modify the kernel, and to increase the scalability of the network server, by proposing a service providing method built on a new architecture.
The network server of the present invention comprises a load balancer, content distribution processors, a background server group and a backup memory, all linked together by a fast network. The load balancer performs layer-4 switching; the content distribution processors perform layer-7 switching based on the request content; the background server group completes the services requested by the user and returns data directly to the user; and the backup memory can supply service content that is absent from the background server group. In addition, to improve reliability and reduce network response delay, the content server structure of the present invention uses two identically configured content distribution processors, which cooperate and provide mutual fault tolerance during operation.
The service providing method of this network server comprises three processes: connection establishment, connection switching and service provision. In detail:
(1) The workflow of the connection establishment process is as follows:
a) When a request arrives at the load balancer, the load balancer judges whether the request is a SYN packet; if it is a SYN packet, go to (b); if not, go to (e);
b) The load balancer executes the layer-4 switching algorithm and sends the request to a content distribution processor; the content distribution processor then performs a three-way handshake with the user and establishes the connection;
c) After the connection is established, the content distribution processor parses the service content required by the connection and, according to the request content, selects an optimal server from the background server group;
d) The content distribution processor notifies the load balancer to switch the connection to the designated background server, and the load balancer records the connection;
e) The front-end load balancer forwards the connection directly to the corresponding background server according to the connection record table.
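The load balancer's role in steps (a), (b), (d) and (e) can be sketched as a small dispatch table. This is a minimal sketch under stated assumptions: the class and method names, the round-robin layer-4 algorithm and the connection-tuple keys are illustrative, not from the patent.

```python
class LoadBalancer:
    """Sketch of the connection-establishment workflow: SYN packets are
    handed to a content distribution processor (CDP) by layer-4 switching;
    later packets are forwarded according to the connection record table,
    which is updated when a CDP hands a connection to a background server."""

    def __init__(self, cdps):
        self.cdps = cdps       # the content distribution processors
        self.conn_table = {}   # (client_ip, client_port) -> current target
        self._rr = 0           # round-robin counter as a stand-in L4 algorithm

    def handle_packet(self, conn_id, is_syn):
        if is_syn:  # steps (a)/(b): new connection, pick a CDP
            cdp = self.cdps[self._rr % len(self.cdps)]
            self._rr += 1
            self.conn_table[conn_id] = cdp  # connection lives at the CDP for now
            return cdp
        # step (e): forward per the connection record table
        return self.conn_table[conn_id]

    def switch(self, conn_id, server):
        """Step (d): the CDP asks to switch this connection to a server."""
        self.conn_table[conn_id] = server
```

After `switch` is called, every subsequent packet of the connection bypasses the content distribution processor entirely, which is what keeps the layer-7 logic off the data path.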
(2) The workflow of the connection switching process is as follows:
a) The content distribution processor performs a three-way handshake with the selected background server and establishes the connection;
b) The content distribution processor notifies the load balancer that the current connection needs to be switched to the selected background server.
(3) The workflow of the service provision process is as follows:
When the connection has been switched to a background server, the background server first checks whether it can directly provide the service content the user requires. If it can, the result is fed back directly to the user; if it does not hold the requested service content, the corresponding content is read from the backup memory; if the backup memory does not hold the required content either, a "service content unreachable" message is fed back to the user.
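The three-stage fallback above (local content, then backup memory, then an error notice) can be written as a short lookup chain. The dictionary representation of the content stores is an assumption made for illustration.

```python
def serve(server_content: dict, backup_content: dict, requested: str) -> str:
    """Service-provision fallback chain of the background server:
    1. serve from the server's own content if present;
    2. otherwise read the content from the backup memory;
    3. otherwise report that the service content is unreachable."""
    if requested in server_content:
        return server_content[requested]
    if requested in backup_content:
        return backup_content[requested]
    return "service content unreachable"
```

Because the backup memory is shared by all background servers, step 2 is what lets a mis-routed or rarely requested item still be served, at the cost of one extra storage access.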
The two identically configured content distribution processors work as follows:
The load balancer selects one content distribution processor, according to the layer-4 switching algorithm, to complete the initial connection establishment and request distribution. The two content distribution processors periodically check each other's liveness and send each other backup information about their connections. If one content distribution processor receives no backup information from the other within a certain period, the other processor is deemed to have failed; from then on the surviving content distribution processor handles all access requests from the network, including the partially processed user requests left by the failed processor, and it notifies the load balancer, which then distributes all user requests to the surviving processor. Once the fault of the failed content distribution processor has been cleared, it notifies the load balancer, which thereafter again distributes user requests between the two content distribution processors according to the layer-4 switching algorithm.
The method of the invention adopts a structure with two content distribution processors; the two processors cooperate and tolerate each other's faults, giving the network server higher reliability. In addition, the separation of the load balancer from the content distribution processors prevents the load balancer from becoming a potential bottleneck of the network server, further improving its reliability.
Because the method provides separate content distribution processors, the functional requirements on each background server are simpler: no component for performing connection handoff is needed, and no kernel modification is required. Moreover, as the network load grows, this separated structure only needs additional modules, such as more content distribution processors and background servers, to meet the performance requirements, so the network server structure scales well.
Because the method provides separate content distribution processors, the background servers need no dedicated component supporting connection handoff, which keeps the background servers simple and reduces cost.
The network server of the method performs two exchanges: in the first, the load balancer selects a suitable content distribution processor for the user's request according to layer-4 switching; in the second, the content distribution processor selects a suitable background server according to the content of the user's request to provide the required service. Through this two-stage switching, the content server of the method achieves good load balancing together with a high service hit rate.
Description of drawings
Fig. 1 is a schematic diagram of the network server structure of the present invention;
Fig. 2 is the connection establishment sequence diagram of the network server of the present invention;
Fig. 3 is the connection switching sequence diagram of the network server of the present invention.
Specific embodiments
As shown in Fig. 1, the network server comprises a load balancer 1, content distribution processors 2, background servers 3, a backup memory 4 and a fast network 5. The load balancer 1 executes the layer-4 switching algorithm. The content distribution processors 2 consist of two identically configured modules that cooperate and tolerate each other's faults; they perform layer-7 switching based on the request content and carry out the connection handoff function. The background servers 3 complete the services requested by the user and return data directly to the user; the backup memory 4 can supply service content that is absent from the background server group; and the fast network 5 connects these modules together. Requests to the network server are sent by the user terminal 6 and reach the network server through the INTERNET 7.
The service providing method of this network server comprises three processes: connection establishment, connection switching and service provision.
The sequence of the connection establishment process is shown in Fig. 2; the user terminal 6, the load balancer 1 and the content distribution processor 2 take part. The load balancer 1 receives the SYN packet sent by the user terminal 6 to the network server through the INTERNET 7 and, according to the layer-4 switching algorithm, hands the request to a content distribution processor 2. The content distribution processor and the user terminal then perform a three-way handshake (the messages exchanged are SYN, SYN/ACK and ACK) and establish the connection. After the connection is established, the content distribution processor parses the service content the connection requires and, according to the request content, selects the most suitable server from the background server group. The content distribution processor notifies the load balancer of the selection so that the connection is switched to the designated background server, and the load balancer records the connection.
The sequence of the connection switching process is shown in Fig. 3; the load balancer 1, the content distribution processor 2 and the background server 3 take part. The content distribution processor 2 performs a three-way handshake (SYN, SYN/ACK, ACK) with the selected background server 3 and establishes the connection. The content distribution processor notifies the load balancer that the current connection is switched to the selected background server (the HAND signal). At the same time, the content distribution processor forwards the user's request to the background server (transmitting the REQ and ACK signals) and tells the load balancer that the connection switch is complete (transmitting the FIN and ACK signals).
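The message ordering of Fig. 3 can be captured as a simple trace. The signal names (SYN, SYN/ACK, ACK, HAND, REQ, FIN) come from the description above; the function name and the tuple representation are assumptions made so the ordering can be checked.

```python
def handoff_sequence(cdp: str, server: str, lb: str):
    """Ordered (sender, receiver, signal) trace of the connection handoff:
    three-way handshake with the chosen background server, HAND notice to
    the load balancer, forwarding of the buffered user request, and the
    FIN/ACK exchange that marks the handoff complete."""
    return [
        (cdp, server, "SYN"),      # three-way handshake between the CDP
        (server, cdp, "SYN/ACK"),  # and the selected background server
        (cdp, server, "ACK"),
        (cdp, lb, "HAND"),         # ask the LB to reroute the connection
        (cdp, server, "REQ"),      # forward the user's buffered request
        (server, cdp, "ACK"),
        (cdp, lb, "FIN"),          # tell the LB the switch is finished
        (lb, cdp, "ACK"),
    ]
```

Note that the user terminal appears nowhere in this trace: the handoff is entirely internal to the server, which is why the client-side TCP connection survives the switch unchanged.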
When the connection has been switched to a background server, the background server first checks whether it can directly provide the service content the user requires. If it can, the result is fed back directly to the user; if it does not hold the requested service content, the corresponding content is read from the backup memory; if the backup memory does not hold the required content either, a "service content unreachable" message is fed back to the user.

Claims (1)

1. A service providing method for a network server, the network server comprising a load balancer performing layer-4 switching, content distribution processors performing layer-7 switching based on the request content, a background server group and a backup memory, the parts being linked together by a fast network, the service providing method comprising connection establishment, connection switching and service provision processes, characterized in that the workflow of the connection establishment process is as follows:
a) when a request arrives at the load balancer, the load balancer judges whether the request is a SYN packet; if it is a SYN packet, go to b); if not, go to e);
b) the load balancer executes the layer-4 switching algorithm and sends the request to a content distribution processor; the content distribution processor then performs a three-way handshake with the user and establishes the connection;
c) after the connection is established, the content distribution processor parses the service content required by the connection and, according to the request content, selects an optimal server from the background server group;
d) the content distribution processor notifies the load balancer to switch the connection to the designated background server, and the load balancer records the connection;
e) the load balancer forwards the connection directly to the corresponding background server according to the connection record table;
the workflow of the connection switching process is as follows:
f) the content distribution processor performs a three-way handshake with the selected background server and establishes the connection;
g) the content distribution processor notifies the load balancer that the current connection needs to be switched to the selected background server;
the workflow of the service provision process is as follows:
when the connection has been switched to a background server, the background server first checks whether it can directly provide the service content the user requires; if it can, the result is fed back directly to the user; if it does not hold the requested service content, the corresponding content is read from the backup memory; if the backup memory does not hold the required content either, a "service content unreachable" message is fed back to the user.
CNB2003101086247A 2003-11-11 2003-11-11 A network server structure and its service providing process Expired - Fee Related CN100429896C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2003101086247A CN100429896C (en) 2003-11-11 2003-11-11 A network server structure and its service providing process


Publications (2)

Publication Number Publication Date
CN1545262A CN1545262A (en) 2004-11-10
CN100429896C true CN100429896C (en) 2008-10-29

Family

ID=34334781

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003101086247A Expired - Fee Related CN100429896C (en) 2003-11-11 2003-11-11 A network server structure and its service providing process

Country Status (1)

Country Link
CN (1) CN100429896C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11973823B1 (en) * 2023-01-11 2024-04-30 Dell Products L.P. Offloading namespace redirection to backup clients in a scale out cluster

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN1863202B (en) * 2005-10-18 2011-04-06 华为技术有限公司 Method for improving load balance apparatus and server processing performance
CN103368872A (en) * 2013-07-24 2013-10-23 广东睿江科技有限公司 Data packet forwarding system and method

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2003143200A (en) * 2001-10-30 2003-05-16 Fujitsu Ltd Data transfer device
JP2003163689A (en) * 2001-11-28 2003-06-06 Hitachi Ltd Network linkage information processing system and method for moving access between load distributors
KR20030058502A (en) * 2001-12-31 2003-07-07 (주)캐너즈 Method, apparatus and system for providing Back-Up and load balancing method based on dual transmission lines




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081029

Termination date: 20111111