CN108366110A - Website data interaction system and method

Website data interaction system and method

Info

Publication number
CN108366110A
CN108366110A
Authority
CN
China
Prior art keywords
server
load balancer
task
data
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810114538.3A
Other languages
Chinese (zh)
Inventor
孟祥一
孟凡尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Landlord Network Technology Innovation Co Ltd
Original Assignee
Shandong Landlord Network Technology Innovation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Landlord Network Technology Innovation Co Ltd
Priority to CN201810114538.3A
Publication of CN108366110A
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention belongs to the technical field of website data interaction and discloses a website data interaction system and method. The interaction system includes: a load balancer, a web1 local cache server, a web2 local cache server, a file server, a distributed memory server a, a distributed memory server b, a background server, a mysql task queue server, a mysql slave server, and a mysql master server. The load balancer is connected by cable to the web1 local cache server and the web2 local cache server. By means of the load balancer, the invention reduces the number of interactions between the user and the load balancer, guaranteeing the load balancer's performance. The local cache servers reduce the bandwidth consumed on the upstream network between the cache server and the origin server, cutting upstream traffic and relieving network pressure; the back-end data interaction functions are strengthened, improving the security of data access.

Description

Website data interaction system and method
Technical field
The invention belongs to the technical field of website data interaction, and in particular relates to a website data interaction system and method.
Background art
With today's rapid development of web technology, caching has become a key technology of large portal websites. The quality of the cache design directly determines a website's access speed and the number of servers that must be purchased, and even affects the user experience. Depending on where data is stored, caches can be divided into client-side caches and server-side caches. Server-side caches are further divided into page caches, data caches, and database caches. A traditional large portal system necessarily contains multiple application nodes, and when implementing caching these systems often use the memory of the current node as the system cache. When the number of nodes in the portal system grows too large, the cache synchronization problem of every node must be maintained, which occupies substantial server resources and is error-prone. The back-end functions of existing website data interaction systems are relatively weak and contain many vulnerabilities. Website development is typically user-facing, while the back end is left to developers or maintenance staff, with the subconscious assumption that the back end does not need to be easy to operate and that nothing is wrong with that; this is a common fault among software engineers. For this reason, our website back end is continuously improved, so that anyone with basic computer skills can easily master it.
In summary, the problems in the prior art are: the back-end functions of website data interaction systems are relatively weak, vulnerabilities are numerous, and operation is complicated.
Summary of the invention
In view of the problems in the prior art, the present invention provides a website data interaction system.
The invention is realized as follows. A website data interaction system includes:
a load balancer, a web1 local cache server, a web2 local cache server, a file server, a distributed memory server a, a distributed memory server b, a background server, a mysql task queue server, a mysql slave server, and a mysql master server;
the load balancer is connected by cable to the web1 local cache server and the web2 local cache server; the web1 local cache server is connected by cable to the distributed memory server b, the background server, the mysql task queue server, the mysql slave server, and the mysql master server; the web2 local cache server is connected by cable to the file server and the distributed memory server a.
The load balancer is a high-performance server and the entry point for requests. After the load balancer receives a request, it determines which application server is more idle and hands the request to that comparatively idle application server for processing. The web local cache servers cache data read from the database; when the data is updated, the cache is updated as well, reducing database interaction and greatly improving the efficiency of request handling. The cache servers are also distributed, so the corresponding cache server can be located according to the request, improving read efficiency. The mysql task queue solves the common slowness of the database server: with a task queue there is no stalling. The mysql master server and mysql slave server are configured so that, if an unpredictable error occurs on the mysql master server, the system can switch to the mysql slave server and continue to operate normally, greatly improving stability.
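The "hand the request to the more idle application server" behavior described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the class name, the use of in-flight request counts as the idleness metric, and the server names are all assumptions.

```python
class LoadBalancer:
    """Forwards each incoming request to the most idle application server."""

    def __init__(self, servers):
        # Map of server name -> number of in-flight requests.
        self.active = {name: 0 for name in servers}

    def dispatch(self, request):
        # Pick the server with the fewest in-flight requests ("most idle").
        target = min(self.active, key=self.active.get)
        self.active[target] += 1
        return target

    def complete(self, server):
        # Called when a server finishes handling a request.
        self.active[server] -= 1

lb = LoadBalancer(["web1", "web2"])
first = lb.dispatch("GET /index")   # both idle -> picks "web1"
second = lb.dispatch("GET /login")  # web1 now busy -> picks "web2"
```

A real deployment would track a richer load metric (CPU, memory, connection count), but the selection step is the same comparison over current loads.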
Another object of the present invention is to provide a load balancer control method:
Step 1: the load balancer listens for task requests sent by a user; a task request carries the user's communication demand quantity; the communication demand quantity indicates the number of tasks and/or connections the user needs to create;
Step 2: according to the current load information of each server and the communication demand quantity, the load balancer determines, from the server cluster, the servers allocated to the user and the amount of communication corresponding to each server;
according to the communication demand quantity and the maximum allocatable task quantity, the load balancer divides the communication demand quantity into one or more communication-quantity subsets; according to the current load information of each server, the load balancer selects from the server cluster, on the load-balancing principle, the server corresponding to each communication-quantity subset, where the amount of communication contained in a subset is less than or equal to the maximum allocatable task quantity;
Step 3: the load balancer sends the identifiers of the allocated servers and the amount of communication corresponding to each server to the user, so that the user carries out the communication service according to the allocated servers and their corresponding amounts of communication;
according to the current load parameter of each server and the tasks and/or connections already allocated to each server, the load balancer determines the load level of each server one by one and selects the server corresponding to each communication-quantity subset in order from the lightest load to the heaviest; according to the tasks and/or connections in the communication-quantity subset corresponding to a selected server, the load balancer updates that server's allocated tasks and/or connections in a preset task list;
Step 4: the load balancer obtains a task release request sent by a server; the task release request carries the identifier of that server;
Step 5: according to the server's identifier, the load balancer updates the allocated tasks and/or connections of the corresponding server in the preset task list.
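The five steps above can be sketched with in-memory Python structures. This is a hedged illustration under assumptions: the dictionary `loads` stands in for the preset task list, and the names `split_demand`, `allocate`, and `release` are invented for the sketch, not taken from the patent.

```python
def split_demand(demand, max_per_server):
    """Step 2: divide the communication demand quantity into subsets,
    each no larger than the maximum allocatable task quantity."""
    subsets = []
    while demand > 0:
        take = min(demand, max_per_server)
        subsets.append(take)
        demand -= take
    return subsets

def allocate(demand, loads, max_per_server):
    """Steps 2-3: assign each subset to the currently lightest-loaded
    server, recording the allocation in the task list (`loads`)."""
    allocation = []
    for amount in split_demand(demand, max_per_server):
        server = min(loads, key=loads.get)  # lightest load first
        allocation.append((server, amount))
        loads[server] += amount             # update the preset task list
    return allocation

def release(loads, server, amount):
    """Steps 4-5: on a task release request carrying the server's
    identifier, update that server's allocated task count."""
    loads[server] -= amount

loads = {"s1": 10, "s2": 0, "s3": 5}
plan = allocate(25, loads, max_per_server=10)
# 25 splits into subsets [10, 10, 5], assigned to s2, s3, then s1.
release(loads, "s2", 10)
```

Note how splitting the demand before assignment lets one large request be spread across several servers in a single exchange, which is exactly what reduces the user/load-balancer interaction count claimed in the advantages section.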
Another object of the present invention is to provide a local cache server control method, as follows:
Step A: the cache server receives multiple first request messages sent by multiple user devices; each first request message indicates the data required by one of the user devices and the request point for that data;
Step B: the cache server sends a second request message to the origin server, the second request message indicating each piece of data and each piece of data's request point;
Step C: the cache server selects one request point according to the uncached data and request points in the cache server;
Step D: the cache server sends a second request message to the origin server, the second request message indicating the uncached data and the selected request point; the cache server receives, from the origin server, the uncached data starting at the position corresponding to the request point indicated in the second request message;
Step E: the cache server splices the received uncached data; the cache server sends a third request message to the origin server, the third request message indicating the uncached data and that data's starting point;
Step F: the cache server receives the data, starting from the starting point, sent by the origin server; the cache server caches the spliced data;
Step G: according to the request points indicated by the first request messages sent by the multiple user devices, the cache server sends data to each user device from the position corresponding to the request point indicated in that device's first request message.
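The core of steps A-D is collapsing request points that fall into the same preset window into a single upstream request, as the advantages section explains. A minimal sketch of that selection, assuming a byte-offset interpretation of "request point" and an arbitrary window size of 100 (neither is specified in the patent):

```python
WINDOW = 100  # bytes per preset window (assumed for illustration)

def select_request_points(points, window=WINDOW):
    """Collapse request points in the same preset window to one
    representative (the smallest point), so the origin server is
    asked only once per window."""
    chosen = {}
    for p in sorted(points):
        chosen.setdefault(p // window, p)  # keep first point per window
    return sorted(chosen.values())

# Four user requests; 120 and 150 share window 1, 310 and 390 share window 3,
# so only two requests go upstream instead of four.
points = [150, 120, 310, 390]
upstream = select_request_points(points)  # -> [120, 310]
```

Because points in the same window are close in position, one fetch starting at the representative point can serve all of them (each device is then answered from its own offset, as in step G), cutting upstream bandwidth.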
Advantages and positive effects of the present invention: The present invention is provided with a load balancer. The difference from a system without one is that ordinary computers can serve as servers, whereas the latter needs highly configured servers; scaling the former out only requires adding a machine, while the latter requires shutting down to upgrade the configuration, so a load balancer is essential in the age of high concurrency and big data. Based on the user's communication demand quantity carried in the task request sent by the user, the load balancer determines from the server cluster the servers allocated to the user and the amount of communication corresponding to each server, so that the load balancer can monitor the user's task requests in real time and, upon monitoring a user's task request, allocate servers for multiple of the user's requested tasks at once. This ensures that the load balancer performs its load-balancing role when allocating servers to user tasks, and, when the user has many tasks to process and must exchange data with the load balancer, reduces the number of interactions between the user and the load balancer, guaranteeing the load balancer's performance. The local cache server receives the first request messages sent by multiple user devices, each first request message indicating the data required by a user device and the request point for that data. If it is determined that the data required by at least two of the user devices is identical and that identical data is not cached in the cache server, one request point is selected among the request points falling within each preset window, and a second request message indicating the uncached data and the selected request point is sent to the origin server. In this way, the cache server uses the preset window to avoid repeated requests for the same data with similar request points: because request points within the same preset window are close in position, they can be treated as requests for the same request point, so selecting one request point per preset window to send to the origin server reduces the bandwidth consumed on the upstream network between this cache server and the origin server, cutting upstream traffic and relieving network pressure. The back-end data interaction functions are strengthened, improving the security of data access.
Description of the drawings
Fig. 1 is a structural diagram of the website data interaction system provided by an embodiment of the present invention.
In the figure: 1, load balancer; 2, web1 local cache server; 3, web2 local cache server; 4, file server; 5, distributed memory server a; 6, distributed memory server b; 7, background server; 8, mysql task queue server; 9, mysql slave server; 10, mysql master server.
Detailed description of the embodiments
In order to further explain the content, features, and effects of the present invention, the following embodiments are given and described in detail in conjunction with the accompanying drawings.
The structure of the present invention is explained in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the website data interaction system provided by the present invention includes: a load balancer 1, a web1 local cache server 2, a web2 local cache server 3, a file server 4, a distributed memory server a 5, a distributed memory server b 6, a background server 7, a mysql task queue server 8, a mysql slave server 9, and a mysql master server 10.
The load balancer 1 is connected by cable to the web1 local cache server 2 and the web2 local cache server 3; the web1 local cache server 2 is connected by cable to the distributed memory server b 6, the background server 7, the mysql task queue server 8, the mysql slave server 9, and the mysql master server 10; the web2 local cache server 3 is connected by cable to the file server 4 and the distributed memory server a 5.
The load balancer control method provided by an embodiment of the present invention is as follows:
The load balancer listens for task requests sent by a user; a task request carries the user's communication demand quantity; the communication demand quantity indicates the number of tasks and/or connections the user needs to create.
According to the current load information of each server and the communication demand quantity, the load balancer determines, from the server cluster, the servers allocated to the user and the amount of communication corresponding to each server.
According to the communication demand quantity and the maximum allocatable task quantity, the load balancer divides the communication demand quantity into one or more communication-quantity subsets; according to the current load information of each server, the load balancer selects from the server cluster, on the load-balancing principle, the server corresponding to each communication-quantity subset, where the amount of communication contained in a subset is less than or equal to the maximum allocatable task quantity.
The load balancer sends the identifiers of the allocated servers and the amount of communication corresponding to each server to the user, so that the user carries out the communication service according to the allocated servers and their corresponding amounts of communication.
According to the current load parameter of each server and the tasks and/or connections already allocated to each server, the load balancer determines the load level of each server one by one and selects the server corresponding to each communication-quantity subset in order from the lightest load to the heaviest; according to the tasks and/or connections in the communication-quantity subset corresponding to a selected server, the load balancer updates that server's allocated tasks and/or connections in a preset task list.
The load balancer obtains a task release request sent by a server; the task release request carries the identifier of that server.
According to the server's identifier, the load balancer updates the allocated tasks and/or connections of the corresponding server in the preset task list.
The load balancer is a high-performance server and the entry point for requests. After the load balancer receives a request, it determines which application server is more idle and hands the request to that comparatively idle application server for processing. The web local cache servers cache data read from the database; when the data is updated, the cache is updated as well, reducing database interaction and greatly improving the efficiency of request handling. The cache servers are also distributed, so the corresponding cache server can be located according to the request, improving read efficiency. The mysql task queue solves the common slowness of the database server: with a task queue there is no stalling. The mysql master server and mysql slave server are configured so that, if an unpredictable error occurs on the mysql master server, the system can switch to the mysql slave server and continue to operate normally, greatly improving stability.
The local cache server control method provided by an embodiment of the present invention is as follows:
The cache server receives multiple first request messages sent by multiple user devices; each first request message indicates the data required by one of the user devices and the request point for that data.
The cache server sends a second request message to the origin server, the second request message indicating each piece of data and each piece of data's request point.
The cache server selects one request point according to the uncached data and request points in the cache server.
The cache server sends a second request message to the origin server, the second request message indicating the uncached data and the selected request point; the cache server receives, from the origin server, the uncached data starting at the position corresponding to the request point indicated in the second request message.
The cache server splices the received uncached data; the cache server sends a third request message to the origin server, the third request message indicating the uncached data and that data's starting point.
The cache server receives the data, starting from the starting point, sent by the origin server; the cache server caches the spliced data.
According to the request points indicated by the first request messages sent by the multiple user devices, the cache server sends data to each user device from the position corresponding to the request point indicated in that device's first request message.
The load balancer 1 of the present invention selects which server to access according to the volume of data-access requests. If the distributed memory server b 6 or the background server 7 is accessed, the distributed memory server b 6 or background server 7 schedules, through the web1 local cache server 2, the database data of the mysql task queue server 8, the mysql slave server 9, and the mysql master server 10, and the accessed data is stored in the web1 local cache server 2. If data inside the file server 4 or the distributed memory server a 5 is accessed, the accessed data is stored in the web2 local cache server 3.
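The routing rule in the paragraph above (which back-end component determines which local cache server stores the fetched data) can be expressed as a small lookup. The component names follow Fig. 1, but the snake_case identifiers and the dict-based router are illustrative assumptions, not the patent's implementation.

```python
# Back ends reached via the web1 local cache server (which also schedules
# the mysql task queue, slave, and master servers) vs. via web2.
BACKENDS_VIA_WEB1 = {"distributed_memory_b", "background_server"}
BACKENDS_VIA_WEB2 = {"file_server", "distributed_memory_a"}

def route(target):
    """Return the local cache server that stores data fetched from `target`."""
    if target in BACKENDS_VIA_WEB1:
        return "web1_local_cache"
    if target in BACKENDS_VIA_WEB2:
        return "web2_local_cache"
    raise ValueError(f"unknown backend: {target}")

assert route("background_server") == "web1_local_cache"
assert route("file_server") == "web2_local_cache"
```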
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Any simple modification, equivalent change, or alteration made to the above embodiments according to the technical essence of the present invention falls within the scope of the technical solution of the present invention.

Claims (3)

1. A load balancer control method, characterized in that the load balancer control method comprises:
Step 1: the load balancer listens for task requests sent by a user; a task request carries the user's communication demand quantity; the communication demand quantity indicates the number of tasks and/or connections the user needs to create;
Step 2: according to the current load information of each server and the communication demand quantity, the load balancer determines, from the server cluster, the servers allocated to the user and the amount of communication corresponding to each server;
according to the communication demand quantity and the maximum allocatable task quantity, the load balancer divides the communication demand quantity into one or more communication-quantity subsets; according to the current load information of each server, the load balancer selects from the server cluster, on the load-balancing principle, the server corresponding to each communication-quantity subset, where the amount of communication contained in a subset is less than or equal to the maximum allocatable task quantity;
Step 3: the load balancer sends the identifiers of the allocated servers and the amount of communication corresponding to each server to the user, so that the user carries out the communication service according to the allocated servers and their corresponding amounts of communication;
according to the current load parameter of each server and the tasks and/or connections already allocated to each server, the load balancer determines the load level of each server one by one and selects the server corresponding to each communication-quantity subset in order from the lightest load to the heaviest; according to the tasks and/or connections in the communication-quantity subset corresponding to a selected server, the load balancer updates that server's allocated tasks and/or connections in a preset task list;
Step 4: the load balancer obtains a task release request sent by a server; the task release request carries the identifier of that server;
Step 5: according to the server's identifier, the load balancer updates the allocated tasks and/or connections of the corresponding server in the preset task list.
2. A website data interaction system applying the load balancer control method of claim 1, characterized in that the website data interaction system comprises:
a load balancer, a web1 local cache server, a web2 local cache server, a file server, a distributed memory server a, a distributed memory server b, a background server, a mysql task queue server, a mysql slave server, and a mysql master server;
the load balancer is connected by cable to the web1 local cache server and the web2 local cache server; the web1 local cache server is connected by cable to the distributed memory server b, the background server, the mysql task queue server, the mysql slave server, and the mysql master server; the web2 local cache server is connected by cable to the file server and the distributed memory server a.
3. A local cache server control method for the website data interaction system of claim 2, characterized in that the local cache server control method is as follows:
Step A: the cache server receives multiple first request messages sent by multiple user devices; each first request message indicates the data required by one of the user devices and the request point for that data;
Step B: the cache server sends a second request message to the origin server, the second request message indicating each piece of data and each piece of data's request point;
Step C: the cache server selects one request point according to the uncached data and request points in the cache server;
Step D: the cache server sends a second request message to the origin server, the second request message indicating the uncached data and the selected request point; the cache server receives, from the origin server, the uncached data starting at the position corresponding to the request point indicated in the second request message;
Step E: the cache server splices the received uncached data; the cache server sends a third request message to the origin server, the third request message indicating the uncached data and that data's starting point;
Step F: the cache server receives the data, starting from the starting point, sent by the origin server; the cache server caches the spliced data;
Step G: according to the request points indicated by the first request messages sent by the multiple user devices, the cache server sends data to each user device from the position corresponding to the request point indicated in that device's first request message.
CN201810114538.3A 2018-02-05 2018-02-05 Website data interaction system and method Pending CN108366110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810114538.3A CN108366110A (en) 2018-02-05 2018-02-05 Website data interaction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810114538.3A CN108366110A (en) 2018-02-05 2018-02-05 Website data interaction system and method

Publications (1)

Publication Number Publication Date
CN108366110A true CN108366110A (en) 2018-08-03

Family

ID=63004466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810114538.3A Pending CN108366110A (en) Website data interaction system and method

Country Status (1)

Country Link
CN (1) CN108366110A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408751A (en) * 2018-09-27 2019-03-01 Tencent Technology (Chengdu) Co., Ltd. Data processing method, terminal, server and storage medium
CN109408751B (en) * 2018-09-27 2022-08-30 腾讯科技(成都)有限公司 Data processing method, terminal, server and storage medium
CN112118275A (en) * 2019-06-20 2020-12-22 北京车和家信息技术有限公司 Overload processing method, Internet of things platform and computer readable storage medium
CN112118275B (en) * 2019-06-20 2023-07-11 北京车和家信息技术有限公司 Overload processing method, internet of things platform and computer readable storage medium
CN113688158A (en) * 2021-09-07 2021-11-23 京东科技控股股份有限公司 Processing method, device, equipment, system and medium for business rule verification
CN116155909A (en) * 2023-04-24 2023-05-23 中诚华隆计算机技术有限公司 Method and system for load balancing by flow control chip

Similar Documents

Publication Publication Date Title
CN105979009B (en) A kind of increase load automatic balancing method for cloud application container
CN108366110A (en) A kind of website data interactive system and method
CN102369688B (en) Method for adjusting resources dynamically and scheduling device
CN110658794B (en) Manufacturing execution system
CN107534570A (en) Virtualize network function monitoring
CN110417842A (en) Fault handling method and device for gateway server
KR20000004988A (en) Method and apparatus for client managed flow control on a limited memorycomputer system
CN109783151B (en) Method and device for rule change
CN103051551A (en) Distributed system and automatic maintaining method for same
CN105933408A (en) Implementation method and device of Redis universal middleware
CN102394929A (en) Conversation-oriented cloud computing load balancing system and method therefor
CN107451853A (en) Method, apparatus, system and the storage medium that a kind of red packet distributes in real time
US8832215B2 (en) Load-balancing in replication engine of directory server
CN109376011A (en) The method and apparatus of resource are managed in virtualization system
CN110008131B (en) Method and device for managing area AB experiment based on algorithm
CN109697120A (en) Method, electronic equipment for application migration
CN106713378A (en) Method and system for realizing service provision by multiple application servers
CN107015972A (en) A kind of computer room business migration methods, devices and systems
CN101378329B (en) Distributed business operation support system and method for implementing distributed business
CN110244901A (en) Method for allocating tasks and device, distributed memory system
CN108234242A (en) A kind of method for testing pressure and device based on stream
US11652725B2 (en) Performance testing of a test application in a network-as-a-service environment
CN112988378A (en) Service processing method and device
CN105511914B (en) Using update method, device and system
CN110113176B (en) Information synchronization method and device for configuration server

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180803