CN102143218B - Web access cloud architecture and access method - Google Patents

Web access cloud architecture and access method

Info

Publication number
CN102143218B
CN102143218B CN201110025590.XA CN201110025590A CN102143218B CN 102143218 B CN102143218 B CN 102143218B CN 201110025590 A CN201110025590 A CN 201110025590A CN 102143218 B CN102143218 B CN 102143218B
Authority
CN
China
Prior art keywords
engine
tcp
offload
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110025590.XA
Other languages
Chinese (zh)
Other versions
CN102143218A (en)
Inventor
邬江兴
罗兴国
张兴明
祝永新
张铮
张帆
祝卫华
李弋
谢光伟
邓倩妮
庞建民
斯雪明
谢同飞
谈满堂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Digital Switch System Engineering Technology Research Center
Shanghai Redneurons Co Ltd
Original Assignee
Shanghai Redneurons Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Redneurons Co Ltd filed Critical Shanghai Redneurons Co Ltd
Priority to CN201110025590.XA priority Critical patent/CN102143218B/en
Publication of CN102143218A publication Critical patent/CN102143218A/en
Application granted granted Critical
Publication of CN102143218B publication Critical patent/CN102143218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to a Web access cloud architecture and an access method. The Web access cloud architecture comprises an IP (Internet protocol) packet classification engine IPC, a DoS protection engine ADE, an SSL (secure sockets layer)/TLS (transport layer security) engine, a TCP (transmission control protocol)/IP offload engine TOE, an HTTP (hypertext transfer protocol) offload engine HOE, a file system offload engine FOE, a remote file system offload engine RFOE, a power consumption management engine PEM, a content management engine CM, a crypto engine CE and a compression/decompression engine CDE, and also comprises control and storage components: a CPU (central processing unit), an on-chip bus and an on-chip memory. The on-chip bus connects the CPU, the on-chip memory, the power consumption management engine PEM, the IP packet classification engine IPC, the TCP/IP offload engine TOE, the HTTP offload engine HOE, the content management engine CM, the file system offload engine FOE and the SSL/TLS engine, and the CPU controls the components mounted on the on-chip bus, so that the processing efficiency and security of existing Web servers can be greatly improved while power consumption is reduced.

Description

Web access cloud architecture and access method
Technical field
The present invention relates to the field of cloud computing, specifically to the problem of Web cloud access in cloud computing, and in particular to a Web access cloud architecture and access method.
Background technology
Computing application models have evolved through the centralized architecture built around mainframe computers, the client/server distributed computing architecture built around the PC, the service-oriented architecture centered on virtualization technology, and the new frameworks based on Web 2.0 application characteristics. This evolution of application models, technical architectures and implementations is the historical background of the development of cloud computing.
Cloud computing is a direct translation of the English term "cloud computing". There is not yet a unified definition in the industry; large enterprises and research institutions have defined cloud computing from different perspectives and in different ways, and are actively developing cloud services. In 2009 Wikipedia defined cloud computing as a computing mode that provides dynamically scalable and usually virtualized resources over the Internet; users do not need to understand the details inside the cloud, possess expertise about it, or directly control the infrastructure. IBM regards cloud computing as a computing paradigm in which applications, data and IT resources are provided to users as services over the network. The Berkeley cloud computing white paper holds that cloud computing comprises the application services on the Internet and the hardware and software facilities in the data centers that provide these services.
Cloud computing describes both an emerging method of shared infrastructure and the applications and extended services built on that infrastructure. The "cloud" is a huge service network made up of parallel grids; it expands the computing capability of the cloud through virtualization technology so that each device delivers its maximum usefulness. Data processing and storage are completed by server clusters at the "cloud" end. These clusters are built from a large number of ordinary industry-standard servers and are managed by a large data processing center; the data center allocates computing resources according to the customers' needs and thereby achieves an effect comparable to a supercomputer.
The Internet is a logically unified computer network composed of wide area networks, local area networks and individual machines according to certain communication protocols. The development of the Internet is the direct driving force of cloud computing. Today the computing and storage capabilities of the devices connected to the Internet have been significantly improved, data resources grow exponentially, and the service resources on the Internet are increasingly abundant. The classical usage environment of the Internet, the World Wide Web (Web), is no longer a simple content platform but is developing toward providing more powerful and richer user interaction and experience. The Internet and the Web have become the indispensable basic environment for building, operating, maintaining and using all kinds of distributed application systems, and are developing into the largest computing platform of mankind so far.
At present, the communication speed of Ethernet systems is rising far faster than the processing speed of computer processors, so processing cannot keep up with the data traffic in the network, which creates an I/O bottleneck. Most existing systems implement TCP/IP processing in the operating system and its network stack, which consumes a large amount of host CPU resources, and the continuously increasing traffic load degrades the performance of the network stack: the processing speed of the TCP/IP stream is significantly lower than the speed of the network data flow. The working mode of a traditional network is shown in Figure 1. To relieve the pressure on the CPU, TOE (TCP/IP offload engine) technology arose. TOE technology extends the TCP/IP protocol stack and moves TCP/IP processing from the CPU to TOE hardware, which removes a large number of network I/O interrupts, repeated data copies and protocol processing from the CPU and greatly relieves its burden.
In the cloud computing context, the functions completed by the client are mainly Web access and Web display, while computing and storage come from the cloud (the client is even simpler than an NC). The server side mainly implements SaaS (software as a service), PaaS (platform as a service), IaaS (infrastructure as a service) and Web access. How to provide users with consistent, transparent and convenient cloud computing access becomes the key; the model provided by the present invention can deliver high-efficiency, low-power and low-cost Web cloud access.
Among domestic invention patents there is at present no patent directly addressing the Web access service for cloud computing. Related patents involve ordinary Web servers and caching technology for cloud computing.
The patent application with application number 200510120570 is entitled "an embedded mobile Web server". It is composed of a processor unit built from an ARM chip, a Flash memory and a network interface circuit. The processor unit is solidified with the system service program, and the Flash memory stores the user's Web page data. After the device accesses the network, the system program obtains the IP address dynamically assigned to it by the ISP and sends it to a domain name management center. The domain name management center is a domain name management server with a fixed IP address; it establishes "domain name - IP" mappings for the dynamic IP addresses of all such devices on the network and implements redirection. After entering the domain name of the target site, the viewer resolves it through the domain name management center and logs in to the target site. The server of that invention is implemented entirely in software and runs on an embedded processor unit, so its performance is lower than a software implementation on a high-performance processor, and much lower than the dedicated hardware implementation proposed in this patent.
The patent with application number 200610169248 is entitled "online system and method for providing Web service access". That invention provides an online system (100) and method for Web service access, providing access to the Web services offered by an online entity (110). The online system (100) comprises an online server (160) for collecting and storing online information (180) about the online entity (110) and providing the online information (180) to observers (170) of the online entity (110). The online server (160) further receives Web service call information (210) from the online entity (110); this Web service call information (210) provides access to one or more Web services of the online entity (110). The online server (160) offers the Web service call information (210) of the online entity (110), together with its online information (180), to the observer (170), who can then use it to call the Web services of the online entity (110). That patent proposes a software implementation of an online service and does not involve the dedicated Web service hardware proposed in this patent.
The patent with application number 200810043744 is entitled "caching system and method for a cloud computing system". That invention discloses a caching system for a cloud computing system, deployed on each node of the system, comprising: a service module that receives tasks sent by other nodes and records the kinds of tasks the local node can execute; a dispatch module that assigns the tasks received by the local node for local execution or forwards them to other nodes; a cache policy module that records the cache policies of various tasks, including whether to cache and the cache time; and a cache management module that manages the cache space of the local node, looks up tasks in the local cache, and saves tasks into the local cache. That patent relates to the caching system of the network nodes of cloud computing and does not involve the dedicated Web service hardware proposed in this patent.
Besides the above patents, there are also some commercial Web front-end acceleration products, which improve the performance of Web systems with techniques such as TCP multiplexing, load balancing, caching and SSL acceleration. Traditionally, Web service related functions were completed by multiple devices in series. The WinCom Switching Server (WCSS) changed this pattern by integrating Web service acceleration, load balancing, data center caching, firewall and optional SSL functions into a single device. This product does not involve the dedicated Web service hardware proposed in this patent.
Summary of the invention
The object of the invention is to overcome the deficiencies in the prior art and to propose a Web access cloud architecture and access method that realize a high-performance Internet model from the bottom up and enable cloud computing access to all kinds of general and dedicated computing and storage resources.
The object of the present invention is achieved as follows:
A Web access cloud architecture comprises an IP packet classification engine IPC, a DoS protection engine ADE, an SSL/TLS engine, a TCP/IP offload engine TOE, an HTTP offload engine HOE, a file system offload engine FOE, a remote file system offload engine RFOE, a power consumption management engine PEM, a content management engine CM, a crypto engine CE and a compression/decompression engine CDE, and also comprises control and storage components: a CPU, an on-chip bus and an on-chip memory, and is characterized in that:
The on-chip bus connects the CPU, the on-chip memory, the power consumption management engine PEM, the IP packet classification engine IPC, the TCP/IP offload engine TOE, the HTTP offload engine HOE, the content management engine CM, the file system offload engine FOE and the SSL/TLS engine; the CPU controls the other components mounted on the on-chip bus. One I/O port of the IP packet classification engine IPC is connected with an I/O port of the DoS protection engine ADE; another I/O port of the DoS protection engine ADE is connected with an I/O port of the TCP/IP offload engine TOE; another I/O port of the TCP/IP offload engine TOE is in turn connected with an I/O port of the crypto engine CE; another I/O port of the crypto engine CE is connected with an I/O port of the HTTP offload engine HOE; and another I/O port of the HTTP offload engine HOE is connected with an I/O port of the compression/decompression engine CDE;
The MAC component is connected through its input/output ports with the on-chip bus and the IP packet classification engine IPC respectively; the I/O ports between the IP packet classification engine IPC, the TCP/IP offload engine TOE, the HTTP offload engine HOE and the content management engine CM are connected in sequence; an I/O port of the content management engine CM is also connected with the remote file system offload engine RFOE; an output port of the content management engine CM is connected with the file system offload engine FOE; an output port of the file system offload engine FOE is connected with the HTTP offload engine HOE; and an input/output port of the file system offload engine FOE is connected with the local disk array.
In the structure of the described TCP/IP offload engine TOE, the on-chip buffer memory buffers messages received from the 10G network or to be sent; the IP parser state machine divides IP processing into a receiving part and a sending part, where the receiving part receives the original message from the 10G network interface and performs preliminary parsing, including checksum validation of the message and preprocessing of the length control information of each part of the message; the TCP timer provides a hardware timing reference for the TCP connection process; when a larger number of concurrent TCP connections needs to be supported, the Mem Ctrl expands the cache space with an external high-speed storage unit; the TCP parser state machine uses a high-performance synchronous state machine to implement a TCP/IP stack that meets the various standards; the Queue buffers the received packets that need to be passed to TCP processing after IP processing, or the outgoing packets that need further IP processing; the Queue Manager is controlled by the TCP parser state machine and assists in scheduling the large number of data sequences in the Queue. When implementing the TOE, the behavior of the server end can be controlled: the maximum segment size of TCP plus the TCP header is kept smaller than the payload of one IP packet, which avoids implementing fragmentation at the IP layer when sending; the behavior of clients and intermediate routers is not controllable, so reassembly of fragmented IP packets must be implemented when receiving data.
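As an illustration of the send-side sizing rule above, the following Python sketch checks that the MSS plus the TCP header fits within one IP payload so the IP layer never has to fragment outgoing segments. It is illustrative only, not the patent's hardware logic; the MTU value and minimal header sizes are assumptions.

MTU = 1500          # assumed link MTU in bytes
IP_HEADER = 20      # minimal IPv4 header, no options
TCP_HEADER = 20     # minimal TCP header, no options

def max_mss(mtu=MTU, ip_hdr=IP_HEADER, tcp_hdr=TCP_HEADER):
    """Largest MSS such that MSS + TCP header fits in one IP payload."""
    ip_payload = mtu - ip_hdr          # room left for TCP header + data
    return ip_payload - tcp_hdr        # room left for TCP data (the MSS)

def needs_ip_fragmentation(mss, mtu=MTU, ip_hdr=IP_HEADER, tcp_hdr=TCP_HEADER):
    """True if a segment of this MSS would force IP-layer fragmentation."""
    return mss + tcp_hdr > mtu - ip_hdr

if __name__ == "__main__":
    print(max_mss())                         # 1460 for the assumed values
    print(needs_ip_fragmentation(1460))      # False: fits in one IP packet
    print(needs_ip_fragmentation(1500))      # True: would be fragmented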
An access method for the described Web access cloud architecture, characterized in that:
The Web access cloud system interacts with the outside world through a 10G Ethernet interface. Data entering the Ethernet interface are first processed by the MAC component, which converts the Ethernet data into GMII-like signals at 125 MHz to simplify the subsequent logic design, and discards packets with CRC32 check errors that occurred during transmission. The Web server completes address resolution so that the user side can obtain the MAC address: the received frames are classified into ARP frames and IP frames; the output IP frames are the IP frames filtered by MAC, i.e. the MAC and frame type are filtered so that only frames whose destination MAC is the MAC address or the broadcast address of the Web server are kept; the ARP frames are parsed, ARP frames of non-IP protocols are removed, an ARP reply frame is constructed from the obtained source MAC address, and this reply frame is delivered to the ARP reply frame transmit queue;
The IP packet classification engine IPC first inspects the IP frames output by the MAC component; the carried information includes the source MAC/IP and whether the IP frame is fragmented, and the IP frames are classified into TCP, UDP, ICMP and IGMP frames;
The input of the DoS protection engine ADE comprises the IP packets classified by the IP packet classification engine IPC and the TCP packets processed by the TCP/IP offload engine TOE; the DoS protection engine ADE filters according to an access control list ACL or other policy to provide DoS-class protection, and outputs the legal IP packets or TCP packets that pass the filter;
The TCP/IP offload engine TOE implements part of the TCP/IP protocol stack; its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packet is fragmented it completes the reassembly of the IP packet so that a complete TCP packet is provided; finally, for port 80 it outputs the TCP control block to the HTTP offload engine HOE, and for port 443 it outputs the payload to the SSL/TLS engine;
In the system the HTTP offload engine HOE plays a dual role, performing both the client and server-end functions. As the server-side component it receives the TCP payload and TCP session number from the TCP/IP offload engine TOE, and through HTTP packet parsing and request confirmation outputs the domain name and URL information; as the client-side component it accepts HTTP data, restores the HTTP data into objects, and outputs the requested object and object name;
The power consumption management engine PEM collects the state information of the other processing components over the on-chip bus and updates the management policy, generates statistics for each processing component, completes the calculation of the dynamic power management policy, then decides the state adjustment mode of each processing component according to the object load and the power management policy, outputs the dynamic power adjustment control signals or execution instructions for the corresponding processing components, manages the global clock and local voltages, and stores the statistical logs of all processing components;
The crypto engine CE is responsible for the cryptographic computing tasks required in the SSL or TLS protocol, including certificate management, authentication, key exchange and data encryption/decryption; certificate management completes the import and export of certificates; the authentication component takes a certificate as input, verifies the certificate, and outputs the authentication result; the key exchange component takes the encrypted key as input, decrypts the encrypted key, and outputs the key; the data encryption/decryption component takes plaintext or ciphertext and a key as input, completes encryption or decryption, and outputs the corresponding ciphertext or plaintext;
The compression/decompression engine CDE takes the data to be compressed or decompressed as input, compresses or decompresses the input data, and outputs the compressed or decompressed data correspondingly;
The file system offload engine FOE supports high-speed, multi-transaction reading and writing of high-throughput data. When reading data, it accesses the local disk correctly according to the attribute information of the data to be fetched and the storage index table; when writing data, it stores the data in a concentrated or classified, compressed manner according to the format of the data to be written, completes the interaction with the disk controller, and updates the index table;
When the data to be fetched are not on the local disk array, the data management component submits the query request to the remote file system offload engine RFOE; the RFOE generates a new remote HTTP request to perform the remote query; the data are returned by a neighbor or a remote data center to the local RFOE, which on the one hand directly pushes the queried data to the original destination host of the request, and on the other hand updates these data to the local disk array through the local data management component.
The present invention has the following beneficial effects:
In the overall structure, computing is separated from communication; in the communication plane, data are separated from control, and dedicated components perform the data-plane processing; in the data processing plane, data undergo bidirectional pipelined and optionally hardware-hardened processing, selected flexibly according to the concrete system functions and performance requirements. This can greatly improve the processing efficiency and security of existing Web servers while reducing power consumption.
Brief description of the drawings
Fig. 1 is a schematic diagram of traditional Web access.
Fig. 2 is the structure diagram of the Web access cloud system.
Fig. 3 is the data flow diagram of the Web access cloud architecture.
Fig. 4 is the structure diagram of the TOE in the Web access cloud architecture.
Fig. 5 is the structure diagram of the HOE with polling-based processing.
Fig. 6 is the structure diagram of the HOE with the array-based approach.
Fig. 7 is the data flow diagram of the CM in the Web access cloud architecture.
Embodiment
As shown in Figure 2, the present invention comprises an IP packet classification engine IPC 202, a DoS protection engine ADE 208, an SSL/TLS engine 213, a TCP/IP offload engine TOE 203, an HTTP offload engine HOE 204, a file system offload engine FOE 206, a remote file system offload engine RFOE 207, a power consumption management engine PEM 212, a content management engine CM 205, a crypto engine CE 209 and a compression/decompression engine CDE 210, and also comprises control and storage components: a CPU 211, an on-chip bus 216 and an on-chip memory 214, and is characterized in that:
The on-chip bus 216 connects the CPU 211, the on-chip memory 214, the power consumption management engine PEM 212, the IP packet classification engine IPC 202, the TCP/IP offload engine TOE 203, the HTTP offload engine HOE 204, the content management engine CM 205, the file system offload engine FOE 206 and the SSL/TLS engine 213; the CPU 211 controls the other components mounted on the on-chip bus 216. One I/O port of the IP packet classification engine IPC 202 is connected with an I/O port of the DoS protection engine ADE 208; another I/O port of the DoS protection engine ADE 208 is connected with an I/O port of the TCP/IP offload engine TOE 203; another I/O port of the TCP/IP offload engine TOE 203 is in turn connected with an I/O port of the crypto engine CE 209; another I/O port of the crypto engine CE 209 is connected with an I/O port of the HTTP offload engine HOE 204; and another I/O port of the HTTP offload engine HOE 204 is connected with an I/O port of the compression/decompression engine CDE 210;
The MAC component 201 is connected through its input/output ports with the on-chip bus 216 and the IP packet classification engine IPC 202 respectively; the I/O ports between the IP packet classification engine IPC 202, the TCP/IP offload engine TOE 203, the HTTP offload engine HOE 204 and the content management engine CM 205 are connected in sequence; an I/O port of the content management engine CM 205 is also connected with the remote file system offload engine RFOE 207; an output port of the content management engine CM 205 is connected with the file system offload engine FOE 206; an output port of the file system offload engine FOE 206 is connected with the HTTP offload engine HOE 204; and an input/output port of the file system offload engine FOE 206 is connected with the local disk array 215.
In the structure of the described TCP/IP offload engine TOE 203, the on-chip buffer memory buffers messages received from the 10G network or to be sent; the IP parser state machine divides IP processing into a receiving part and a sending part, where the receiving part receives the original message from the 10G network interface and performs preliminary parsing, including checksum validation of the message and preprocessing of the length control information of each part of the message; the TCP timer provides a hardware timing reference for the TCP connection process; when a larger number of concurrent TCP connections needs to be supported, the Mem Ctrl expands the cache space with an external high-speed storage unit; the TCP parser state machine uses a high-performance synchronous state machine to implement a TCP/IP stack that meets the various standards; the Queue buffers the received packets that need to be passed to TCP processing after IP processing, or the outgoing packets that need further IP processing; the Queue Manager is controlled by the TCP parser state machine and assists in scheduling the large number of data sequences in the Queue. When implementing the TOE, the behavior of the server end can be controlled: the maximum segment size of TCP plus the TCP header is kept smaller than the payload of one IP packet, which avoids implementing fragmentation at the IP layer when sending; the behavior of clients and intermediate routers is not controllable, so reassembly of fragmented IP packets must be implemented when receiving data.
An access method for the described Web access cloud architecture, characterized in that:
The Web access cloud system interacts with the outside world through a 10G Ethernet interface. Data entering the Ethernet interface are first processed by the MAC component 201, which converts the Ethernet data into GMII-like signals at 125 MHz to simplify the subsequent logic design, and discards packets with CRC32 check errors that occurred during transmission. The Web server completes address resolution so that the user side can obtain the MAC address: the received frames are classified into ARP frames and IP frames; the output IP frames are the IP frames filtered by MAC, i.e. the MAC and frame type are filtered so that only frames whose destination MAC is the MAC address or the broadcast address of the Web server are kept; the ARP frames are parsed, ARP frames of non-IP protocols are removed, an ARP reply frame is constructed from the obtained source MAC address, and this reply frame is delivered to the ARP reply frame transmit queue;
The IP packet classification engine IPC 202 first inspects the IP frames output by the MAC component 201; the carried information includes the source MAC/IP and whether the IP frame is fragmented, and the IP frames are classified into TCP, UDP, ICMP and IGMP frames;
The input of the DoS protection engine ADE 208 comprises the IP packets classified by the IP packet classification engine IPC 202 and the TCP packets processed by the TCP/IP offload engine TOE 203; the DoS protection engine ADE 208 filters according to an access control list ACL or other policy to provide DoS-class protection, and outputs the legal IP packets or TCP packets that pass the filter;
The TCP/IP offload engine TOE 203 implements part of the TCP/IP protocol stack; its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packet is fragmented it completes the reassembly of the IP packet so that a complete TCP packet is provided; finally, for port 80 it outputs the TCP control block to the HTTP offload engine HOE 204, and for port 443 it outputs the payload to the SSL/TLS engine 213;
In the system the HTTP offload engine HOE 204 plays a dual role, performing both the client and server-end functions. As the server-side component it receives the TCP payload and TCP session number from the TCP/IP offload engine TOE 203, and through HTTP packet parsing and request confirmation outputs the domain name and URL information; as the client-side component it accepts HTTP data, restores the HTTP data into objects, and outputs the requested object and object name;
The power consumption management engine PEM 212 collects the state information of the other processing components over the on-chip bus 216 and updates the management policy, generates statistics for each processing component, completes the calculation of the dynamic power management policy, then decides the state adjustment mode of each processing component according to the object load and the power management policy, outputs the dynamic power adjustment control signals or execution instructions for the corresponding processing components, manages the global clock and local voltages, and stores the statistical logs of all processing components;
The crypto engine CE 209 is responsible for the cryptographic computing tasks required in the SSL or TLS protocol, including certificate management, authentication, key exchange and data encryption/decryption; certificate management completes the import and export of certificates; the authentication component takes a certificate as input, verifies the certificate, and outputs the authentication result; the key exchange component takes the encrypted key as input, decrypts the encrypted key, and outputs the key; the data encryption/decryption component takes plaintext or ciphertext and a key as input, completes encryption or decryption, and outputs the corresponding ciphertext or plaintext;
The compression/decompression engine CDE 210 takes the data to be compressed or decompressed as input, compresses or decompresses the input data, and outputs the compressed or decompressed data correspondingly;
The file system offload engine FOE 206 supports high-speed, multi-transaction reading and writing of high-throughput data. When reading data, it accesses the local disk correctly according to the attribute information of the data to be fetched and the storage index table; when writing data, it stores the data in a concentrated or classified, compressed manner according to the format of the data to be written, completes the interaction with the disk controller, and updates the index table;
When the data to be fetched are not on the local disk array, the data management component submits the query request to the remote file system offload engine RFOE 207; the RFOE generates a new remote HTTP request to perform the remote query; the data are returned by a neighbor or a remote data center to the local RFOE, which on the one hand directly pushes the queried data to the original destination host of the request, and on the other hand updates these data to the local disk array through the local data management component.
This architectural framework is composed of multiple dedicated engines; each engine function is performed directly in hardware, and all engines are uniformly mounted on the system CPU processor bus and accept unified control and management by the CPU. The engines are driven by events; within one engine, if there are several simultaneous requests, they are handled by polling. The complexity of the interconnection depends on the complexity of the events. To keep the messages passed between two engines simple enough, a message between engines can be something similar to an interrupt signal, which gives better extensibility; otherwise a message format must be defined and transferred over a more complex bus. For simplicity, the execution of a task can also be described by writing I/O registers and then triggered through an interrupt.
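A small behavioral sketch of this "write an I/O register, then trigger by interrupt" style of inter-engine signaling is given below in Python. The register names and the dispatch mechanism are assumptions made only for illustration; they are not the patent's bus protocol.

class Engine:
    """Toy model of a dedicated engine with memory-mapped task registers."""

    def __init__(self, name):
        self.name = name
        self.regs = {"TASK": 0, "ARG": 0}   # assumed I/O register names
        self.pending = []                   # simultaneous requests handled by polling

    def write_reg(self, reg, value):
        self.regs[reg] = value              # describe the task in a register

    def interrupt(self):
        # The interrupt-like event only says "something is ready";
        # the details were already written into the registers.
        self.pending.append((self.regs["TASK"], self.regs["ARG"]))

    def poll(self):
        # Multiple simultaneous requests are drained in polling order.
        while self.pending:
            task, arg = self.pending.pop(0)
            print(f"{self.name}: executing task {task} with arg {arg}")

if __name__ == "__main__":
    hoe = Engine("HOE")
    hoe.write_reg("TASK", 1)   # e.g. "parse HTTP request"
    hoe.write_reg("ARG", 42)   # e.g. TCP session number
    hoe.interrupt()
    hoe.poll()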
The Web access cloud architecture specifically comprises 11 dedicated components, as shown in Figure 2: the IP packet classification engine IPC, the DoS protection engine ADE, the SSL/TLS engine, the TCP/IP offload engine TOE, the HTTP offload engine HOE, the file system offload engine FOE, the remote file system offload engine RFOE, the power consumption management engine PEM, the content management engine CM, the crypto engine CE, and the compression/decompression engine CDE.
Among them, the IPC implements multiple IP stacks on the basis of a unique MAC and link layer and classifies and forwards the messages of multi-service data flows according to different service policies; the ADE filters according to ACL or TCL policies to provide DoS-class protection; the SSL/TLS processing module completes the key exchange and bridges TCP and HTTP to realize encryption and decryption of the data (for Web services, the SSL/TLS connection normally listens on port 443); the TOE is a method and device that implements the TCP/IP protocol stack with reconfigurable hardware, which removes a large number of network I/O interrupts, repeated data copies and protocol processing from the CPU and relieves its burden; the HOE bears a dual identity in the system structure, being both server end and client: as the server end it carries out the TCP session and parses and encapsulates HTTP, and as the client it uses the HTTP request data and URL in a session with the local data management component to determine whether the requested HTTP data are located on the local disk array or at a remote data center; the FOE and RFOE provide high-speed, multi-transaction read and write functions for high-throughput data; the PEM monitors the working state of each functional module, generates statistics, and implements power management of each engine according to the statistics; the CM completes the search and maintenance of data, realizing fast location of requested objects and state management of stored objects; the CE is responsible for the cryptographic computing tasks required in the SSL (TLS) protocol, mainly including certificate management, authentication, key exchange, data encryption/decryption, and so on; the CDE performs fast compression and restoration of the data of the service streams, trading computation for storage and communication.
In this structure the on-chip bus is used for data interaction between the CPU and the various on-chip peripherals or memory, and each peripheral is mounted on the on-chip bus through a unified interface. The on-chip bus has a high operating frequency, a wide data width, a large addressing space and a complete interrupt mechanism, and can be a multi-point bus shared by several on-chip peripherals. The system exchanges data with the outside world through the 10-gigabit Ethernet port, and the engines exchange data with each other through API interfaces.
To improve the flexibility of the system, in this structure each engine is internally built from a partially reconfigurable computing array. Viewed as a whole chip, all engines and the management CPU form a heterogeneous reconfigurable computing array, HRCA (Heterogeneous Reconfigurable Computing Array).
Power consumption control is a requirement of the whole system. To achieve this goal, each engine in the system provides the basic functions of performance monitoring and power management and provides an access interface to the PEM engine.
This architecture has the following features: in the overall structure, computing is separated from communication; in the communication plane, data are separated from control, and 9 dedicated components perform the data-plane processing; in the data processing plane, data undergo bidirectional pipelined and optionally hardware-hardened processing, selected flexibly according to the concrete system functions and performance requirements. This can greatly improve the processing efficiency and security of existing Web servers while reducing power consumption.
Fig. 2 is the structure diagram of the Web access cloud system. The system comprises 11 dedicated processing components: the IP packet classification engine IPC 202, the DoS protection engine ADE 208, the SSL/TLS engine 213, the TCP/IP offload engine TOE 203, the HTTP offload engine HOE 204, the file system offload engine FOE 206, the remote file system offload engine RFOE 207, the power consumption management engine PEM 212, the content management engine CM 205, the crypto engine CE 209 and the compression/decompression engine CDE 210. Besides the processing components there are also control and storage components such as the CPU 211, the on-chip bus 216 and the on-chip memory 214. The on-chip bus 216 connects the CPU 211, the on-chip memory, the PEM 212, and the main processing components IPC 202, TOE 203, HOE 204, CM 205, FOE 206 and the SSL/TLS engine 213; the CPU 211 controls the other components mounted on the bus; the ADE 208 is connected with the IPC 202 and the TOE 203, the CE 209 is connected with the TOE 203 and the HOE 204, and the CDE 210 is connected with the HOE 204.
The system interacts with the outside world through a 10G Ethernet interface. The data input at the Ethernet interface are first processed by the MAC component 201, which converts the Ethernet data into GMII-like signals at 125 MHz to simplify the subsequent logic design, and discards packets with CRC32 check errors that occurred during transmission. Because the Web server needs to complete address resolution so that the user side can obtain its MAC address, the received frames must be classified into ARP frames and IP frames. The output IP frames are the IP frames filtered by MAC: only frames whose destination MAC is the MAC address or the broadcast address of the Web server pass the MAC and frame-type filter. The ARP frames are parsed, ARP frames of non-IP protocols are removed, an ARP reply frame is constructed from the obtained source MAC address, and this reply frame is delivered to the ARP reply frame transmit queue.
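As a software illustration of the ARP reply construction described above, the sketch below builds a standard Ethernet ARP reply frame from the fields of a received request. It is a Python behavioral model of what the MAC component does in hardware; the addresses are example values only.

import struct

def build_arp_reply(server_mac: bytes, server_ip: bytes,
                    requester_mac: bytes, requester_ip: bytes) -> bytes:
    """Build an Ethernet II frame carrying an ARP reply (opcode 2) for IPv4."""
    eth_hdr = requester_mac + server_mac + b"\x08\x06"      # dst, src, type = ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)         # Ethernet, IPv4, reply
    arp += server_mac + server_ip                            # sender hw / proto addr
    arp += requester_mac + requester_ip                      # target hw / proto addr
    return eth_hdr + arp

if __name__ == "__main__":
    frame = build_arp_reply(
        server_mac=bytes.fromhex("020000000001"),
        server_ip=bytes([192, 168, 0, 1]),
        requester_mac=bytes.fromhex("020000000002"),
        requester_ip=bytes([192, 168, 0, 2]))
    print(len(frame), "bytes")   # 14-byte Ethernet header + 28-byte ARP payload = 42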
The IPC engine 202 first inspects the IP frames output by the MAC component 201; the carried information includes the source MAC/IP and whether the IP frame is fragmented, and the IP frames are classified into TCP frames, UDP, ICMP and IGMP frames. Because the frames coming from the user port are small, for example GET frames and ARP frames, this module does not reassemble IP packets.
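A minimal software sketch of this classification step is given below, assuming a raw IPv4 header as input. It is Python and illustrative only; the hardware engine is not software, and the field offsets follow the standard IPv4 layout.

import struct

# IANA protocol numbers for the four classes named above.
PROTO_NAMES = {6: "TCP", 17: "UDP", 1: "ICMP", 2: "IGMP"}

def classify_ipv4(packet: bytes):
    """Return (class, is_fragment, src_ip) for a raw IPv4 packet."""
    if len(packet) < 20:
        raise ValueError("truncated IPv4 header")
    ver_ihl, _, _, _, flags_frag, _, proto, _, src, _ = struct.unpack(
        "!BBHHHBBH4s4s", packet[:20])
    if ver_ihl >> 4 != 4:
        raise ValueError("not IPv4")
    # More-Fragments flag set or a non-zero fragment offset marks a fragment.
    is_fragment = bool(flags_frag & 0x2000) or bool(flags_frag & 0x1FFF)
    src_ip = ".".join(str(b) for b in src)
    return PROTO_NAMES.get(proto, "OTHER"), is_fragment, src_ip

if __name__ == "__main__":
    # 20-byte IPv4 header carrying TCP (protocol 6), no fragmentation.
    hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                      bytes([192, 168, 0, 2]), bytes([192, 168, 0, 1]))
    print(classify_ipv4(hdr))   # ('TCP', False, '192.168.0.2')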
The input of the ADE 208 comprises two parts: the IP packets classified by the IPC engine 202 and the TCP packets processed by the TOE 203. The ADE 208 filters according to the access control list ACL or other policies to provide DoS-class protection, and outputs the legal IP packets or TCP packets that pass the filter.
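A simplified sketch of such ACL-based filtering follows, assuming rules keyed on source address and destination port. The rule format and the example addresses are assumptions for illustration, not the patent's ACL definition.

from ipaddress import ip_address, ip_network

# Each assumed rule: (action, source network, destination port or None for any).
ACL = [
    ("deny",  ip_network("203.0.113.0/24"), None),   # blocked source range
    ("allow", ip_network("0.0.0.0/0"),      80),     # HTTP from anywhere
    ("allow", ip_network("0.0.0.0/0"),      443),    # HTTPS from anywhere
]

def acl_permits(src_ip: str, dst_port: int) -> bool:
    """Return True if the first matching rule allows the packet."""
    src = ip_address(src_ip)
    for action, net, port in ACL:
        if src in net and (port is None or port == dst_port):
            return action == "allow"
    return False   # default deny when no rule matches

if __name__ == "__main__":
    print(acl_permits("198.51.100.7", 80))    # True  (allowed HTTP)
    print(acl_permits("203.0.113.9", 443))    # False (denied source range)
    print(acl_permits("198.51.100.7", 22))    # False (no matching allow rule)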
The TOE 203 implements part of the TCP/IP protocol stack. Its input is the classified IP packets; it processes the IP and TCP headers, and if the processed IP packet is fragmented it completes the reassembly of the IP packet so that a complete TCP packet is provided; finally, for port 80 it outputs the TCB (TCP control block) to the HOE 204, and for port 443 it outputs the payload to the SSL/TLS engine 213.
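A minimal sketch of this port-based hand-off is shown below, assuming a reassembled TCP segment with a known destination port. The handler names hoe_handle and ssl_handle are placeholders standing in for the downstream engines.

def hoe_handle(tcb):
    print("HOE receives TCB for session", tcb["session"])

def ssl_handle(payload):
    print("SSL/TLS engine receives", len(payload), "encrypted bytes")

def toe_dispatch(dst_port: int, tcb: dict, payload: bytes):
    """Route reassembled TCP traffic the way the TOE output stage does."""
    if dst_port == 80:          # plain HTTP: pass the TCP control block to the HOE
        hoe_handle(tcb)
    elif dst_port == 443:       # HTTPS: pass the payload to the SSL/TLS engine
        ssl_handle(payload)
    else:                       # other ports are outside the Web access path
        print("port", dst_port, "not handled by the Web access pipeline")

if __name__ == "__main__":
    toe_dispatch(80, {"session": 7}, b"GET / HTTP/1.1\r\n\r\n")
    toe_dispatch(443, {"session": 8}, b"\x16\x03\x03...")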
In the system the HOE 204 plays a dual role, performing both the client and the server-end functions. As the server-side component it receives the TCP payload and the TCP session number from the TOE 203, and through HTTP packet parsing and request confirmation outputs the domain name and URL information; as the client-side component it accepts HTTP data, restores the HTTP data into objects, and outputs the requested object and object name.
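The server-side parsing step can be illustrated with a few lines of Python that extract the method, URL and Host header from a raw request. This is a behavioral sketch only; the HOE performs the parsing in hardware, and the example request is made up.

def parse_http_request(payload: bytes):
    """Extract method, domain name (Host header) and URL from an HTTP/1.x request."""
    head = payload.split(b"\r\n\r\n", 1)[0].decode("ascii", errors="replace")
    lines = head.split("\r\n")
    method, url, _version = lines[0].split(" ", 2)   # request line
    host = ""
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "host":
            host = value.strip()
            break
    return method, host, url

if __name__ == "__main__":
    req = (b"GET /index.html HTTP/1.1\r\n"
           b"Host: www.example.com\r\n"
           b"Connection: keep-alive\r\n\r\n")
    print(parse_http_request(req))   # ('GET', 'www.example.com', '/index.html')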
The PEM 212 collects the state information of the other dedicated processing components over the bus and updates the management policy, generates statistics for each processing component, completes the calculation of the dynamic power management policy, then decides the state adjustment mode of each processing component according to the state (load) of the object and the power management policy, outputs the dynamic power adjustment control signals or execution instructions for the corresponding processing components, manages the global clock and local voltages, and stores the statistical information (logs) of all processing components.
The CE 209 is responsible for the cryptographic computing tasks required in the SSL (TLS) protocol, mainly including certificate management, authentication, key exchange, data encryption/decryption, and so on. Certificate management completes the import (update and export) of certificates; the authentication component takes a certificate as input, verifies it, and outputs the authentication result; the key exchange component takes the encrypted key as input, decrypts it, and outputs the key; the data encryption/decryption component takes the plaintext/ciphertext and a key as input, completes the encryption/decryption processing, and outputs the corresponding ciphertext/plaintext.
The CDE 210 takes the data to be compressed or decompressed as input, compresses or decompresses the input data, and outputs the compressed or decompressed data correspondingly.
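By way of illustration, the behavior of such a compression/decompression stage can be modeled with standard DEFLATE compression in Python. The actual CDE is a hardware engine and its algorithm is not specified here, so zlib is only an assumed stand-in.

import zlib

def cde_compress(data: bytes, level: int = 6) -> bytes:
    """Model of the compression direction of the CDE."""
    return zlib.compress(data, level)

def cde_decompress(data: bytes) -> bytes:
    """Model of the decompression direction of the CDE."""
    return zlib.decompress(data)

if __name__ == "__main__":
    body = b"<html>" + b"cloud web access " * 100 + b"</html>"
    packed = cde_compress(body)
    assert cde_decompress(packed) == body
    print(len(body), "->", len(packed), "bytes")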
Fig. 3 is the data flow diagram of the Web access cloud architecture. The main work of a Web server is computation- or I/O-intensive; the data flow diagram mainly shows the processing of input and output data in the system, and the work of the PEM and the CPU is not shown. For simplicity, access to the HTTP session information and the TCP connection information is also not shown in the data flow diagram; this information is kept in the on-chip memory or in expanded memory. The different cases are explained separately below.
Case (a) describes the situation in which the requested data are obtained from disk:
1: A packet containing an HTTP request arrives at the host network interface.
2: The IP packet is handed to TOE.
3: The HTTP request is handed to HOE.
4: HOE parses out the requested resource and hands it to FOE.
5: FOE initiates a read request to the disk.
6: The data are returned to FOE.
7: FOE deposits the data into main memory.
8: FOE tells HOE the information needed to access the requested file in memory.
9: HOE generates the corresponding HTTP response header and hands the file information and header to TOE.
10: TOE requests the data of the specified length of the given file from the memory controller.
11: The specified data are returned to TOE.
12: For the first packet, TOE takes the data from memory and forms a TCP/IP packet together with the HTTP header; for the remaining packets, the data are used directly as the rest of the HTTP payload to form TCP/IP packets, which are sent to the MAC.
13: The packets are sent out on the network.
In case (b), because the file being accessed is already in memory, FOE does not need to read from the disk; instead, steps 5 and 6 obtain the file information from memory.
In case (c), HTTP submits a POST or PUT request; HOE parses the data out and then gives it to FOE to be stored on disk. It could of course be another medium, for example stored on another machine over the network.
1: A packet containing an HTTP request arrives at the host network interface.
2: The IP packet is handed to TOE.
3: The HTTP request is handed to HOE.
4: HOE parses the request as POST or PUT, then passes the uploaded file name (with path) and data to FOE.
5: The received data are deposited in memory.
1-5: These steps repeat until the file transfer is complete.
6-7: FOE repeats until the file has been stored on disk.
8: HOE is notified that the upload request is complete.
9: The response generated by HOE is handed to TOE.
The remaining steps are the same as in cases (a) and (b).
Fig. 4 is the structure diagram of the TOE in the Web access cloud architecture. The on-chip buffer memory buffers the messages received from the 10G network or to be sent. The IP parser state machine divides IP processing into a receiving part and a sending part: the receiving part receives the original message from the 10G network interface and performs preliminary parsing, including checksum validation of the message and preprocessing of the length control information of each part of the message. The TCP timer provides a hardware timing reference for the TCP connection process. When a larger number of concurrent TCP connections needs to be supported, the Mem Ctrl can expand the cache space with an external high-speed storage unit. The TCP parser state machine uses a high-performance synchronous state machine to implement a TCP/IP stack that meets the various standards. The Queue buffers the received packets that need to be passed to TCP processing after IP processing, or the outgoing packets that need further IP processing. The Queue Manager is controlled by the TCP parser state machine and assists in scheduling the large number of data sequences in the Queue. When implementing the TOE, the behavior of the server end can be controlled and limited: the MSS (maximum segment size) of TCP plus the TCP header is kept smaller than the payload of one IP packet, which avoids implementing fragmentation at the IP layer when sending. However, the behavior of clients and intermediate routers is not controllable, so reassembly of fragmented IP packets must be implemented when receiving data.
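The receive-side reassembly requirement can be illustrated with the following simplified Python model, which collects IPv4 fragments keyed by (source, destination, protocol, identification) and reassembles them once the last fragment has arrived. It is a sketch under simplifying assumptions, for example no overlap handling or timeouts, unlike a real TOE.

def reassemble(fragments_store, key, frag_offset, more_fragments, payload):
    """Insert one fragment; return the full payload when complete, else None.

    key is the (src, dst, protocol, identification) tuple;
    frag_offset is the offset in bytes (8-byte units already multiplied out).
    """
    entry = fragments_store.setdefault(key, {"parts": {}, "total": None})
    entry["parts"][frag_offset] = payload
    if not more_fragments:                       # last fragment fixes total length
        entry["total"] = frag_offset + len(payload)
    if entry["total"] is not None:
        received = sum(len(p) for p in entry["parts"].values())
        if received == entry["total"]:           # no gaps remain (no-overlap assumption)
            data = b"".join(p for _, p in sorted(entry["parts"].items()))
            del fragments_store[key]
            return data
    return None

if __name__ == "__main__":
    store, key = {}, ("192.0.2.1", "192.0.2.2", 6, 0x1234)
    assert reassemble(store, key, 0, True, b"A" * 8) is None
    assert reassemble(store, key, 8, False, b"B" * 4) == b"A" * 8 + b"B" * 4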
Considering the characteristics of the HTTP protocol, and the fact that it also plays the role of a Web crawler, the HOE adopts a flexible programmable implementation. One scheme is to use a single PE of sufficient processing capability in a polling manner, as shown in Figure 5; the other scheme is an array composed of multiple simple PEs, as shown in Figure 6. Here a PE (processing element) is a general-purpose processor with enhanced characteristics, for example for string comparison, because the parsing of the HTTP protocol is mainly string manipulation. The PE executes a program written in advance and stored in ROM.
In Figure 5, a session can only process one request at a time; when all the requests of a session have been processed, the session is removed from the polling queue, and new requests are added to the polling queue before the requests already in the queue are started again. For the processing of a specific request, the requested resource, including services and files, is parsed out of the HTTP header, the requested resource is obtained, and a response is then generated according to the result. For the processing of a specific response, the header is generated according to the type of response and handed to the next processing component, i.e. forwarded to TOE.
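The single-PE polling scheme can be modeled as a round-robin loop over active sessions, one request per session per visit. This is a Python sketch; the session and request structures are invented purely for illustration.

from collections import deque

def poll_sessions(sessions):
    """Round-robin over sessions, handling one pending request per visit."""
    queue = deque(sessions)
    while queue:
        session = queue.popleft()
        if session["requests"]:
            request = session["requests"].pop(0)       # one request per turn
            print(f"session {session['id']}: handled {request}")
        if session["requests"]:
            queue.append(session)                      # still has work: re-queue
        else:
            print(f"session {session['id']}: finished, removed from polling queue")

if __name__ == "__main__":
    poll_sessions([
        {"id": 1, "requests": ["GET /index.html", "GET /logo.png"]},
        {"id": 2, "requests": ["GET /style.css"]},
    ])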
Figure 6 considers dynamic Web pages and extensions. As the tasks become more complicated, a single PE may become too complex or its processing capability insufficient, and the energy consumption may also become too high. From the viewpoint of energy-efficiency management, an array composed of multiple PEs is therefore considered to replace the original single PE.
The file system offload engine FOE 206 supports high-speed, multi-transaction reading and writing of high-throughput data. When reading data, it accesses the local disk correctly according to the attribute information of the data to be fetched and the storage index table; when writing data, it stores the data in a concentrated or classified, compressed manner according to the format of the data to be written, completes the interaction with the disk controller, and updates the index table.
When the data to be fetched are not on the local disk array, the data management component submits the query request to the remote file system offload engine RFOE 207; the RFOE generates a new remote HTTP request to perform the remote query; the data are returned by a neighbor or a remote data center to the local RFOE, which on the one hand directly pushes the queried data to the original destination host of the request, and on the other hand updates these data to the local disk array through the local data management component.
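The following Python sketch models this local-miss path under simple assumptions: an HTTP GET to an assumed neighbor URL, and a dictionary standing in for the local disk array. Names such as fetch_remote and lookup are illustrative, not the patent's interfaces, and the example URL must be replaced with a reachable resource to run.

import urllib.request

LOCAL_STORE = {}   # stand-in for the local disk array: url -> content

def fetch_remote(url: str, timeout: float = 5.0) -> bytes:
    """RFOE step: issue a new HTTP request toward a neighbor or remote data center."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

def lookup(url: str, push_to_host) -> bytes:
    """CM/FOE/RFOE cooperation on a read: serve locally, else fetch, push, and cache."""
    if url in LOCAL_STORE:               # hit on the local disk array
        return LOCAL_STORE[url]
    data = fetch_remote(url)             # miss: remote query via RFOE
    push_to_host(data)                   # push directly to the original requester
    LOCAL_STORE[url] = data              # update the local disk array
    return data

if __name__ == "__main__":
    # Assumed example URL; replace with a reachable resource when running.
    content = lookup("http://example.com/", push_to_host=lambda d: None)
    print(len(content), "bytes cached locally")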
Fig. 7 is the data flow diagram of the CM in the Web access cloud architecture. The CM completes the management and search of data, realizing fast location of requested objects and state management of stored objects. Its input mainly comprises policy information and the URL of the requested object. The policies mainly cover: the content to manage and its validity period (for example, dynamic and static content, and text, pictures, audio and video all have different requirements); the content update rule, active or passive; and the capacity limit, i.e. how much data needs to be cached. The output mainly comprises two parts: instructions to the FOE, such as operations on objects (storage, deletion and path); and instructions to the RFOE, such as the destination address and the URL of the object. The CM workflow is as follows (a behavioral sketch is given after the list):
1. At system initialization, analyze the configuration policy and set the restriction conditions; while the system is running, adjust the restriction conditions dynamically.
2. Maintain the locally stored objects:
When a new object arrives, decide according to the policy whether to store it locally and where to store it;
According to the life-cycle policy of objects, invalidate the objects whose life cycle exceeds the conditions.
3. Maintain the database of locally stored objects:
When a new object arrives, add a record and establish a mapping to the storage path;
When an object is invalidated, delete the record.
4. Search for the requested object according to the URL of the request:
If it is local, give the storage path of the object;
If it is not local, give, according to the policy, the destination address and URL for fetching it from the remote end.
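The sketch below models steps 3 and 4 of this workflow with a simple in-memory object database in Python; the table layout and the remote peer address are assumptions made only for illustration.

# Assumed object database of the CM: url -> local storage path.
OBJECT_DB = {
    "http://www.example.com/index.html": "/disk0/objects/0001.html",
}
REMOTE_PEER = "10.0.0.2"   # assumed neighbor / remote data center address

def cm_store(url: str, path: str):
    """Step 3: record a newly arrived object and its storage path."""
    OBJECT_DB[url] = path

def cm_locate(url: str):
    """Step 4: return an instruction for FOE (local hit) or RFOE (remote fetch)."""
    path = OBJECT_DB.get(url)
    if path is not None:
        return {"engine": "FOE", "operation": "read", "path": path}
    return {"engine": "RFOE", "destination": REMOTE_PEER, "url": url}

if __name__ == "__main__":
    print(cm_locate("http://www.example.com/index.html"))  # served by FOE
    print(cm_locate("http://www.example.com/new.png"))     # forwarded to RFOE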

Claims (2)

1. A Web access cloud architecture, comprising an IP packet classification engine IPC (202), a DoS protection engine ADE (208), an SSL/TLS engine (213), a TCP/IP offload engine TOE (203), an HTTP offload engine HOE (204), a file system offload engine FOE (206), a remote file system offload engine RFOE (207), a power consumption management engine PEM (212), a content management engine CM (205), a crypto engine CE (209) and a compression/decompression engine CDE (210), and also comprising control and storage components: a CPU (211), an on-chip bus (216), an on-chip memory (214), a MAC component (201) and a local disk array (215), characterized in that:
the on-chip bus (216) connects the CPU (211), the on-chip memory (214), the power consumption management engine PEM (212), the IP packet classification engine IPC (202), the TCP/IP offload engine TOE (203), the HTTP offload engine HOE (204), the content management engine CM (205), the file system offload engine FOE (206) and the SSL/TLS engine (213); the CPU (211) controls the other components mounted on the on-chip bus (216); an I/O port of the IP packet classification engine IPC (202) is also connected with an I/O port of the DoS protection engine ADE (208); another I/O port of the DoS protection engine ADE (208) is connected with an I/O port of the TCP/IP offload engine TOE (203); another I/O port of the TCP/IP offload engine TOE (203) is in turn connected with an I/O port of the crypto engine CE (209); another I/O port of the crypto engine CE (209) is connected with an I/O port of the HTTP offload engine HOE (204); and another I/O port of the HTTP offload engine HOE (204) is connected with an I/O port of the compression/decompression engine CDE (210);
the MAC component (201) is connected through its input/output ports with the on-chip bus (216) and the IP packet classification engine IPC (202) respectively; the I/O ports between the IP packet classification engine IPC (202), the TCP/IP offload engine TOE (203), the HTTP offload engine HOE (204) and the content management engine CM (205) are connected in sequence; an I/O port of the content management engine CM (205) is also connected with the remote file system offload engine RFOE (207); an output port of the content management engine CM (205) is connected with the file system offload engine FOE (206); an output port of the file system offload engine FOE (206) is connected with the HTTP offload engine HOE (204); and an input/output port of the file system offload engine FOE (206) is connected with the local disk array (215).
2. The Web access cloud architecture according to claim 1, characterized in that: in the structure of the described TCP/IP offload engine TOE (203), the on-chip buffer memory buffers messages received from the 10G network or to be sent; the IP parser state machine divides IP processing into a receiving part and a sending part, where the receiving part receives the original message from the 10G network interface and performs preliminary parsing, including checksum validation of the message and preprocessing of the length control information of each part of the message; the TCP timer provides a hardware timing reference for the TCP connection process; when a larger number of concurrent TCP connections needs to be supported, the Mem Ctrl expands the cache space with an external high-speed storage unit; the TCP parser state machine uses a high-performance synchronous state machine to implement a TCP/IP stack that meets the various standards; the Queue buffers the received packets that need to be passed to TCP processing after IP processing, or the outgoing packets that need further IP processing; the Queue Manager is controlled by the TCP parser state machine and assists in scheduling the large number of data sequences in the Queue; when implementing the TOE, the behavior of the server end can be controlled: the maximum segment size of TCP plus the TCP header is smaller than the payload of one IP packet, which avoids implementing fragmentation at the IP layer when sending; the behavior of clients and intermediate routers is not controllable, so reassembly of fragmented IP packets is implemented when receiving data.
CN201110025590.XA 2011-01-24 2011-01-24 Web access cloud architecture and access method Active CN102143218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110025590.XA CN102143218B (en) 2011-01-24 2011-01-24 Web access cloud architecture and access method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110025590.XA CN102143218B (en) 2011-01-24 2011-01-24 Web access cloud architecture and access method

Publications (2)

Publication Number Publication Date
CN102143218A CN102143218A (en) 2011-08-03
CN102143218B true CN102143218B (en) 2014-07-02

Family

ID=44410436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110025590.XA Active CN102143218B (en) 2011-01-24 2011-01-24 Web access cloud architecture and access method

Country Status (1)

Country Link
CN (1) CN102143218B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103546424A (en) * 2012-07-10 2014-01-29 华为技术有限公司 TCP (transmission control protocol) data transmission method and TCP unloading engine and system
CN102821000B (en) * 2012-09-14 2015-12-09 乐视致新电子科技(天津)有限公司 Improve the method for usability of PaaS platform
CN104883335B (en) * 2014-02-27 2017-12-01 王磊 A kind of devices at full hardware TCP protocol stack realizes system
CN104079624A (en) * 2014-05-09 2014-10-01 国云科技股份有限公司 Message access layer framework based on service and implementing method thereof
CN109714302B (en) * 2017-10-25 2022-06-14 阿里巴巴集团控股有限公司 Method, device and system for unloading algorithm
CN108234662A (en) * 2018-01-09 2018-06-29 江苏徐工信息技术股份有限公司 A kind of secure cloud storage method with active dynamic key distribution mechanisms
CN108881425B (en) * 2018-06-07 2020-12-25 中国科学技术大学 Data packet processing method and system
CN111010410B (en) * 2020-03-09 2020-06-16 南京红阵网络安全技术研究院有限公司 Mimicry defense system based on certificate identity authentication and certificate signing and issuing method
CN111726361B (en) * 2020-06-19 2022-02-22 西安微电子技术研究所 Ethernet communication protocol stack system and implementation method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1910869A (en) * 2003-12-05 2007-02-07 艾拉克瑞技术公司 TCP/IP offload device with reduced sequential processing
CN101883103A (en) * 2009-04-15 2010-11-10 埃森哲环球服务有限公司 The method and system of the client-side extensions of Web server gang fight structure in the cloud data center

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101179554B1 (en) * 2009-03-26 2012-09-05 한국전자통신연구원 Mobile device adopting mobile cloud platform

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1910869A (en) * 2003-12-05 2007-02-07 艾拉克瑞技术公司 TCP/IP offload device with reduced sequential processing
CN101883103A (en) * 2009-04-15 2010-11-10 埃森哲环球服务有限公司 The method and system of the client-side extensions of Web server gang fight structure in the cloud data center

Also Published As

Publication number Publication date
CN102143218A (en) 2011-08-03

Similar Documents

Publication Publication Date Title
CN102143218B (en) Web access cloud architecture and access method
US10630654B2 (en) Hardware-accelerated secure communication management
CN103765851B (en) The system and method redirected for the transparent layer 2 to any service
CN107046542A A method for realizing consensus verification using hardware at the network level
CN104054316B (en) Systems and methods for conducting load balancing on SMS center and building virtual private network
CN110915172A (en) Access node for a data center
US20230421627A1 (en) Technologies for accelerated http processing with hardware acceleration
US11843527B2 (en) Real-time scalable virtual session and network analytics
CN104205080A (en) Offloading packet processing for networking device virtualization
Wang et al. SDUDP: A reliable UDP-Based transmission protocol over SDN
CN104054067A (en) Frameworks and interfaces for offload device-based packet processing
US20100325263A1 (en) Systems and methods for statistics exchange between cores for load balancing
WO2023216424A1 (en) Data link service processing system and method for networked encrypted transmission
WO2022032984A1 (en) Mqtt protocol simulation method and simulation device
Chen et al. Mp-rdma: enabling rdma with multi-path transport in datacenters
CN112631788A (en) Data transmission method and data transmission server
Diab et al. Orca: Server-assisted multicast for datacenter networks
Kissel et al. The extensible session protocol: A protocol for future internet architectures
Morishima et al. Network transparent fog-based IoT platform for industrial IoT
Nikitinskiy et al. A stateless transport protocol in software defined networks
US11570257B1 (en) Communication protocol, and a method thereof for accelerating artificial intelligence processing tasks
Diamond et al. SECURING INFINIBAND TRAFFIC WITH BLUEFIELD-2 DATA PROCESSING UNITS
Farhat Increasing DoS-Resilience for Cross-Protocol Proxies
Yang et al. SmartGate: Accelerate Cloud Gateway with SmartNIC
Xia et al. μDC²: unified data collection for data centers

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171010

Address after: 201112 room 1588, building 3A, No. 501, Union Road, Shanghai, Minhang District

Co-patentee after: National Digital Switch System Engineering Technology Research Center

Patentee after: Shanghai RedNeurons Information Technology Co., Ltd.

Address before: 201112 3A business building, United Airlines road 1588, Shanghai, Minhang District

Patentee before: Shanghai RedNeurons Information Technology Co., Ltd.

TR01 Transfer of patent right