CN101334742B - Java EE applications server parallel processing method - Google Patents

Java EE applications server parallel processing method

Info

Publication number
CN101334742B
CN101334742B CN2008101178203A CN200810117820A CN 101334742 B
Authority
CN
China
Prior art keywords
processing
request
event
shared resource
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008101178203A
Other languages
Chinese (zh)
Other versions
CN101334742A (en)
Inventor
李洋
张文博
钟华
魏峻
黄涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jialian Agel Ecommerce Ltd
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN2008101178203A priority Critical patent/CN101334742B/en
Publication of CN101334742A publication Critical patent/CN101334742A/en
Application granted granted Critical
Publication of CN101334742B publication Critical patent/CN101334742B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Stored Programmes (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a concurrent processing method for a Java EE application server, belonging to the field of software technology. The application server comprises one or more request processing units, and each request event is processed in turn by one or more of them. Before processing a request event, each request processing unit checks whether an idle shared resource required for processing the event exists; if so, a thread is allocated to the current request processing unit and the request event is processed; if not, no thread is allocated to the current request processing unit, which waits until the required idle shared resource becomes available. Compared with the prior art, the method reduces the thread blocking caused by contention for shared resources, improves the concurrent processing capability of the Java EE application server, makes tuning and performance analysis more convenient, and facilitates the location of performance bottlenecks.

Description

A concurrent processing method for a Java EE application server
Technical field
The present invention relates to a concurrent processing method for a Java EE application server and belongs to the field of software technology.
Background technology
Java EE (Java Enterprise Edition) is a standard platform for Java-based distributed applications proposed by Sun Microsystems. By providing the various services required by an enterprise computing environment, it enables component-based distributed applications deployed on the Java EE platform to achieve high availability, security, scalability, and reliability, and it is currently the most widely used specification for building Web-oriented application systems. It provides a series of codes and standards for the development, deployment, operation, and management of Web applications.
A Java EE application server supports a layered architecture through containers, which provide the runtime environment for Java EE application components. Fig. 1 shows the basic structure of a Web application server based on the Java EE platform. The Web container encapsulates the Web server function and the presentation-layer logic, providing runtime support for Java EE presentation-layer components (such as Servlets). The Enterprise Java Bean container (Enterprise Java Bean Container) encapsulates the business-layer function, providing runtime support for Java EE business-layer components (EJBs). In addition, the Java EE application server provides a series of underlying services (such as the Java Message Service, the Java Connector Architecture, and the Java database connection service) that supply low-level support to the Web container and the EJB container, allowing them to access enterprise databases and enterprise legacy systems and to interact with other middleware (such as message-oriented middleware).
A Java EE application server must serve many enterprise clients simultaneously; this concurrent processing capability is realized by its concurrency model, which manages the thread resources. The concurrency model currently adopted by Java EE application servers is mainly the thread-pool model: it allocates one thread to each client request and lets that thread complete the processing, which performs well under light load.
However, besides thread resources, a Java EE application server also contains a large number of shared resources (such as database connections and Java message queues) that are equally necessary for completing client requests. When the Java EE application server faces heavy load, a shortage of shared resources causes thread resources to block and wait: processing of a client request can continue only after the corresponding shared resource is obtained, and during the wait the thread is effectively in a blocked state. As the load increases, large numbers of processing threads block, the Java EE application server enters a saturated state and can no longer handle subsequent requests, even requests that do not need the shared resource causing the blocking. This model therefore uses threads inefficiently under heavy load, lowering the efficiency of the whole application server.
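The blocking described above can be sketched in a few lines. This is a hypothetical illustration, not code from the patent: a fixed thread pool with more worker threads than shared resources (modeled here as `Semaphore` permits), so surplus workers sit blocked inside `acquire` instead of doing useful work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the thread-pool model's weakness: every worker
// thread must hold a shared resource (e.g. a DB connection) to make
// progress, so with 8 threads and only 2 resources, up to 6 threads are
// blocked at any moment.
public class ThreadPoolBlockingSketch {
    static final Semaphore dbConnections = new Semaphore(2); // 2 shared resources

    static String handleRequest(int id) {
        dbConnections.acquireUninterruptibly(); // worker blocks here under load
        try {
            return "handled-" + id;             // work that needs the resource
        } finally {
            dbConnections.release();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8); // 8 worker threads
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.submit(() -> handleRequest(id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```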
Using the monitored usage of shared resources to guide the management of thread resources can effectively reduce such thread blocking, improve the utilization of thread resources, and thereby improve the concurrent processing capability of a Java EE application server.
Summary of the invention
In view of the problems and shortcomings of the existing Java EE application server concurrency model described above, the purpose of the invention is to provide a shared-resource-aware concurrent processing method for Java EE application servers that improves their concurrent processing capability, enabling them to serve more enterprise clients simultaneously. In the method of the invention, a client request is packaged as a request event and processed in turn by several request processing units (Request Processing Units, RPUs); each request processing unit completes part of the request processing and allocates thread resources to the client requests to be processed according to the usage of shared resources.
The method of the invention mainly comprises the following elements: request events, resource contexts, and request processing units (as shown in Fig. 2). These elements are described in detail below:
(1) Request event
In the method of the invention, a request event is the internal representation of a client request and the object processed by the request processing units. In general, a request event contains all the information of the client request. For example, in a Web container implemented with the method of the invention, a request event contains the URL of the client request, the request parameter values, a reference to the internal session object, and so on.
(2) Resource context
The resource context defines the shared-resource requirements of a request processing unit and at the same time stipulates certain usage limits on shared resources; it is the basis on which the request processing unit allocates processing resources. After a request event arrives at a request processing unit, if an idle shared resource exists in the system and the shared resources used by the current unit do not exceed the predefined resource limit, the request processing unit allocates a processing resource to the request event and executes the request-processing logic.
For a Java EE application server, application components are developed by Web application developers, and the Web container cannot know their shared-resource requirements in advance. The definition of the resource context therefore requires the participation of the Web application developer and is deployed to the Web container together with the application component by means of a configuration file. An example of a resource context is given below; it indicates that the request processing unit Search needs resources of type AppConnection, with a maximum usage of 15.
<ResourceContext>
  <RPU>Search</RPU>
  <Resources>
    <Resource name="AppConnection" max="15"/>
  </Resources>
</ResourceContext>
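The admission check that such a resource context enables can be sketched as follows. This is a minimal illustration, not the patent's actual classes: a counting semaphore tracks idle instances of the resource, and a request event is admitted only if one can be reserved without blocking.

```java
import java.util.concurrent.Semaphore;

// Sketch of a resource-context-driven admission check: the unit dispatches
// a request event only when an idle shared resource exists and usage stays
// within the configured maximum (max="15" in the example above).
public class ResourceContextSketch {
    private final String resourceName;
    private final Semaphore idle; // counts idle shared resources

    public ResourceContextSketch(String resourceName, int max) {
        this.resourceName = resourceName;
        this.idle = new Semaphore(max);
    }

    /** Non-blocking admission check: true iff an idle resource was reserved. */
    public boolean tryAdmit() {
        return idle.tryAcquire();
    }

    /** Return the resource after the request event has been processed. */
    public void release() {
        idle.release();
    }

    public String getResourceName() {
        return resourceName;
    }
}
```

Because `tryAdmit` never blocks, a scheduler using it can leave over-capacity events waiting in the queue rather than parking a worker thread, which is the behavior the method aims for.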
(3) Request processing unit
The request processing unit is the core component of the method of the invention. It encapsulates a series of request processing steps as a request-processing task, and it allocates processing resources to request-processing tasks according to the usage of shared resources in order to complete the request-processing logic. The components making up a request processing unit are described in detail first, followed by its operation logic.
● Component structure
The components of a request processing unit are shown in Fig. 3. A request processing unit consists of an input event queue (Input Event Queue), a scheduler (Scheduler), a resource manager (Resource Manager), an event handler (Event Handler), a task manager (Task Manager), and a dispatcher (Dispatcher). The concrete function of each component is introduced in detail below.
(1) Input event queue (Input Event Queue).
The input event queue stores all the request events that the current request processing unit needs to handle. Inside a request processing unit, request events may be processed in batches; this lets the event handler process a batch of events in a thread that has already obtained the shared resource, reducing thread context switches.
(2) Event handler (Event Handler).
Each request processing unit contains one event handler, which encapsulates one or more steps of the current request-handling flow and usually corresponds to the processing logic of a request processing component, such as parsing an HTTP request into an internal request object.
(3) Resource manager (Resource Manager).
The resource manager manages the resource context and monitors the usage of shared resources, serving as the basis on which the scheduler thread creates processing tasks.
(4) Task manager (Task Manager).
The task manager is responsible for the processing tasks created for request events. When the shared-resource requirements of a request event are satisfied, the scheduler thread creates a processing task and hands it to the task manager, which allocates a thread resource to the task to execute the request-processing logic encapsulated in the event handler.
(5) Dispatcher (Dispatcher).
The dispatcher distributes processed request events, according to their type, to the input event queues of other request processing units, driving the request-processing flow forward.
(6) Scheduler thread (Scheduler).
The scheduler thread is the core of the request processing unit, and it owns one processing resource. It monitors the usage of shared resources through the resource manager, creates processing tasks for the request events whose shared-resource requirements are satisfied, and hands them to the task manager for execution. Fig. 4 illustrates the state transition diagram of the scheduler.
1) The scheduler is initially in the idle state (Idle).
2) When a request event arrives, the scheduler checks the usage of shared resources through the resource manager. If no idle shared resource exists, the scheduler enters the resource-blocked state (Resource Blocked).
3) Once the shared resources are sufficient, the resource manager wakes the scheduler, which enters the active state (Active). The scheduler then creates a processing task for the request event and hands it to the task manager, which allocates a thread resource to the task; the scheduler thread then returns to the idle state and waits for subsequent events.
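The state machine of Fig. 4 can be sketched as below. The state names come from the text above; the transition methods and their triggers are an assumed reading of steps 1-3, not code from the patent.

```java
// Sketch of the scheduler state machine: Idle -> Resource Blocked (no idle
// resource) or Active (resource available) -> back to Idle once the task
// has been handed to the task manager.
public class SchedulerStateSketch {
    public enum State { IDLE, RESOURCE_BLOCKED, ACTIVE }

    private State state = State.IDLE;

    /** A request event arrives; go Active if a resource is idle, else block. */
    public State onEvent(boolean idleResourceAvailable) {
        state = idleResourceAvailable ? State.ACTIVE : State.RESOURCE_BLOCKED;
        return state;
    }

    /** The resource manager signals a released resource: wake if blocked. */
    public State onResourceReleased() {
        if (state == State.RESOURCE_BLOCKED) {
            state = State.ACTIVE;
        }
        return state;
    }

    /** The processing task was handed to the task manager; back to Idle. */
    public State onTaskDispatched() {
        state = State.IDLE;
        return state;
    }

    public State getState() {
        return state;
    }
}
```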
● Operation logic
The processing of a request event inside a request processing unit involves several components; Fig. 5 illustrates the process:
1) When a request event enters the event queue, the scheduler is notified that a request event needs handling;
2) The scheduler checks through the resource manager whether an idle shared resource exists; if not, the scheduler thread waits;
3) The resource manager notifies the scheduler that a shared resource has been released and the request event can be handled;
4) The scheduler takes a batch of request events out of the event queue;
5) The scheduler creates processing tasks;
6) The scheduler hands the request-processing tasks to the task manager;
7) The task manager allocates thread resources to the tasks;
8) The task, having obtained a thread resource, invokes the processing logic of the event handler to process the events;
9) The dispatcher distributes the processed request events to other request processing units.
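Step 4 above, where the scheduler drains a batch of events so a single thread that already holds the shared resource can process several of them, can be sketched as a small queue-draining helper. The class and method names are illustrative, not from the patent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of batched removal from the input event queue: take up to
// maxBatch events in one pass, so the thread holding the shared resource
// processes them together and context switches are reduced.
public class BatchDrainSketch {
    /** Remove up to maxBatch events from the queue and return them in order. */
    static <E> List<E> drainBatch(Queue<E> queue, int maxBatch) {
        List<E> batch = new ArrayList<>();
        E e;
        while (batch.size() < maxBatch && (e = queue.poll()) != null) {
            batch.add(e);
        }
        return batch;
    }
}
```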
The invention proposes a shared-resource-aware concurrent processing method for Java EE application servers. Compared with the prior art, the technical effects of the method of the invention are mainly:
1. By introducing request events, the method decouples the one-to-one relationship between client requests and thread resources, so that each request processing unit can allocate processing resources to request events according to the usage of shared resources; request events that exceed the processing capacity wait in the request event queue, blocking only the scheduler thread. This reduces the thread blocking caused by contention for shared resources and improves the concurrent processing capability of the Java EE application server.
2. The method isolates the processing logic of client requests in different request processing units, each of which is concerned only with the request events it receives and produces. This provides a flexible way of assembling the processing flow and makes tuning and performance analysis convenient: monitoring code can easily be placed at the entrance and exit of each request processing unit to monitor its performance, which helps locate performance bottlenecks.
Description of drawings
Fig. 1 is the basic structure of a Web application server
Fig. 2 shows the basic elements involved in the method of the invention
Fig. 3 is a schematic diagram of the components of a request processing unit
Fig. 4 is the state diagram of the scheduler thread
Fig. 5 is a UML sequence diagram of request event handling
Fig. 6 is the original request processing flow of the Once Web container
Fig. 7 is the request processing flow of the Once Web container designed according to the method of the invention
Fig. 8 compares the performance of Web containers designed with the method of the invention and with the thread-pool method, respectively
Embodiment
The present invention is described in more detail below in conjunction with specific embodiments and the drawings.
In the Once Web container, the processing of a client request is divided into 18 steps (as shown in Fig. 6) involving six request processing components: the listener component (HTTPListener), the server component (DefaultServer), the virtual-host component (DefaultHost), the context component (DefaultContext), the shell component (DefaultShell), and the Servlet component. HTTPListener listens on the port; DefaultServer, DefaultHost, DefaultContext, and DefaultShell are the internal representations of the Web container itself, a virtual host, a Web application, and a Servlet, respectively; and the Servlet component represents the actual application component instance developed by the application developer.
The concrete meaning of each step of the request-processing process is as follows:
1) accept: the HTTPListener component receives the client request and creates a socket object (socket).
2) notifyThread: the current leader thread notifies another thread in the thread pool to become the leader, and the current thread continues handling the client request as a processing thread.
3) handle: DefaultServer is called to handle the request.
4) parseRequest: DefaultServer parses the HTTP request from the socket's input stream.
5) parseHeader: DefaultServer parses the HTTP headers from the socket.
6) createRequest: DefaultServer creates the Web container's internal request object.
7) createResponse: DefaultServer creates the Web container's internal response object.
8) service: DefaultServer hands the request to DefaultHost for processing.
9) map: DefaultHost maps the request to the corresponding DefaultContext.
10) service: DefaultHost hands the request to the corresponding DefaultContext for processing.
11) map: DefaultContext maps the request to the corresponding DefaultShell, i.e. the representation of the Servlet inside the Web container.
12) service: DefaultContext hands the request to DefaultShell for processing.
13) createFilter: the filters (Filter) of the corresponding Servlet are created.
14) doFilter: the corresponding filter logic is invoked.
15) service: the service method of the concrete Servlet instance is invoked, completing the business-logic processing.
16) return: the processing result is returned to DefaultServer.
17) prepareHeaders: DefaultServer generates the HTTP response headers.
18) sendResponse: DefaultServer returns the processing result to the client.
In the above request-processing process, steps 1-2 receive the client request and obtain the corresponding socket object. Steps 4-7 read data from the socket's input stream and parse it into the server's internal request object according to the HTTP protocol. Steps 8-16 map the request object to the corresponding Servlet and process the client request with it. Finally, steps 17-18 package the processing result as an HTTP response object and return it to the client.
Suppose the current Web application contains three Servlets: administrator requests (Admin Request), home-page information (Home), and search requests (Search). The handling of each Servlet follows the request-processing process of Fig. 6; they differ only in having different Servlet instances. The request processing flow of a Web container designed according to the method of the invention is as follows:
1) According to the functional description of Fig. 6, the request processing flow is divided into five request processing units: the HTTP listener unit (HTTPListener), the HTTP parser unit (HTTPParser), the mapper unit (URLMapper), the Servlet method unit (ServletMethod), and the response sender unit (ResponseSender).
2) Because HTTPListener and ResponseSender need the socket shared resource, they are kept as independent request processing units, while HTTPParser and URLMapper need no special shared resource and are merged into one request processing unit.
3) Among the three Servlets, Home needs no shared resource other than thread resources. Admin Request and Search need a database-connection shared resource to complete database lookups, and Admin Request additionally needs a security-database connection resource to authenticate the administrator's identity. Each concrete Servlet therefore corresponds to its own request processing unit, so that each Servlet can allocate thread resources to requests according to its shared-resource requirements.
4) The resulting request processing flow is shown in Fig. 7; it comprises six request processing units: HTTPListener, HTTPParser & URLMapper, ResponseSender, Admin Request, Home, and Search.
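The wiring between the units of Fig. 7 can be sketched as a dispatcher that routes a processed event, by target unit, into the next unit's input event queue. The class, method names, and the string-typed events below are illustrative assumptions, not the patent's code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of pipeline wiring: each registered RPU has an input event queue,
// and the dispatcher pushes a processed event into the queue of the next
// RPU in the flow (e.g. HTTPListener -> HTTPParser & URLMapper -> Search
// -> ResponseSender).
public class DispatcherSketch {
    private final Map<String, Queue<String>> inputQueues = new HashMap<>();

    /** Create an input event queue for the named request processing unit. */
    public void register(String rpuName) {
        inputQueues.put(rpuName, new ConcurrentLinkedQueue<>());
    }

    /** Route an event to the named RPU's input event queue. */
    public void dispatch(String targetRpu, String event) {
        inputQueues.get(targetRpu).add(event);
    }

    public Queue<String> queueOf(String rpuName) {
        return inputQueues.get(rpuName);
    }
}
```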
To compare more concretely the performance of the Web container designed according to the method of the invention with that of the existing Web container, this embodiment uses the TPC-W benchmark. This test program, released in 2000 by the Transaction Processing Performance Council (TPC), represents a typical e-commerce application environment, simulating an Internet bookstore with a large number of concurrent visitors. TPC-W simulates the following customer behaviors: shopping with a shopping cart, searching the inventory, filling in customer information, performing store-management duties, purchasing the goods in the shopping cart, querying the best-seller and new-product lists, querying previous orders, and so on. TPC-W also provides three mixes of these behaviors: browsing (Browsing), shopping (Shopping), and ordering (Ordering). Fig. 8 shows the performance comparison between the Web container based on the thread-pool model and the Web container designed with the method of the invention. As can be seen, under all access patterns the concurrent processing method of the invention achieves better performance.
Part of the code for the request event, resource context, and request processing unit interfaces is given below to better illustrate the method of this embodiment:
public interface Event {
    public String getType();
    public String getDescription();
    public void setDirection(boolean direction);
    public boolean getDirection();
    public Object getProducter();
    public Request getRequest();
    public Response getResponse();
    public void setRequest(Request request);
    public void setResponse(Response response);
    public void setSocket(Socket socket);
    public Socket getSocket();
    public InputStream getInputStream();
    public OutputStream getOutputStream();
    public void setInputStream(InputStream is);
    public void setOutputStream(OutputStream os);
    ……
}
public interface RPU {
    public String getName();
    public void setName(String name);
    public void setEventQueue(Queue<Event> q);
    public Queue<Event> getEventQueue();
    public ResourceContext getResourceContext();
    public void setResourceContext(ResourceContext rc);
    public void setDispatcher(Dispatcher dis);
    public Dispatcher getDispatcher();
    …………
}
public interface ResourceContext {
    public int getType();
    public String getName();
    public int getMaxSize();
    public void setMaxSize(int maxSize);
    public int getAvailable();
    …………
}
The scheduler and processing-task code inside a concrete request processing unit is as follows:
class Scheduler implements Runnable {
    public void run() {
        while (true) {
            // Take the next request event from the input event queue
            Event clientEvent = queue.poll();
            if (clientEvent != null) {
                // Wrap the event in a processing task and hand it to the
                // task manager, which allocates a thread resource to it
                Task task = new Task(clientEvent);
                TaskManager.executeTask(task);
            }
        }
    }
}
class Task {
    public Event clientEvent = null;
    public Task(Event clientEvent) {
        this.clientEvent = clientEvent;
    }
    public void run() {
        try {
            // Execute the request-processing logic for this event
            forwardRequest(clientEvent);
        } catch (ServletException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Claims (5)

1. A concurrent processing method for a Java EE application server, characterized in that the application server comprises one or more request processing units, and each request event is processed in turn by one or more of the request processing units;
before processing a request event, each request processing unit checks whether an idle shared resource required for processing the request event exists; if so, a thread is allocated to the current request processing unit and the request event is processed; if not, no thread is allocated to the current request processing unit, which waits until the required idle shared resource becomes available;
the request processing unit processes, in batches within the same thread, the request events pending in that request processing unit.
2. The method of claim 1, characterized in that, among the request processing units, only those that need shared resources during processing check, before processing the request event, whether an idle shared resource required for processing the request event exists.
3. The method of claim 1, characterized in that the one or more request processing units realize mutually different processing functions.
4. The method of claim 1, characterized in that, before a thread is allocated to the current request processing unit, it is also checked whether the shared resources required by the current request processing unit exceed a predefined resource limit; if not, a thread is allocated to it.
5. The method of any one of claims 1 to 4, characterized in that the request processing unit consists of an input event queue, a scheduler, a resource manager, an event handler, a task manager, and a dispatcher;
the input event queue stores all the request events that the current request processing unit needs to handle;
the scheduler monitors the usage of shared resources through the resource manager, creates processing tasks for the request events whose shared-resource requirements are satisfied, and hands them to the task manager;
the resource manager manages the resource context and monitors the usage of shared resources, serving as the basis for the scheduler thread to create processing tasks;
the event handler corresponds to the processing logic of a request processing component and completes its processing function;
the task manager is responsible for the processing tasks created for the request events;
the dispatcher distributes processed request events, according to their type, to the input event queues of other request processing units, driving the request-processing flow forward.
CN2008101178203A 2008-08-05 2008-08-05 Java EE applications server parallel processing method Active CN101334742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101178203A CN101334742B (en) 2008-08-05 2008-08-05 Java EE applications server parallel processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101178203A CN101334742B (en) 2008-08-05 2008-08-05 Java EE applications server parallel processing method

Publications (2)

Publication Number Publication Date
CN101334742A CN101334742A (en) 2008-12-31
CN101334742B true CN101334742B (en) 2011-06-01

Family

ID=40197354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101178203A Active CN101334742B (en) 2008-08-05 2008-08-05 Java EE applications server parallel processing method

Country Status (1)

Country Link
CN (1) CN101334742B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5889332B2 (en) * 2011-01-10 2016-03-22 International Business Machines Corporation Activity recording system for concurrent software environments
US9471458B2 (en) 2012-01-05 2016-10-18 International Business Machines Corporation Synchronization activity recording system for a concurrent software environment
CN102708005A (en) * 2012-01-16 2012-10-03 陈晓亮 System and method for virtual resource competition
CN103246552B (en) * 2012-02-14 2018-03-09 腾讯科技(深圳)有限公司 Prevent thread from the method and apparatus blocked occur
CN102629216A (en) * 2012-02-24 2012-08-08 浪潮(北京)电子信息产业有限公司 Cloud operating system (OS) scheduling method and cloud system scheduling device
CN103870337A (en) * 2014-02-28 2014-06-18 浪潮集团山东通用软件有限公司 ESB assembly realization method based on SEDA
CN105094988A (en) * 2015-08-13 2015-11-25 深圳市金蝶中间件有限公司 Data processing method and device based on HTTP requests
CN107783827B (en) * 2016-08-31 2021-06-08 北京国双科技有限公司 Asynchronous task processing method and device
CN107330625A (en) * 2017-07-04 2017-11-07 郑州云海信息技术有限公司 A kind of method and apparatus and computer-readable recording medium for managing order
CN110489201B (en) * 2018-05-15 2021-11-30 中国移动通信集团浙江有限公司 Container performance testing device and method
CN113360418B (en) * 2021-08-10 2021-11-05 武汉迎风聚智科技有限公司 System testing method and device

Also Published As

Publication number Publication date
CN101334742A (en) 2008-12-31

Similar Documents

Publication Publication Date Title
CN101334742B (en) Java EE applications server parallel processing method
US7299478B2 (en) Integration service and domain object for telecommunications operational support
EP1518163B1 (en) Mobile application service container
CN1608248A (en) Provisioning aggregated services in a distributed computing environment
US20040201611A1 (en) Common customer interface for telecommunications operational support
CA2405700C (en) Web service interfaces used in providing a billing service
CN100352221C (en) Apparatus and method for sharing services on network
CN102375894B (en) Method for managing different types of file systems
US8700753B2 (en) Distributed computer system for telecommunications operational support
CN108475220B (en) System and method for integrating a transactional middleware platform with a centralized audit framework
CN103036917A (en) Achievement method of client side platform and client side platform
Abreu et al. Specifying and composing interaction protocols for service-oriented system modelling
Oliveira et al. An innovative design approach to build virtual environment systems
US7495568B2 (en) JMX administration of RFID edge server
CN110119269B (en) Method, device, server and storage medium for controlling task object
Frei et al. A dynamic lightweight platform for ad-hoc infrastructures
Zou et al. Building business processes or assembling service components: Reuse services with BPEL4WS and SCA
Deng et al. Study on EAI based on web services and SOA
CN110727419A (en) Monitoring system
CN101692644B (en) Digital media adapter system applied in digital home
CN100498717C (en) Method for calling enterprise grade Java assembling method
Hwang et al. Design and implementation of the home service delivery and management system based on OSGi service platform
Jana Service oriented architectures–a new paradigm
You et al. Context-based dynamic channel management for efficient event service in pervasive computing
CN116319983A (en) Middleware for service communication

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191010

Address after: 250100 area B, floor 19, building 1, Xinsheng building, 1299 Xinluo street, high tech Zone, Jinan City, Shandong Province

Patentee after: Shandong Jialian Agel Ecommerce Ltd

Address before: 100190 No. four, 4 South Street, Haidian District, Beijing, Zhongguancun

Patentee before: Institute of Software, Chinese Academy of Sciences

TR01 Transfer of patent right