CN101551745A - Method for greatly improving performance of workflow engine - Google Patents

Method for greatly improving performance of workflow engine

Info

Publication number
CN101551745A
Authority
CN
China
Prior art keywords
workflow engine
cache
data
engine service
cache object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2009100155162A
Other languages
Chinese (zh)
Inventor
姜健
戴海宏
何忠胜
刘宗福
刘民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CVIC Software Engineering Co Ltd
Original Assignee
CVIC Software Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CVIC Software Engineering Co Ltd filed Critical CVIC Software Engineering Co Ltd
Priority to CNA2009100155162A priority Critical patent/CN101551745A/en
Publication of CN101551745A publication Critical patent/CN101551745A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to a method for greatly improving the performance of a workflow engine, which pertains to the field of middleware technical frameworks, in particular to the field of workflow engine technical frameworks. The method improves the performance of the workflow engine at the level of its technical framework and comprises steps such as server-side caching and client-side caching. By adopting caching technology, object pool technology and asynchronous calls, the method reduces system overhead, greatly improves the performance of the workflow engine, and reduces the capital investment of the whole business process system; for a business process system with the same data volume and highly concurrent access, it achieves the same performance while allowing a certain number of cluster nodes to be removed, or the hardware configuration of the cluster nodes to be lowered, when clustering is used.

Description

Method for greatly improving the performance of a workflow engine
Technical field
The present invention relates to a method for greatly improving the performance of a workflow engine. The method belongs to the field of middleware technical frameworks, and in particular to the field of workflow engine technical architecture.
Background art
In recent years, with the continuous development of information technology, the application of workflow has become more and more widespread, and each business process system can be managed more clearly and flexibly. At the same time, business process systems place ever higher requirements on workflow: basic applications can no longer satisfy their needs, and the demand for advanced applications such as real-time monitoring and analysis of business activities and statistical analysis of flow data keeps growing.
Real-time monitoring and analysis of business activities requires real-time monitoring and optimization of the business; it performs real-time analysis and processing on a relatively small amount of data and is mainly used for management operations.
Statistical analysis of flow data requires statistical analysis of historical data; it performs batch analysis and processing on large amounts of data and is mainly used for planning.
Most current workflow products still only satisfy the basic application needs of business process systems. At this level, when a process involves human participation, flow data must be persisted to the database in order to cope with unexpected accidents such as machine crashes and to provide functions such as querying running data and historical data. Because database persistence is involved, the database I/O bottleneck problem is frequently encountered under large access volumes and high concurrency; how well a workflow product resolves the database I/O bottleneck determines how well that product can support business process systems.
Some workflow products position themselves on advanced applications and set out to satisfy functional requirements such as real-time monitoring and analysis of business activities and statistical analysis of flow data. To make it convenient for users to analyze flow data statistically, all flow data must be persisted to the database; real-time monitoring and analysis and large-volume statistical analysis then pose an even greater challenge to these workflow products.
Current workflow products are all developed as platform products to make the management of business process systems convenient, so the requirement that the workflow platform support remote access keeps increasing. This brings another practical problem, the network I/O bottleneck; how the network I/O bottleneck is resolved also determines how well the workflow platform can support business process systems.
The above analyzes, from the perspectives of local I/O and network I/O, some practical problems that workflow products encounter. Current workflow products generally raise their support for business process systems by building suitable database indexes, upgrading hardware, optimizing logic and other measures that each address a single shortcoming.
The most direct method is clustering: application server clusters and database clusters. This is of course a very effective method, and it is a must for systems facing large access volumes and high concurrency. How to further improve the performance of the workflow engine on that basis is precisely the focus of this invention.
Whether providing functions at the basic application level (for example pending task queries, handled task queries, recoverable task queries, and queries with specified conditions) or at the advanced application level (for example real-time monitoring and analysis of business activities and statistical analysis of flow data), current workflow products all need to access the workflow engine frequently, and the engine itself must frequently execute large numbers of data queries; functions at all of these levels inevitably put enormous pressure on the workflow engine. This applies especially to processes with human participation: to handle a task, the business that needs handling must first be looked up and only then can the corresponding business processing be carried out, so the calls between the business process system and the workflow engine, and the calls inside the workflow engine, are very frequent, which directly limits the workflow engine's capacity to support business process systems. If the workflow engine also has to handle real-time monitoring and large-volume statistical analysis, its capacity to support business process systems is reduced even further. If one continues to rely only on building suitable database indexes, upgrading hardware, optimizing logic and clustering, the capital investment of the whole business process system will inevitably increase greatly.
Summary of the invention
The purpose of the present invention is to address the above shortcomings: starting from the workflow technical framework, the performance of the workflow engine is improved, and a method for greatly improving the performance of a workflow engine is provided.
The method for greatly improving the performance of a workflow engine provided by the invention comprises the following steps:
Step 1, server-side caching: the frequency of computer hardware reads and writes is reduced while the same functions are completed. The data that the business process system queries frequently from the workflow engine is optimized by wrapping a cache layer around the workflow engine, so that each access obtains the data directly from the cache and database I/O operations are reduced (a code sketch of such a cache layer is given after this list of steps).
When the workflow engine starts, the data that needs caching is initialized; flow data produced in real time is cached in memory objects, so that the newly produced flow data does not have to be queried from the database, and the memory objects are managed.
Step 2, client-side caching: the number of requests to the server is reduced while the same functions are completed. Caching is performed in two directions, a flow data cache and a presentation-layer page cache, which reduces the number of calls to the workflow engine service, that is, the network-layer I/O.
The data cache objects focus on data that is both operated on frequently and large in volume; flow definition data, for example, is operated on frequently and is also large. The server-side and client-side workflow data cache objects are kept synchronized: when data changes on the server side, all client data cache objects are notified to update. The data cache objects of different clients are also kept synchronized: when the cached data of the current client is updated, it first notifies the workflow engine server, and the server then notifies all other client data cache objects to update. For presentation-layer components, caching targets those pages that seldom change.
Step 3, object pool: pooling is used to avoid the system overhead of continually creating, wrapping and destroying cache objects.
Step 4, asynchronous calls: for the specific demands of a certain time period, the batch processing capability is improved.
Step 5, cluster support: under large access volumes and high concurrency, a clustered workflow engine service provides effective support.
Steps 3, 4 and 5 above are common prior art.
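For illustration, the following is a minimal sketch, in Java (the patent does not name an implementation language, so the language and all identifiers are assumptions), of the cache layer wrapped around the workflow engine described in step 1: each read is served from memory when possible, so repeated queries for the same flow data no longer reach the database.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EngineCacheLayer {
    /** Placeholder for a piece of flow (process instance) data. */
    public static class FlowData {
        public final String id;
        public FlowData(String id) { this.id = id; }
    }

    /** Assumed database access interface; not part of the original text. */
    public interface FlowDataStore {
        FlowData loadFromDatabase(String id);
    }

    private final Map<String, FlowData> cache = new ConcurrentHashMap<>();
    private final FlowDataStore store;

    public EngineCacheLayer(FlowDataStore store) {
        this.store = store;
    }

    /** Read-through lookup: serve from memory first, query the database only on a miss. */
    public FlowData getFlowData(String id) {
        return cache.computeIfAbsent(id, store::loadFromDatabase);
    }
}
```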
The above-mentioned server-side cache object management process is as follows (sketched in code after the list):
1-1) When the workflow engine service starts, the data that needs caching is initialized and loaded into memory;
1-2) When the workflow engine service generates new flow data, a new cache object is created and added into memory;
1-3) When existing flow data of the workflow engine service is updated, the corresponding cache object in memory is updated;
1-4) When the workflow engine service deletes existing flow data, the corresponding cache object is deleted from memory.
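A minimal sketch, under the same assumptions as above, of the server-side cache object management in steps 1-1) to 1-4): the cache is initialized when the engine service starts, and cache objects are then created, updated or deleted in memory as flow data changes.

```java
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServerFlowDataCache {
    /** Assumed minimal flow data type. */
    public interface FlowData { String getId(); }

    private final Map<String, FlowData> cache = new ConcurrentHashMap<>();

    /** 1-1) On engine service start, load the data that needs caching into memory. */
    public void initialize(Collection<FlowData> initialData) {
        for (FlowData d : initialData) {
            cache.put(d.getId(), d);
        }
    }

    /** 1-2) Newly generated flow data: create a cache object and add it to memory. */
    public void onFlowDataCreated(FlowData d) { cache.put(d.getId(), d); }

    /** 1-3) Existing flow data updated: refresh the corresponding cache object. */
    public void onFlowDataUpdated(FlowData d) { cache.put(d.getId(), d); }

    /** 1-4) Flow data deleted: remove the corresponding cache object from memory. */
    public void onFlowDataDeleted(String id) { cache.remove(id); }
}
```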
The above-mentioned client-side cache object management process is as follows (sketched in code after the list):
2-1) When the business process system starts, the flow definitions in the workflow engine service are loaded into memory to initialize the cache objects;
2-2) When the business process system imports a new flow definition into the workflow engine service, the workflow engine service notifies the client caches to add the newly imported flow definition, and a cache object for the newly imported flow definition is created and added into memory;
2-3) When the business process system updates an existing flow definition in the workflow engine service, the workflow engine service notifies the client caches to update the corresponding flow definition, and the corresponding flow definition cache object in memory is updated;
2-4) When the business process system deletes an existing flow definition from the workflow engine service, the workflow engine service notifies the client caches to delete the corresponding flow definition, and the corresponding flow definition cache object is deleted from memory.
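A minimal sketch of the client-side flow definition cache in steps 2-1) to 2-4): all definitions are loaded from the engine service when the business process system starts, and individual definitions are then added, refreshed or removed when the engine service notifies the client of a change. EngineServiceClient and FlowDefinition are assumed placeholders, not APIs from the original text.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClientFlowDefinitionCache {
    /** Assumed minimal flow definition type. */
    public interface FlowDefinition { String getId(); }

    /** Assumed remote interface to the workflow engine service. */
    public interface EngineServiceClient {
        Iterable<FlowDefinition> loadAllFlowDefinitions();
    }

    private final Map<String, FlowDefinition> cache = new ConcurrentHashMap<>();

    /** 2-1) On business system start, initialize the cache from the engine service. */
    public void initialize(EngineServiceClient engine) {
        for (FlowDefinition def : engine.loadAllFlowDefinitions()) {
            cache.put(def.getId(), def);
        }
    }

    /** 2-2) A new definition was imported: the engine service notifies the client to add it. */
    public void onDefinitionImported(FlowDefinition def) { cache.put(def.getId(), def); }

    /** 2-3) A definition was updated: refresh the cached copy. */
    public void onDefinitionUpdated(FlowDefinition def) { cache.put(def.getId(), def); }

    /** 2-4) A definition was deleted: remove the cached copy. */
    public void onDefinitionDeleted(String id) { cache.remove(id); }
}
```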
The following issues should be noted when implementing the server-side and client-side caches:
1. What kind of objects should be cached? On the server side, cache the flow data that is called most often, for example the list of a given person's pending tasks. On the client side, cache the data that the client accesses frequently on the server and that is large in volume, for example the client's flow definition objects (process execution depends on the flow definition template and must load the definition information; especially with remote services, the flow definition information is itself large, and transmitting such a large amount of data inevitably occupies the network heavily; since flow definition data must be accessed frequently, the network would inevitably become congested, the access efficiency of the whole system would drop sharply, and performance would fall off steeply as concurrency increases). For presentation-layer components, cache those pages that seldom change.
2. How are cache objects managed? Flow data produced in real time is cached in memory objects, so that the newly produced flow data does not have to be queried from the database again when that part of the data is queried, and the creation, update and deletion of the cache objects are managed.
3. When are cache objects initialized? When the workflow engine starts, the data that needs caching is initialized.
4. How are server-side data updates synchronized to the client caches? When data changes on the server side, all clients are notified to update their data cache objects.
5. How are cached data updates synchronized between different clients? When the cached data of the current client is updated, it first notifies the workflow engine server, and the workflow engine service then notifies all other clients to update their data cache objects (see the sketch below).
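A minimal sketch of the synchronization rules in points 4 and 5: a change on the server side triggers a refresh notification to every registered client cache, and a change made by one client is reported to the server, which then notifies all other clients. The listener interface is an assumption introduced only for illustration.

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class CacheSyncCoordinator {
    /** Assumed callback by which the server pushes refresh notifications to a client cache. */
    public interface ClientCacheListener {
        void refresh(String dataId);
    }

    private final Set<ClientCacheListener> clients = new CopyOnWriteArraySet<>();

    public void register(ClientCacheListener client) { clients.add(client); }

    /** Point 4: server-side data changed, so push a refresh to every client cache. */
    public void onServerDataChanged(String dataId) {
        for (ClientCacheListener c : clients) {
            c.refresh(dataId);
        }
    }

    /** Point 5: one client changed its cached data and told the server; notify all other clients. */
    public void onClientDataChanged(ClientCacheListener origin, String dataId) {
        for (ClientCacheListener c : clients) {
            if (c != origin) {
                c.refresh(dataId);
            }
        }
    }
}
```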
In this way, by means of caching technology, object pool technology and asynchronous calls, the present invention reduces system overhead, greatly improves the performance of the workflow engine, and reduces the capital investment of the whole business process system. Supporting a business process system with the same large data volume and highly concurrent access, and achieving the same performance, a certain number of cluster nodes can be removed, or the hardware configuration of the cluster nodes lowered, when clustering is used.
Description of drawings
Fig. 1 is a flow chart of the server-side cache object management in an embodiment of the invention;
Fig. 2 is a flow chart of the client-side cache object management in an embodiment of the invention.
Embodiment
A method for greatly improving the performance of a workflow engine mainly involves a client side and a server side, and comprises the following steps:
Step 1, server-side caching: the frequency of computer hardware reads and writes is reduced while the same functions are completed. The data that the business process system queries frequently from the workflow engine is optimized by wrapping a cache layer around the workflow engine, so that each access obtains the data directly from the cache and database I/O operations are reduced.
When the workflow engine starts, the data that needs caching is initialized; flow data produced in real time is cached in memory objects, so that the newly produced flow data does not have to be queried from the database, and the memory objects are managed.
Step 2, client-side caching: the number of requests to the server is reduced while the same functions are completed. Caching is performed in two directions, a flow data cache and a presentation-layer page cache, which reduces the number of calls to the workflow engine service, that is, the network-layer I/O.
The data cache objects focus on data that is both operated on frequently and large in volume; flow definition data, for example, is operated on frequently and is also large. The server-side and client-side workflow data cache objects are kept synchronized: when data changes on the server side, all client data cache objects are notified to update. The data cache objects of different clients are also kept synchronized: when the cached data of the current client is updated, it first notifies the workflow engine server, and the server then notifies all other client data cache objects to update. For presentation-layer components, caching targets those pages that seldom change.
Step 3, object pool: pooling is used to avoid the system overhead of continually creating, wrapping and destroying cache objects (a code sketch follows this list of steps).
Step 4, asynchronous calls: for the specific demands of a certain time period, the batch processing capability is improved (a code sketch is given at the end of this embodiment).
Step 5, cluster support: under large access volumes and high concurrency, a clustered workflow engine service provides effective support.
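The object pool of step 3 can be sketched as follows (again in Java, with all names assumed): instances are borrowed from and returned to a pool instead of being created, wrapped and destroyed on every use, which avoids the per-request allocation overhead.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

public class CacheObjectPool<T> {
    private final ConcurrentLinkedQueue<T> pool = new ConcurrentLinkedQueue<>();
    private final Supplier<T> factory;

    public CacheObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    /** Reuse a pooled instance when one is available, otherwise create a new one. */
    public T borrow() {
        T obj = pool.poll();
        return obj != null ? obj : factory.get();
    }

    /** Return an instance to the pool for later reuse instead of discarding it. */
    public void giveBack(T obj) {
        pool.offer(obj);
    }
}
```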
As shown in Fig. 1, the above-mentioned server-side cache object management process is as follows:
1-1) When the workflow engine service starts, the data that needs caching is initialized and loaded into memory;
1-2) When the workflow engine service generates new flow data, a new cache object is created and added into memory;
1-3) When existing flow data of the workflow engine service is updated, the corresponding cache object in memory is updated;
1-4) When the workflow engine service deletes existing flow data, the corresponding cache object is deleted from memory.
As shown in Fig. 2, the above-mentioned client-side cache object management process is as follows:
2-1) When the business process system starts, the flow definitions in the workflow engine service are loaded into memory to initialize the cache objects;
2-2) When the business process system imports a new flow definition into the workflow engine service, the workflow engine service notifies the client caches to add the newly imported flow definition, and a cache object for the newly imported flow definition is created and added into memory;
2-3) When the business process system updates an existing flow definition in the workflow engine service, the workflow engine service notifies the client caches to update the corresponding flow definition, and the corresponding flow definition cache object in memory is updated;
2-4) When the business process system deletes an existing flow definition from the workflow engine service, the workflow engine service notifies the client caches to delete the corresponding flow definition, and the corresponding flow definition cache object is deleted from memory.
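Finally, the asynchronous call of step 4 can be sketched as submitting period-specific batch work (for example a statistical analysis over many process instances) to a background executor, so that the engine's interactive calls are not blocked; the class and task names are illustrative assumptions.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class AsyncBatchProcessor {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    /** Submit a batch of analysis tasks and return immediately with their futures. */
    public List<CompletableFuture<Void>> submitBatch(List<Runnable> tasks) {
        return tasks.stream()
                .map(t -> CompletableFuture.runAsync(t, executor))
                .collect(Collectors.toList());
    }

    /** Stop accepting new work; previously submitted tasks still run to completion. */
    public void shutdown() {
        executor.shutdown();
    }
}
```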

Claims (3)

1. A method for greatly improving the performance of a workflow engine, characterized in that it comprises the following steps:
Step 1, server-side caching: the frequency of computer hardware reads and writes is reduced while the same functions are completed; the data that the business process system queries frequently from the workflow engine is optimized by wrapping a cache layer around the workflow engine, so that each access obtains the data directly from the cache and database I/O operations are reduced;
Step 2, client-side caching: the number of requests to the server is reduced while the same functions are completed; caching is performed in two directions, a flow data cache and a presentation-layer page cache, which reduces the number of calls to the workflow engine service, that is, the network-layer I/O;
Step 3, object pool: pooling is used to avoid the system overhead of continually creating, wrapping and destroying cache objects;
Step 4, asynchronous calls: for the specific demands of a certain time period, the batch processing capability is improved;
Step 5, cluster support: under large access volumes and high concurrency, a clustered workflow engine service provides effective support.
2. The method for greatly improving the performance of a workflow engine according to claim 1, characterized in that the server-side cache object management process is as follows:
1-1) When the workflow engine service starts, the data that needs caching is initialized and loaded into memory;
1-2) When the workflow engine service generates new flow data, a new cache object is created and added into memory;
1-3) When existing flow data of the workflow engine service is updated, the corresponding cache object in memory is updated;
1-4) When the workflow engine service deletes existing flow data, the corresponding cache object is deleted from memory.
3. The method for greatly improving the performance of a workflow engine according to claim 1, characterized in that the client-side cache object management process is as follows:
2-1) When the business process system starts, the flow definitions in the workflow engine service are loaded into memory to initialize the cache objects;
2-2) When the business process system imports a new flow definition into the workflow engine service, the workflow engine service notifies the client caches to add the newly imported flow definition, and a cache object for the newly imported flow definition is created and added into memory;
2-3) When the business process system updates an existing flow definition in the workflow engine service, the workflow engine service notifies the client caches to update the corresponding flow definition, and the corresponding flow definition cache object in memory is updated;
2-4) When the business process system deletes an existing flow definition from the workflow engine service, the workflow engine service notifies the client caches to delete the corresponding flow definition, and the corresponding flow definition cache object is deleted from memory.
CNA2009100155162A 2009-05-13 2009-05-13 Method for greatly improving performance of workflow engine Pending CN101551745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2009100155162A CN101551745A (en) 2009-05-13 2009-05-13 Method for greatly improving performance of workflow engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2009100155162A CN101551745A (en) 2009-05-13 2009-05-13 Method for greatly improving performance of workflow engine

Publications (1)

Publication Number Publication Date
CN101551745A true CN101551745A (en) 2009-10-07

Family

ID=41155998

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2009100155162A Pending CN101551745A (en) 2009-05-13 2009-05-13 Method for greatly improving performance of workflow engine

Country Status (1)

Country Link
CN (1) CN101551745A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236679A (en) * 2010-04-27 2011-11-09 杭州德昌隆信息技术有限公司 Method and device for outputting workflow based on browser page
CN102394807A (en) * 2011-08-23 2012-03-28 北京京北方信息技术有限公司 System and method for decentralized scheduling of autonomous flow engine load balancing clusters
CN102394807B (en) * 2011-08-23 2015-03-04 京北方信息技术股份有限公司 System and method for decentralized scheduling of autonomous flow engine load balancing clusters
CN104471572B (en) * 2012-07-12 2018-11-16 微软技术许可有限责任公司 It is calculated using the gradual inquiry of streaming framework
CN104471572A (en) * 2012-07-12 2015-03-25 微软公司 Progressive query computation using streaming architectures
US10140358B2 (en) 2012-07-12 2018-11-27 Microsoft Technology Licensing, Llc Progressive query computation using streaming architectures
CN103064964A (en) * 2012-12-29 2013-04-24 天津南大通用数据技术有限公司 Connection method of data base supporting distributed type affairs
CN103064964B (en) * 2012-12-29 2016-04-20 天津南大通用数据技术股份有限公司 A kind of method of attachment supporting the database of distributed transaction
CN104699411A (en) * 2013-12-06 2015-06-10 北京慧正通软科技有限公司 Technical method for improving performance of cache in workflow engine process instance
CN104751359B (en) * 2013-12-30 2020-08-21 中国银联股份有限公司 System and method for payment clearing
CN104751359A (en) * 2013-12-30 2015-07-01 中国银联股份有限公司 System and method for payment and settlement
CN104735152A (en) * 2015-03-30 2015-06-24 四川神琥科技有限公司 Mail reading method based on network
US10740328B2 (en) 2016-06-24 2020-08-11 Microsoft Technology Licensing, Llc Aggregate-query database system and processing
CN106210022A (en) * 2016-06-29 2016-12-07 天涯社区网络科技股份有限公司 A kind of system and method for processing forum's height concurrent data requests
CN107657419A (en) * 2016-07-25 2018-02-02 武汉票据交易中心有限公司 The processing method and relevant apparatus and server of a kind of operation flow
CN106529917A (en) * 2016-12-15 2017-03-22 平安科技(深圳)有限公司 Workflow processing method and device
CN106529917B (en) * 2016-12-15 2020-07-03 平安科技(深圳)有限公司 Workflow processing method and device
US10552435B2 (en) 2017-03-08 2020-02-04 Microsoft Technology Licensing, Llc Fast approximate results and slow precise results
CN112926206A (en) * 2021-02-25 2021-06-08 北京工业大学 Workflow engine cache elimination method based on industrial process background
CN112926206B (en) * 2021-02-25 2024-04-26 北京工业大学 Workflow engine cache elimination method based on industrial process background
CN115857907A (en) * 2023-02-06 2023-03-28 卓望数码技术(深圳)有限公司 Business flow dynamic assembly system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20091007