CN106302778A - Distributed process automation engine system - Google Patents

Distributed process automation engine system

Info

Publication number
CN106302778A
CN106302778A
Authority
CN
China
Prior art keywords
layer
flow
service
engine
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610723817.0A
Other languages
Chinese (zh)
Inventor
廖子常
钟坚
廖小文
张林坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Eshore Technology Co Ltd
Original Assignee
Guangdong Eshore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Eshore Technology Co Ltd filed Critical Guangdong Eshore Technology Co Ltd
Priority to CN201610723817.0A priority Critical patent/CN106302778A/en
Publication of CN106302778A publication Critical patent/CN106302778A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Abstract

The invention discloses a distributed process automation engine system comprising a service application layer, an access layer, a flow engine application service layer, and a data storage layer. With the invention, a distributed collaborative workflow engine that service applications can easily integrate with can be built rapidly; peak request volumes are handled elastically; flow engine service nodes can conveniently be added online without shutting the system down, increasing workflow processing capacity; and workflows spanning different management nodes can be processed collaboratively.

Description

Distributed process automation engine system
Technical field
The present invention relates to the technical field of information processing, and in particular to a distributed process automation engine system.
Background art
Telecommunication systems handle an enormous volume of business transactions, and the business process involves many processing steps, so a sophisticated workflow engine is required to support business processing. The throughput that a conventional single-machine flow engine can support is limited by the performance of the machine, and it can hardly bear such a huge volume of processing requests. Moreover, business data and business processing are partitioned by city into multiple management nodes, yet a considerable amount of business requires collaborative processing across cities. Business processing systems handle such cross-node workflow collaboration through complex application-level interaction or manual procedures, which results in low processing efficiency and poor workflow continuity. Therefore, to overcome the single-machine performance bottleneck and to support workflow collaboration across management regions, a distributed process automation engine system is needed.
Summary of the invention
The object of the present invention is to overcome the defects of the prior art by providing a distributed process automation engine system. With the invention, a distributed collaborative workflow engine that service applications can easily integrate with can be built rapidly, peak request volumes are handled elastically, flow engine service nodes can conveniently be added online without shutting the system down to increase workflow processing capacity, and workflows spanning different management nodes can be processed collaboratively.
To achieve the above object, the invention provides a distributed process automation engine system that comprises four layers, namely a service application layer, an access layer, a flow engine application service layer, and a data storage layer.
Service application layer: a client SDK is developed specifically for integration with service applications, which only need to call its interfaces to use the full capability of the flow engine.
Access layer: a hardware load balancer or a software load balancer handles the access of service applications, distributing requests evenly and improving service availability.
Flow engine application service layer, comprising three parts:
Flow engine cluster: the flow engine nodes in the cluster run statelessly and, combined with the load balancer, support horizontal scale-out of nodes without downtime.
Coordination center cluster: a coordination center cluster is built with the ZooKeeper middleware and is responsible for the distributed coordination of workflows.
Message middleware cluster: combined with the message middleware cluster, distributed message queues decouple the processing chain between modules and support large-scale service requests.
Data storage layer: data is sharded and stored following big-data principles, supporting the storage of large-scale data.
Further, the interfaces of the service application layer include process start, step receipt, suspend/resume, cancellation, rollback, and resend.
Further, a flow engine node comprises a service component, an engine kernel, a rule framework, and configuration management.
Further, the data storage layer supports multiple mainstream relational databases, including MySQL, Oracle, and DB2.
Further, the flow engine nodes access the database through a unified data access layer, which frees them from any concern about how the data is stored.
The technical solution of the present invention brings the following benefits:
First, with the invention, a distributed collaborative workflow engine that service applications can easily integrate with can be built rapidly, and peak request volumes are handled elastically.
Second, flow engine service nodes can conveniently be added online without shutting the system down, increasing workflow processing capacity.
Third, workflows spanning different management nodes can be processed collaboratively.
Brief description of the drawings
In order to more clearly explain the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the system architecture diagram of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The present invention proposes a distributed flow engine built on distributed message queues and distributed coordination. With reference to the WfMC and BPMN 2.0 specifications, the process model is simplified and supplemented, and the distributed flow engine is implemented on the basis of distributed message queues, distributed coordination, and database sharding, supporting workflows over large-scale data and global management. A matching graphical process design interface and a management interface are provided, making it easy for the business side to design processes and manage them, and a client SDK package is provided so that the interaction between the business system and the flow engine is transparent.
Fig. 1 shows the overall architecture of the distributed process automation engine system:
The whole system architecture comprises four layers: a service application layer, an access layer, a flow engine application service layer, and a data storage layer.
The service application layer provides a client SDK developed specifically for integration with service applications; by calling just a few interfaces a service application can dock with the flow engine completely and use its full capability, covering process start, step receipt, suspend/resume, cancellation, rollback, and resend. A sketch of what such an SDK interface might look like is given below.
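The patent only enumerates these operations and does not define a concrete API, so the following Java interface is merely an illustrative sketch; the interface name, method names, and parameter types are assumptions introduced here for clarity.

```java
import java.util.Map;

/**
 * Hypothetical client SDK facade for the flow engine.
 * The operation list follows the patent; all names and signatures are assumed.
 */
public interface FlowEngineClient {

    /** Starts a new process instance of the given process definition. */
    String startProcess(String processDefinitionId, Map<String, Object> variables);

    /** Submits the receipt (completion result) of a process step. */
    void submitStepReceipt(String processInstanceId, String stepId, Map<String, Object> result);

    /** Suspends a running process instance. */
    void suspend(String processInstanceId);

    /** Resumes a previously suspended process instance. */
    void resume(String processInstanceId);

    /** Cancels (withdraws) a process instance. */
    void cancel(String processInstanceId);

    /** Rolls a process instance back to an earlier step. */
    void rollbackTo(String processInstanceId, String stepId);

    /** Re-sends a step request that was not delivered or processed. */
    void resendStep(String processInstanceId, String stepId);
}
```

A service application would obtain an implementation of this interface from the SDK and never talk to the engine nodes directly.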
The access layer uses a hardware load balancer or a software load balancer to handle the access of service applications, distributing requests evenly and improving service availability.
The flow engine application service layer comprises three parts:
Flow engine cluster: the flow engine nodes in the cluster run statelessly and, combined with the load balancer, support horizontal scale-out of nodes without downtime. A flow engine node comprises a service component, an engine kernel, a rule framework, and configuration management; the stateless request handling that makes online scale-out possible is sketched below.
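To make concrete what "stateless" means for online scale-out, here is a minimal Java sketch under assumed names (the patent does not publish engine code): the node keeps no process state in memory between requests, so any node behind the load balancer can serve any request, and new nodes can be added without coordination.

```java
import java.util.Map;

/**
 * Illustrative sketch of a stateless flow engine node (all names assumed).
 * Process state lives only in the shared storage layer, never in node memory,
 * so any node behind the load balancer can handle any request.
 */
public class FlowEngineNode {

    /** Collaborator contracts, shown only to keep the sketch self-contained. */
    public interface ProcessInstance { void moveTo(String stepId); }
    public interface ProcessInstanceStore {
        ProcessInstance load(String processInstanceId);
        void save(ProcessInstance instance);
    }
    public interface RuleFramework {
        String nextStep(ProcessInstance instance, Map<String, Object> stepResult);
    }

    private final ProcessInstanceStore store; // backed by the data storage layer
    private final RuleFramework rules;        // part of the engine kernel

    public FlowEngineNode(ProcessInstanceStore store, RuleFramework rules) {
        this.store = store;
        this.rules = rules;
    }

    /** Handles one step receipt: load shared state, advance it, persist it. */
    public void handleStepReceipt(String processInstanceId, Map<String, Object> stepResult) {
        ProcessInstance instance = store.load(processInstanceId); // state comes from storage
        String next = rules.nextStep(instance, stepResult);       // routing decided by rules
        instance.moveTo(next);
        store.save(instance);                                     // state goes back to storage
    }
}
```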
Coordination center cluster: a coordination center cluster is built with the ZooKeeper middleware and is responsible for the distributed coordination of workflows; one common node-registration pattern is sketched below.
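The patent does not detail how ZooKeeper is used, so the following is only a sketch of one common pattern, using the Apache ZooKeeper Java client: each engine node registers itself as an ephemeral node in the coordination cluster, so the set of live nodes is always known and cross-node workflow collaboration can be routed accordingly. The connection string, paths, and addresses are placeholders.

```java
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EngineNodeRegistrar {

    private static final String ROOT = "/flow-engine-nodes"; // placeholder path

    public static void main(String[] args) throws Exception {
        // Connect to the coordination center cluster (addresses are placeholders).
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> { });

        // Create the parent node once; another engine node may have created it already.
        try {
            zk.create(ROOT, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException ignored) {
            // Parent already exists, which is fine.
        }

        // Register this engine node as an ephemeral sequential child. The entry
        // disappears automatically if this process dies, so membership stays accurate.
        String path = zk.create(ROOT + "/node-",
                "10.0.0.12:8080".getBytes(StandardCharsets.UTF_8),
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("Registered engine node at " + path);

        // Watch the membership list so collaboration requests can be routed to live nodes.
        zk.getChildren(ROOT, event -> System.out.println("Engine membership changed: " + event));

        Thread.sleep(Long.MAX_VALUE); // keep the ephemeral registration alive for this demo
    }
}
```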
Message middleware cluster: combined with the message middleware cluster, distributed message queues decouple the processing chain between modules and support large-scale service requests; a producer/consumer sketch follows.
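The patent does not name a particular message middleware, so the sketch below assumes a JMS 1.1 broker, with ActiveMQ used purely as an example; the queue name and message format are made up. The point it illustrates is the decoupling: the caller enqueues a step request and returns, while an engine worker consumes requests at its own pace, so both sides can scale independently.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StepRequestQueueDemo {

    private static final String QUEUE_NAME = "flow.step.requests"; // placeholder queue name

    public static void main(String[] args) throws JMSException {
        // Broker address is a placeholder; any JMS-compatible middleware would do.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://mq-host:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Producer side (e.g. behind the SDK/access layer): enqueue the step request
        // and return immediately instead of invoking the engine module synchronously.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue(QUEUE_NAME);
        MessageProducer producer = producerSession.createProducer(queue);
        producer.send(producerSession.createTextMessage(
                "{\"processInstanceId\":\"P-1001\",\"step\":\"audit\"}"));

        // Consumer side (an engine worker): pick up requests asynchronously, which is
        // what decouples the processing chain between modules.
        Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                consumerSession.createConsumer(consumerSession.createQueue(QUEUE_NAME));
        consumer.setMessageListener(message -> {
            try {
                System.out.println("Engine worker received: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }
}
```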
Data storage layer: data is sharded and stored following big-data principles, supporting the storage of large-scale data. Multiple mainstream relational databases such as MySQL, Oracle, and DB2 are supported.
The flow engine nodes access the database through a unified data access layer, which frees them from any concern about how the data is stored; a shard-routing sketch is given below.
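As an illustration of what the unified data access layer could look like, here is a minimal Java/JDBC sketch; the hash-based shard routing rule, class name, and table schema are assumptions, since the patent only states that data is sharded and that engine nodes do not see how it is stored.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

/**
 * Hypothetical unified data access layer: engine nodes call saveCurrentStep()
 * and never see which physical database (MySQL, Oracle, DB2, ...) holds the row.
 */
public class ShardedProcessInstanceDao {

    private final List<DataSource> shards; // one DataSource per physical database

    public ShardedProcessInstanceDao(List<DataSource> shards) {
        this.shards = shards;
    }

    /** Simple hash-based routing on the process instance id (assumed sharding rule). */
    private DataSource shardFor(String processInstanceId) {
        int index = Math.floorMod(processInstanceId.hashCode(), shards.size());
        return shards.get(index);
    }

    /** Persists the current step of a process instance into whichever shard owns it. */
    public void saveCurrentStep(String processInstanceId, String currentStep) throws SQLException {
        String sql = "UPDATE process_instance SET current_step = ? WHERE id = ?"; // assumed schema
        try (Connection conn = shardFor(processInstanceId).getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, currentStep);
            ps.setString(2, processInstanceId);
            ps.executeUpdate();
        }
    }
}
```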
The embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementation of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, those of ordinary skill in the art may, in accordance with the idea of the invention, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.

Claims (5)

1. A distributed process automation engine system, characterized in that the system comprises four layers, namely a service application layer, an access layer, a flow engine application service layer, and a data storage layer;
the service application layer provides a client SDK developed specifically for integration with service applications, which only need to call its interfaces to use the full capability of the flow engine;
the access layer uses a hardware load balancer or a software load balancer to handle the access of service applications, distributing requests evenly and improving service availability;
the flow engine application service layer comprises three parts:
a flow engine cluster, in which the flow engine nodes run statelessly and, combined with the load balancer, support horizontal scale-out of nodes without downtime;
a coordination center cluster, built with the ZooKeeper middleware and responsible for the distributed coordination of workflows;
a message middleware cluster, in which distributed message queues decouple the processing chain between modules and support large-scale service requests;
the data storage layer shards and stores data following big-data principles, supporting the storage of large-scale data.
2. The system according to claim 1, characterized in that the interfaces of the service application layer include process start, step receipt, suspend/resume, cancellation, rollback, and resend.
3. The system according to claim 1, characterized in that a flow engine node comprises a service component, an engine kernel, a rule framework, and configuration management.
4. The system according to claim 1, characterized in that the data storage layer supports multiple mainstream relational databases, including MySQL, Oracle, and DB2.
5. The system according to any one of claims 1 to 4, characterized in that the flow engine nodes access the database through a unified data access layer, which frees them from any concern about how the data is stored.
CN201610723817.0A 2016-08-25 2016-08-25 Distributed process automation engine system Pending CN106302778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610723817.0A CN106302778A (en) 2016-08-25 2016-08-25 Distributed process automation engine system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610723817.0A CN106302778A (en) 2016-08-25 2016-08-25 Distributed process automation engine system

Publications (1)

Publication Number Publication Date
CN106302778A true CN106302778A (en) 2017-01-04

Family

ID=57616414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610723817.0A Pending CN106302778A (en) 2016-08-25 2016-08-25 Distributed process automation engine system

Country Status (1)

Country Link
CN (1) CN106302778A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7221377B1 (en) * 2000-04-24 2007-05-22 Aspect Communications Apparatus and method for collecting and displaying information in a workflow system
CN101321181A (en) * 2008-07-17 2008-12-10 上海交通大学 Distributed service flow engine management system based on fuzzy control
CN102594870A (en) * 2011-05-31 2012-07-18 北京亿赞普网络技术有限公司 Cloud computing platform, cloud computing system and service information publishing method for cloud computing system
CN105630589A (en) * 2014-11-24 2016-06-01 航天恒星科技有限公司 Distributed process scheduling system and process scheduling and execution method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389352A (en) * 2017-08-02 2019-02-26 余汶龙 Consumption endowment management system
CN108804241A (en) * 2018-05-21 2018-11-13 平安科技(深圳)有限公司 Cross-platform method for scheduling task, system, computer equipment and storage medium
WO2019223178A1 (en) * 2018-05-21 2019-11-28 平安科技(深圳)有限公司 Cross-platform task scheduling method and system, computer device, and storage medium
CN111612424A (en) * 2020-05-21 2020-09-01 浩云科技股份有限公司 Flow management method and device based on free task node and readable storage medium
CN111612424B (en) * 2020-05-21 2024-03-19 浩云科技股份有限公司 Flow management method and device based on free task node and readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170104