CN107943963A - Mass data distributed rule engine operation system based on cloud platform - Google Patents
- Publication number: CN107943963A (application CN201711209612.1A)
- Authority
- CN
- China
- Prior art keywords
- target object
- distributed
- memory management
- rule engine
- cloud platform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/27 — Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
- G06F16/2471 — Distributed queries
- G06F9/546 — Message passing systems or structures, e.g. queues
- G06F9/547 — Remote procedure calls [RPC]; Web services
- G06F2209/544 — Indexing scheme relating to interprogram communication: Remote
- G06F2209/548 — Indexing scheme relating to interprogram communication: Queue
Abstract
The present invention provides a mass-data distributed rule engine operation system based on a cloud platform, comprising the following modules. Data source module: collects target objects from databases. Memory management module: expands single-node storage into distributed storage according to the collected target objects. Policy decision module: realizes task distribution for each target object through a scheduler. Operation execution management module: filters repeated and redundant requests according to the task distribution of the target objects. The invention redesigns the two key components of memory management and operation execution, provides operation execution through a multi-executor remote procedure call framework, and thereby realizes a mass-data distributed rule engine operation strategy on a cloud platform. The rule engine in the provided system is a rule-generation system widely used in artificial intelligence and business management; its use promotes the separation of programmers from policy experts.
Description
Technical field
The present invention relates to an engine operation system, and in particular to a cloud-platform-based mass-data distributed rule engine operation system.
Background technology
A rule engine is a component embedded in an application that separates business rules from application code. A rule engine expresses business rules in a specific grammar; it can receive data input, interpret the business rules, and make corresponding decisions according to them. Open-source rule engines such as Drools, typically written in Java, can usually run only on a single server; by optimizing pattern matching with the Rete algorithm they can evaluate the written rules efficiently and cope with traditional complex event processing. However, when faced with today's massive terabyte (TB)-scale big data, where thousands of events must be analyzed in a short time, traditional rule engines are no longer applicable.
Current traditional rule engines have a major defect: they can only run on a single server, and show obvious limitations in performance, scalability, and availability, leaving them powerless in mass-data application scenarios. Aiming at today's rapidly developing big-data ecosystem, the present invention combines a traditional rule engine kernel with more than ten open-source big-data processing platforms and, through newly designed working-memory management and operation execution management components, migrates the rule engine from a stand-alone environment into a distributed cluster environment.
Therefore, to overcome the inefficiency of existing rule engines and their inapplicability to big-data environments, a cloud-platform-based distributed rule engine strategy is introduced to give users multiplied operation and execution efficiency. Because the rule engine that formerly ran in a stand-alone environment must be migrated into a distributed cluster environment, traditional working-memory management and operation execution strategies can no longer guarantee efficient processing of large-scale data. A complete distributed rule engine strategy system, covering memory management, policy decision, and operation execution, is therefore urgently needed to balance and improve overall efficiency in a big-data environment.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a cloud-platform-based mass-data distributed rule engine operation system.
A mass-data distributed rule engine operation system based on a cloud platform provided according to the present invention comprises the following modules:
Data source module: collects target objects from databases;
Memory management module: expands single-node storage into distributed storage according to the collected target objects;
Policy decision module: realizes task distribution for each target object through a scheduler;
Operation execution management module: filters repeated and redundant requests according to the task distribution of the target objects.
Preferably, the data sources of the target objects include batch data and streaming-data message queues.
Preferably, the memory management module divides the target objects into multiple partitions, and each partition executes its operation strategy separately and in parallel;
the memory management module includes a discrete distributed memory management submodule;
the discrete distributed memory management submodule divides the target objects into multiple partitions, executes a policy decision on each partition in parallel, and forms multiple autonomous working memories;
the multiple autonomous working memories form a cluster.
Preferably, the memory management module includes a unified memory management submodule;
the unified memory management submodule forms the target objects into an in-memory database and stores the data in the distributed rule engine cluster.
Preferably, the policy decision module divides the target objects into multiple partitions; each partition triggers a rule engine and executes its strategy independently.
Preferably, the operation execution management module is a master-slave architecture.
Preferably, the operation execution management module includes the following submodules:
Service queue submodule: shares multiple independent working memories or stored data within one runtime environment;
Micro-service submodule: shares sub-operations of multiple independent working memories or stored data within the same runtime environment;
Web service registry submodule: records and monitors the execution status of sub-operations.
Preferably, the discrete distributed memory management submodule includes a map partition and a reduce partition;
Map partition: divides complex target objects into multiple simple partitions, executes the corresponding policy decision on each partition in parallel, and forms multiple autonomous working memories;
Reduce partition: merges the multiple autonomous working memories to verify the accuracy of the rule results.
The present invention also provides a mass-data distributed rule engine operation method based on a cloud platform, comprising the following steps:
Data source step: collecting target objects from databases;
Memory management step: expanding single-node storage into distributed storage according to the collected target objects;
Policy decision step: executing a scheduler with a computing engine to realize task distribution for each target object;
Operation execution management step: filtering repeated and redundant requests according to the task distribution of the target objects.
Preferably, in the memory management step the target objects are divided into multiple partitions, and each partition executes its operation strategy separately and in parallel;
the memory management step includes a discrete distributed memory management sub-step;
the discrete distributed memory management sub-step divides the target objects into multiple partitions, executes a policy decision on each partition in parallel, and forms multiple autonomous working memories;
the multiple autonomous working memories form a cluster;
the memory management step further includes a unified memory management step;
the unified memory management step forms the target objects into an in-memory database and stores the data in the distributed rule engine cluster;
in the policy decision step, the target objects are divided into multiple partitions, each partition triggers a rule engine, and the strategies are executed independently;
the operation execution management step includes the following sub-steps:
Service queue sub-step: sharing multiple independent working memories or stored data within one runtime environment;
Micro-service sub-step: sharing sub-operations of multiple independent working memories or stored data within the same runtime environment;
Web service registry sub-step: recording and monitoring the execution status of sub-operations;
the discrete distributed memory management sub-step includes a map partition and a reduce partition;
Map partition: dividing complex target objects into multiple simple partitions, executing the corresponding policy decision on each partition in parallel, and forming multiple autonomous working memories;
Reduce partition: merging the multiple autonomous working memories to verify the accuracy of the rule results.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention performs logic judgment with a traditional rule engine kernel, redesigns the two key components of memory management and operation execution, and provides operation execution through a multi-executor remote procedure call framework, thereby realizing a mass-data distributed rule engine operation strategy on a cloud platform.
2. The rule engine (Rule Engine) in the cloud-platform-based mass-data distributed rule engine operation strategy system provided by the invention is a rule-generation system widely used in artificial intelligence and business management. Its use promotes the separation of programmers from policy experts, supports complex event processing, and suits rapidly changing, complex application scenarios.
3. The system uses distributed databases and message queues as data sources, provides working memory with an in-memory database, performs logic judgment with the aid of a traditional rule engine kernel, and provides operation execution with a multi-executor remote procedure call (RPC) framework. On the one hand it extends the application scenarios of the complex event processing (CEP) field; on the other it adds a new member for complex logic judgment to the big-data ecosystem. It is a pioneering rule engine operation strategy.
4. The system, i.e. the distributed rule engine, has been evaluated. The evaluation shows that the invention improves rule judgment speed several times over traditional rule engine operation, achieving good results.
5. A traditional rule engine manages working memory on a single host and has only one executor for operation execution, making it difficult to perform pattern matching and data manipulation in parallel on large-scale datasets. The RED system designed and realized by the invention uses the HBase distributed database and Kafka message queues as data sources, together with the redesigned working-memory management modules DMM (distributed memory management for multiple hosts) and UMM (unified memory management), and provides operation execution by multiple executors such as Drools, Spark on Yarn, and ZeroMQ, improving the practicality and scalability of the system while greatly improving rule judgment efficiency.
6. The invention creatively realizes a rule engine that formerly ran in a traditional stand-alone environment in a mass-data distributed environment, extending the application scenarios while significantly improving rule judgment efficiency.
Brief description of the drawings
Other features, objects, and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
Fig. 1 is the overall architecture diagram of the cloud-platform-based mass-data distributed rule engine operation system provided by the invention.
Fig. 2 is the RED framework flow chart under the DMM mode of the system.
Fig. 3 is the RED rule engine execution management architecture diagram of the system.
Fig. 4 shows the target-object execution performance test results of the system.
Embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any way. It should be pointed out that those of ordinary skill in the art can make several changes and improvements without departing from the inventive concept; these all belong to the protection scope of the invention.
As shown in Fig. 1, the present invention provides a mass-data distributed rule engine operation system based on a cloud platform, comprising the following modules. Data source module: collects target objects from databases. Memory management module: expands single-node storage into distributed storage according to the collected target objects. Policy decision module: realizes task distribution for each target object through a thread scheduler. Operation execution management module: filters repeated and redundant requests according to the task distribution of the target objects.
The data sources of the target objects include batch data and streaming-data message queues.
The memory management module divides the target objects into multiple partitions, and each partition executes its operation strategy separately and in parallel. It includes a discrete distributed memory management submodule, which divides the target objects into multiple partitions, executes a policy decision on each partition in parallel, and forms multiple autonomous working memories; the multiple autonomous working memories form a cluster.
The memory management module also includes a unified memory management submodule, which forms the target objects into an in-memory database and stores the data in the distributed rule engine cluster.
The policy decision module divides the target objects into multiple partitions; each partition triggers a rule engine and executes its strategy independently.
The operation execution management module is a master-slave architecture.
The operation execution management module includes the following submodules. Service queue submodule: shares multiple independent working memories or stored data within one runtime environment. Micro-service submodule: shares sub-operations of multiple independent working memories or stored data within the same runtime environment. Web service registry submodule: records and monitors the execution status of sub-operations.
The discrete distributed memory management submodule includes a map partition and a reduce partition.
Map partition: divides complex target objects into multiple simple partitions, executes the corresponding policy decision on each partition in parallel, and forms multiple autonomous working memories.
Reduce partition: merges the multiple autonomous working memories to verify the accuracy of the rule results.
The present invention also provides a mass-data distributed rule engine operation method based on a cloud platform, comprising the following steps. Data source step: collecting target objects from databases. Memory management step: expanding single-node storage into distributed storage according to the collected target objects. Policy decision step: executing a scheduler with a computing engine to realize task distribution for each target object. Operation execution management step: filtering repeated and redundant requests according to the task distribution of the target objects.
In the memory management step, the target objects are divided into multiple partitions, and each partition executes its operation strategy separately and in parallel. The step includes a discrete distributed memory management sub-step, which divides the target objects into multiple partitions, executes a policy decision on each partition in parallel, and forms multiple autonomous working memories; the multiple autonomous working memories form a cluster. The memory management step further includes a unified memory management step, which forms the target objects into an in-memory database and stores the data in the distributed rule engine cluster. In the policy decision step, the target objects are divided into multiple partitions, each partition triggers a rule engine, and the strategies are executed independently. The operation execution management step includes the following sub-steps. Service queue sub-step: sharing multiple independent working memories or stored data within one runtime environment. Micro-service sub-step: sharing sub-operations of multiple independent working memories or stored data within the same runtime environment. Web service registry sub-step: recording and monitoring the execution status of sub-operations. The discrete distributed memory management sub-step includes a map partition and a reduce partition. Map partition: dividing complex target objects into multiple simple partitions, executing the corresponding policy decision on each partition in parallel, and forming multiple autonomous working memories. Reduce partition: merging the multiple autonomous working memories to verify the accuracy of the rule results.
Specifically, the cloud-platform-based mass-data distributed rule engine operation system provided by the invention is an elastically scalable strategy system composed of four main modules: the data source module (Fact Source Module), the memory management module (Memory Factory Module), the policy decision module (Policy Decision Module), and the operation execution management module (Operation Execution Module). The memory management, policy decision, and operation execution management modules are the newly designed key components of the invention, by which the rule engine is migrated from a stand-alone environment into a distributed cluster environment. Each module is described in detail below.
1. Data source module: the data source module collects target objects from databases and inserts the collected target objects into the memory management module of the RED system. The data sources in the system include distributed batch data in HBase and streaming-data message queues in Kafka.
2. Memory management module: the memory module expands memory from single-node storage to distributed storage. The invention designs the discrete memory module DMM (Discrete Memory Management) and the unified memory module UMM (United Memory Management) to extend memory storage. The DMM and UMM modules are introduced separately below.
(1) Discrete distributed memory management module (DMM): the discrete memory management module divides the data objects into several partitions and executes a policy decision process on each partition separately, so that several independent working memories exist in the cluster. As shown in Fig. 2, the typical DMM module in the RED distributed rule engine system includes two steps: map partition (Map Partition) and reduce partition (Reduce Partition). The reduce partition is a design added to overcome the problem that a data-source change in one working memory cannot trigger rules in other working memories; the reduce-partition step merges the mutually independent working memories, thereby guaranteeing the accuracy of rule judgment results. In the invention, the data processing flow includes the following steps. First, data is collected from data sources such as Kafka. Then, the data is converted into the rule engine's data classes. Next, these data classes are divided into several different partitions, each of which independently and in parallel realizes policy decision and operation execution management. Finally, after the RED system completes the tasks in each partition, all partitions are merged in the reduce step to detect whether any rules were lost and left unexecuted because of memory isolation.
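The map/reduce flow just described can be sketched in a few lines of Python. This is a conceptual illustration only, not the RED system's actual code: the toy rule set, fact layout, and function names (`partition`, `evaluate_rules`, `dmm_run`) are all invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(facts, n):
    """Map step: split the fact set into n partitions (round-robin here)."""
    return [facts[i::n] for i in range(n)]

def evaluate_rules(facts):
    """Stand-in for one autonomous working memory: fire every rule whose
    condition matches a fact and return the (rule, fact id) conclusions."""
    rules = [
        ("high_value", lambda f: f["value"] > 100),
        ("even_value", lambda f: f["value"] % 2 == 0),
    ]
    return {(name, f["id"]) for f in facts for name, cond in rules if cond(f)}

def dmm_run(facts, n=4):
    parts = partition(facts, n)
    # Map: each partition is evaluated in parallel as an independent memory.
    with ThreadPoolExecutor(max_workers=n) as pool:
        partials = list(pool.map(evaluate_rules, parts))
    fired = set().union(*partials) if partials else set()
    # Reduce: merge the isolated memories and re-evaluate, so conclusions
    # missed because of memory isolation are detected and added.
    missed = evaluate_rules([f for p in parts for f in p]) - fired
    return fired | missed
```

With per-fact rules like these the reduce step finds nothing extra; its value appears once rules join facts across partitions, which is exactly the isolation problem the text describes.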
(2) Unified memory management module (UMM): compared with the source data objects, the working memory needs less storage space. We can therefore store the data on different nodes and keep their index in one unified working memory. The unified memory management module constructs an in-memory database and stores the data in the distributed rule engine cluster, replacing the several working memories of the discrete design. In the invention, this in-memory database is built with Geode. Geode is a mature, robust data-management platform that provides real-time, consistent access to data for critical applications across cloud-platform architectures. When a user inserts or updates data in the working memory, an insert/update command is in fact executed in Geode; when a user performs pattern matching in the working memory, a data request is actually made to look up matching memory records. The design of the UMM module is based on the Drools rule engine, and the invention modifies the source code of Drools memory management to adapt to this unified in-memory database. In short, the UMM treats the working memory as a whole, while the source data objects remain stored on their own hosts.
3. Policy decision management module: the policy decision module uses the Spark on Yarn execution scheduler to allocate tasks to different Yarn containers. Each container starts a Drools rule engine to perform rule judgment and strategy selection. Likewise, the Yarn manager manages the distributed tasks and recovers containers that crash on errors. In the RED system, the data source is divided into several different partitions, and each partition executes its operation strategy in parallel; that is, each partition can trigger a Drools rule engine and make policy decisions independently.
4. Operation execution management module: the operation execution module is a master-slave architecture; it organizes operation requests in service queues and filters repeated and redundant requests. Big-data-related operations can also be integrated into the big-data ecosystem. As shown in Fig. 3, the invention divides operation requests into several micro-requests (atomic requests) and designs a service queue (service queue) to organize them, so that service requests are sent to big-data executors and repeated computation during operation execution is avoided. The main parts are introduced below.
(1) Service queue: big-data-related operations usually require a runtime environment, which is responsible for process management, database interaction, and other system-level tasks. For example, a Spark context starts in Spark-related executors, and a Hive context starts in Hive-related executors. Compared with the runtime environments of other applications, the runtime environments required by mass-data operations can be very large. If every operation used its own independent runtime environment, cluster resources such as memory and CPU would all be occupied by these environments, and other operations could hardly proceed. In the invention, we design a service queue (service queue) for the operation execution management module to avoid such resource contention, so that in most cases the data of different operations can share one runtime environment. For example, if the rule engine sends 100 SparkML prediction operations, the system would otherwise trigger 100 SparkML runtime environments; yet these 100 contexts are virtually identical, and a single SparkML context suffices to handle all the tasks. Based on this context-sharing idea, the rule engine manager does not trigger the operations directly but sends the operation requests to a remote procedure call (RPC) broker. After receiving a request, the RPC broker forwards it to the required executor. Each executor in Fig. 3 is equipped with a dedicated runtime environment to execute the requests.
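The context-sharing behaviour of the service queue and RPC broker can be modelled in a short Python sketch. The broker, queue, and context objects here are stand-ins invented for the example; a real deployment would hold something like one SparkContext per executor type rather than a string.

```python
from collections import defaultdict, deque

class RpcBroker:
    """Toy broker: one service queue and one shared runtime context
    per executor type, however many requests arrive."""

    def __init__(self):
        self.queues = defaultdict(deque)   # executor type -> service queue
        self.contexts = {}                 # executor type -> shared context
        self.contexts_started = 0

    def submit(self, executor_type, request):
        # The rule engine manager never starts a context itself;
        # it only enqueues the request with the broker.
        self.queues[executor_type].append(request)

    def drain(self, executor_type, handler):
        # Lazily start exactly one shared runtime environment per type.
        if executor_type not in self.contexts:
            self.contexts[executor_type] = f"{executor_type}-context"
            self.contexts_started += 1
        ctx, q = self.contexts[executor_type], self.queues[executor_type]
        results = []
        while q:
            results.append(handler(ctx, q.popleft()))
        return results
```

Submitting the text's 100 SparkML predictions through such a broker starts one context, not 100, which is the resource-contention point the paragraph makes.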
(2) Micro-services: a big-data-related operation is usually divided into several sub-operations, and different big-data operations often share some of them. Resilient distributed datasets (Resilient Distributed Datasets, RDD) therefore need to avoid repeating such sub-operations. Take the operation "insert item ' ' into a Hive data table" as an example; it includes three sub-operations: creating a temporary table (sub-operation 1); appending the content to the temporary table (sub-operation 2); and copying the content from the temporary table to the target table (sub-operation 3). If 100 data objects execute this request, sub-operation 1 would need to be executed 100 times. Obviously, sub-operation 1 only needs to be executed once, on the first call; all other operations can then share its result without repeating it. Therefore, if we divide a complex operation into several micro-services and make full use of the shared ones, the overhead can be significantly reduced. The invention also gives users the option of choosing whether micro-service optimization is needed.
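The arithmetic of the Hive example above (one shared table creation plus 100 appends plus 100 copies, instead of 300 executions) can be checked with a small sketch. The sub-operation names follow the text; the function names and request layout are invented for the illustration.

```python
def decompose(table, item):
    """Split the 'insert item into Hive table' operation into its three
    sub-operations; only sub-operation 1 is shareable across items."""
    return [
        ("create_temp_table", table),      # sub-operation 1 (shared)
        ("append_to_temp", table, item),   # sub-operation 2
        ("copy_to_target", table, item),   # sub-operation 3
    ]

def run_with_sharing(requests):
    """Execute each distinct sub-operation once; later identical
    sub-operations reuse the earlier result instead of re-running."""
    done, executions = set(), 0
    for table, item in requests:
        for sub in decompose(table, item):
            if sub in done:
                continue  # shared micro-service: result reused
            done.add(sub)
            executions += 1
    return executions
```

For 100 distinct items on one table this yields 201 executions rather than 300, matching the saving the paragraph describes.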
(3) Web service registry: in micro-service optimization, some sub-operations are executed only once and shared by many operations. A service registry data table is therefore needed to record whether a given sub-operation has already been executed. If the data table contains no identical record, the RED system executes the sub-operation and writes it into the table; otherwise, the RED system directly uses the result of the previous sub-operation. Each record in the registry consists of two parts. The first part stores the micro-operation information, such as the operation name and operation parameters; micro-service optimization is triggered only when two sub-operations have the same operation name and operation parameters. The second part stores the location of the computed result; when micro-service optimization is triggered, the RED system finds this location and reads the previous result. In a Hive executor, this location is a Hive table; in a SparkML executor, it is a data object in memory.
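The two-part registry record can be sketched as a memoization table in Python. `ServiceRegistry` and its method names are illustrative, and the "result location" here is simply whatever the compute callback returns (standing in for a Hive table name or an in-memory object).

```python
class ServiceRegistry:
    """Toy registry: part one of each record is (operation name, operation
    parameters); part two is where the computed result lives."""

    def __init__(self):
        self.table = {}       # (name, params) -> result location
        self.executions = 0

    def execute(self, name, params, compute):
        # Part one: the optimization triggers only on an exact match of
        # operation name and parameters.
        key = (name, tuple(sorted(params.items())))
        if key in self.table:
            # Identical record found: reuse the stored result location.
            return self.table[key]
        self.executions += 1
        # Part two: record where the freshly computed result lives.
        location = compute(**params)
        self.table[key] = location
        return location
```

A second call with the same name and parameters returns the recorded location without recomputation, which is the lookup behaviour the text attributes to the RED system.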
As shown in Fig. 4, the traditional Drools rule engine and the RED distributed rule engine strategies are compared in performance. The figure shows that the overall performance of both the RED-DMM and RED-UMM operation strategies is substantially better than that of the traditional rule engine. Under the basic usage scenario, the object judgment times of the three rule engines show no great difference; under the other three usage scenarios, however, the judgment times of RED-DMM and RED-UMM are considerably lower than that of Drools, with the average execution time reduced by a factor of more than about 42.
Specific embodiments of the present invention have been described above. It is to be understood that the invention is not limited to the particular embodiments described; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substantive content of the present invention. Where no conflict arises, the embodiments of this application, and the features within those embodiments, may be combined with one another arbitrarily.
Claims (10)
1. A cloud-platform-based mass-data distributed rule engine operation system, characterized by comprising the following modules:
Data source module: collects target objects from databases;
Memory management module: expands single-node storage into distributed storage according to the collected target objects;
Policy decision module: realizes the task distribution of each target object through a scheduler;
Operation execution management module: filters repeated and redundant requests according to the task distribution of the target objects.
2. The cloud-platform-based mass-data distributed rule engine operation system according to claim 1, characterized in that the data sources of the target objects include batch data and streaming-data message queues.
3. The cloud-platform-based mass-data distributed rule engine operation system according to claim 1, characterized in that the memory management module divides the target objects into multiple partitions, each of which executes its operation strategy separately and in parallel;
the memory management module comprises a discrete distributed memory management submodule;
Discrete distributed memory management submodule: divides the target objects into multiple partitions, executes a strategy decision on each partition in parallel, and forms multiple autonomous working memories;
the multiple autonomous working memories form a cluster.
4. The cloud-platform-based mass-data distributed rule engine operation system according to claim 1, characterized in that the memory management module comprises a unified memory management module;
Unified memory management module: forms the target objects into an in-memory database and stores the data in the distributed rule engine cluster.
5. The cloud-platform-based mass-data distributed rule engine operation system according to claim 3, characterized in that the policy decision module divides the target objects into multiple partitions, each of which triggers a rule engine and executes its strategy independently.
6. The cloud-platform-based mass-data distributed rule engine operation system according to claim 1, characterized in that the operation execution management module is of a master-slave architecture.
7. The cloud-platform-based mass-data distributed rule engine operation system according to claim 1, characterized in that the operation execution management module comprises the following submodules:
Service queue submodule: shares multiple independent working memories or stored data within one real-time environment;
Micro-service submodule: shares sub-operations among multiple independent working memories or stored data within the same real-time environment;
Web service registry submodule: records and monitors the execution status of sub-operations.
8. The cloud-platform-based mass-data distributed rule engine operation system according to claim 3, characterized in that the discrete distributed memory management submodule comprises a map partition and a reduce partition;
Map partition: divides complex target objects into multiple simple partitions, executes the corresponding strategy decision on each partition in parallel, and forms multiple autonomous working memories;
Reduce partition: merges the multiple autonomous working memories to ensure the accuracy of rule evaluation results.
9. A cloud-platform-based mass-data distributed rule engine operation method, characterized by comprising the following steps:
Data source step: collecting target objects from databases;
Memory management step: expanding single-node storage into distributed storage according to the collected target objects;
Strategy decision step: executing a scheduler with a computing engine to realize the task distribution of each target object;
Operation execution management step: filtering repeated and redundant requests according to the task distribution of the target objects.
10. The cloud-platform-based mass-data distributed rule engine operation method according to claim 9, characterized in that the memory management step divides the target objects into multiple partitions, each of which executes its operation strategy separately and in parallel;
the memory management step includes a discrete distributed memory management sub-step;
Discrete distributed memory management sub-step: dividing the target objects into multiple partitions, executing a strategy decision on each partition in parallel, and forming multiple autonomous working memories;
the multiple autonomous working memories form a cluster;
the memory management step further includes a unified memory management step;
Unified memory management step: forming the target objects into an in-memory database and storing the data in the distributed rule engine cluster;
the strategy decision step divides the target objects into multiple partitions, each of which triggers a rule engine and executes its strategy independently;
the operation execution management step includes the following sub-steps:
Service queue sub-step: sharing multiple independent working memories or stored data within one real-time environment;
Micro-service sub-step: sharing sub-operations among multiple independent working memories or stored data within the same real-time environment;
Web service registry sub-step: recording and monitoring the execution status of sub-operations;
the discrete distributed memory management sub-step includes a map partition step and a reduce partition step;
Map partition step: dividing complex target objects into multiple simple partitions, executing the corresponding strategy decision on each partition in parallel, and forming multiple autonomous working memories;
Reduce partition step: merging the multiple autonomous working memories to ensure the accuracy of rule evaluation results.
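The map/reduce partitioning recited in claims 8 and 10 can be sketched roughly as follows (an illustrative reading, not the patented implementation): target objects are split into partitions, each partition is evaluated by an independent working memory in parallel (map), and the partial results are then merged (reduce). All function names below are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_partition(objects, rule):
    # "map" step: one autonomous working memory evaluates its own partition
    return [o for o in objects if rule(o)]

def map_reduce_rules(objects, rule, partitions=4):
    chunks = [objects[i::partitions] for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        partial = pool.map(evaluate_partition, chunks, [rule] * partitions)
    merged = []                      # "reduce" step: merge working memories
    for part in partial:
        merged.extend(part)
    return sorted(merged)

matched = map_reduce_rules(list(range(10)), lambda x: x % 2 == 0)
print(matched)  # [0, 2, 4, 6, 8]
```

Each chunk is evaluated independently, which is what allows the rule engine instances to run in parallel; the final merge restores a single result set whose contents match a sequential evaluation.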
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711209612.1A CN107943963A (en) | 2017-11-27 | 2017-11-27 | Mass data distributed rule engine operation system based on cloud platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107943963A true CN107943963A (en) | 2018-04-20 |
Family
ID=61949146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711209612.1A Pending CN107943963A (en) | 2017-11-27 | 2017-11-27 | Mass data distributed rule engine operation system based on cloud platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107943963A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104679790A (en) * | 2013-12-03 | 2015-06-03 | 富士通株式会社 | Distributed rule engine system, distributed rule engine construction method and rule processing method |
CN106777029A (en) * | 2016-12-08 | 2017-05-31 | 中国科学技术大学 | A kind of distributed rule automotive engine system and its construction method |
Non-Patent Citations (2)
Title |
---|
Zhang Qi: "Research and Implementation of a Distributed Rule Matching System Based on MapReduce", China Master's Theses Full-text Database, Information Science and Technology Series * |
Wang Jinghan: "Design and Implementation of a Message-Passing-Based Distributed Rule Engine for Big Data", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109299150A (en) * | 2018-10-24 | 2019-02-01 | 万惠投资管理有限公司 | A kind of configurable multi-data source adaptation rule engine solution |
CN109299150B (en) * | 2018-10-24 | 2022-01-28 | 万惠投资管理有限公司 | Configurable multi-data-source adaptation rule engine solution method |
CN110443512A (en) * | 2019-08-09 | 2019-11-12 | 北京思维造物信息科技股份有限公司 | A kind of regulation engine and regulation engine implementation method |
CN110445793A (en) * | 2019-08-13 | 2019-11-12 | 四川长虹电器股份有限公司 | A kind of analysis method for the analysis engine possessing the irredundant calculating of node thread rank |
CN110580203A (en) * | 2019-08-19 | 2019-12-17 | 武汉长江通信智联技术有限公司 | Data processing method, device and system based on elastic distributed data set |
CN113923212A (en) * | 2020-06-22 | 2022-01-11 | 大唐移动通信设备有限公司 | Network data packet processing method and device |
CN112131014A (en) * | 2020-09-02 | 2020-12-25 | 广州市双照电子科技有限公司 | Decision engine system and business processing method thereof |
CN112131014B (en) * | 2020-09-02 | 2024-01-26 | 广州市双照电子科技有限公司 | Decision engine system and business processing method thereof |
CN112381501A (en) * | 2020-11-05 | 2021-02-19 | 上海汇付数据服务有限公司 | Product operation platform system |
CN112381501B (en) * | 2020-11-05 | 2024-06-07 | 上海汇付支付有限公司 | Product operation platform system |
CN113568610A (en) * | 2021-09-28 | 2021-10-29 | 国网江苏省电力有限公司营销服务中心 | Method for implementing business rule engine library system of electric power marketing system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107943963A (en) | Mass data distributed rule engine operation system based on cloud platform | |
CN107239335B (en) | Job scheduling system and method for distributed system | |
US8725707B2 (en) | Data continuous SQL process | |
US8126909B2 (en) | System and method for analyzing data records | |
CN101256516B (en) | Distribution of data and task instances in grid environments | |
US9395954B2 (en) | Project planning and debugging from functional decomposition | |
US9135071B2 (en) | Selecting processing techniques for a data flow task | |
CN108762900A (en) | High frequency method for scheduling task, system, computer equipment and storage medium | |
CN106663075A (en) | Executing graph-based program specifications | |
CN111949454A (en) | Database system based on micro-service component and related method | |
CN111736964A (en) | Transaction processing method and device, computer equipment and storage medium | |
US7444350B1 (en) | Method and apparatus for processing management information | |
CN102968339A (en) | System and method for realizing complicated event handling based on cloud computing architecture | |
Seol et al. | Design process modularization: concept and algorithm | |
CN110516000A (en) | A kind of Workflow Management System for supporting complex work flow structure | |
CN113641739B (en) | Spark-based intelligent data conversion method | |
Shahoud et al. | A meta learning approach for automating model selection in big data environments using microservice and container virtualization technologies | |
Gu et al. | Characterizing job-task dependency in cloud workloads using graph learning | |
CN111552847A (en) | Method and device for changing number of objects | |
Dobler | Implementation of a time step based parallel queue simulation in MATSim | |
CN112559603A (en) | Feature extraction method, device, equipment and computer-readable storage medium | |
CN110309177A (en) | A kind of method and relevant apparatus of data processing | |
US20240095243A1 (en) | Column-based union pruning | |
CN113918623B (en) | Method and device for calculating times of wind control related behaviors | |
Ciuffreda | Distributed systems for neural network models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180420 |