CN107368503A - Kettle-based data synchronization method and system - Google Patents
Kettle-based data synchronization method and system
- Publication number
- CN107368503A CN107368503A CN201610320280.3A CN201610320280A CN107368503A CN 107368503 A CN107368503 A CN 107368503A CN 201610320280 A CN201610320280 A CN 201610320280A CN 107368503 A CN107368503 A CN 107368503A
- Authority
- CN
- China
- Prior art keywords
- kettle
- data
- data source
- information
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Abstract
The invention provides a Kettle-based data synchronization method and system. The method includes: configuring data source information and parameter information through a client, and storing the data source configuration information and parameter configuration information in a database; editing, through a Kettle graphical interface of the client, a Kettle file containing data flow logic; uploading the Kettle file to a Kettle execution engine; dynamically parsing the Kettle file; and, according to the data flow logic and corresponding parameter configuration information, and based on the data source configuration information, transferring the corresponding data of a source data source to a target data source. The system includes a client and a data acquisition platform: the client is used for configuring information and for editing and uploading Kettle files; the data acquisition platform is used for receiving and dynamically parsing the Kettle files and, according to the data source configuration information, transferring the corresponding data of the source data source to the target data source. The invention can direct data flow between arbitrarily designated data sources quickly and conveniently, with low coupling between systems.
Description
Technical field
The present invention relates to the technical field of data processing, and in particular to a Kettle-based data synchronization method and system.
Background art
At present, in most enterprises, business systems keep increasing and iterating as the business expands, and many associations form between systems, as intricate as a cobweb. When an existing system can no longer meet business demands, the system has to be redesigned and redeveloped. This inevitably raises a very difficult problem: how to keep the new and old data consistent. A common solution is to make the design of the new system compatible with the old one, and to have a worker periodically synchronize data from the legacy system to the new system. This solution is certainly feasible, but the synchronization consumes a great deal of time. If a company has thousands of systems, the data must be re-synchronized every time a system is upgraded, which costs too much time. Therefore, besides considering compatibility with legacy systems when designing new ones, a scheme for completing data synchronization between systems is also needed.
The schemes currently in wide use include synchronization with SQL statements, timed synchronization between applications, RPC calls between systems (such as WebService, RMI, or an enterprise's internal RPC framework), and local data conversion with ETL tools.
In a complex business system, collecting data and then importing it with SQL statements involves a heavy workload. Timed synchronization between applications limits how the applications within an enterprise can be extended. Remote RPC calls inevitably put great pressure on the corresponding application servers because of the remote invocation, and the integration is relatively complicated. Using an ETL tool for data conversion operates directly on the production database, which inevitably raises permission restrictions and security problems, and this practice generally does not conform to corporate process standards.
Among ETL tools there is an open-source one named Kettle. Written in pure Java, it is efficient and stable in data extraction. It can manage data from different data sources, and provides a graphical interface through which users can realize the functions they want.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the shortcomings of the prior art, to provide a Kettle-based data synchronization method and system for realizing rapid data synchronization between systems.
In order to solve the above technical problem, according to one aspect of the present invention, a Kettle-based data synchronization method is provided, including:
configuring a data source and parameters through a client, and storing the data source configuration information and parameter configuration information in a database;
editing, through a Kettle graphical interface of the client, a Kettle file containing data flow logic;
uploading the Kettle file to a Kettle execution engine;
dynamically parsing the Kettle file; and
according to the data flow logic and corresponding parameter configuration information, and based on the data source configuration information, transferring the corresponding data of a source data source to a target data source.
Preferably, the data source configuration information includes a data source identifier, a data source type, and the IP or URL link address of the data source; and/or the parameters include dynamic parameters required during data flow.
Preferably, the step of dynamically parsing the Kettle file includes:
dynamically loading the Kettle file and generating a corresponding transformation object;
assigning data source information and parameter information to the transformation object according to the data source configuration information and parameter configuration information.
Preferably, the transformation object includes data source attribute information, parameter attribute information, and node attribute information.
The step of assigning data source information to the transformation object includes:
according to the data source identifier in the data source attribute information, obtaining the corresponding source data source information and target data source information from the data source configuration information;
modifying the data source attribute information according to the source data source information and target data source information.
The step of assigning parameter information to the transformation object includes:
looking up the parameter configuration information according to the parameters in the parameter attribute information that need assignment, and modifying the parameter attribute information according to the specific parameters found.
Preferably, the data source configuration information is recorded in a data source configuration table; and/or the parameter configuration information is recorded in a parameter table.
Preferably, the data source types include Oracle, MySQL, SQLServer, WebService, SAP, and other types.
Preferably, before the Kettle file is uploaded to the Kettle execution engine, a test is performed on the Kettle file locally, and the completeness and accuracy of the obtained data are verified.
Preferably, after the corresponding data of the source data source is transferred to the target data source, the method further includes the step of returning execution information to the client.
In order to solve the above technical problem, according to another aspect of the present invention, a Kettle-based data synchronization system is provided, including:
a client, for configuring information and for editing and uploading Kettle files, wherein the data source configuration information and relevant parameter configuration information are stored in a database; and
a data acquisition platform, for receiving and dynamically parsing the Kettle files and, according to the data flow logic and corresponding parameter information, and based on the data source information, transferring the corresponding data of the source data source to the target data source.
Preferably, the client includes:
a configuration module, for configuring data source information and parameter information, and storing the data source configuration information and relevant parameter configuration information in a database; and
an editing module, for editing, through a provided Kettle graphical interface, Kettle files containing data flow logic.
Preferably, the data acquisition platform includes:
a Kettle execution engine, for dynamically parsing the Kettle files and completing the data flow;
a variety of APIs, called by the Kettle execution engine to complete the parsing of the Kettle files and the flow of data; and
a database, for providing the data source configuration information and relevant parameter configuration information needed during Kettle parsing.
Preferably, the Kettle execution engine includes:
a transformation object creation module, for creating a corresponding transformation object from the uploaded Kettle file;
an information analysis module, for obtaining database attribute information and parameter attribute information from the transformation object;
an assignment module, for querying the data source configuration information and parameter configuration information according to the database attribute information and parameter attribute information, modifying the database attribute information of the transformation object according to the matching database information found, and modifying the parameter attribute information of the transformation object according to the matching parameter information found; and
a scheduling module, connected with multiple APIs, for calling the corresponding APIs to initialize the resource environment for parsing the Kettle file, to complete the assignment to the transformation object, and to execute the assigned transformation object, thereby completing the flow of data.
According to the scheme provided by the invention, data flow, that is, data synchronization, between arbitrarily designated data sources can be carried out quickly and conveniently. There is no need to write complicated SQL statements for each data synchronization process, nor to make remote RPC calls from timed tasks in the application. Kettle files support hot deployment during execution and take effect upon upload, so the coupling between systems is low.
Brief description of the drawings
The above and other objects, features, and advantages of the invention will become apparent from the following description of embodiments of the invention with reference to the accompanying drawings, in which:
Fig. 1 is the overall design module diagram for realizing data synchronization according to the present invention;
Fig. 2 is a schematic diagram of the principle of the data flow logic of the present invention;
Fig. 3 is a schematic structural block diagram of the Kettle-based data synchronization system of the present invention;
Fig. 4 is a schematic structural block diagram of the execution of the Kettle engine of the present invention;
Fig. 5 is a schematic flow chart of the Kettle-based data synchronization method of the present invention; and
Fig. 6 is a schematic flow chart of dynamically parsing a Kettle file according to the present invention.
Detailed description of the embodiments
The present invention is described below based on embodiments, but it is not restricted to these embodiments. In the following detailed description, some specific details are described at length. Those skilled in the art can fully understand the invention without some of these details. In order to avoid obscuring the essence of the invention, well-known methods, processes, and flows are not described in detail. In addition, the accompanying drawings are not necessarily drawn to scale.
The flow charts and block diagrams in the drawings illustrate the possible system frameworks, functions, and operations of the systems, methods, and apparatuses of the embodiments of the invention. A block in a flow chart or block diagram may represent a module, a program segment, or merely a section of code, each of which consists of executable instructions for realizing the specified logic functions. It should also be noted that the executable instructions realizing the specified logic functions can be recombined to generate new modules and program segments. The blocks of the drawings and their order are therefore only used to better illustrate the processes and steps of the embodiments, and should not be taken as limitations on the invention itself.
The invention provides a Kettle-based data synchronization system and method. As shown in Fig. 1, in order to realize data synchronization, the invention involves four aspects: configuration information, Kettle file editing, the parsing service of the Kettle execution engine, and the various Kettle application programming interfaces (Kettle APIs for short) used to complete the parsing service.
The configuration information covers the various information and parameters required during data synchronization, mainly the data source information and some dynamic parameters used in the data synchronization process. According to the direction of data flow during synchronization, data sources can be divided into source data sources and target data sources. Both source and target data sources require the same kinds of data source information: a data source identifier, a data source type, and the IP or URL link address of the data source. The data source identifier is used to distinguish data sources; the data source type is the type of data in the data source, such as Oracle, MySQL, SQLServer, WebService, or SAP; and the IP or URL link address gives the address of the data source.
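As an illustration, the data source information described above (identifier, type, address) might be modeled as follows. This is a minimal stdlib-only sketch; the record fields and the lookup helper are assumptions, not structures taken from the patent:

```java
import java.util.HashMap;
import java.util.Map;

public class DataSourceRegistry {
    // One entry of data source information: identifier, type, and address.
    record DataSourceConfig(String id, String type, String address) {}

    private final Map<String, DataSourceConfig> table = new HashMap<>();

    // Register a configured data source under its unique identifier.
    void register(DataSourceConfig cfg) {
        table.put(cfg.id(), cfg);
    }

    // Resolve a data source identifier to its full configuration.
    DataSourceConfig lookup(String id) {
        DataSourceConfig cfg = table.get(id);
        if (cfg == null) {
            throw new IllegalArgumentException("unknown data source: " + id);
        }
        return cfg;
    }

    public static void main(String[] args) {
        DataSourceRegistry registry = new DataSourceRegistry();
        registry.register(new DataSourceConfig("src01", "MySQL", "192.168.0.10:3306"));
        registry.register(new DataSourceConfig("dst01", "Oracle", "192.168.0.20:1521"));
        System.out.println(registry.lookup("src01").type()); // prints MySQL
    }
}
```

Because every data source carries a unique identifier, both the source and the target side of a flow can be resolved through the same lookup.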
The parameters are dynamic parameters that vary from one Kettle file to another; each Kettle file therefore has its own corresponding dynamic parameters, such as the condition parameters used in SQL statements during data flow, or URL parameter information.
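The dynamic assignment described here, substituting configured values into a parameterized SQL condition before execution, can be sketched as follows. The `${...}` placeholder notation matches Kettle's variable syntax, but the helper itself is an illustrative assumption, not part of the patent or of Kettle:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParameterSubstitution {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(\\w+)\\}");

    // Replace every ${name} placeholder with its configured value.
    static String apply(String sql, Map<String, String> params) {
        Matcher m = PLACEHOLDER.matcher(sql);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = params.get(m.group(1));
            if (value == null) {
                throw new IllegalStateException("unassigned parameter: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM orders WHERE updated_at > '${last_sync}'";
        System.out.println(apply(sql, Map.of("last_sync", "2016-05-01")));
        // prints: SELECT * FROM orders WHERE updated_at > '2016-05-01'
    }
}
```

Failing loudly on an unassigned placeholder mirrors the requirement that every dynamic parameter be given a value before the flow runs.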
The above configuration information is entered through a configuration page of the client, and the configured information is stored in the database of the data acquisition platform.
Kettle files are mainly edited through the Kettle graphical interface provided by the client: according to the needs of data synchronization, different nodes are set up, thereby forming the data flow logic. For example, Fig. 2 shows the editing screen of a simple Kettle file. The table input node, SAP input node, and REST Client node are each used to obtain the data of a data source. Taking the table input node as an example, it defines a data source, i.e. it obtains data from a certain data table of a certain data source.
A JavaScript node can run JS scripts. In practical application, it can verify the data in each row and column of the data source and transform data formats, for example: numeric data validation, date format validation, string splitting, and conversion between strings and other data types. This greatly reduces the validation that must be done in program code.
A table output node is used to select a business table, specify the columns to be inserted, and insert the data. The node can specify batch insertion or single-row insertion, and can also specify whether to process within the same transaction.
The table input, SAP input, and REST Client nodes obtain data from the corresponding source data source, so the corresponding database information of the source data source must be configured for them, such as the database type: MySQL, Oracle, SQLServer, and so on. Similarly, the table output node must be configured with the corresponding database information of the target data source. It can therefore be understood that the data obtained by the table input node from the source data source is inserted into the target data source corresponding to the table output node. A complete Kettle file consists of multiple nodes plus the DB connection information. When the Kettle file is actually loaded, the databases corresponding to these two kinds of nodes are replaced, according to the configured data source configuration information, with the real production databases, thereby achieving the real flow of data.
The Kettle execution engine dynamically parses the Kettle files and supports various Kettle connection types, such as Oracle, MySQL, SQLServer, WebService, and SAP, which match the data source types in the data source configuration information and serve to connect databases of different types. Thus, no matter what data source types a Kettle file involves, the Kettle execution engine can parse it dynamically and in real time, and it supports hot deployment, so an uploaded Kettle file takes effect immediately. The Kettle execution engine also defines the execution order within the execution process, for example: the assignment of parameters before execution, the parsing of data sources and of nodes during execution, and the parsed output of return values afterwards.
The Kettle APIs include the APIs commonly used in various ETL tools, such as the Kettle resource environment API and the various transformation object APIs. These Kettle APIs are mainly called by the Kettle engine during the parsing service to realize the corresponding functions.
According to the above design modules, the invention provides a Kettle-based data synchronization system which, as shown in Fig. 3, includes a client 1 and a data acquisition platform 2. The client 1 is used for configuring information and for editing and uploading Kettle files, wherein the configuration information includes the data source configuration information and relevant parameter configuration information. The data acquisition platform 2 is used to receive Kettle files, dynamically parse them, and, according to the data flow logic and corresponding parameter configuration information, and based on the data source configuration information, transfer the corresponding data of source data source A to target data source B.
Specifically, as shown in Fig. 4, the client includes a configuration module 11 and an editing module 12. The configuration module 11 provides a configuration interface through which the user configures the data source information and the various dynamic parameter information needed during data flow; the data source configuration information includes the source data source information and the target data source information. In a specific embodiment, a data source configuration table is generated when configuring data source information and a parameter table is generated when configuring parameter information, and the two tables are stored in the database 23 of the data acquisition platform 2. The editing module 12 provides the Kettle graphical interface through which the user edits Kettle files; as described above, different nodes are set up according to the needs of data synchronization, thereby forming the data flow logic.
The data acquisition platform 2 includes a Kettle execution engine 21, a variety of APIs 22, and a database 23. The Kettle execution engine 21 parses the Kettle file by calling the APIs 22 and completes the flow of data. Specifically, the Kettle execution engine 21 includes a transformation object creation module 211, an information analysis module 212, an assignment module 213, and a scheduling module 214.
The transformation object creation module 211 is used to create a corresponding transformation object from the uploaded Kettle file.
The information analysis module 212 is used to obtain the database attribute information and parameter attribute information from the transformation object.
The assignment module 213 is used to query the data source configuration information and parameter configuration information according to the database attribute information and parameter attribute information, modify the database attribute information of the transformation object according to the matching database information found, and modify the parameter attribute information of the transformation object according to the matching parameter information found.
The scheduling module 214 is connected with multiple APIs, calls the corresponding APIs to initialize the resource environment for parsing the Kettle file, completes the assignment to the transformation object, and executes the assigned transformation object, thereby completing the flow of data.
The APIs 22 include APIs of various functions, for example: an initialization API, for initializing the resource environment for parsing Kettle files; an assignment API, for completing the dynamic assignment of the corresponding attribute information of the transformation object; and an execution API of the transformation object, for executing the assigned transformation object, thereby completing the flow of data.
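The division of labor among these three kinds of API (initialize, assign, execute) can be sketched as the following minimal pipeline. This is a stdlib-only simulation under stated assumptions, not actual Kettle API calls; all names and the in-memory "data flow" are illustrative:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EngineFlow {
    // A stand-in for the transformation object: source/target identifiers plus parameters.
    static class Transformation {
        String sourceId, targetId;
        Map<String, String> params = new HashMap<>();
        Transformation(String s, String t) { sourceId = s; targetId = t; }
    }

    // Initialization API: prepare the resource environment (here, just the config store).
    static Map<String, String> initEnvironment() {
        Map<String, String> config = new HashMap<>();
        config.put("src01", "jdbc://prod-source");   // assumed production addresses
        config.put("dst01", "jdbc://prod-target");
        return config;
    }

    // Assignment API: replace the identifiers written in the file with real configured addresses.
    static void assign(Transformation t, Map<String, String> config) {
        t.sourceId = config.get(t.sourceId);
        t.targetId = config.get(t.targetId);
    }

    // Execution API: move the data; here a list copy stands in for the real flow.
    static List<String> execute(Transformation t, List<String> sourceRows) {
        return List.copyOf(sourceRows); // rows "arrive" at t.targetId
    }

    public static void main(String[] args) {
        Map<String, String> config = initEnvironment();
        Transformation t = new Transformation("src01", "dst01");
        assign(t, config);
        List<String> moved = execute(t, List.of("row1", "row2"));
        System.out.println(t.targetId + " received " + moved.size() + " rows");
        // prints: jdbc://prod-target received 2 rows
    }
}
```

The point of the split is that assignment happens strictly before execution, so the same uploaded file can be re-pointed at different databases purely through configuration.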
Regarding the Kettle-based data synchronization method, as shown in Fig. 5, it specifically includes:
Step S1: configuring data source information and relevant parameter information, wherein the data source information includes source data source information and target data source information.
Multiple data sources can be configured through the configuration interface provided by the client 1, each with a unique data source identifier. The data source configuration information includes, but is not limited to, the data source identifier, the data source type, and the IP or URL link address of the data source. Through the data source identifier, the other required database information, such as the IP address, can be obtained, and the corresponding database can thus be connected.
Step S2: editing, through the Kettle graphical interface, a Kettle file containing data flow logic. The Kettle file contains data source identifiers, which are consistent with the data source identifiers in the configuration information.
After the Kettle file is generated and before it is uploaded to the Kettle execution engine, a test of the Kettle file is performed locally. The test process is:
obtaining test data through the table input node, for example the data of table1 in the test1 database;
comparing the obtained data with the preset demand data, to verify the completeness and accuracy of the obtained data; for example, comparing the data in table1 with the data in an Excel demand file, mainly against the columns of the Excel demand file, to check whether the data in the demand file has been imported, and imported correctly.
If the verification passes, i.e. the table input node obtains the correct, required data, the obtained data is inserted into the specified database table of the table output node, for example table2 of the test2 database. The data of table1 in the test1 database is thereby inserted, according to the specified rows and columns, into table2 of the test2 database, and the log information of the operation is returned to the client. The client checks whether any exception occurred and whether the data flow was performed correctly. If the data flow was performed correctly, step S3 can be performed, uploading the Kettle file to the Kettle execution engine.
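The local check described above, comparing the rows fetched by the input node against the expected demand data before allowing the upload, can be sketched as follows. This is an in-memory stand-in under stated assumptions; a real check would read table1 and the Excel demand file:

```java
import java.util.List;

public class LocalVerification {
    // Verify completeness and accuracy: every expected row must be present,
    // and no unexpected rows may appear.
    static boolean verify(List<String> fetched, List<String> expected) {
        return fetched.containsAll(expected) && expected.containsAll(fetched);
    }

    public static void main(String[] args) {
        List<String> fetched = List.of("alice,100", "bob,200"); // rows from table1 (simulated)
        List<String> demand  = List.of("alice,100", "bob,200"); // rows from the demand file (simulated)
        if (verify(fetched, demand)) {
            System.out.println("verification passed; file may be uploaded");
        }
    }
}
```

Only when both directions of containment hold (nothing missing, nothing extra) does the flow proceed to the insertion into the test target table and then to the upload.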
Step S3: uploading the Kettle file to the Kettle execution engine 21. When information is configured in step S1, the data source information and parameter information are configured as a data source configuration table and a parameter table respectively, and stored in the database 23 of the data acquisition platform.
Step S4: dynamically parsing the Kettle file. First, the Kettle execution engine 21 initializes the resource environment for parsing the Kettle file by calling the initialization API. Then, the transformation object creation module 211 of the Kettle execution engine creates a transformation object from the Kettle file, and the scheduling module 214 has the assignment module call the assignment API to complete the dynamic assignment of the corresponding attribute information of the transformation object. The transformation object includes data source attribute information, parameter attribute information, and node attribute information. The currently available data source attribute information is the data source information used during testing, not the data source information of the data that is really to flow; the original test data source information therefore needs to be modified into the database information of the data that really flows.
As for parameters, for example: when the Kettle file contains a table input node whose query statement is a SQL statement with parameters, the condition parameters used in the SQL statement are assigned the corresponding parameter information values from the configuration. A REST Client node likewise needs some request parameters. All these parameters must be ready at this point, i.e. they are assigned appropriate values before runtime.
Step S5: after the above work is completed, the execution API of the transformation object is called and the assigned transformation object is executed, thereby achieving the purpose of transferring the corresponding data of the source data source to the target data source.
In addition, in order to let the user understand how the data flowed, after the corresponding data of the source data source has been transferred to the target data source, there is a further step of returning execution information: the log information recording the operation process is sent back to the client.
Regarding the dynamic parsing of the Kettle file in step S4, as shown in Fig. 6, it specifically includes the following steps:
Step S41: calling the initialization API to initialize the resource environment in which the Kettle file executes;
Step S42: dynamically loading the Kettle file and generating the corresponding transformation object, which includes data source attribute information, parameter attribute information, and node attribute information;
Step S43: calling the API for parsing data source information, and obtaining, from the data source configuration table, the source data source information and target data source information corresponding to the data source identifier in the data source attribute information of the transformation object;
Step S44: modifying the data source attribute information according to the source data source information and target data source information, completing the assignment of data source information;
Step S45: looking up the parameter table according to the parameters in the parameter attribute information that need assignment;
Step S46: modifying the parameter attribute information according to the specific parameters found, completing the assignment of the parameters.
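Steps S43 to S46, resolving the identifier against the data source configuration table and then filling each needed parameter from the parameter table, can be condensed into the following sketch. The table contents and field names are illustrative assumptions standing in for the two configuration tables in the platform database:

```java
import java.util.HashMap;
import java.util.Map;

public class DynamicParse {
    // The two configuration tables kept in the platform database (simulated).
    static final Map<String, String> DATA_SOURCE_TABLE = new HashMap<>();
    static final Map<String, String> PARAMETER_TABLE = new HashMap<>();

    // Attribute information of a loaded transformation object (simulated).
    static class TransformationAttrs {
        String dataSourceId;      // identifier written in the Kettle file
        String resolvedAddress;   // filled in by steps S43/S44
        Map<String, String> params = new HashMap<>(); // filled in by S45/S46
    }

    static void parse(TransformationAttrs attrs, String... neededParams) {
        // S43/S44: resolve the identifier against the data source configuration table.
        attrs.resolvedAddress = DATA_SOURCE_TABLE.get(attrs.dataSourceId);
        // S45/S46: look up each parameter needing assignment in the parameter table.
        for (String p : neededParams) {
            attrs.params.put(p, PARAMETER_TABLE.get(p));
        }
    }

    public static void main(String[] args) {
        DATA_SOURCE_TABLE.put("src01", "jdbc:mysql://10.0.0.5:3306/biz");
        PARAMETER_TABLE.put("last_sync", "2016-05-01");
        TransformationAttrs attrs = new TransformationAttrs();
        attrs.dataSourceId = "src01";
        parse(attrs, "last_sync");
        System.out.println(attrs.resolvedAddress); // prints jdbc:mysql://10.0.0.5:3306/biz
    }
}
```

After this pass the transformation object carries only production addresses and concrete parameter values, and the execution API can run it unchanged.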
The present invention can configure data sources of different types, thereby realizing data flow between different data sources. Data conversion between arbitrarily designated data sources is quick and convenient: there is no need to write complicated SQL for each data synchronization, nor to make remote RPC calls from timed tasks in the application.
Because the invention can dynamically parse the Kettle files, hot deployment is supported and uploads take effect immediately. The invention can configure different database types and supports connections to a variety of data, solving the problem of connection types between systems, so the coupling between systems is low.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the invention. For those skilled in the art, the invention may have various changes and variations. Any modification, equivalent substitution, or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.
Claims (12)
1. A Kettle-based data synchronization method, comprising:
configuring a data source and relevant parameters through a client, and storing the data source configuration information and parameter configuration information in a database;
editing, through a Kettle graphical interface of the client, a Kettle file containing data flow logic;
uploading the Kettle file to a Kettle execution engine;
dynamically parsing the Kettle file; and
according to the data flow logic and corresponding parameter configuration information, and based on the data source configuration information, transferring the corresponding data of a source data source to a target data source.
2. The Kettle-based data synchronization method according to claim 1, wherein the data source configuration information comprises a data source identifier, a data source type, and the IP or URL link address of the data source; and/or the parameters comprise dynamic parameters required during data flow.
3. The Kettle-based data synchronization method according to claim 2, wherein the step of dynamically parsing the Kettle file comprises:
dynamically loading the Kettle file and generating a corresponding transformation object;
assigning the data source and parameters to the transformation object according to the data source configuration information and parameter configuration information.
4. The Kettle-based data synchronization method according to claim 3, wherein the transformation object comprises data source attribute information and parameter attribute information;
the step of assigning the data source to the transformation object comprises:
according to the data source identifier in the data source attribute information, obtaining the corresponding source data source information and target data source information from the data source configuration information;
modifying the data source attribute information according to the source data source information and target data source information;
the step of assigning the parameters to the transformation object comprises:
looking up the parameter configuration information according to the parameters in the parameter attribute information that need assignment, and modifying the parameter attribute information according to the specific parameters found.
5. The Kettle-based data synchronization method as claimed in claim 3 or 4, wherein the data source configuration information is recorded in a data source configuration table; and/or the parameter configuration information is recorded in a parameter table.
6. The Kettle-based data synchronization method as claimed in claim 2, wherein the data source type includes one of Oracle, MySQL, SQL Server, WebService, or SAP, or a combination of any several of these types.
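For illustration, the heterogeneous types of claim 6 could be dispatched through per-type connection URL templates. The templates below are standard JDBC/HTTP forms, not taken from the patent, and SAP is omitted because it is not addressed by a simple URL in the same way:

```python
# Illustrative connection URL templates per supported data source type.
URL_TEMPLATES = {
    "ORACLE":     "jdbc:oracle:thin:@{server}:{port}/{database}",
    "MYSQL":      "jdbc:mysql://{server}:{port}/{database}",
    "SQLSERVER":  "jdbc:sqlserver://{server}:{port};databaseName={database}",
    "WEBSERVICE": "http://{server}:{port}/{database}",
}


def connection_url(ds: dict) -> str:
    """Build a connection URL from a data source configuration record
    (expects "type", "server", "port", "database" keys)."""
    return URL_TEMPLATES[ds["type"]].format(**ds)
```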
7. The Kettle-based data synchronization method as claimed in claim 1, wherein before the Kettle file is uploaded to the Kettle execution engine, the Kettle file is executed as a local test, and the completeness and accuracy of the obtained data are verified.
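Claim 7 does not specify how the local test verifies the data. One naive approach is to compare the rows read from the source with the rows that arrive at the target; the sketch below works under that assumption only:

```python
def verify_transfer(source_rows, target_rows):
    """Naive completeness/accuracy check: every source row must appear
    unchanged in the target. Returns a small report dict."""
    missing = [row for row in source_rows if row not in target_rows]
    return {
        "complete": not missing,
        "missing": missing,
        "source_count": len(source_rows),
        "target_count": len(target_rows),
    }
```

In practice a row-count plus checksum comparison would scale better than row-by-row membership tests, but the principle is the same.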
8. The Kettle-based data synchronization method as claimed in claim 1, further including, after the corresponding data in the source data source has been transferred to the target data source, a step of returning execution information to the client.
9. A Kettle-based data synchronization system, including:
A client, for configuring information, editing, and uploading Kettle files; and
A data acquisition platform, for receiving and dynamically parsing the Kettle file, and transferring the corresponding data in a source data source to a target data source according to the data transfer logic and the corresponding parameter configuration information, and according to the data source configuration information.
10. The Kettle-based data synchronization system as claimed in claim 9, wherein the client includes:
A configuration module, for configuring data sources and related parameters, and storing the data source configuration information and parameter configuration information in a database; and
An editing module, for editing, through a provided Kettle graphical interface, the Kettle file containing the data transfer logic.
11. The Kettle-based data synchronization system as claimed in claim 9, wherein the data acquisition platform includes:
A Kettle execution engine, for dynamically parsing the Kettle file and completing the data transfer;
A variety of APIs, to be called by the Kettle execution engine to complete the parsing of the Kettle file and the transfer of the data; and
A database, for providing the data source configuration information and parameter configuration information needed during Kettle parsing.
12. The Kettle-based data synchronization system as claimed in claim 11, wherein the Kettle execution engine includes:
A transformation object creation module, for creating a corresponding transformation object from the uploaded Kettle file;
An information parsing module, for obtaining database attribute information and parameter attribute information from the transformation object;
An assignment module, for querying the data source configuration information and parameter configuration information according to the database attribute information and parameter attribute information, modifying the database attribute information of the transformation object according to the matching database information found, and modifying the parameter attribute information of the transformation object according to the matching parameter information found; and
A scheduler module, connected with multiple APIs, for calling the corresponding APIs to initialize the resource environment for parsing the Kettle file, and completing the assignment of the transformation object and the execution of the assigned transformation object, thereby completing the transfer of the data.
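The claim-12 pipeline (initialize the parsing environment, assign the transformation object, execute it, and return execution information as in claim 8) can be stubbed as below. In the real Kettle (Pentaho Data Integration) Java API these roles correspond roughly to `KettleEnvironment.init()`, `TransMeta`, and `Trans`, but the patent names no specific classes, so this sketch keeps the APIs as injected callables:

```python
class Scheduler:
    """Sketch of the claim-12 scheduler module: calls the injected APIs
    in order. The API names ("init", "assign", "execute") are assumptions."""

    def __init__(self, apis):
        self.apis = apis  # expects "init", "assign", "execute" callables

    def run(self, ktr_path, datasource_ids, params):
        self.apis["init"]()  # initialize the parsing resource environment
        # Assignment: bind configured data sources and parameters to the
        # transformation object parsed from the uploaded Kettle file.
        trans = self.apis["assign"](ktr_path, datasource_ids, params)
        rows = self.apis["execute"](trans)  # execute the assigned object
        # Execution information returned to the client (claim 8).
        return {"status": "finished", "rows_moved": rows}
```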
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610320280.3A CN107368503B (en) | 2016-05-13 | 2016-05-13 | Data synchronization method and system based on Kettle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610320280.3A CN107368503B (en) | 2016-05-13 | 2016-05-13 | Data synchronization method and system based on Kettle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107368503A true CN107368503A (en) | 2017-11-21 |
CN107368503B CN107368503B (en) | 2021-04-30 |
Family
ID=60304208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610320280.3A Active CN107368503B (en) | 2016-05-13 | 2016-05-13 | Data synchronization method and system based on Kettle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107368503B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108491526A (en) * | 2018-03-28 | 2018-09-04 | 腾讯科技(深圳)有限公司 | Log data processing method and device, electronic equipment and storage medium |
CN108629052A (en) * | 2018-05-21 | 2018-10-09 | 平安科技(深圳)有限公司 | Kettle method for scheduling task, system, computer equipment and storage medium |
CN108629002A (en) * | 2018-05-03 | 2018-10-09 | 山东汇贸电子口岸有限公司 | A kind of big data comparison method and device based on kettle |
CN108984505A (en) * | 2018-07-17 | 2018-12-11 | 浪潮软件股份有限公司 | A kind of data resource automation sharing method and system based on operation template |
CN109086295A (en) * | 2018-06-13 | 2018-12-25 | 中国平安人寿保险股份有限公司 | Method of data synchronization, device, computer equipment and storage medium |
CN109558392A (en) * | 2018-11-20 | 2019-04-02 | 南京数睿数据科技有限公司 | A mass data migration device supporting cross-platform multiple engines |
CN110880146A (en) * | 2019-11-21 | 2020-03-13 | 上海中信信息发展股份有限公司 | Block chain chaining method, device, electronic equipment and storage medium |
CN111124548A (en) * | 2019-12-31 | 2020-05-08 | 科大国创软件股份有限公司 | Rule analysis method and system based on YAML file |
CN111400061A (en) * | 2020-03-12 | 2020-07-10 | 泰康保险集团股份有限公司 | Data processing method and system |
CN111414369A (en) * | 2020-04-08 | 2020-07-14 | 支付宝(杭州)信息技术有限公司 | Data processing method, device and equipment |
CN111695565A (en) * | 2020-06-14 | 2020-09-22 | 荆门汇易佳信息科技有限公司 | Automobile mark accurate positioning method based on road barrier fuzzy image |
CN112527799A (en) * | 2020-12-17 | 2021-03-19 | 杭州玳数科技有限公司 | Method for realizing distributed real-time synchronization of SqlServer database based on flink |
CN113111108A (en) * | 2021-04-06 | 2021-07-13 | 创意信息技术股份有限公司 | File data source warehousing analysis access method |
WO2022206123A1 (en) * | 2021-03-29 | 2022-10-06 | 中兴通讯股份有限公司 | Blockchain chaining method and apparatus, and electronic device and storage medium |
WO2022256969A1 (en) * | 2021-06-07 | 2022-12-15 | 京东方科技集团股份有限公司 | General data extraction system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020133504A1 (en) * | 2000-10-27 | 2002-09-19 | Harry Vlahos | Integrating heterogeneous data and tools |
- 2016-05-13: CN application CN201610320280.3A (patent CN107368503B), status: Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020133504A1 (en) * | 2000-10-27 | 2002-09-19 | Harry Vlahos | Integrating heterogeneous data and tools |
Non-Patent Citations (9)
Title |
---|
HAPJIN: "The execution process of a transformation in Kettle", Cnblogs: https://www.cnblogs.com/hapjin/p/4630487.html * |
KINGWANG: "Calling Kettle from Java with dynamic parameters to modify database connections", https://ask.hellobi.com/blog/king/888 * |
ROTKANG: "[Kettle from scratch] Part 3: Kettle data source connection configuration", RotKang's CSDN blog: https://blog.csdn.net/rotkang/article/details/20962725 * |
SRIVIDYA K BANSAL: "Towards a Semantic Extract-Transform-Load Framework for big data integration", IEEE * |
Liu Chong: "Research and practice on KETTLE-based multi-source heterogeneous data integration in universities", Electronic Design Engineering * |
October Sunshine (十月阳光): "Data migration in practice: Kettle-based data migration from MySQL to DB2", https://my.oschina.net/simpleton/blog/525675 * |
Cui Youwen et al.: "Research on Kettle-based data integration", Computer Technology and Development * |
Wang Wei: "Practical data processing based on Kettle", http://tech.it168.com/a2015/1201/1783/000001783611.shtml * |
Ma Yingying et al.: "Research on the application of ETL-Kettle technology in traffic flow surveys", China Transportation Informatization * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108491526A (en) * | 2018-03-28 | 2018-09-04 | 腾讯科技(深圳)有限公司 | Log data processing method and device, electronic equipment and storage medium |
CN108491526B (en) * | 2018-03-28 | 2022-02-22 | 腾讯科技(深圳)有限公司 | Log data processing method and device, electronic equipment and storage medium |
CN108629002A (en) * | 2018-05-03 | 2018-10-09 | 山东汇贸电子口岸有限公司 | A kind of big data comparison method and device based on kettle |
CN108629052A (en) * | 2018-05-21 | 2018-10-09 | 平安科技(深圳)有限公司 | Kettle method for scheduling task, system, computer equipment and storage medium |
CN108629052B (en) * | 2018-05-21 | 2023-06-02 | 平安科技(深圳)有限公司 | Kettle task scheduling method, system, computer equipment and storage medium |
CN109086295A (en) * | 2018-06-13 | 2018-12-25 | 中国平安人寿保险股份有限公司 | Method of data synchronization, device, computer equipment and storage medium |
CN109086295B (en) * | 2018-06-13 | 2023-05-30 | 中国平安人寿保险股份有限公司 | Data synchronization method, device, computer equipment and storage medium |
CN108984505A (en) * | 2018-07-17 | 2018-12-11 | 浪潮软件股份有限公司 | A kind of data resource automation sharing method and system based on operation template |
CN109558392A (en) * | 2018-11-20 | 2019-04-02 | 南京数睿数据科技有限公司 | A mass data migration device supporting cross-platform multiple engines |
CN110880146A (en) * | 2019-11-21 | 2020-03-13 | 上海中信信息发展股份有限公司 | Block chain chaining method, device, electronic equipment and storage medium |
CN111124548B (en) * | 2019-12-31 | 2023-10-27 | 科大国创软件股份有限公司 | Rule analysis method and system based on YAML file |
CN111124548A (en) * | 2019-12-31 | 2020-05-08 | 科大国创软件股份有限公司 | Rule analysis method and system based on YAML file |
CN111400061A (en) * | 2020-03-12 | 2020-07-10 | 泰康保险集团股份有限公司 | Data processing method and system |
CN111414369A (en) * | 2020-04-08 | 2020-07-14 | 支付宝(杭州)信息技术有限公司 | Data processing method, device and equipment |
CN111414369B (en) * | 2020-04-08 | 2024-03-01 | 支付宝(杭州)信息技术有限公司 | Data processing method, device and equipment |
CN111695565A (en) * | 2020-06-14 | 2020-09-22 | 荆门汇易佳信息科技有限公司 | Automobile mark accurate positioning method based on road barrier fuzzy image |
CN112527799B (en) * | 2020-12-17 | 2022-09-13 | 杭州玳数科技有限公司 | Method for realizing distributed real-time synchronization of SqlServer database based on flink |
CN112527799A (en) * | 2020-12-17 | 2021-03-19 | 杭州玳数科技有限公司 | Method for realizing distributed real-time synchronization of SqlServer database based on flink |
WO2022206123A1 (en) * | 2021-03-29 | 2022-10-06 | 中兴通讯股份有限公司 | Blockchain chaining method and apparatus, and electronic device and storage medium |
CN113111108A (en) * | 2021-04-06 | 2021-07-13 | 创意信息技术股份有限公司 | File data source warehousing analysis access method |
WO2022256969A1 (en) * | 2021-06-07 | 2022-12-15 | 京东方科技集团股份有限公司 | General data extraction system |
CN115836284A (en) * | 2021-06-07 | 2023-03-21 | 京东方科技集团股份有限公司 | Universal data extraction system |
Also Published As
Publication number | Publication date |
---|---|
CN107368503B (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107368503A (en) | Method of data synchronization and system based on Kettle | |
US9424003B1 (en) | Schema-less system output object parser and code generator | |
CN114981775B (en) | Cloud-based API metadata management method and system for integrated API management | |
CN108846020A (en) | Knowledge mapping automated construction method, system are carried out based on multi-source heterogeneous data | |
CN108829884B (en) | Data mapping method and device | |
CN104035754A (en) | XML (Extensible Markup Language)-based custom code generation method and generator | |
EP2772879A1 (en) | Correlating data from multiple business processes to a business process scenario | |
CN104461531A (en) | Implementing method for self-defined functions of reporting system | |
CN108959496A (en) | Integration across database access method and abstract data access method based on dynamic proxy | |
CN111367818A (en) | System component testing method and device based on dynamic data return | |
CN108664546B (en) | XML data structure conversion method and device | |
CN113886485A (en) | Data processing method, device, electronic equipment, system and storage medium | |
CN113821565B (en) | Method for synchronizing data by multiple data sources | |
CN109116828A (en) | Model code configuration method and device in a kind of controller | |
US8069154B2 (en) | Autonomic rule generation in a content management system | |
CN112579604A (en) | Test system number making method, device, equipment and storage medium | |
JP2018142271A (en) | Api convention checking apparatus, api convention checking method, and program | |
CN106293862A (en) | A kind of analysis method and device of expandable mark language XML data | |
Li et al. | Automated creation of navigable REST services based on REST chart | |
Chareonsuk et al. | Translating TOSCA model to kubernetes objects | |
CN113220706A (en) | Component product query method, device, equipment and medium | |
CN113342399A (en) | Application structure configuration method and device and readable storage medium | |
CN113505143A (en) | Statement type conversion method and device, storage medium and electronic device | |
CN112363700A (en) | Cooperative creation method and device of intelligent contract, computer equipment and storage medium | |
CN112445811A (en) | Data service method, device, storage medium and component based on SQL configuration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||