CN101083532A - Method and system for realizing data loading - Google Patents

Method and system for realizing data loading

Info

Publication number
CN101083532A
CN101083532A CN200610035813A
Authority
CN
China
Prior art keywords
data
main system
task
subsystem
data load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200610035813
Other languages
Chinese (zh)
Inventor
吴刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN 200610035813 priority Critical patent/CN101083532A/en
Publication of CN101083532A publication Critical patent/CN101083532A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data loading method comprising the following step: a main system, according to the different data loading tasks established between itself and different subsystems, shares the data corresponding to a data loading task in memory and then sends the data to the corresponding subsystems. The invention also discloses a data loading system. During parallel data loading, the main system reads the data from the storage unit only the first time, shares it in memory, and then sends it to the corresponding subsystems; the invention therefore reduces the system resources occupied by the main system repeatedly reading the same data and improves parallel data loading efficiency.

Description

Method and system for implementing data loading
Technical field
The present invention relates to the technical field of data loading, and in particular to a method and system for implementing data loading.
Background technology
With the development of Internet technology, large network devices are becoming increasingly common. A large network device comprises numerous distributed subsystems and a main system that manages those subsystems centrally; externally it appears as a single complete application system or device. During startup and operation of such a device, a subsystem may need to request program data from the main system according to the services it handles, and when the system software is upgraded the main system may need to load the data of certain programs onto several related subsystems. How to load data quickly from the main system onto each subsystem is therefore one of the factors that determine the data processing capability of a large network device.
There are two existing ways to load data: serial loading and parallel loading. Serial loading requires the main system to load the data onto the subsystems one by one, which takes a long time; the communication efficiency between the main system and the subsystems is therefore low, and serial loading is rarely used in large network devices. Instead, parallel loading is adopted, in which the data loading tasks between the main system and several subsystems are processed in parallel. Compared with serial loading, parallel loading greatly improves the data loading efficiency between the main system and the subsystems, shortens the loading time, and essentially allows the main system to load data onto and start all subsystems synchronously.
However, when the main system loads data onto the subsystems, the data is stored in a storage unit of the main system, such as a hard disk or flash memory, so each data loading task established between the main system and a subsystem has to fetch the data from the storage unit separately. Because a storage unit such as a hard disk or flash memory is slow compared with the processors of the main system and the subsystems, repeatedly fetching data from the storage unit occupies a large amount of system resources in the main system and limits the parallel loading speed.
Summary of the invention
The technical problem addressed by the present invention is to propose a method and system for implementing data loading that solve the prior-art problem that, during parallel data loading, the main system repeatedly reads the same data and occupies a large amount of system resources, thereby improving the parallel data loading speed.
To solve the above problem, the present invention proposes a method for implementing data loading, which comprises the following step:
a main system, according to the different data loading tasks established between itself and different subsystems, shares the data corresponding to a data loading task in memory and then sends the data to each subsystem corresponding to the different data loading tasks that load that data.
After the step of establishing the data loading tasks between the main system and the subsystems, the method further comprises:
forming a data loading task queue from the data loading tasks established between the subsystems and the main system, and associating with one another the different data loading tasks in the queue that load the same data.
The step of forming the data loading task queue specifically comprises:
presetting the maximum number of data loading tasks that the main system may process simultaneously;
forming the data loading task queue from the data loading tasks in order, setting up to the preset maximum number of tasks in the queue to the processing state, and setting the other data loading tasks to the waiting state.
The step of sharing the data corresponding to a data loading task in memory and then sending it to each subsystem corresponding to the different data loading tasks that load that data specifically comprises:
the main system reading the data corresponding to the data loading task from the storage medium into memory;
sharing the data in memory and sending it to the subsystems respectively corresponding to the mutually associated data loading tasks in the processing state.
The step of establishing a data loading task specifically comprises:
a subsystem sending a data load request message to the main system, or the main system sending a data load notification message to a subsystem;
the main system or the subsystem responding to the message, so that a data loading task is established between the main system and the subsystem.
Correspondingly, the present invention proposes a data loading system for loading data from a storage module of a main system onto a plurality of subsystems centrally managed by the main system, wherein the main system comprises a task creation module, a task management module and a task execution module;
the task creation module is configured to establish data loading tasks with the subsystems;
the task management module is configured to manage the data loading task queue and to associate with one another the different data loading tasks that load the same data;
the task execution module is configured to execute the data loading tasks, sharing the data in memory and then sending it to the subsystems respectively corresponding to the mutually associated data loading tasks.
The task management module specifically comprises:
a queue management submodule, configured to form a queue from the data loading tasks established between the main system and the subsystems, and to associate with one another the different data loading tasks that load the same data; and
a process management submodule, configured to manage in real time the data loading tasks processed in parallel by the main system.
The task execution module specifically comprises:
a data reading submodule, configured to read data from the storage module of the main system into memory; and
a data sending submodule, configured to share the data in memory and send it to the subsystems respectively corresponding to the mutually associated data loading tasks.
Compared with the prior art, the present invention has the following beneficial effects:
during parallel data loading, after the main system has read the data from the storage unit into its memory for the first time, it shares that memory and sends the data to each subsystem corresponding to the different data loading tasks that load the data. By reducing the system resources repeatedly occupied by the main system reading the same data, the invention improves parallel data loading efficiency; and because reading data occupies fewer system resources, the stability of the main system is also improved.
Description of drawings
Fig. 1 is a schematic flowchart of the data loading method disclosed by the present invention.
Fig. 2 is a schematic diagram of the data loading system disclosed by the present invention.
Embodiment
The guiding idea of the present invention is that, when the main system needs to load the same data onto a plurality of subsystems in parallel, the main system does not fetch the data from the slow storage unit separately for each of the data loading tasks established between it and the subsystems. Instead, after reading the data from the storage unit into its memory once, the main system shares that memory and sends the data to each subsystem corresponding to the different data loading tasks that load the data, thereby reducing the system resources repeatedly occupied by reading the same data and improving data loading efficiency.
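The read-once idea can be illustrated with a minimal sketch. The following Python fragment is not part of the patent; the in-memory cache, the read_from_storage helper and the subsystem.receive call are assumptions introduced only to show how one copy of the data in memory can serve several loading tasks.

```python
# Minimal sketch (not from the patent text) of the read-once idea: the main
# system reads each piece of data from slow storage only once, keeps it in an
# in-memory cache, and serves every subsystem that wants the same data from
# that shared copy. read_from_storage and subsystem.receive are hypothetical
# placeholders for the storage unit and the channel to a subsystem.

shared_memory = {}  # data_id -> bytes already read from the storage unit


def read_from_storage(data_id: str) -> bytes:
    # Stand-in for the slow hard-disk/flash read performed by the main system.
    with open(f"/storage/{data_id}", "rb") as f:
        return f.read()


def load_data(data_id: str, subsystems: list) -> None:
    # Read from the storage unit only on the first request for this data.
    if data_id not in shared_memory:
        shared_memory[data_id] = read_from_storage(data_id)
    data = shared_memory[data_id]
    for subsystem in subsystems:
        subsystem.receive(data)  # every subsystem gets the same shared copy
```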
Referring to Fig. 1, which is a schematic flowchart of the data loading method disclosed by the present invention, the method includes the following steps:
Step s110: when a subsystem needs to load certain data according to the services it handles, it sends a data load request message to the main system; alternatively, the main system that centrally manages the subsystems sends a data load notification message to each subsystem. The main system or the subsystem responds to the message, so that a data loading task is established between the main system and the subsystem;
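Step s110 admits a simple illustration. The sketch below assumes a DataLoadTask record and a task_queue attribute on the main system; these names are hypothetical and only mirror the two ways a task can be established (a subsystem request or a main-system notification).

```python
# Hedged sketch of step s110: a data loading task can be established either by
# a subsystem requesting data or by the main system notifying subsystems. The
# DataLoadTask fields and the task_queue attribute are illustrative
# assumptions, not definitions taken from the patent.

from dataclasses import dataclass


@dataclass
class DataLoadTask:
    subsystem_id: str
    data_id: str
    state: str = "waiting"  # later set to "processing" by the task queue


def on_load_request(main_system, subsystem_id: str, data_id: str) -> DataLoadTask:
    # Subsystem-initiated: the main system responds by creating the task.
    task = DataLoadTask(subsystem_id, data_id)
    main_system.task_queue.append(task)
    return task


def notify_subsystems(main_system, subsystem_ids: list, data_id: str) -> list:
    # Main-system-initiated (e.g. a software upgrade): one task per subsystem.
    return [on_load_request(main_system, sid, data_id) for sid in subsystem_ids]
```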
Step s120: the main system forms a data loading task queue from the data loading tasks established between itself and the subsystems, in order of establishment time; according to the preset maximum number of data loading tasks the main system can process in parallel, the corresponding number of tasks at the head of the queue are set to the processing state and the other data loading tasks are set to the waiting state; and the different data loading tasks in the processing state that load the same data are associated with one another;
Step s130: the main system executes the data loading tasks that are in the processing state in the queue, reading the data corresponding to a data loading task from the storage unit into memory, and no longer reading that data from the storage unit when executing the other data loading tasks associated with that task;
Step s140: the main system shares the data in memory and sends it to the subsystems respectively corresponding to all the data loading tasks associated with that data loading task.
Naturally, when a data loading task is finished, it is deleted from the data loading task queue, and a data loading task in the waiting state is moved into the processing state and begins to be executed.
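Steps s120 to s140, together with the completion handling just described, can be sketched as follows. This is an illustrative assumption rather than the patent's implementation: the class and method names are invented, and read_from_storage and send stand in for the storage unit and the channel to a subsystem.

```python
# Hedged sketch of steps s120-s140 plus the completion handling above: tasks
# are queued in order of establishment, at most max_parallel of them are in
# the processing state, processing tasks that load the same data are grouped,
# each piece of data is read from storage once and sent to every associated
# subsystem, and finished tasks make room for waiting ones. All names are
# illustrative assumptions, not the patent's implementation.

from collections import defaultdict


class TaskQueue:
    def __init__(self, max_parallel: int):
        self.max_parallel = max_parallel  # preset limit on parallel tasks
        self.tasks = []                   # ordered by establishment time

    def schedule(self) -> None:
        # The first max_parallel tasks are "processing", the rest "waiting".
        for i, task in enumerate(self.tasks):
            task.state = "processing" if i < self.max_parallel else "waiting"

    def run_processing_tasks(self, read_from_storage, send) -> None:
        # Group the processing tasks that load the same data (step s120).
        groups = defaultdict(list)
        for task in self.tasks:
            if task.state == "processing":
                groups[task.data_id].append(task)
        for data_id, group in groups.items():
            data = read_from_storage(data_id)      # read once (step s130)
            for task in group:
                send(task.subsystem_id, data)      # shared copy (step s140)
                self.tasks.remove(task)            # task finished
        self.schedule()                            # promote waiting tasks
```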
In addition, in step s120, the maximum number of data loading tasks the main system can process in parallel should be preset with both the system resources occupied by the main system's ordinary services and the system resources that processing the data loading tasks may occupy taken into account, so that the main system is not prevented from handling its other services because it is processing data loading tasks.
In step s140, the main system may take the data processing capabilities of the main system and the subsystems into account and apply flow control to the data sent to a subsystem, so as to balance the data processing capacity of the main system and the subsystems between the data loading tasks and their other services.
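The flow control mentioned here is not specified further in the patent; the following sketch simply paces the sending of the shared data against an assumed per-subsystem rate limit, with chunk_size and max_bytes_per_sec chosen only for illustration.

```python
# Hedged illustration of the flow control mentioned for step s140: the shared
# data is sent in chunks and the sending is paced against an assumed
# per-subsystem rate limit so that loading does not starve other services.
# chunk_size and max_bytes_per_sec are illustrative values only.

import time


def send_with_flow_control(send_chunk, data: bytes,
                           chunk_size: int = 64 * 1024,
                           max_bytes_per_sec: int = 10 * 1024 * 1024) -> None:
    sent = 0
    start = time.monotonic()
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        send_chunk(chunk)
        sent += len(chunk)
        # Sleep just long enough to stay under the assumed rate limit.
        expected = sent / max_bytes_per_sec
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
```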
In addition, referring to Fig. 2, the present invention also discloses a data loading system for loading data from a storage module 110 of a main system 100 onto a plurality of subsystems 200 centrally managed by the main system 100.
The main system 100 comprises a task creation module 120, a task management module 130 and a task execution module 140;
the task creation module 120 is configured to establish data loading tasks between the main system 100 and the subsystems 200;
the task management module 130 is configured to manage the data loading task queue and to associate with one another the different data loading tasks that load the same data;
the task execution module 140 is configured to execute the data loading tasks, sharing the data in memory and then sending it to the subsystems respectively corresponding to the mutually associated data loading tasks.
The task management module 130 specifically comprises:
a queue management submodule, configured to form a queue from the data loading tasks established between the main system and the subsystems, and to associate with one another the different data loading tasks that load the same data; and
a process management submodule, configured to manage in real time the data loading tasks processed in parallel by the main system.
The task execution module 140 specifically comprises:
a data reading submodule, configured to read data from the storage module of the main system into memory; and
a data sending submodule, configured to share the data in memory and send it to the subsystems respectively corresponding to the different data loading tasks that load that data.
As described above, data loading tasks are established between the main system 100 and the subsystems 200 so that data is loaded from the storage module 110 of the main system 100 onto the subsystem 200 corresponding to each task. The task management module 130 in the main system 100 forms the data loading tasks established between the main system 100 and the subsystems 200 into a data loading task queue and associates with one another the tasks that load the same data, so that when the task execution module 140 executes a data loading task, it reads the data from the storage module 110 of the main system 100 into memory, shares that memory, and sends the data read from memory to the subsystems corresponding to all the data loading tasks associated with that task. This prevents the main system 100 from wasting system resources by reading the same data repeatedly and effectively improves the data loading speed, which would otherwise be limited by the main system 100 repeatedly reading data from the slow storage module 110.
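For readers who prefer code, the module structure of Fig. 2 can be mirrored in a short sketch. The class and method names below are illustrative assumptions that follow the names of modules 120, 130 and 140; storage_module.read and send are placeholders for the storage module 110 and the channel to a subsystem 200.

```python
# Hedged structural sketch mirroring Fig. 2: a task creation module 120, a
# task management module 130 (queue management and process management
# submodules) and a task execution module 140 (data reading and data sending
# submodules). Class and method names are illustrative assumptions.

class TaskCreationModule:                      # module 120
    def create_task(self, subsystem_id: str, data_id: str) -> dict:
        return {"subsystem_id": subsystem_id, "data_id": data_id}


class TaskManagementModule:                    # module 130
    def __init__(self, max_parallel: int):
        self.queue = []                        # queue management submodule
        self.max_parallel = max_parallel       # process management submodule

    def enqueue(self, task: dict) -> None:
        self.queue.append(task)

    def associated_processing_tasks(self, data_id: str) -> list:
        # Tasks in the processing window that load the same data.
        return [t for t in self.queue[:self.max_parallel]
                if t["data_id"] == data_id]


class TaskExecutionModule:                     # module 140
    def __init__(self, storage_module):
        self.storage = storage_module

    def execute(self, management: TaskManagementModule, data_id: str, send) -> None:
        data = self.storage.read(data_id)      # data reading submodule
        for task in management.associated_processing_tasks(data_id):
            send(task["subsystem_id"], data)   # data sending submodule
```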
The above embodiments are intended only to illustrate the present invention and do not limit the technical solution described by the invention. Therefore, although this specification has described the present invention in detail with reference to the above embodiments, those of ordinary skill in the art should appreciate that the invention may still be modified or replaced by equivalents; all technical solutions and improvements thereto that do not depart from the spirit and scope of the present invention shall be encompassed within the scope of the claims of the present invention.

Claims (8)

1. A method for implementing data loading, characterized by comprising the following step:
a main system, according to the different data loading tasks established between itself and different subsystems, sharing the data corresponding to a data loading task in memory and then sending the data to each subsystem corresponding to the different data loading tasks that load that data.
2. The method for implementing data loading according to claim 1, characterized in that, after the step of establishing the data loading tasks between the main system and the subsystems, the method further comprises:
forming a data loading task queue from the data loading tasks established between the subsystems and the main system, and associating with one another the different data loading tasks in the queue that load the same data.
3. The method for implementing data loading according to claim 2, characterized in that the step of forming the data loading task queue specifically comprises:
presetting the maximum number of data loading tasks that the main system may process simultaneously;
forming the data loading task queue from the data loading tasks in order, setting up to the preset maximum number of tasks in the queue to the processing state, and setting the other data loading tasks to the waiting state.
4. The method for implementing data loading according to claim 3, characterized in that the step of sharing the data corresponding to a data loading task in memory and then sending it to each subsystem corresponding to the different data loading tasks that load that data specifically comprises:
the main system reading the data corresponding to the data loading task from the storage medium into memory;
sharing the data in memory and sending it to the subsystems respectively corresponding to the mutually associated data loading tasks in the processing state.
5. The method for implementing data loading according to claim 1, characterized in that the step of establishing a data loading task specifically comprises:
a subsystem sending a data load request message to the main system, or the main system sending a data load notification message to a subsystem;
the main system or the subsystem responding to the message, so that a data loading task is established between the main system and the subsystem.
6. A data loading system for loading data from a storage module of a main system onto a plurality of subsystems centrally managed by the main system, characterized in that the main system comprises a task creation module, a task management module and a task execution module;
the task creation module is configured to establish data loading tasks with the subsystems;
the task management module is configured to manage the data loading task queue and to associate with one another the different data loading tasks that load the same data;
the task execution module is configured to execute the data loading tasks, sharing the data in memory and sending it to the subsystems respectively corresponding to the different data loading tasks that load that data.
7. The data loading system according to claim 6, characterized in that the task management module specifically comprises:
a queue management submodule, configured to form a queue from the data loading tasks established between the main system and the subsystems, and to associate with one another the different data loading tasks that load the same data; and
a process management submodule, configured to manage in real time the data loading tasks processed in parallel by the main system.
8. The data loading system according to claim 6, characterized in that the task execution module specifically comprises:
a data reading submodule, configured to read data from the storage module of the main system into memory; and a data sending submodule, configured to share the data in memory and send it to the subsystems respectively corresponding to the mutually associated data loading tasks.
CN 200610035813 2006-05-31 2006-05-31 Method and system for realizing data loading Pending CN101083532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200610035813 CN101083532A (en) 2006-05-31 2006-05-31 Method and system for realizing data loading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200610035813 CN101083532A (en) 2006-05-31 2006-05-31 Method and system for realizing data loading

Publications (1)

Publication Number Publication Date
CN101083532A true CN101083532A (en) 2007-12-05

Family

ID=38912832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200610035813 Pending CN101083532A (en) 2006-05-31 2006-05-31 Method and system for realizing data loading

Country Status (1)

Country Link
CN (1) CN101083532A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710323B (en) * 2008-09-11 2013-06-19 威睿公司 Computer storage deduplication
CN101447997B (en) * 2008-12-31 2012-10-10 中国建设银行股份有限公司 Data processing method, server and data processing system
CN105306508A (en) * 2014-07-17 2016-02-03 阿里巴巴集团控股有限公司 Service processing method and service processing device
CN105450705A (en) * 2014-08-29 2016-03-30 阿里巴巴集团控股有限公司 Service data processing method and apparatus
CN105450705B (en) * 2014-08-29 2018-11-27 阿里巴巴集团控股有限公司 Business data processing method and equipment
CN105700902A (en) * 2014-11-27 2016-06-22 航天信息股份有限公司 Data loading and refreshing method and apparatus
CN105608138A (en) * 2015-12-18 2016-05-25 贵州大学 System for optimizing parallel data loading performance of array databases
CN105608138B (en) * 2015-12-18 2019-03-12 贵州大学 A kind of system of optimization array data base concurrency data loading performance
CN106445615A (en) * 2016-10-12 2017-02-22 北京元心科技有限公司 Multi-system OTA upgrading method and device
CN107256180A (en) * 2017-05-19 2017-10-17 腾讯科技(深圳)有限公司 Data processing method, device and terminal
CN112071408A (en) * 2019-06-11 2020-12-11 无锡识凌科技有限公司 Business data monitoring component applied to enterprise service bus and method thereof

Similar Documents

Publication Publication Date Title
CN101083532A (en) Method and system for realizing data loading
CN103473142B (en) Virtual machine migration method under a kind of cloud computing operating system and device
US9792227B2 (en) Heterogeneous unified memory
CN1875348A (en) Information system, load control method, load control program, and recording medium
CN1859325A (en) News transfer method based on chained list process
CN103888501A (en) Virtual machine migration method and device
CN103210379A (en) Server system, management method and device
CN1869933A (en) Computer processing system for implementing data update and data updating method
CN104202423A (en) System for extending caches by aid of software architectures
CN104144202A (en) Hadoop distributed file system access method, system and device
CN102609218A (en) Method for implementing parallel-flash translation layer and parallel-flash translation layer system
CN1945521A (en) Virtualizing system and method for non-homogeny storage device
CN102870374A (en) Load-sharing method and apparatus, and veneer,
CN103500147A (en) Embedded and layered storage method of PB-class cluster storage system
CN105207993A (en) Data access and scheduling method in CDN, and system
CN104281587A (en) Connection establishing method and device
CN102495987A (en) Method and system for local confidence breach preventing access to electronic information
CN112748883B (en) IO request pipeline processing device, method, system and storage medium
CN113553195A (en) Memory pool resource sharing method, device, equipment and readable medium
CN116797438A (en) Parallel rendering cluster application method of heterogeneous hybrid three-dimensional real-time cloud rendering platform
CN112269649A (en) Method, device and system for realizing asynchronous execution of host task
CN108153489B (en) Virtual data cache management system and method of NAND flash memory controller
CN101510146A (en) Virtual space establishing method, apparatus and system based on independent redundant magnetic disc array
CN106331036B (en) Server control method and device
CN115509763B (en) Fingerprint calculation method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication