WO2022088515A1 - Adaptive measurement and control method and system for concurrent tasks in massive data processing (一种海量数据处理并发任务自适应测控方法及系统)

Adaptive measurement and control method and system for concurrent tasks in massive data processing

Info

Publication number
WO2022088515A1
WO2022088515A1 (application PCT/CN2021/071736, CN2021071736W)
Authority
WO
WIPO (PCT)
Prior art keywords
database
group
parameter
cpu
application server
Prior art date
Application number
PCT/CN2021/071736
Other languages
English (en)
French (fr)
Inventor
鞠洪尧
施美
吕高赟
施云
谢志军
姚雪存
陆正球
宁可
于虹
Original Assignee
浙江纺织服装职业技术学院
宁波云裳谷时尚科技有限公司
宁波创艺信息科技有限公司
斐戈集团股份有限公司
宁波大学
Priority date
Filing date
Publication date
Application filed by 浙江纺织服装职业技术学院, 宁波云裳谷时尚科技有限公司, 宁波创艺信息科技有限公司, 斐戈集团股份有限公司, 宁波大学
Publication of WO2022088515A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/5018 - Thread allocation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the invention relates to the field of concurrent task processing, and in particular to an adaptive measurement and control method and system for concurrent tasks in massive data processing.
  • if concurrent tasks cannot be processed in time, the running software is seriously slowed down and the computer system may even go down.
  • how to match the hardware in a computer system with the concurrent tasks is likewise an open problem.
  • with a better fit between concurrent tasks and hardware such as the number of CPU cores and the memory, concurrent tasks can be processed quickly while wasted spending on hardware acquisition is reduced.
  • the technical problem to be solved by the present invention is to provide an adaptive measurement and control method and system for concurrent tasks in massive data processing, so as to solve the problem of reasonably matching the number of concurrent tasks with the number of CPU cores and threads when processing massive data, and to improve the efficiency of massive data processing.
  • the technical solution adopted by the present invention to solve the above problems is: a method for self-adaptive measurement and control of massive data processing concurrent tasks, comprising the following steps:
  • Step 1: scan the application server group and the database server group in the network system with the parameter scanner, record the hardware parameters of each application server and database server, and store the hardware parameters in the hardware parameter register; the hardware parameters include the number of CPU cores, the number of threads and the memory capacity value;
  • Step 2: after the hardware parameters scanned in step 1 are processed by the performance index extractor, they are sent to the performance index matching determiner for an engineering matching decision; if the parameter requirement is not met, the system administrator adjusts the hardware so that it is met; if it is met, the qualifying hardware parameter values are stored in the group parameter register; Step 3: read the hardware parameters in the group parameter register with the group performance indicator statistic unit and obtain by accumulation the total number of CPU threads of all application servers in the application server group, NA_thread, and the total number of CPU threads of all database servers in the database server group, ND_thread;
  • Step 4: send the application server group CPU thread total and the database server group CPU thread total obtained in step 3 to the group performance matching determiner for an engineering decision;
  • if the engineering requirement is not met, the CPU core counts of the corresponding application servers are adjusted so that the application server group CPU thread total is greater than or equal to the database server group CPU thread total; if the requirement is met, the application server group CPU thread total and the CPU thread count of each application server in the group are passed to the database controller;
  • Step 5: with the data splitter of the database controller, the data table to be processed in the database is split into NA_thread equal split tables; the table names are numbered automatically and recorded in the split information register;
  • Step 6: with the concurrent task controller, start on each application server in the application server group a number of data processing tasks equal to its thread count, and pass the corresponding table names in turn for real-time data processing;
  • Step 7: each application program that finishes its task in step 6 feeds its task end status back to the concurrent task controller.
  • after the concurrent task controller has collected all the feedback, it sends the information to the database controller; the data combiner of the database controller merges the processing results of the split tables and stores the merged result in the merge result memory.
  • if an application program ends with an abnormal status, the database controller re-sends the abnormally processed table name to the concurrent task controller so that the application program is started again to process the data in that table.
  • the memory capacity value is N_RAM and the number of CPU cores is N_CPU;
  • the parameter requirement in step 2 is met when N_RAM / N_CPU ≥ 4, and not met otherwise;
  • the engineering requirement in step 4 compares the application server group CPU thread total NA_thread with the database server group CPU thread total ND_thread: when NA_thread ≥ ND_thread the requirement is met, otherwise it is not.
  • the parameter scanner includes a CPU hardware feature detector and a memory hardware feature detector, and the CPU hardware feature detector is used to detect the number of CPU cores and threads.
  • the memory hardware feature detector is used to detect the memory capacity value of each member server of the application server group and the database server group.
  • the hardware parameters in the hardware parameter register are extracted by the performance index extractor, and the hardware parameters are processed by the performance index extractor and sent to the performance index matching determiner.
  • the massive data processing concurrent task adaptive measurement and control system includes an application server group composed of multiple application servers, a database server group composed of multiple database servers, a single server parameter detection unit, and a server group concurrent task management and control unit.
  • the single-server parameter detection unit is used to scan, process and store the hardware parameter information of each member server in the application server group and the database server group, to determine whether the CPU core count of each application server and database server matches its memory capacity value, and to send the matching hardware parameter information to the server group concurrent task management and control unit.
  • the server group concurrent task management and control unit is used to determine whether the total number of CPU threads of the application server group matches the total number of CPU threads of the database server group, and to complete equal division and merging of the data to be processed in the database.
  • the single-server parameter detection unit includes a parameter scanner, a hardware parameter register, a performance index extractor and a performance index matching determiner.
  • the parameter scanner is used to scan the hardware parameter information corresponding to the application server and the database server.
  • the hardware parameter register is used to store the hardware parameter information obtained by the parameter scanner.
  • the performance index extractor is used for reading and processing the hardware parameter information stored in the hardware parameter register, and sending the processing result to the performance index matching determiner.
  • the performance index matching determiner is used to determine whether the number of CPU cores of each application server and database server matches the memory capacity value, and store the matched parameter information in the group parameter register.
  • the server group concurrent task management and control unit includes a group performance indicator statistic device, a group performance matching determiner, a database controller and a division information register.
  • the group performance indicator statistic unit is used to read the parameter information in the group parameter register and obtain, by accumulation, the application server group CPU thread total and the database server group CPU thread total.
  • the group performance matching determiner is used to determine whether the application server group CPU thread total matches the database server group CPU thread total, and to send the matching application server group CPU thread total and the CPU thread count of each application server to the database controller.
  • the database controller is used to split the data table to be processed in the database into a plurality of split tables, and store the table name number of the split table and the number of CPU threads of each application server in the split information register.
  • the server group concurrent task management and control unit further includes a concurrent task controller.
  • the concurrent task controller includes a task starter and a task state detector.
  • the task starter sequentially controls each application server to start a number of data processing tasks equal to the number of its own CPU threads according to the information in the division information register.
  • the task state detector is used to detect the end state of the application program executing the data processing task, and send the complete task completion instruction information to the database controller after all the data processing tasks are completed.
  • the database controller includes a data divider and a data consolidator.
  • the data splitter is used to split the data table to be processed in the database into NA_thread equal split tables.
  • the data combiner is used to merge the processing results of each split table after the database controller receives the all-tasks-completed instruction, and to store the merged result in the merge result memory.
  • one of the advantages of the present invention is that, starting from acquiring the performance of the key data processing equipment of the information system, it gives a clear practical result for the adaptive matching relationship between the optimal number of concurrent tasks of each key server and hardware characteristics such as the number of CPU cores, the number of threads and the memory capacity.
  • each server adaptively matches its number of concurrent tasks to its own hardware performance, which overcomes the under-loading and overloading of servers when processing massive data, makes load balancing more effective, and greatly improves data processing efficiency.
  • the second advantage of the present invention is that a concrete requirement is given for the matching relationship between the CPU thread totals of the application server group and the database server group, which determine massive data processing efficiency: the application server group CPU thread total must be greater than or equal to that of the database server group,
  • so that when the application server group executes highly concurrent data processing tasks, the clustered database server group can balance the task load by itself without performance bottlenecks.
  • the third advantage of the present invention is that the data tables in the database storing massive data are equally divided according to the total number of threads of the application server group CPU, and an independent data processing unit is provided for each concurrent task in the application server group.
  • Tasks belong to different copies of the same application, so there is a one-to-one relationship between task handlers and the data they process.
  • the fourth advantage of the present invention is that an alarm is given when, at the optimal number of concurrent tasks, the CPU core count, thread count and memory capacity of a single server do not match.
  • an alarm is also given when, at the optimal number of concurrent tasks, the application server group CPU thread count does not match the database server group CPU thread count, which keeps the key data processing nodes running in their optimal, high-efficiency state and providing their best processing capability. Determining the number of data splits by the application server group CPU thread total is more scientific and reasonable.
  • Fig. 1 is the working flow chart of the self-adaptive measurement and control method for massive data processing concurrent tasks
  • Fig. 2 is a system connection frame diagram of an adaptive measurement and control system for massive data processing concurrent tasks.
  • An adaptive measurement and control method for concurrent tasks in massive data processing comprises the following steps:
  • Step 1: scan the application server group AF and the database server group DF in the network system with the parameter scanner S and record the hardware parameters of each application server and database server.
  • the hardware parameters are stored in the hardware parameter register R and include the number of CPU cores, the number of threads and the memory capacity value;
  • the parameter scanner S includes a CPU hardware feature detector and a memory hardware feature detector; the CPU hardware feature detector is used to detect the number of CPU cores and threads.
  • the memory hardware feature detector is used to detect the memory capacity values of the servers in the application server group AF and the database server group DF.
  • Step 2: extract the hardware parameters in the hardware parameter register R with the performance index extractor C and send them to the performance index matching determiner L, which makes the engineering matching decision. If the parameter requirement is not met, alarm information is issued through the alarm A and the system administrator adjusts the hardware, that is, re-adjusts the memory capacity so that the requirement is met; if it is met, the information corresponding to the qualifying hardware parameter values is stored in the group parameter register CR. The memory capacity value is N_RAM (unit: GB), the number of CPU cores is N_CPU (unit: cores), and the parameter requirement is met when N_RAM / N_CPU ≥ 4 and not met otherwise.
  • Step 3: read the hardware parameters in the group parameter register CR with the group performance indicator statistic unit CC and obtain by accumulation the total number of CPU threads of all application servers in the application server group AF, NA_thread,
  • and the total number of CPU threads of all database servers in the database server group DF, ND_thread;
  • Step 4: send the CPU thread total of the application server group AF and the CPU thread total of the database server group DF obtained in step 3 to the group performance matching determiner CL for the engineering decision. If the engineering requirement is not met, an alarm is issued through the alarm AC and the system administrator adjusts the CPU core counts of the corresponding application servers so that the requirement is met; if it is met, the thread totals are passed to the database controller DC.
  • Step 5: with the data splitter of the database controller DC, the data table to be processed in the database is split into NA_thread equal split tables; the table names are numbered automatically and recorded in the split information register SR; meanwhile
  • the data combiner of the database controller DC listens in real time for the instructions returned by the concurrent task controller;
  • Step 6: with the concurrent task controller TC, start in turn on each member application server of the application server group AF a number of data processing tasks equal to its CPU thread count, pass the corresponding table names in turn, and carry out real-time concurrent data processing;
  • Step 7: after the task processing of step 6 is finished, the concurrent task controller TC collects the feedback information on the end of each data processing task and, once collection is complete, sends it to the database controller DC.
  • when the database controller DC receives from the concurrent task controller the instruction that all tasks ended normally, the data combiner of the database controller DC merges the processing results of the split tables and stores them in the result memory, and the task ends.
  • when a task ends abnormally, the database controller DC calls the task starter again to restart the task processing program on the server whose task ended abnormally.
  • the adaptive measurement and control system for concurrent tasks in massive data processing that fits the above method mainly includes an application server group AF made up of multiple application servers, a database server group DF made up of multiple database servers, a single-server parameter detection unit and a server group concurrent task management and control unit.
  • the single-server parameter detection unit is used to scan, store and process the hardware parameter information of the application server group AF and the database server group DF, to determine whether the CPU core count of each application server and database server matches its memory capacity value, and to send the matching hardware parameter
  • information to the server group concurrent task management and control unit.
  • the server group concurrent task management and control unit is used to determine whether the total number of CPU threads of the application server group AF matches the total number of CPU threads of the database server group DF, and to complete the equal division of the data to be processed in the database and the merging of the processing results.
  • the single-server parameter detection unit includes a parameter scanner S, a hardware parameter register R, a performance index extractor C, a performance index matching determiner L, and a group parameter register CR.
  • the parameter scanner S is used to scan the hardware parameter information of the application servers and database servers.
  • the hardware parameter register R is used to store the hardware parameter information obtained by the parameter scanner S.
  • the performance index extractor C is used to process and extract the hardware parameters stored in the hardware parameter register R and to send the processed parameters to the performance index matching determiner L.
  • the performance index matching determiner L is used to determine whether the CPU core count of each application server and database server matches its memory capacity value, and to store the matching
  • parameter information in the group parameter register CR.
  • the server group concurrent task management and control unit includes a group performance indicator statistic unit CC, a group performance matching determiner CL, a database controller DC, a split information register SR, a concurrent task controller and a merge result memory DR.
  • the group performance indicator statistic unit CC is used to read the parameter information in the group parameter register CR and obtain, by accumulation, the CPU thread total of the application server group AF and the CPU thread total of the database server group DF.
  • the group performance matching determiner CL is used to determine whether the CPU thread total of the application server group AF matches the CPU thread total of the database server group DF, and to send the matching application server group AF CPU thread total and the CPU thread count of each member application server
  • to the database controller DC.
  • the database controller DC is used to split the data table to be processed in the database into multiple split tables, and to store the split-table numbers and the CPU thread count of each application server in the split information register SR.
  • the concurrent task controller TC included in the above server group concurrent task management and control unit also includes a task starter and a task state detector; the task starter, according to the information in the split information register SR, controls each application server in turn to start a number of data processing tasks equal to its own CPU thread count.
  • the task state detector is used to detect the end state of each concurrent task and, after the data processing tasks are finished, to send the end state information of all tasks to the database controller DC.
  • the above database controller DC includes a data splitter and a data combiner; the data splitter is used to split the data table to be processed in the database into NA_thread equal split tables, and the combiner is used to merge the processing results of each split table after the database controller DC has received the information that all tasks ended normally, and to store the merged result in the merge result memory DR.
  • the above-mentioned parameter scanner S includes a CPU hardware feature detector and a memory hardware feature detector.
  • the CPU hardware feature detector is used to detect the number of CPU cores and threads of each server in the application server group AF and the database server group DF.
  • the memory hardware feature detector is used to detect the memory capacity value of each server in the application server group AF and the database server group DF;
  • the hardware parameter register R is a text file, used to store the hardware parameter information scanned and identified by the parameter scanner S;
  • the group parameter register CR consists of two text files, used respectively to store the processed parameter values of all application servers in the application server group AF and of all database servers in the database server group DF.
  • the parameter scanner S of the single-server parameter detection unit scans the CPU core count, thread count and memory capacity value of each corresponding server in the application server group AF and the database server group DF, and each member server's CPU core count and memory capacity value are checked for engineering matching. If the two values do not meet the parameter requirement, the system administrator is prompted to adjust the hardware; if they do, the group name, server name, CPU core count, CPU thread count and memory capacity value are stored in the group parameter register CR.
  • the group performance indicator statistic unit CC counts the total core and thread numbers of the application server group AF and the database server group DF respectively, and compares the total CPU thread counts of the two server groups for matching against the engineering requirement. If the CPU thread counts of the two server groups do not meet the engineering requirement, the system administrator is prompted to adjust the database server hardware configuration. If they do, the database controller DC splits the database DB into multiple tables according to the total thread count of the application server group AF and records the split-table name information in the split information register SR.
  • the concurrent task controller TC starts on each application server a number of data processing tasks equal to its CPU thread count and at the same time passes the corresponding table names in turn so that the data are processed. After each data processing task on an application server is completed, it feeds information back to the concurrent task controller TC. After all tasks end normally, the concurrent task controller TC feeds the information back to the database controller DC, and the data combiner merges the results produced from the split data tables of the database DB and puts them into the merge result memory DR. If a data processing task on an application server terminates abnormally, the application server feeds this back to the concurrent task controller TC; after receiving the abnormal-end information, the database controller DC re-sends a task processing instruction to the concurrent task controller TC and restarts the data processing task to complete the processing of the data table that previously terminated abnormally.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An adaptive measurement and control method for concurrent tasks in massive data processing comprises: scanning the application server group and the database server group with a parameter scanner; sending the scanned hardware parameters to a performance index matching determiner for an engineering matching decision; and sending the application server group CPU thread total and the database server group CPU thread total, obtained by accumulation, to a group performance matching determiner for an engineering decision. The data table to be processed is split into equal parts by a data splitter, and a concurrent task controller starts a number of concurrent data processing tasks equal to the CPU thread count for real-time data processing. The advantage is that the optimal number of concurrent tasks of the key servers is adaptively matched to hardware characteristics such as the number of CPU cores, the number of threads and the memory capacity, which overcomes the under-loading or overloading of servers when processing massive data and improves data processing efficiency.

Description

Adaptive measurement and control method and system for concurrent tasks in massive data processing
Technical Field
The present invention relates to the field of concurrent task processing, and in particular to an adaptive measurement and control method and system for concurrent tasks in massive data processing.
Background Art
With the ever-deepening application of information systems in all areas of human society, the volume of stored data reflecting the patterns of human life keeps growing, for example in e-commerce, transportation and e-learning. Fast, efficient analysis of these data can provide a scientific basis for development decisions in every industry. Quickly extracting high-value commercial information from massive day-to-day data is an urgent need of social activity, yet the long time existing computers take to process massive data delays the application of such information. For a long time, how to scientifically and effectively increase the number of concurrent application tasks and partition the database reasonably so as to improve data processing efficiency has troubled the information processing industry. Processing concurrent tasks requires close cooperation among the modules of a computer system; otherwise concurrent tasks cannot be handled in time, the software is severely slowed down, and in serious cases the computer system may even go down. At the same time, how to match the hardware inside a computer system to the concurrent tasks is also an open problem: a better fit between concurrent tasks and hardware such as the number of CPU cores and the memory enables fast processing of concurrent tasks while reducing wasted spending on hardware acquisition.
Therefore, exploring information processing networks to improve information processing efficiency, making effective use of the high performance of the key node devices in the information network, and scientifically optimizing the cooperation among the key service nodes have become problems to be solved urgently.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an adaptive measurement and control method and system for concurrent tasks in massive data processing, so as to solve the problem of reasonably matching the number of concurrent tasks with the number of CPU cores and threads when processing massive data, and to achieve the goal of improving massive data processing efficiency.
The technical solution adopted by the present invention to solve the above problem is an adaptive measurement and control method for concurrent tasks in massive data processing, comprising the following steps:
Step 1. Scan the application server group and the database server group in the network system with a parameter scanner, record the hardware parameters of every application server and database server, and store the hardware parameters in a hardware parameter register; the hardware parameters include the number of CPU cores, the number of threads and the memory capacity value.
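A minimal Python sketch of such a per-server scan, assuming the third-party psutil library is available on each server and that records are appended to a text-file register; the function and file names are illustrative, not taken from the patent, and how the scan reaches remote servers is left open:

```python
import json
import psutil  # assumed available; reads CPU and memory characteristics of the local host

def scan_local_hardware(group_name: str, server_name: str) -> dict:
    """Collect the CPU core count, CPU thread count and memory capacity (GB) of this server."""
    return {
        "group": group_name,
        "server": server_name,
        "cpu_cores": psutil.cpu_count(logical=False),    # physical cores
        "cpu_threads": psutil.cpu_count(logical=True),   # hardware threads
        "ram_gb": round(psutil.virtual_memory().total / 2**30),
    }

def append_to_hardware_register(record: dict, path: str = "hardware_parameter_register.txt") -> None:
    """Append one scanned record to the hardware parameter register (a plain text file)."""
    with open(path, "a", encoding="utf-8") as register:
        register.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: record the parameters of one application server.
append_to_hardware_register(scan_local_hardware("AF", "app-server-01"))
```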
Step 2. After the hardware parameters scanned in step 1 are processed by the performance index extractor, send them to the performance index matching determiner for an engineering matching decision. If the parameter requirement is not met, the system administrator adjusts the hardware so that it meets the requirement; if it is met, the hardware parameter values that meet the requirement are stored in the group parameter register. Step 3. Read the hardware parameters in the group parameter register with the group performance indicator statistic unit, and obtain by accumulation the total number of CPU threads of all application servers in the application server group, NA_thread, and the total number of CPU threads of all database servers in the database server group, ND_thread.
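The step-2 matching rule is simple enough to state directly in code. The sketch below assumes the record format of the step-1 example and applies the 4 GB-per-core threshold exactly as defined later in the description:

```python
def meets_parameter_requirement(record: dict) -> bool:
    """Step 2 rule: a server qualifies when N_RAM / N_CPU >= 4 (GB of memory per CPU core)."""
    return record["cpu_cores"] > 0 and record["ram_gb"] / record["cpu_cores"] >= 4

# A 16-core server with 128 GB passes (128 / 16 = 8); with 32 GB it fails (32 / 16 = 2).
assert meets_parameter_requirement({"cpu_cores": 16, "ram_gb": 128})
assert not meets_parameter_requirement({"cpu_cores": 16, "ram_gb": 32})
```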
Step 4. Send the application server group CPU thread total and the database server group CPU thread total obtained in step 3 to the group performance matching determiner for an engineering decision. If the engineering requirement is not met, the system administrator adjusts the CPU core counts of the corresponding application servers so that the application server group CPU thread total is greater than or equal to the database server group CPU thread total and the engineering requirement is met; if the requirement is met, pass the application server group CPU thread total and the CPU thread count of each application server in the group to the database controller.
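Steps 3 and 4 reduce to an accumulation and a comparison; a sketch under the same assumed record format:

```python
def total_cpu_threads(group_records: list[dict]) -> int:
    """Step 3: accumulate the CPU thread counts of every qualifying server in a group."""
    return sum(r["cpu_threads"] for r in group_records)

def meets_engineering_requirement(na_thread: int, nd_thread: int) -> bool:
    """Step 4: the application server group must offer at least as many CPU threads
    as the database server group (NA_thread >= ND_thread)."""
    return na_thread >= nd_thread

# Example: 4 application servers x 16 threads = 64 >= 2 database servers x 24 threads = 48.
assert meets_engineering_requirement(64, 48)
```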
Step 5. With the data splitter of the database controller, split the data table to be processed in the database into NA_thread equal split tables, number the table names automatically, and record them in the split information register.
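Purely as an illustration of what the equal split might look like, the following sketch targets a SQLite database; the patent does not name a database engine, and the table naming scheme is an assumption. Rows are chunked in rowid order into NA_thread automatically numbered split tables:

```python
import sqlite3

def split_table(db_path: str, source_table: str, na_thread: int) -> list[str]:
    """Split the source table into NA_thread roughly equal, numbered split tables."""
    conn = sqlite3.connect(db_path)
    try:
        total = conn.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
        chunk = -(-total // na_thread)  # ceiling division so every row lands in some split table
        names = []
        for i in range(na_thread):
            name = f"{source_table}_split_{i + 1:04d}"   # automatically numbered table name
            conn.execute(f"DROP TABLE IF EXISTS {name}")
            conn.execute(
                f"CREATE TABLE {name} AS SELECT * FROM {source_table} "
                f"ORDER BY rowid LIMIT {chunk} OFFSET {i * chunk}"
            )
            names.append(name)
        conn.commit()
        return names  # these names would be recorded in the split information register
    finally:
        conn.close()
```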
Step 6. With the concurrent task controller, start on each application server in the application server group a number of data processing tasks equal to its CPU thread count, and pass the corresponding table names in turn for real-time data processing.
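A local sketch of the step-6 dispatch: split-table names are handed out in turn, and each server runs one task per CPU thread. The patent leaves the remote-execution mechanism open, so the example simply models one server with a thread pool, and process_table stands in for the actual data processing application:

```python
from concurrent.futures import ThreadPoolExecutor

def assign_tables(servers: list[dict], split_tables: list[str]) -> dict[str, list[str]]:
    """Give every application server one split table per CPU thread, in turn."""
    tables = iter(split_tables)
    return {s["server"]: [next(tables) for _ in range(s["cpu_threads"])] for s in servers}

def run_on_server(server: dict, tables: list[str], process_table) -> dict[str, bool]:
    """Start as many concurrent tasks as the server has CPU threads; report normal or abnormal end."""
    with ThreadPoolExecutor(max_workers=server["cpu_threads"]) as pool:
        futures = {t: pool.submit(process_table, t) for t in tables}
    # Leaving the with-block waits for all tasks; a task ended normally if it raised no exception.
    return {t: fut.exception() is None for t, fut in futures.items()}
```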
Step 7. Each application program that finishes its task in step 6 feeds its task end status back to the concurrent task controller. After the concurrent task controller has collected all the feedback, it sends the information to the database controller; the data combiner of the database controller merges the processing results of the split tables and stores the merged result in the merge result memory. If an application program ends with an abnormal status, the database controller re-sends the name of the abnormally processed table to the concurrent task controller, which starts the application program again to process the data in that table.
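The step-7 feedback loop can be sketched as: restart any split table whose task ended abnormally, and merge only once everything has ended normally. The reprocess and merge_tables callbacks are placeholders for the concurrent task controller and data combiner described above, and the retry cap is an assumption, since the patent does not bound the number of restarts:

```python
def finish_tasks(results: dict[str, bool], reprocess, merge_tables, max_retries: int = 3) -> None:
    """results maps split-table name -> True if its task ended normally.
    reprocess(table) re-runs one table and returns True on a normal end."""
    failed = [table for table, ok in results.items() if not ok]
    for _ in range(max_retries):
        if not failed:
            break
        failed = [table for table in failed if not reprocess(table)]
    if failed:
        raise RuntimeError(f"split tables still unprocessed after retries: {failed}")
    merge_tables(sorted(results))  # the data combiner merges every split table's result
```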
Preferably, the memory capacity value is N_RAM and the number of CPU cores is N_CPU; the parameter requirement in step 2 is met when N_RAM / N_CPU ≥ 4, and not met otherwise.
Preferably, the engineering requirement in step 4 compares the application server group CPU thread total NA_thread with the database server group CPU thread total ND_thread: when NA_thread ≥ ND_thread the engineering requirement is met, otherwise it is not.
Preferably, the parameter scanner comprises a CPU hardware feature detector and a memory hardware feature detector. The CPU hardware feature detector is used to detect the number of CPU cores and threads, and the memory hardware feature detector is used to detect the memory capacity value of every member server of the application server group and the database server group.
Preferably, in step 2 the hardware parameters in the hardware parameter register are extracted by the performance index extractor, which processes them and sends them to the performance index matching determiner.
The adaptive measurement and control system for concurrent tasks in massive data processing comprises an application server group made up of multiple application servers, a database server group made up of multiple database servers, a single-server parameter detection unit and a server group concurrent task management and control unit. The single-server parameter detection unit is used to scan, process and store the hardware parameter information of every member server of the application server group and the database server group, to decide whether the CPU core count of each application server and database server matches its memory capacity value, and to pass the matching hardware parameter information to the server group concurrent task management and control unit. The server group concurrent task management and control unit is used to decide whether the application server group CPU thread total matches the database server group CPU thread total, and to carry out the equal splitting and the merging of the data to be processed in the database.
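Read as software components, the two units and their sub-modules could be organized along the following lines; this is only a structural sketch with invented class names, not an interface defined by the patent:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SingleServerParameterDetectionUnit:
    """Scans, checks and stores per-server hardware parameters."""
    scan: Callable[[], list[dict]]               # parameter scanner
    parameter_check: Callable[[dict], bool]      # performance index matching determiner
    group_register_path: str                     # group parameter register (text file)

@dataclass
class ServerGroupConcurrentTaskControlUnit:
    """Checks group-level thread totals, splits the data, dispatches tasks and merges results."""
    engineering_check: Callable[[int, int], bool]     # group performance matching determiner
    split: Callable[[int], list[str]]                 # data splitter of the database controller
    dispatch: Callable[[list[str]], dict[str, bool]]  # concurrent task controller
    merge: Callable[[list[str]], None]                # data combiner
```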
Preferably, the single-server parameter detection unit comprises a parameter scanner, a hardware parameter register, a performance index extractor and a performance index matching determiner. The parameter scanner is used to scan the hardware parameter information of the application servers and database servers. The hardware parameter register is used to store the hardware parameter information obtained by the parameter scanner. The performance index extractor is used to read and process the hardware parameter information stored in the hardware parameter register and send the result to the performance index matching determiner. The performance index matching determiner is used to decide whether the CPU core count of each application server and database server matches its memory capacity value, and to store the matching parameter information in the group parameter register.
Preferably, the server group concurrent task management and control unit comprises a group performance indicator statistic unit, a group performance matching determiner, a database controller and a split information register. The group performance indicator statistic unit is used to read the parameter information in the group parameter register and obtain, by accumulation, the application server group CPU thread total and the database server group CPU thread total. The group performance matching determiner is used to decide whether the application server group CPU thread total matches the database server group CPU thread total, and to send the matching application server group CPU thread total and the CPU thread count of each application server to the database controller. The database controller is used to split the data table to be processed in the database into multiple split tables and to store the split-table name numbers and the CPU thread count of each application server in the split information register.
Preferably, the server group concurrent task management and control unit further comprises a concurrent task controller. The concurrent task controller comprises a task starter and a task state detector. The task starter, according to the information in the split information register, controls each application server in turn to start a number of data processing tasks equal to its own CPU thread count. The task state detector is used to detect the end state of the application programs executing the data processing tasks and, once all data processing tasks are finished, to send an all-tasks-completed instruction to the database controller.
Preferably, the database controller comprises a data splitter and a data combiner. The data splitter is used to split the data table to be processed in the database into NA_thread equal split tables. The data combiner is used to merge the processing results of every split table after the database controller has received the all-tasks-completed instruction, and to store the merged result in the merge result memory.
Compared with the prior art, a first advantage of the present invention is that, starting from acquiring the performance of the key data processing equipment of an information system, it gives a clear, practical result for the adaptive matching relationship between the optimal number of concurrent tasks of each key server and hardware characteristics such as the number of CPU cores, the number of threads and the memory capacity. Each server adaptively matches its number of concurrent tasks to its own hardware performance, which overcomes the under-loading or overloading of servers when processing massive data and greatly improves both load balancing and data processing efficiency. A second advantage is that a concrete requirement is given for the matching relationship between the CPU thread totals of the application server group and the database server group, which determine massive data processing efficiency: the application server group CPU thread total must be greater than or equal to that of the database server group, so that when the application server group executes highly concurrent data processing tasks, the clustered database server group can balance the task load by itself without becoming a performance bottleneck. A third advantage is that the data tables in the database storing the massive data are split into equal parts according to the application server group CPU thread total, giving every concurrent task in the application server group an independent unit of data to process; since the concurrent tasks are different copies of the same application program, each task handler has a one-to-one relationship with the data it processes. As the number of CPU cores and threads increases, the number of tasks the system can run concurrently increases accordingly, and data processing efficiency rises in step with the application server group's total thread count. A fourth advantage is that an alarm is raised when, at the optimal number of concurrent tasks, the CPU core count, thread count and memory capacity of a single server do not match, and likewise when the application server group CPU thread count does not match the database server group CPU thread count; this keeps the key data processing nodes running in their optimal, highly efficient state and providing their best processing capability. Determining the number of data splits by the application server group CPU thread total is more scientific and reasonable.
Brief Description of the Drawings
Fig. 1 is a working flow chart of the adaptive measurement and control method for concurrent tasks in massive data processing;
Fig. 2 is a system connection block diagram of the adaptive measurement and control system for concurrent tasks in massive data processing.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the embodiments shown in the drawings.
An adaptive measurement and control method for concurrent tasks in massive data processing comprises the following steps:
Step 1. Scan the application server group AF and the database server group DF in the network system with the parameter scanner S and record the hardware parameters of every application server and database server. The hardware parameters are stored in the hardware parameter register R and include the number of CPU cores, the number of threads and the memory capacity value. The parameter scanner S comprises a CPU hardware feature detector and a memory hardware feature detector; the CPU hardware feature detector is used to detect the number of CPU cores and threads, and the memory hardware feature detector is used to detect the memory capacity values of the application server group AF and the database server group DF.
Step 2. Extract the hardware parameters in the hardware parameter register R with the performance index extractor C and send them, through the performance index extractor C, to the performance index matching determiner L, which makes the engineering matching decision. If the parameter requirement is not met, alarm information is issued through the alarm A and the system administrator adjusts the hardware, that is, re-adjusts the memory capacity so that the requirement is met; if it is met, the information corresponding to the qualifying hardware parameter values is stored in the group parameter register CR. The memory capacity value is N_RAM (unit: GB) and the CPU core count is N_CPU (unit: cores); the parameter requirement is met when N_RAM / N_CPU ≥ 4 and not met otherwise.
Step 3. Read the hardware parameters in the group parameter register CR with the group performance indicator statistic unit CC and obtain, by accumulation, the total number of CPU threads of all application servers in the application server group AF, NA_thread, and the total number of CPU threads of all database servers in the database server group DF, ND_thread.
Step 4. Send the CPU thread total of the application server group AF and the CPU thread total of the database server group DF obtained in step 3 to the group performance matching determiner CL for the engineering decision. If the engineering requirement is not met, alarm information is issued through the alarm AC and the system administrator adjusts the CPU core counts of the corresponding application servers in the application server group AF so that the CPU thread total of the application server group AF is greater than or equal to that of the database server group DF and the engineering requirement is met; if it is met, the CPU thread total of the application server group AF and the CPU thread count of each member application server are passed to the database controller DC. The engineering requirement compares the application server group AF CPU thread total NA_thread with the database server group CPU thread total ND_thread: when NA_thread ≥ ND_thread the requirement is met, otherwise it is not.
Step 5. With the data splitter of the database controller DC, split the data table to be processed in the database into NA_thread equal split tables, number the table names automatically, and record them in the split information register SR; meanwhile the data combiner of the database controller DC listens in real time for the instructions returned by the concurrent task controller.
Step 6. With the concurrent task controller TC, start in turn on each member application server of the application server group AF a number of data processing tasks equal to its CPU thread count, pass the corresponding table names in turn, and carry out real-time concurrent data processing.
Step 7. After the task processing of step 6 is finished, the concurrent task controller TC collects the feedback information on the end of each data processing task and, once the collection is complete, sends it to the database controller DC. When the database controller DC receives from the concurrent task controller the instruction that all tasks have ended normally, the data combiner of the database controller DC merges the processing results of the split tables and stores them in the result memory, and the task ends. When the concurrent task controller TC reports that a data processing task ended abnormally, the database controller DC calls the task starter again to restart the task processing program on the server whose task ended abnormally.
The adaptive measurement and control system for concurrent tasks in massive data processing that fits the above method mainly comprises an application server group AF made up of multiple application servers, a database server group DF made up of multiple database servers, a single-server parameter detection unit and a server group concurrent task management and control unit. The single-server parameter detection unit is used to scan, store and process the hardware parameter information of the application server group AF and the database server group DF, to decide whether the CPU core count of each application server and database server matches its memory capacity value, and to pass the matching hardware parameter information to the server group concurrent task management and control unit. The server group concurrent task management and control unit is used to decide whether the CPU thread total of the application server group AF matches that of the database server group DF, and to carry out the equal splitting of the data to be processed in the database and the merging of the processing results.
The single-server parameter detection unit comprises the parameter scanner S, the hardware parameter register R, the performance index extractor C, the performance index matching determiner L and the group parameter register CR. The parameter scanner S is used to scan the hardware parameter information of the application servers and database servers; the hardware parameter register R is used to store the hardware parameter information obtained by the parameter scanner S; the performance index extractor C is used to process and extract the hardware parameter information stored in the hardware parameter register R and send the processed parameters to the performance index matching determiner L; and the performance index matching determiner L is used to decide whether the CPU core count of each application server and database server matches its memory capacity value and to store the matching parameter information in the group parameter register CR.
The server group concurrent task management and control unit comprises the group performance indicator statistic unit CC, the group performance matching determiner CL, the database controller DC, the split information register SR, the concurrent task controller and the merge result memory DR. The group performance indicator statistic unit CC is used to read the parameter information in the group parameter register CR and obtain, by accumulation, the CPU thread total of the application server group AF and the CPU thread total of the database server group DF. The group performance matching determiner CL is used to decide whether the CPU thread total of the application server group AF matches that of the database server group DF and to send the matching application server group AF CPU thread total and the CPU thread count of each member application server to the database controller DC. The database controller DC is used to split the data table to be processed in the database into multiple split tables and to store the split-table numbers and the CPU thread count of each application server in the split information register SR. The concurrent task controller TC included in the server group concurrent task management and control unit further comprises a task starter and a task state detector: the task starter, according to the information in the split information register SR, controls each application server in turn to start a number of data processing tasks equal to its own CPU thread count, and the task state detector is used to detect the end state of every concurrent task and, once the data processing tasks are finished, to send the end state information of all tasks to the database controller DC.
The database controller DC comprises a data splitter and a data combiner; the data splitter is used to split the data table to be processed in the database into NA_thread equal split tables, and the combiner is used to merge the processing results of every split table after the database controller DC has received the information that all tasks ended normally, and to store the merged result in the merge result memory DR.
The parameter scanner S comprises a CPU hardware feature detector and a memory hardware feature detector: the CPU hardware feature detector is used to detect the CPU core and thread counts of every server in the application server group AF and the database server group DF, and the memory hardware feature detector is used to detect the memory capacity value of every server in the application server group AF and the database server group DF. The hardware parameter register R is a text file used to store the hardware parameter information scanned and identified by the parameter scanner S. The group parameter register CR consists of two text files, used respectively to store the processed parameter values of all application servers in the application server group AF and of all database servers in the database server group DF.
The working process of the adaptive measurement and control system for concurrent tasks in massive data processing is as follows. The parameter scanner S of the single-server parameter detection unit scans the CPU core count, thread count and memory capacity value of every corresponding server in the application server group AF and the database server group DF, and the CPU core count and memory capacity value of each member server are checked for engineering matching. If the two values do not meet the parameter requirement, the system administrator is prompted to adjust the hardware; if they do, the group name, server name, CPU core count, CPU thread count and memory capacity value are saved in the group parameter register CR.
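Since the description specifies that the registers are plain text files and lists the fields kept in the group parameter register CR (group name, server name, CPU core count, CPU thread count, memory capacity), one simple realization is a CSV line per qualifying server; the concrete file layout below is an assumption made for the example:

```python
import csv

def save_to_group_register(path: str, group: str, server: str,
                           cpu_cores: int, cpu_threads: int, ram_gb: int) -> None:
    """Append one qualifying server's record to the group parameter register CR."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([group, server, cpu_cores, cpu_threads, ram_gb])

def load_group_register(path: str) -> list[dict]:
    """Read the register back for the group performance indicator statistic unit CC."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {"group": g, "server": s, "cpu_cores": int(c),
             "cpu_threads": int(t), "ram_gb": int(m)}
            for g, s, c, t, m in csv.reader(f)
        ]
```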
In the server group concurrent task management and control unit, the group performance indicator statistic unit CC counts the overall core and thread totals of the application server group AF and the database server group DF respectively and compares the total CPU thread counts of the two groups against the engineering requirement. If the CPU thread counts of the two server groups do not satisfy the engineering requirement, the system administrator is prompted to adjust the database server hardware configuration. If they do, the database controller DC splits the database DB into multiple tables according to the total thread count of the application server group AF and records the resulting table name information in the split information register SR. The concurrent task controller TC starts on each application server a number of data processing tasks equal to its CPU thread count and at the same time passes the corresponding table names in turn so that the data are processed. After each data processing task on an application server finishes, it feeds information back to the concurrent task controller TC; once all tasks have ended normally, the concurrent task controller TC feeds the information back to the database controller DC, and the data combiner merges the results produced from the split data tables of the database DB and puts them into the merge result memory DR. If a data processing task on an application server terminates abnormally, the application server feeds this back to the concurrent task controller TC; after receiving the abnormal-end information, the database controller DC re-sends a task processing instruction to the concurrent task controller TC and restarts the data processing task so as to complete the processing of the data table that previously terminated abnormally.
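Tying the sketches above together, the overall control flow of this working process might read as follows; every function used here comes from the earlier illustrative snippets, so this is a usage example under those same assumptions rather than the system itself:

```python
def control_loop(app_records: list[dict], db_records: list[dict], db_path: str,
                 source_table: str, process_table, merge_tables) -> None:
    # Single-server check (step 2): every server must satisfy N_RAM / N_CPU >= 4.
    for record in app_records + db_records:
        if not meets_parameter_requirement(record):
            raise RuntimeError(f"adjust hardware of {record['server']}: memory/core mismatch")

    # Group check (steps 3 and 4): NA_thread >= ND_thread.
    na_thread = total_cpu_threads(app_records)
    nd_thread = total_cpu_threads(db_records)
    if not meets_engineering_requirement(na_thread, nd_thread):
        raise RuntimeError("adjust the application server group: too few CPU threads")

    # Split (step 5), dispatch (step 6), then merge with retry of abnormal tables (step 7).
    split_tables = split_table(db_path, source_table, na_thread)
    results: dict[str, bool] = {}
    for server_name, tables in assign_tables(app_records, split_tables).items():
        record = next(r for r in app_records if r["server"] == server_name)
        results.update(run_on_server(record, tables, process_table))

    def reprocess(table: str) -> bool:
        try:
            process_table(table)
            return True
        except Exception:
            return False

    finish_tasks(results, reprocess, merge_tables)
```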
The embodiment described above is only a preferred embodiment of the present invention; its description is relatively specific and detailed, but it must not therefore be understood as limiting the scope of the patent. It should be pointed out that those skilled in the art can make a number of variations and improvements without departing from the concept of the present invention, and these all fall within the scope of protection of the present invention.

Claims (10)

  1. An adaptive measurement and control method for concurrent tasks in massive data processing, characterized by comprising the following steps:
    Step 1. scanning the application server group and the database server group in the network system with a parameter scanner, recording the hardware parameters of each application server and database server, and storing the hardware parameters in a hardware parameter register, the hardware parameters including the number of CPU cores, the number of threads and the memory capacity value;
    Step 2. sending the hardware parameters scanned in step 1 to a performance index matching determiner for an engineering matching decision; if the parameter requirement is not met, the system administrator adjusts the hardware so that it meets the parameter requirement; if it is met, storing the information corresponding to the qualifying hardware parameter values in a group parameter register;
    Step 3. reading the hardware parameters in the group parameter register with a group performance indicator statistic unit, and obtaining by accumulation the total number of CPU threads of all application servers in the application server group, NA_thread, and the total number of CPU threads of all database servers in the database server group, ND_thread;
    Step 4. sending the application server group CPU thread total and the database server group CPU thread total obtained in step 3 to a group performance matching determiner for an engineering decision; if the engineering requirement is not met, the system administrator adjusts the CPU core counts of the corresponding application servers of the application server group so that the application server group CPU thread total is greater than or equal to the database server group CPU thread total and the engineering requirement is met; if it is met, passing the application server group CPU thread total and the CPU thread counts of the corresponding application servers to a database controller;
    Step 5. splitting the data table to be processed in the database into NA_thread equal split tables with the data splitter of the database controller, numbering the table names automatically, and recording them in a split information register;
    Step 6. starting, with a concurrent task controller, on each member server of the application server group a number of concurrent data processing tasks equal to its CPU thread count, and passing the corresponding table names in turn for real-time data processing;
    Step 7. after its task ends, each application program that performed a data task in step 6 feeds a normal-end message back to the concurrent task controller; when all application programs have ended normally, the concurrent task controller sends a task-end instruction to the database controller, and the data combiner of the database controller merges the processing results of the split tables and stores them in the merge result memory; when an application server has an application program that did not end normally, the concurrent task controller sends feedback to the database controller, and the database controller re-sends the table name to the concurrent task controller so that the application program on that application server is started separately to reprocess the data table whose processing was not completed.
  2. The adaptive measurement and control method for concurrent tasks in massive data processing according to claim 1, characterized in that the memory capacity value is N_RAM and the CPU core count is N_CPU, and the parameter requirement in step 2 is met when N_RAM / N_CPU ≥ 4 and not met otherwise.
  3. The adaptive measurement and control method for concurrent tasks in massive data processing according to claim 1, characterized in that the engineering requirement in step 4 compares the application server group CPU thread total NA_thread with the database group CPU thread total ND_thread: when NA_thread ≥ ND_thread the engineering requirement is met, otherwise it is not.
  4. The adaptive measurement and control method for concurrent tasks in massive data processing according to claim 1, characterized in that the parameter scanner comprises a CPU hardware feature detector and a memory hardware feature detector, the CPU hardware feature detector being used to detect the number of CPU cores and threads, and the memory hardware feature detector being used to detect the memory capacity values of all member servers of the application server group and the database server group.
  5. The adaptive measurement and control method for concurrent tasks in massive data processing according to claim 1, characterized in that in step 2 the hardware parameters in the hardware parameter register are extracted by a performance index extractor, which processes them and sends them to the performance index matching determiner.
  6. An adaptive measurement and control system for concurrent tasks in massive data processing adapted to the method of claim 1, characterized by comprising an application server group made up of multiple application servers, a database server group made up of multiple database servers, a single-server parameter detection unit and a server group concurrent task management and control unit; the single-server parameter detection unit is used to scan the hardware parameter information of every server of the application server group and the database server group, to decide whether the CPU core count and the memory capacity value of each application server and database server match, and to send the matching hardware parameter information to the server group concurrent task management and control unit; the server group concurrent task management and control unit is used to decide whether the application server group CPU thread total matches the database server group CPU thread total, and to carry out the equal splitting and merging of the data to be processed in the database.
  7. The adaptive measurement and control system for concurrent tasks in massive data processing according to claim 6, characterized in that the single-server parameter detection unit comprises a CPU and memory parameter scanner, a hardware parameter register, a performance index extractor, a performance index matching determiner and a group parameter register; the parameter scanner is used to scan the hardware parameter information of every application server and database server; the hardware parameter register is used to store the hardware parameter information obtained by the parameter scanner; the performance index extractor is used to read and process the hardware parameter information stored in the hardware parameter register and send the result to the performance index matching determiner; the performance index matching determiner is used to decide whether the CPU core count of each application server and database server matches its memory capacity value and to store the matching parameter information in the group parameter register.
  8. The adaptive measurement and control system for concurrent tasks in massive data processing according to claim 7, characterized in that the server group concurrent task management and control unit comprises a group performance indicator statistic unit, a group performance matching determiner, a database controller and a split information register; the group performance indicator statistic unit is used to read the parameter information in the group parameter register and obtain, by accumulation, the application server group CPU thread total and the database group CPU thread total; the group performance matching determiner is used to decide whether the application server group CPU thread total matches the database server group CPU thread total and to send the matching application server group CPU thread total and the thread count of each application server to the database controller; the database controller is used to split the data table to be processed in the database into multiple split tables and to store the split-table numbers and the thread count of each application server in the split information register.
  9. The adaptive measurement and control system for concurrent tasks in massive data processing according to claim 8, characterized in that the server group concurrent task management and control unit further comprises a concurrent task controller, the concurrent task controller comprising a task starter and a task state detector; the task starter, according to the information in the split information register, controls each application server in turn to start a number of data processing tasks equal to its own CPU thread count; the task state detector is used to detect the end state of the application programs performing the data processing tasks on the application servers and, after all data processing tasks have ended normally, to send an all-tasks-ended instruction to the database controller.
  10. The adaptive measurement and control system for concurrent tasks in massive data processing according to claim 9, characterized in that the database controller comprises a data splitter and a data combiner; the data splitter is used to split the data table to be processed in the database into NA_thread equal split tables; the combiner is used to merge the processing results of every split table after the database controller has received the all-tasks-completed instruction and to store the merged result in the merge result memory.
PCT/CN2021/071736 2020-10-28 2021-01-14 一种海量数据处理并发任务自适应测控方法及系统 WO2022088515A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011167948.8A CN112269660B (zh) 2020-10-28 2020-10-28 一种海量数据处理并发任务自适应测控方法及系统
CN202011167948.8 2020-10-28

Publications (1)

Publication Number Publication Date
WO2022088515A1 (zh)

Family

ID=74344281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/071736 WO2022088515A1 (zh) 2020-10-28 2021-01-14 一种海量数据处理并发任务自适应测控方法及系统

Country Status (2)

Country Link
CN (1) CN112269660B (zh)
WO (1) WO2022088515A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116257365A (zh) * 2023-05-15 2023-06-13 建信金融科技有限责任公司 数据入库方法、装置、设备、存储介质及程序产品

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105656698A (zh) * 2016-03-24 2016-06-08 鞠洪尧 一种网络应用系统智能监控结构与方法
WO2018183422A1 (en) * 2017-03-28 2018-10-04 Cisco Technology, Inc. Flowlet resolution for application performance monitoring and management
CN111049914A (zh) * 2019-12-18 2020-04-21 珠海格力电器股份有限公司 负载均衡方法、装置和计算机系统
CN111651789A (zh) * 2020-06-05 2020-09-11 北京明朝万达科技股份有限公司 一种基于扫描系统的多线程安全批量反馈的方法及装置
CN111694648A (zh) * 2020-06-09 2020-09-22 北京百度网讯科技有限公司 一种任务调度方法、装置以及电子设备
CN111782378A (zh) * 2020-07-29 2020-10-16 平安银行股份有限公司 自适应性的处理性能调整方法、服务器及可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JU HONGYAO: "Service security monitoring mechanism for application server cluster", DIANXIN KEXUE - TELECOMMUNICATIONS SCIENCE, RENMIN YOUDIAN CHUBANSHE, BEIJING, CN, no. 6, 1 June 2016 (2016-06-01), CN , pages 177 - 185, XP055925680, ISSN: 1000-0801, DOI: 10.11959/j.issn.1000-0801.2016173 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116257365A (zh) * 2023-05-15 2023-06-13 建信金融科技有限责任公司 数据入库方法、装置、设备、存储介质及程序产品
CN116257365B (zh) * 2023-05-15 2023-08-22 建信金融科技有限责任公司 数据入库方法、装置、设备、存储介质

Also Published As

Publication number Publication date
CN112269660A (zh) 2021-01-26
CN112269660B (zh) 2023-04-11

Similar Documents

Publication Publication Date Title
US11888702B2 (en) Intelligent analytic cloud provisioning
CN109033123B (zh) 基于大数据的查询方法、装置、计算机设备和存储介质
US7958159B1 (en) Performing actions based on monitoring execution of a query
US11169994B2 (en) Query method and query device
CN110704231A (zh) 一种故障处理方法及装置
WO2019019621A1 (zh) 业务处理方法、装置、服务器和存储介质
US20120297393A1 (en) Data Collecting Method, Data Collecting Apparatus and Network Management Device
CN110175154A (zh) 一种日志记录的处理方法、服务器及存储介质
US20230017300A1 (en) Query method and device suitable for olap query engine
KR101374533B1 (ko) 대용량 데이터에 대한 고성능 복제 및 백업 시스템과, 고성능 복제 방법
CN111860667A (zh) 设备故障的确定方法及装置、存储介质、电子装置
Cheng et al. Efficient event correlation over distributed systems
CN101093454A (zh) 一种在分布式系统中执行sql脚本文件的方法和装置
WO2022088515A1 (zh) 一种海量数据处理并发任务自适应测控方法及系统
US20220253222A1 (en) Data reduction method, apparatus, computing device, and storage medium
CN105608138A (zh) 一种优化阵列数据库并行数据加载性能的系统
US8510273B2 (en) System, method, and computer-readable medium to facilitate application of arrival rate qualifications to missed throughput server level goals
WO2022088809A1 (zh) 确定检测服务器的间隔时间的方法、系统、设备及介质
CN110765082A (zh) Hadoop文件处理方法、装置、存储介质及服务器
US20140089311A1 (en) System. method, and computer-readable medium for classifying problem queries to reduce exception processing
CN110851249A (zh) 一种数据导出的方法及设备
CN113254547B (zh) 数据查询方法、装置、服务器及存储介质
CN111414567A (zh) 数据处理方法、装置
US20220308976A1 (en) Automatically detecting workload type-related information in storage systems using machine learning techniques
CN117527740A (zh) 语音流审核方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884260

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884260

Country of ref document: EP

Kind code of ref document: A1