CN113010278B - Batch processing method and system for financial insurance core system - Google Patents


Publication number
CN113010278B
CN113010278B (application CN202110189239.8A)
Authority
CN
China
Prior art keywords: task, information, data, instruction, obtaining
Prior art date
Legal status
Active
Application number
CN202110189239.8A
Other languages
Chinese (zh)
Other versions
CN113010278A (en)
Inventor
张鹏 (Zhang Peng)
陈立伟 (Chen Liwei)
范新生 (Fan Xinsheng)
吴志祥 (Wu Zhixiang)
Current Assignee
CCB Finetech Co Ltd
Original Assignee
CCB Finetech Co Ltd
Priority date
Filing date
Publication date
Application filed by CCB Finetech Co Ltd filed Critical CCB Finetech Co Ltd
Priority to CN202110189239.8A priority Critical patent/CN113010278B/en
Publication of CN113010278A publication Critical patent/CN113010278A/en
Application granted granted Critical
Publication of CN113010278B publication Critical patent/CN113010278B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Retry When Errors Occur (AREA)

Abstract

The invention discloses a batch processing method and system for a financial insurance core system. In the method, the system first calls a task state manager to check whether a task can be executed and whether it is rerun data; if it is not rerun data, the task is judged to have passed the pre-check. The configuration data of the task is then obtained; after the main file data in the task data is extracted and a main file table is established, the task data is fragmented. Once fragmentation is complete, the CTM platform schedules the different fragments to different servers for execution; during execution, each fragment is cut into blocks and a thread pool schedules the business logic in parallel. After execution finishes, is cancelled, or a fragment task fails, the whole process enters a summarization stage. This solves the problems of the existing life insurance core system, whose batch tasks are scheduled by a simple timer with crude processing logic and which therefore suffers from single points of failure, low processing efficiency, and poor fault tolerance.

Description

Batch processing method and system for financial insurance core system
Technical Field
The invention relates to the field of batch processing, in particular to a batch processing method and system for a financial insurance core system.
Background
Spring Batch provides a large number of reusable data-processing functions, including transaction management, job processing statistics, restart, skipping, and resource management, and is often used for offline data migration and data processing. It offers a rich set of system interfaces and extensive customization capabilities, and achieves efficient batch processing by fragmenting a batch task and dispatching its steps concurrently. It nevertheless has drawbacks when applied to an insurance system: its integration with insurance business is not deep enough, so developers must supply a large amount of additional configuration or interface development before it can be used normally; its technical framework must be deeply modified and adapted to a new-generation project framework, which hinders subsequent improvement; and it offers no suitable monitoring and management platform for production operation and maintenance.
In implementing the technical solution of the invention, the inventors of the present application found that the above technology has at least the following technical problems:
the batch processing tasks of the existing life insurance core system are scheduled by a simple timer, the processing logic is crude, and the batch tasks suffer from single points of failure, low processing efficiency, and poor fault tolerance.
Disclosure of Invention
The embodiments of the present application provide a batch processing method and system for a financial insurance core system. They solve the prior-art problems that batch tasks of the existing life insurance core system are scheduled by a simple timer, the processing logic is crude, and the batch tasks suffer from single points of failure, low processing efficiency, and poor fault tolerance. By adopting a multithreaded, distributed scheme with fragment-and-block scheduling, reasonable fault tolerance, and periodic retries, they reduce the impact of exceptions and errors, improve the processing efficiency and monitoring of routine timed tasks, and meet growing business requirements.
An embodiment of the present application provides a batch processing method for a financial insurance core system, applied to the financial insurance core system, the method comprising: obtaining a first task scheduling instruction and first task information, the first task information comprising first task data; obtaining first process state data information of the first task information according to the first task scheduling instruction; judging whether the first process state data information is rerun data; if it is not rerun data, obtaining first output information, the first output information being pre-check pass information; obtaining configuration data of the first task according to the first output information; after extracting the main file data in the first task data and establishing a first main file table, fragmenting the main file data according to the configuration data; after fragmentation is complete, obtaining first fragment data, the first fragment data comprising a plurality of fragments; obtaining a second task scheduling instruction and second task information, and retrieving all cache information in the financial insurance core system according to them; after storing all the cache information in a first memory, obtaining a first blocking instruction; according to the first blocking instruction, cutting the first fragment data into blocks to obtain first block data, then scheduling the first block data in parallel with a thread pool; after the parallel scheduling finishes, judging whether the number of failures in the second task exceeds a preset number; if not, invoking predetermined business logic and obtaining first task result information; and obtaining a third task scheduling instruction and third task information, and obtaining second output information after summarizing the first task result information according to the third task scheduling instruction.
In another aspect, the present application further provides a batch processing system for a financial insurance core system, the system comprising: a first obtaining unit configured to obtain a first task scheduling instruction and first task information, where the first task information comprises first task data; a second obtaining unit configured to obtain first process state data information of the first task information according to the first task scheduling instruction; a first judging unit configured to judge whether the first process state data information is rerun data; a third obtaining unit configured to obtain first output information if it is not rerun data, where the first output information is pre-check pass information; a fourth obtaining unit configured to obtain configuration data of the first task according to the first output information; a first execution unit configured to, after the main file data in the first task data is extracted and a first main file table is established, fragment the main file data according to the configuration data; a fifth obtaining unit configured to obtain first fragment data after fragmentation is complete, where the first fragment data comprises a plurality of fragments; a sixth obtaining unit configured to obtain a second task scheduling instruction and second task information, and to retrieve all cache information in the financial insurance core system according to them; a seventh obtaining unit configured to obtain a first blocking instruction after storing all the cache information in a first memory; an eighth obtaining unit configured to cut the first fragment data into blocks according to the first blocking instruction and, after obtaining the first block data, schedule it in parallel with a thread pool; a second judging unit configured to judge, after the parallel scheduling finishes, whether the number of failures in the second task exceeds a preset number; a ninth obtaining unit configured to, if the number is not exceeded, invoke predetermined business logic and obtain first task result information; and a tenth obtaining unit configured to obtain a third task scheduling instruction and third task information, and to obtain second output information after summarizing the first task result information according to the third task scheduling instruction.
In another aspect, an embodiment of the present application further provides a batch processing system for a financial insurance core system, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
Because task scheduling first fragments the data and then schedules the blocks in parallel, the processing capacity of all servers is fully used and task processing speed is maximized; meanwhile, for errors and exceptions, the framework provides a specified degree of fault tolerance and retries failed fragment tasks, guaranteeing normal operation of the business.
The foregoing is a summary of the present disclosure, and embodiments of the present disclosure are described below to make the technical means of the present disclosure more clearly understood.
Drawings
FIG. 1 is a schematic flow chart of a batch processing method for a financial insurance core system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a batch processing system for a financial insurance core system according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of the reference numerals: a first obtaining unit 11, a second obtaining unit 12, a first judging unit 13, a third obtaining unit 14, a fourth obtaining unit 15, a first executing unit 16, a fifth obtaining unit 17, a sixth obtaining unit 18, a seventh obtaining unit 19, an eighth obtaining unit 20, a second judging unit 21, a ninth obtaining unit 22, a tenth obtaining unit 23, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiments of the present application provide a batch processing method and system for a financial insurance core system, solving the prior-art problems that batch tasks of the existing life insurance core system are scheduled by a simple timer, the processing logic is crude, and the batch tasks suffer from single points of failure, low processing efficiency, and poor fault tolerance. A multithreaded, distributed scheme with fragment-and-block scheduling, reasonable fault tolerance, and periodic retries reduces the impact of exceptions and errors, improves the processing efficiency and monitoring of routine timed tasks, and meets growing business requirements. Hereinafter, example embodiments of the present application are described in detail with reference to the accompanying drawings. The described embodiments are merely some, not all, of the embodiments of the present application, and the present application is not limited to the example embodiments described herein.
Summary of the application
Spring Batch has some disadvantages when applied to insurance systems: its integration with insurance business is not deep enough, so developers must supply a large amount of additional configuration or interface development before it can be used normally; its technical framework must be deeply modified and adapted to a new-generation project framework, which hinders subsequent improvement; and it offers no suitable monitoring and management platform for production operation and maintenance. The prior art also has the problems that batch tasks of the existing life insurance core system are scheduled by a simple timer, the processing logic is crude, and the batch tasks suffer from single points of failure, low processing efficiency, and poor fault tolerance.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a batch processing method for a financial insurance core system, and the method is applied to the financial insurance core system, wherein the method comprises the following steps: obtaining a first task scheduling instruction and first task information, wherein the first task information comprises first task data; acquiring first process state data information of the first task information according to the first task scheduling instruction; judging whether the first process state data information is rerun data or not; if the re-running data is not the re-running data, obtaining first output information, wherein the first output information is pre-detection passing information; acquiring configuration data of the first task according to the first output information; after main file data in the first task data are extracted and a first main file table is established, performing fragment operation processing on the main file data according to the configuration data; after the fragmentation operation processing is completed, obtaining first fragmentation data, wherein the first fragmentation data comprises a plurality of fragmentation data; acquiring a second task scheduling instruction and second task information, and retrieving all buffer information in the financial risk core system according to the second task scheduling instruction and the second task information; after all the buffer information is stored in a first memory, a first partitioning instruction is obtained; according to the first blocking instruction, after the first fragment data is blocked and first blocking data is obtained, a thread pool is adopted to perform parallel scheduling on the first blocking data; after the parallel scheduling is finished, judging whether the failure times in the second task exceed a preset number or not; if not, calling a preset service logic and obtaining first task result information; and acquiring a third task scheduling 
instruction and third task information, and acquiring second output information after summarizing the first task result information according to the third task scheduling instruction.
Having described the principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in FIG. 1, an embodiment of the present application provides a batch processing method for a financial insurance core system, the method comprising:
step S100: obtaining a first task scheduling instruction and first task information, wherein the first task information comprises first task data;
specifically, in the embodiment of the present application, the batch processing framework generally divides a batch processing task into 3 phases: namely data extraction and fragmentation, data block scheduling processing and data fragmentation and summarization. In the data extraction stage, the financial risk core system calls a task state manager according to the first task scheduling instruction, and obtains task data information of the first task information, so as to check whether the first task information can be executed, and the specific fragment logic is executed when the first task information can be executed.
Step S200: acquiring first process state data information of the first task information according to the first task scheduling instruction;
specifically, if the task state manager determines that the first task information is executable, the task state manager records the first process state data information of the first task information, and periodically refreshes the first process state data information of the first task.
Step S300: judging whether the first process state data information is rerun data or not;
step S400: if the re-running data is not the re-running data, obtaining first output information, wherein the first output information is pre-detection passing information;
specifically, the task state manager determines whether the first process state data information of the first task information is repeatedly executed, if the first process state data information is repeatedly executed, the first process state data information is the rerun data, and if the first process state data information is determined not to be the rerun data, the preview pass information is obtained.
Step S500: acquiring configuration data of the first task according to the first output information;
specifically, after the first task information preview passes, the configuration data of the first task information is initialized, and whether the data meets the specification is checked.
Step S600: after main file data in the first task data are extracted and a first main file table is established, carrying out fragment operation processing on the main file data according to the configuration data;
specifically, job is a concept that encapsulates the entire batch process, and five methods are defined in Job's interface, and its implementation classes mainly have two types of jobs, one is simplejob, and the other is flowjobb. After the initialization of the first task information data is completed, the application only needs to provide and designate buildExtractSql () of a Job interface to extract main file data in the first task data, establish the first main file table, perform fragment operation processing on the main file data according to the configuration data, judge whether a flow is abnormal, clear up the content executed at this time if the flow is abnormal, and start a CTM mechanism to perform task re-running. Wherein, the fragmentation is to physically cut a data set to form a plurality of data sets, and the CTM: control-M, which is an enterprise level centralized job scheduling management solution provided by BMCSoftware. The method and the system have the advantages that the production control and scheduling process of cross-platform and cross-application is managed in a centralized mode through a single control node, enterprise integration, standardization and automation of enterprise batch job management are effectively facilitated by virtue of excellent high performance, high reliability, high stability, high expansibility and high safety, the job scheduling problem is prevented from evolving into a business problem, and the business service efficiency of the enterprise is practically improved.
Step S700: after the fragmentation operation processing is completed, obtaining first fragmentation data, wherein the first fragmentation data comprises a plurality of fragmentation data;
step S800: acquiring a second task scheduling instruction and second task information, and retrieving all buffer information in the financial risk core system according to the second task scheduling instruction and the second task information;
specifically, after completing data fragmentation and obtaining the first fragmented data, the CTM platform schedules different fragmented data to different servers for execution, and performs the blocking scheduling processing of the first fragmented data. At this stage, after obtaining the second task scheduling instruction and the second task information, the task state manager checks the second task information, and then retrieves all the cache information in the financial risk core system.
Step S900: after all the buffer information is stored in a first memory, a first blocking instruction is obtained;
specifically, all the buffers are scheduled by retrieving all the buffer information in the system, the data is cached in the memory, and then the first fragment data is processed in a blocking manner according to the system configuration.
Step S1000: according to the first blocking instruction, after the first fragment data is blocked and first blocking data is obtained, a thread pool is adopted to perform parallel scheduling on the first blocking data;
specifically, the CTM platform schedules different pieces of data in the first piece of data to different servers for execution, and performs blocking on the data pieces and then uses a thread pool to schedule service logic in parallel when executing the execution, wherein the blocking is to perform logic cutting on the data in the pieces and input the data into different processors; the thread and the minimum execution unit of the program execution flow are the actual operation units in the process, and a concept of managing the thread is developed in Java, wherein the concept is called a thread pool, and the thread pool has the advantages of conveniently managing the thread and reducing the consumption of a memory.
Step S1100: after the parallel scheduling is finished, judging whether the failure times in the second task exceed a preset number or not;
step S1200: if not, calling a preset service logic and obtaining first task result information;
specifically, the first block data is loaded to a memory from a database, whether the second task is cancelled or whether the number of failed tasks exceeds the preset number is checked, if yes, the loop is exited and the block task is ended, and if not, a service code is called to execute a specific service logic and obtain the result information of the first task.
Step S1300: and acquiring a third task scheduling instruction and third task information, and acquiring second output information after summarizing the first task result information according to the third task scheduling instruction.
Specifically, after all blocks have been executed, the summarization logic tallies the volumes of successfully and unsuccessfully executed data, the task is checked and ended, the task flow data is refreshed, and the second output information is obtained.
Further, step S200 in the embodiment of the present application further includes:
step S201a: after initializing the first task data according to the first task scheduling instruction, judging whether the financial insurance core system contains fourth task information, wherein the fourth task information has the same service code as the first task information;
step S202a: if the fourth task information is not included, obtaining the first process state data information;
step S203a: if the fourth task information is included, obtaining a first output instruction;
step S204a: and according to the first output instruction, terminating the first task and returning execution failure result information.
Specifically, if a task of the same class (i.e. with the same service code), namely the fourth task information, is still being executed in the financial insurance core system, the first task cannot be executed; it is terminated and execution-failure result information is returned.
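The mutual exclusion by service code (steps S201a–S204a) can be sketched with a concurrent map acting as the registry of running tasks; the class and method names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ServiceCodeGuard {
    // Maps a service code to the id of the task currently running under it.
    private final ConcurrentMap<String, String> running = new ConcurrentHashMap<>();

    // Atomically claims the service code; returns false (execution failure)
    // if another task with the same code is already executing.
    public boolean tryStart(String serviceCode, String taskId) {
        return running.putIfAbsent(serviceCode, taskId) == null;
    }

    public void finish(String serviceCode) {
        running.remove(serviceCode);
    }
}
```

`putIfAbsent` makes the claim atomic, so two tasks with the same service code arriving simultaneously cannot both pass the check.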
Further, step S200 in the embodiment of the present application further includes:
step S201b: obtaining a first starting instruction;
step S202b: starting a first asynchronous task according to the first starting instruction;
step S203b: and maintaining the first flow state data information through the first asynchronous task according to a first preset frequency.
Specifically, after the first process state data information of the first task information is obtained and recorded, the first asynchronous task is started and refreshes the state data periodically at the first preset frequency, informing other processors that the task is being executed and thereby protecting the task.
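This asynchronous "task protection" (steps S201b–S203b) is essentially a periodic state refresh, i.e. a heartbeat. A sketch using `ScheduledExecutorService` follows; the refresh action (here just a counter) and the interval are illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class Heartbeat {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger refreshes = new AtomicInteger();

    // Starts the asynchronous task that periodically refreshes the flow
    // state, signalling to other processors that the task is alive.
    public void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(
                refreshes::incrementAndGet, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public int refreshCount() {
        return refreshes.get();
    }
}
```

In the real system the refresh action would update the process state record (e.g. a timestamp in the state table) rather than a counter; stopping the heartbeat corresponds to terminating the first asynchronous task after fragmentation completes.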
Further, step S500 in the embodiment of the present application further includes:
step S501: after the configuration data is initialized, obtaining a first checking instruction, wherein the first checking instruction is used for checking whether the configuration data meets a preset rule;
step S502: if the configuration data meet the preset rule, a first operation instruction is obtained;
step S503: and according to the first operation instruction, after main file data in the first task data is extracted and a first main file table is established, according to the configuration data, fragmentation operation processing is carried out on the main file data.
Specifically, after the configuration data is initialized, the configuration data is subjected to specification check through the preset rule, if the configuration data meets the specification, the main file data in the first task data is extracted according to the first operation instruction, a first main file table is established, and then the main file data is subjected to fragmentation operation processing according to the configuration data.
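The specification check of steps S501–S503 could be as simple as validating required configuration keys. The rule shown (a named main file table and a positive fragment count) is an assumption; the patent does not spell out the preset rule:

```java
import java.util.Map;

class ConfigChecker {
    // Hypothetical preset rule: the configuration must name a main-file
    // table and request a positive fragment count.
    public static boolean meetsPresetRule(Map<String, String> config) {
        String table = config.get("mainFileTable");
        String fragments = config.get("fragmentCount");
        if (table == null || table.isEmpty() || fragments == null) return false;
        try {
            return Integer.parseInt(fragments) > 0;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```

Only when this check passes would the first operation instruction be issued and fragmentation begin.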
Further, step S300 in the embodiment of the present application further includes:
step S301: if the first process state data information is the re-running data, third output information is obtained;
step S302: and according to the third output information, after the flow state of the first task is set to be the state to be executed, returning the information of successful execution.
Specifically, if the first process state data is the rerun data, the task state is reset to a to-be-executed state, and the process is ended.
Further, step S203b in the embodiment of the present application further includes:
step S203b1: obtaining a first termination instruction;
step S203b2: and terminating the first asynchronous task according to the first termination instruction.
Specifically, after the fragmentation of the main file data is performed and the first fragment data is obtained, the first asynchronous task is terminated.
Further, step S700 in the embodiment of the present application further includes:
step S701: judging whether each flow of the first task is abnormal or not;
step S702: if the exception exists, a first cleaning instruction is obtained;
step S703: according to the first cleaning instruction, after the execution content of each flow of the first task is cleaned, a second starting instruction is obtained;
step S704: and starting a retry mechanism of the CTM according to the second starting instruction.
Specifically, if a flow exception is detected while the first task is in progress, the financial insurance core system clears the content executed this time and starts the CTM retry mechanism to re-execute the first task.
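The clear-then-retry behaviour of steps S701–S704 resembles a bounded retry loop in which each failed attempt first clears its partial output. The CTM platform itself is proprietary, so this stand-in is purely illustrative:

```java
import java.util.concurrent.Callable;

class RetryRunner {
    // Runs the task up to maxAttempts times; on failure, executes the
    // cleanup step (clearing this run's output) before retrying, roughly
    // mirroring the CTM rerun mechanism described in the text.
    public static <T> T runWithRetry(Callable<T> task, Runnable cleanup, int maxAttempts)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                cleanup.run();   // clear the content executed this time
            }
        }
        throw last;
    }
}
```

Cleaning up before each retry is what makes the rerun safe: a fragment that failed midway leaves no partial results behind to be double-counted in the summarization stage.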
Further, step S704 in the embodiment of the present application further includes:
step S7041: acquiring second task state information of the second task information according to the second task scheduling instruction;
step S7042: judging whether the second task state information meets the state of executing the second task;
step S7043: if yes, judging whether the financial insurance core system contains fifth task information, wherein the fifth task information has the same fragment task as the second task information;
step S7044: and if the fifth task information is included, obtaining a first return instruction, and obtaining second process state data information of the second task after returning to the CTM according to the first return instruction.
Specifically, for the block scheduling module of the fragment data, the application must provide and designate in advance the execute() method of the Job interface and the load() and getCacheType() implementations of the DataLoader interface. The task state is then judged: whether the second task is in a state in which the fragment task can be executed, and whether the same fragment task is already executing. If it is, control returns to the CTM according to the first return instruction, and the second process state data information of the second task is obtained.
Further, step S7041 in the embodiment of the present application further includes:
step S7041a1: acquiring a third starting instruction according to the second process state data information;
step S7041a2: starting a second asynchronous task according to the third starting instruction;
step S7041a3: and maintaining the second process state data information through the second asynchronous task according to a second preset frequency.
Specifically, after the second process state data information of the second task is obtained, the second process state data is updated to "executing", the second asynchronous task is started, and the state data is maintained at regular intervals to inform the other processors that the task is being executed.
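Such a state-maintenance heartbeat could be sketched with a scheduled executor; the field names and the 50 ms period are assumptions, not from the patent:

```java
// Illustrative heartbeat: after the flow state is set to "executing", a
// background (second asynchronous) task refreshes the state record at a
// fixed rate so other processors can see the task is alive.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class HeartbeatDemo {
    // Timestamp of the last state-maintenance tick (the "state data").
    static final AtomicLong lastTouched = new AtomicLong();

    // Runs a heartbeat for the given duration; returns true if it ticked.
    static boolean demoRun(long millis) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(
            () -> lastTouched.set(System.currentTimeMillis()),
            0, 50, TimeUnit.MILLISECONDS);            // "second preset frequency"
        try {
            Thread.sleep(millis);                     // main task keeps working
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        ses.shutdown();
        return lastTouched.get() > 0;
    }

    public static void main(String[] args) {
        System.out.println("heartbeat alive: " + demoRun(200));
    }
}
```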
Further, step S7041 in the embodiment of the present application further includes:
step S7041b1: obtaining a fourth starting instruction;
step S7041b2: starting a third asynchronous task according to the fourth starting instruction;
step S7041b3: according to a third preset frequency, judging whether the second task information is cancelled or not through the third asynchronous task;
step S7041b4: if the second task information is cancelled, first reminding information is obtained;
step S7041b5: and sending the first reminding information to the main process.
Specifically, the third asynchronous task is started to check at regular intervals whether the second task information has been cancelled; if it has been cancelled, the main process is notified that the task has been cancelled.
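A minimal sketch of that cancellation watcher, assuming a shared flag and a latch to model the notification to the main process (all names and periods are illustrative):

```java
// Sketch of the third asynchronous task: poll a cancellation flag at a
// preset frequency and, once set, alert the main process (modelled here
// with a CountDownLatch).
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelWatcher {
    static final AtomicBoolean cancelled = new AtomicBoolean(false);

    // Returns true if cancellation was observed within timeoutMs.
    static boolean watch(long periodMs, long timeoutMs) {
        CountDownLatch reminder = new CountDownLatch(1);  // the "first reminder"
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> {
            if (cancelled.get()) reminder.countDown();    // notify main process
        }, 0, periodMs, TimeUnit.MILLISECONDS);
        boolean seen;
        try {
            seen = reminder.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            seen = false;
        }
        ses.shutdownNow();
        return seen;
    }

    public static void main(String[] args) {
        cancelled.set(true);                              // the task gets cancelled
        System.out.println("main process notified: " + watch(10, 500));
    }
}
```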
Further, step S1100 in the embodiment of the present application further includes:
step S1101: after the first block data after the parallel scheduling is loaded into the first memory from a database, a second check instruction is obtained;
step S1102: judging whether the second task is cancelled or not according to the second check instruction, or judging whether the failure times in the second task exceed a preset number or not;
step S1103: if so, obtaining a second termination instruction, wherein the second termination instruction is used for terminating the blocking task.
Specifically, each time the first block data is loaded from the database into the first memory, it is checked whether the task has been cancelled or whether the number of failed tasks exceeds the preset number; if so, the loop is exited and the blocking task is ended.
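The per-block check-then-process loop can be sketched as follows; the failure threshold and the "empty block fails" rule are illustrative assumptions:

```java
// Sketch of the per-block loop: before each block is processed, check
// cancellation and the failure counter; exit the loop (terminating the
// blocking task) when either check trips.
import java.util.List;

public class BlockLoop {
    static final int MAX_FAILURES = 2;   // illustrative preset number

    // Processes blocks; returns how many completed before termination.
    static int run(List<String> blocks, boolean cancelled) {
        int failures = 0, done = 0;
        for (String block : blocks) {
            if (cancelled || failures > MAX_FAILURES) break; // second termination
            try {
                if (block.isEmpty()) throw new IllegalStateException("bad block");
                done++;                                      // process the block
            } catch (RuntimeException e) {
                failures++;
            }
        }
        return done;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("b1", "b2", "b3"), false)); // prints 3
    }
}
```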
Further, step S1300 in the embodiment of the present application further includes:
step S1301: after third flow state data information of the third task is obtained, a third check instruction is obtained;
step S1302: checking whether all the first fragment data are fragmented completely according to the third checking instruction;
step S1303: if all the fragmentation data are finished, first data state information and first execution content information of a plurality of fragmentation data of the first fragmentation data are obtained;
step S1304: obtaining a first storage instruction;
step S1305: and saving the first data state information and the first execution content information according to the first storage instruction.
Specifically, after the fragmentation tasks are executed to completion, cancelled, or failed, the overall process enters the summarizing stage. After the third flow state data information is obtained, whether all fragmentation tasks are complete is checked and the state of each task is identified, after which the summarized data are obtained. In addition, after a task is started, the data state and detailed execution information of each fragment are saved in the master file data.
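The summary stage might be modelled as below; the state enum, map-based "master file" record, and detail strings are illustrative assumptions:

```java
// Sketch of the summary stage: once every fragment reports a terminal
// state, per-fragment state and execution detail are written back to the
// master file record.
import java.util.LinkedHashMap;
import java.util.Map;

public class SummaryStage {
    enum State { DONE, CANCELLED, FAILED, RUNNING }

    static boolean allFinished(Map<String, State> fragments) {
        return fragments.values().stream().noneMatch(s -> s == State.RUNNING);
    }

    // "Master file" record of state + execution detail for each fragment.
    static Map<String, String> summarize(Map<String, State> fragments) {
        Map<String, String> masterFile = new LinkedHashMap<>();
        if (!allFinished(fragments)) return masterFile;   // not ready yet
        fragments.forEach((id, st) ->
            masterFile.put(id, st + ":detail-of-" + id));
        return masterFile;
    }

    public static void main(String[] args) {
        Map<String, State> f = new LinkedHashMap<>();
        f.put("F1", State.DONE);
        f.put("F2", State.FAILED);
        System.out.println(summarize(f));
    }
}
```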
Further, step S1303 in this embodiment of the present application further includes:
step S13031: obtaining a first summary instruction;
step S13032: counting the fragment execution results of the plurality of fragment data of the first fragment data according to the first summarizing instruction and a preset summarizing logic, wherein the fragment execution results comprise a first result and a second result, the first result is an execution success, and the second result is an execution failure;
step S13033: judging whether the number of the second results exceeds a preset retry number;
step S13034: if yes, obtaining fourth output information, wherein the fourth output information is execution success information;
step S13035: and if the number does not exceed the preset retry number, obtaining fifth output information, wherein the fifth output information is execution failure information.
Specifically, after all blocks have been executed, the numbers of successfully and unsuccessfully executed pieces among the plurality of fragment data of the first fragment data are summarized according to the preset summarizing logic, and the task state is evaluated. If the number of execution failures exceeds the maximum allowed number of retries, the execution is forcibly marked as successful; if it is less than or equal to the maximum allowed number of retries, execution failure information is returned.
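The aggregation rule as stated above can be expressed directly in code; note that the code mirrors the text's rule verbatim (failures beyond the retry budget force success), and all names are illustrative:

```java
// Sketch of the stated aggregation rule: count per-fragment failures;
// if failures exceed the allowed retry count, the task is force-marked
// successful, otherwise it reports failure (per the text above).
import java.util.List;

public class AggregateDemo {
    static String aggregate(List<Boolean> fragmentResults, int maxRetries) {
        long failures = fragmentResults.stream().filter(ok -> !ok).count();
        // Rule from the text: failures beyond the retry budget yield
        // "execution success"; otherwise "execution failure".
        return failures > maxRetries ? "execution success" : "execution failure";
    }

    public static void main(String[] args) {
        System.out.println(aggregate(List.of(true, false, false, false), 2));
    }
}
```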
In summary, the batch processing method for the financial insurance core system provided by the embodiment of the application has the following technical effects:
1. Because task scheduling adopts the scheme of first fragmenting and then blocking and scheduling in parallel, the processing capacity of all servers is fully used and the task processing speed is maximized; meanwhile, for errors and exceptions, the framework provides the specified fault-tolerant capability and a retry capability when a fragmentation task fails, thereby ensuring normal operation of the service.
2. Because the CTM and the scheduling monitoring capability of the prior art are fully reused, operation and maintenance personnel can discover and handle major problems in time. The scheme is deeply combined with the services of an insurance system and integrated into the new-generation core framework system according to actual development conditions, which simplifies development and configuration, improves the processing efficiency and monitoring of conventional timed tasks, and meets growing service requirements.
Example two
Based on the same inventive concept as the batch processing method for the financial insurance core system in the foregoing embodiment, the present invention also provides a batch processing system for the financial insurance core system, as shown in fig. 2, the system comprising:
a first obtaining unit 11, configured to obtain a first task scheduling instruction and first task information, where the first task information includes first task data;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain first process status data information of the first task information according to the first task scheduling instruction;
a first judging unit 13, wherein the first judging unit 13 is used for judging whether the first process state data information is the rerun data;
a third obtaining unit 14, where the third obtaining unit 14 is configured to obtain first output information if the first process state data information is not the rerun data, where the first output information is preview-passing information;
a fourth obtaining unit 15, where the fourth obtaining unit 15 is configured to obtain configuration data of the first task according to the first output information;
a first executing unit 16, where the first executing unit 16 is configured to, after extracting master file data in the first task data and establishing a first master file table, perform fragmentation operation processing on the master file data according to the configuration data;
a fifth obtaining unit 17, where the fifth obtaining unit 17 is configured to obtain first fragment data after the fragmentation operation processing is completed, where the first fragment data includes a plurality of fragment data;
a sixth obtaining unit 18, where the sixth obtaining unit 18 is configured to obtain a second task scheduling instruction and second task information, and retrieve all buffer information in the financial insurance core system according to the second task scheduling instruction and the second task information;
a seventh obtaining unit 19, where the seventh obtaining unit 19 is configured to obtain a first blocking instruction after all the buffer information is stored in the first memory;
an eighth obtaining unit 20, where the eighth obtaining unit 20 is configured to, after blocking the first fragment data and obtaining first block data according to the first blocking instruction, perform parallel scheduling on the first block data by using a thread pool;
a second determining unit 21, where the second determining unit 21 is configured to determine whether the number of failures in the second task exceeds a preset number after the parallel scheduling is finished;
a ninth obtaining unit 22, where the ninth obtaining unit 22 is configured to, if the preset number is not exceeded, invoke predetermined service logic and obtain first task result information;
a tenth obtaining unit 23, where the tenth obtaining unit 23 is configured to obtain a third task scheduling instruction and third task information, and obtain second output information after summarizing the first task result information according to the third task scheduling instruction.
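Read together, the units above form a pipeline: preview check, fragment the master file, block the fragments, schedule the blocks on a thread pool, then summarize. A compact, hypothetical sketch of that pipeline (the fragment size, 4-thread pool, and all names are assumptions):

```java
// Hypothetical end-to-end sketch of the unit pipeline described above:
// fragment the master-file rows, schedule the blocks on a thread pool,
// and summarize by waiting for all results.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class BatchPipeline {
    // Split master-file rows into fragments of the given size.
    static List<List<Integer>> fragment(List<Integer> rows, int size) {
        List<List<Integer>> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += size)
            out.add(rows.subList(i, Math.min(i + size, rows.size())));
        return out;
    }

    // Schedule each fragment (treated as one block here) on a thread pool.
    static int runParallel(List<List<Integer>> blocks) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger processed = new AtomicInteger();
        List<Future<?>> futures = new ArrayList<>();
        for (List<Integer> b : blocks)
            futures.add(pool.submit(() -> processed.addAndGet(b.size())));
        for (Future<?> f : futures) {
            try { f.get(); } catch (Exception e) { /* treat as failed block */ }
        }
        pool.shutdown();                               // summarize stage
        return processed.get();
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(i);
        System.out.println("rows processed: " + runParallel(fragment(rows, 3)));
    }
}
```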
Further, the system further comprises:
a third determining unit, configured to determine whether fourth task information is included in the financial insurance core system after initializing the first task data according to the first task scheduling instruction, where the fourth task information and the first task information have the same service code;
an eleventh obtaining unit, configured to obtain the first process status data information if the fourth task information is not included;
a twelfth obtaining unit, configured to obtain a first output instruction if the fourth task information is included;
and the second execution unit is used for terminating the first task and returning execution failure result information according to the first output instruction.
Further, the system further comprises:
a thirteenth obtaining unit, configured to obtain a first start instruction;
the third execution unit is used for starting the first asynchronous task according to the first starting instruction;
and the fourth execution unit is used for maintaining the first flow state data information through the first asynchronous task according to a first preset frequency.
Further, the system further comprises:
a fourteenth obtaining unit, configured to obtain a first check instruction after initializing the configuration data, where the first check instruction is used to check whether the configuration data meets a preset rule;
a fifteenth obtaining unit, configured to obtain a first operation instruction if the configuration data satisfies the preset rule;
and the fifth execution unit is used for performing fragmentation operation processing on the master file data according to the configuration data after the master file data in the first task data is extracted and a first master file table is established according to the first operation instruction.
Further, the system further comprises:
a sixteenth obtaining unit, configured to obtain third output information if the first process state data information is the rerun data;
and the sixth execution unit is used for returning execution success information after setting the flow state of the first task as a to-be-executed state according to the third output information.
Further, the system further comprises:
a seventeenth obtaining unit to obtain a first termination instruction;
a seventh execution unit to terminate the first asynchronous task according to the first termination instruction.
Further, the system further comprises:
a fourth judging unit, configured to judge whether each flow of the first task is abnormal;
an eighteenth obtaining unit, configured to obtain a first cleaning instruction if there is an exception;
a nineteenth obtaining unit, configured to obtain a second start instruction after clearing execution content of each flow of the first task according to the first cleaning instruction;
an eighth execution unit, configured to start a retry mechanism of the CTM according to the second start instruction.
Further, the system further comprises:
a twentieth obtaining unit, configured to obtain second task state information of the second task information according to the second task scheduling instruction;
a fifth judging unit, configured to judge whether the second task state information satisfies a state of executing the second task;
a sixth judging unit, configured to judge, if the state is satisfied, whether fifth task information is included in the financial insurance core system, where the fifth task information and the second task information have the same fragmentation task;
a twenty-first obtaining unit, configured to obtain a first return instruction if the fifth task information is included, and obtain second process status data information of the second task after returning to the CTM according to the first return instruction.
Further, the system further comprises:
a twenty-second obtaining unit, configured to obtain a third start instruction according to the second process status data information;
a ninth execution unit, configured to start a second asynchronous task according to the third start instruction;
and the tenth execution unit is used for maintaining the second process state data information through the second asynchronous task according to a second preset frequency.
Further, the system further comprises:
a twenty-third obtaining unit, configured to obtain a fourth start instruction;
an eleventh execution unit, configured to start a third asynchronous task according to the fourth start instruction;
a seventh determining unit, configured to determine, according to a third preset frequency, whether the second task information is cancelled through the third asynchronous task;
a twenty-fourth obtaining unit, configured to obtain first reminder information if the second task information is cancelled;
and the first sending unit is used for sending the first reminding information to the main process.
Further, the system further comprises:
a twenty-fifth obtaining unit, configured to obtain a second check instruction after the first block data subjected to parallel scheduling is loaded from a database into the first memory;
an eighth determining unit, configured to determine whether the second task is cancelled or not according to the second check instruction, or determine whether the number of failures in the second task exceeds a preset number;
a twenty-sixth obtaining unit, configured to obtain a second termination instruction if so, where the second termination instruction is used for terminating the blocking task.
Further, the system further comprises:
a twenty-seventh obtaining unit, configured to obtain a third check instruction after obtaining third flow state data information of the third task;
a twelfth execution unit, configured to check whether all the first fragment data is fragmented according to the third check instruction;
a twenty-eighth obtaining unit, configured to obtain first data state information and first execution content information of the plurality of fragment data of the first fragment data if all fragmentation is completed;
a twenty-ninth obtaining unit to obtain a first store instruction;
a thirteenth execution unit, configured to save the first data state information and the first execution content information according to the first storage instruction.
Further, the system further comprises:
a thirtieth obtaining unit, configured to obtain a first aggregation instruction;
a fourteenth execution unit, configured to count, according to the first summarizing instruction and a preset summarizing logic, the fragment execution results of the plurality of fragment data of the first fragment data, where the fragment execution results include a first result and a second result, the first result being execution success and the second result being execution failure;
a ninth judging unit, configured to judge whether the number of the second results exceeds a preset number of retries;
a thirty-first obtaining unit, configured to obtain fourth output information if the preset retry number is exceeded, where the fourth output information is execution success information;
a thirty-second obtaining unit, configured to obtain fifth output information if the preset retry number is not exceeded, where the fifth output information is execution failure information.
Various changes and specific examples of the batch processing method for the financial insurance core system in the first embodiment of fig. 1 are also applicable to the batch processing system for the financial insurance core system of the present embodiment. From the foregoing detailed description of the batch processing method, those skilled in the art can clearly know how the batch processing system of the present embodiment is implemented, so for brevity of the description, details are not repeated here.
Exemplary electronic device
An electronic apparatus of an embodiment of the present application is described below with reference to fig. 3.
Fig. 3 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the same inventive concept as the batch processing method for the financial insurance core system in the foregoing embodiments, the present invention further provides an electronic device on which a computer program is stored, the program, when executed by a processor, implementing the steps of any one of the foregoing batch processing methods for the financial insurance core system.
In fig. 3, a bus architecture (represented by bus 300) is shown. The bus 300 may include any number of interconnected buses and bridges and links together various circuits, including one or more processors represented by processor 302 and memory represented by memory 304. The bus 300 may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
The embodiment of the application provides a batch processing method for a financial insurance core system, the method being applied to the financial insurance core system, wherein the method comprises the following steps: obtaining a first task scheduling instruction and first task information, wherein the first task information comprises first task data; obtaining first process state data information of the first task information according to the first task scheduling instruction; judging whether the first process state data information is rerun data or not; if it is not the rerun data, obtaining first output information, wherein the first output information is pre-detection passing information; obtaining configuration data of the first task according to the first output information; after master file data in the first task data is extracted and a first master file table is established, performing fragmentation operation processing on the master file data according to the configuration data; after the fragmentation operation processing is completed, obtaining first fragment data, wherein the first fragment data comprises a plurality of fragment data; obtaining a second task scheduling instruction and second task information, and retrieving all buffer information in the financial insurance core system according to the second task scheduling instruction and the second task information; after all the buffer information is stored in a first memory, obtaining a first blocking instruction; according to the first blocking instruction, after the first fragment data is blocked and first block data is obtained, performing parallel scheduling on the first block data by using a thread pool; after the parallel scheduling is finished, judging whether the number of failures in the second task exceeds a preset number; if not, invoking predetermined service logic and obtaining first task result information; and obtaining a third task scheduling instruction and third task information, and obtaining second output information after summarizing the first task result information according to the third task scheduling instruction.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (14)

1. A batch processing method for a financial insurance core system, the method being applied to a financial insurance core system, wherein the method comprises:
obtaining a first task scheduling instruction and first task information, wherein the first task information comprises first task data;
after initializing the first task data according to the first task scheduling instruction, judging whether fourth task information is included in the financial insurance core system, wherein the fourth task information and the first task information have the same service code;
if the fourth task information is not included, obtaining first process state data information of the first task information according to the first task scheduling instruction;
if the fourth task information is included, obtaining a first output instruction;
according to the first output instruction, terminating the first task and returning execution failure result information;
judging whether the first process state data information is rerun data or not;
if it is not the rerun data, obtaining first output information, wherein the first output information is pre-detection passing information;
acquiring configuration data of the first task according to the first output information;
after main file data in the first task data are extracted and a first main file table is established, carrying out fragment operation processing on the main file data according to the configuration data;
after the fragmentation operation processing is completed, obtaining first fragmentation data, wherein the first fragmentation data comprises a plurality of fragmentation data;
acquiring a second task scheduling instruction and second task information, and retrieving all buffer information in the financial insurance core system according to the second task scheduling instruction and the second task information;
after all the buffer information is stored in a first memory, a first partitioning instruction is obtained;
according to the first blocking instruction, after the first fragmentation data is blocked and first block data is obtained, a thread pool is adopted to perform parallel scheduling on the first block data;
after the parallel scheduling is finished, judging whether the failure times in the second task exceed a preset number or not;
if not, calling a preset service logic and obtaining first task result information;
and acquiring a third task scheduling instruction and third task information, and acquiring second output information after summarizing the first task result information according to the third task scheduling instruction.
2. The method of claim 1, wherein after the obtaining the first process state data information of the first task information, the method further comprises:
obtaining a first starting instruction;
starting a first asynchronous task according to the first starting instruction;
and maintaining the first flow state data information through the first asynchronous task according to a first preset frequency.
3. The method of claim 1, wherein after obtaining configuration data for the first task based on the first output information, the method further comprises:
after the configuration data is initialized, obtaining a first checking instruction, wherein the first checking instruction is used for checking whether the configuration data meets a preset rule;
if the configuration data meet the preset rule, a first operation instruction is obtained;
and according to the first operation instruction, after main file data in the first task data is extracted and a first main file table is established, according to the configuration data, fragmentation operation processing is carried out on the main file data.
4. The method of claim 1, wherein the method further comprises:
if the first process state data information is the re-running data, third output information is obtained;
and according to the third output information, after the flow state of the first task is set to be the state to be executed, returning the information of successful execution.
5. The method of claim 2, wherein after obtaining the first sliced data after the slicing operation is completed, the method further comprises:
obtaining a first termination instruction;
and according to the first termination instruction, terminating the first asynchronous task.
6. The method of claim 1, wherein after obtaining the first sliced data after the slicing operation is completed, the method further comprises:
judging whether each flow of the first task is abnormal or not;
if the exception exists, a first cleaning instruction is obtained;
according to the first cleaning instruction, after the execution content of each flow of the first task is cleaned, a second starting instruction is obtained;
and starting a retry mechanism of the CTM according to the second starting instruction.
7. The method of claim 6, wherein after obtaining the second task scheduling instructions and the second task information, the method further comprises:
acquiring second task state information of the second task information according to the second task scheduling instruction;
judging whether the second task state information meets the state of executing the second task;
if yes, judging whether fifth task information is included in the financial insurance core system, wherein the fifth task information and the second task information have the same fragmentation task;
and if the fifth task information is included, obtaining a first return instruction, and obtaining second process state data information of the second task after returning to the CTM according to the first return instruction.
8. The method of claim 7, wherein after the obtaining the second process state data information for the second task, the method further comprises:
acquiring a third starting instruction according to the second process state data information;
starting a second asynchronous task according to the third starting instruction;
and maintaining the second process state data information through the second asynchronous task according to a second preset frequency.
9. The method of claim 7, wherein after the obtaining the second process state data information for the second task, the method further comprises:
obtaining a fourth starting instruction;
starting a third asynchronous task according to the fourth starting instruction;
according to a third preset frequency, judging whether the second task information is cancelled or not through the third asynchronous task;
if the second task information is cancelled, first reminding information is obtained;
and sending the first reminding information to the main process.
10. The method of claim 1, wherein after the parallel scheduling is finished and before judging whether the number of failures in the second task exceeds a preset number, the method further comprises:
after the first block data from the parallel scheduling is loaded from a database into the first memory, obtaining a second check instruction;
judging, according to the second check instruction, whether the second task has been cancelled, or whether the number of failures in the second task exceeds the preset number;
and if so, obtaining a second termination instruction, wherein the second termination instruction is used for terminating the blocking task.
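The second check of claim 10 reduces to a predicate evaluated after the block data is in memory. A minimal sketch, with illustrative names (`should_terminate`, `max_failures`):

```python
def should_terminate(cancelled, failure_count, max_failures):
    """Claim-10 sketch: terminate the blocking task when the task has been
    cancelled, or when the number of failures exceeds the preset number."""
    return cancelled or failure_count > max_failures
```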
11. The method of claim 1, wherein after obtaining a third task scheduling instruction and third task information and obtaining second output information by aggregating the first task result information according to the third task scheduling instruction, the method further comprises:
after third process state data information of the third task is obtained, obtaining a third check instruction;
checking, according to the third check instruction, whether fragmentation of all the first fragment data is complete;
if all fragmentation is complete, obtaining first data state information and first execution content information of the plurality of fragment data in the first fragment data;
obtaining a first storage instruction;
and saving the first data state information and the first execution content information according to the first storage instruction.
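The completeness check and save step of claim 11 can be sketched as below: only when every fragment is done are the per-fragment state and execution-content records persisted. The function and field names (`collect_if_complete`, `done`, `store`) are illustrative, not from the patent.

```python
def collect_if_complete(fragments, store):
    """Claim-11 sketch: if every fragment reports completion, save each
    fragment's data-state and execution-content records; otherwise save
    nothing and report incomplete."""
    if all(f["done"] for f in fragments):
        for f in fragments:
            store[f["id"]] = (f["state"], f["content"])
        return True
    return False
```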
12. The method of claim 11, wherein after obtaining the first data state information and the first execution content information of the plurality of fragment data of the first fragment data, the method further comprises:
obtaining a first summary instruction;
counting, according to the first summary instruction and a preset summary logic, the fragment execution results of the plurality of fragment data of the first fragment data, wherein the fragment execution results comprise a first result and a second result, the first result is execution success, and the second result is execution failure;
judging whether the number of the second results exceeds a preset retry number;
if yes, obtaining fourth output information, wherein the fourth output information is execution failure information;
and if the number of the second results does not exceed the preset retry number, obtaining fifth output information, wherein the fifth output information is execution success information.
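The summary logic of claim 12 — count the failed fragments and compare against the preset retry number — can be sketched as follows. The names (`summarize`, the `"fail"` marker, the output strings) are assumptions for illustration only.

```python
def summarize(results, max_retries):
    """Claim-12 sketch: count the second results (execution failures) among
    the fragment execution results; if they exceed the preset retry number,
    report execution failure, otherwise execution success."""
    failures = sum(1 for r in results if r == "fail")
    return "execution failed" if failures > max_retries else "execution succeeded"
```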
13. A batch processing system for a financial insurance core system, wherein the system comprises:
a first obtaining unit, configured to obtain a first task scheduling instruction and first task information, where the first task information includes first task data;
a second obtaining unit, configured to: after initializing the first task data according to the first task scheduling instruction, judge whether fourth task information is included in the financial insurance core system, wherein the fourth task information and the first task information have the same service code; if the fourth task information is not included, obtain first process state data information of the first task information according to the first task scheduling instruction; if the fourth task information is included, obtain a first output instruction; and terminate the first task and return execution-failure result information according to the first output instruction;
a first judging unit, configured to judge whether the first process state data information is rerun data;
a third obtaining unit, configured to obtain first output information if the first process state data information is not rerun data, wherein the first output information is preview-pass information;
a fourth obtaining unit, configured to obtain configuration data of the first task according to the first output information;
a first execution unit, configured to perform fragmentation processing on the main file data according to the configuration data after the main file data in the first task data has been extracted and a first main file table has been established;
a fifth obtaining unit, configured to obtain first fragment data after the fragmentation operation is completed, wherein the first fragment data comprises a plurality of fragment data;
a sixth obtaining unit, configured to obtain a second task scheduling instruction and second task information, and to retrieve all buffer information in the financial insurance core system according to the second task scheduling instruction and the second task information;
a seventh obtaining unit, configured to obtain a first blocking instruction after storing all the buffer information in the first memory;
an eighth obtaining unit, configured to perform blocking on the first fragment data according to the first blocking instruction, and to perform parallel scheduling on the first block data by using a thread pool after the first block data is obtained;
a second judging unit, configured to judge whether the number of failures in the second task exceeds a preset number after the parallel scheduling is finished;
a ninth obtaining unit, configured to, if not exceeded, invoke a predetermined service logic and obtain first task result information;
and the tenth obtaining unit is used for obtaining a third task scheduling instruction and third task information, and obtaining second output information after summarizing the first task result information according to the third task scheduling instruction.
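The thread-pool parallel scheduling recited for the eighth obtaining unit can be sketched with a standard executor: each block of data is dispatched to a worker and the per-block results are collected as the task result information. `schedule_blocks` and `business_logic` are illustrative names, not from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def schedule_blocks(blocks, business_logic, workers=4):
    """Sketch of the claimed parallel scheduling: run the predetermined
    business logic over each block of data on a thread pool and collect
    the results in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(business_logic, blocks))
```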
14. A batch processing system for a financial insurance core system, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 12 when executing the program.
CN202110189239.8A 2021-02-19 2021-02-19 Batch processing method and system for financial insurance core system Active CN113010278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110189239.8A CN113010278B (en) 2021-02-19 2021-02-19 Batch processing method and system for financial insurance core system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110189239.8A CN113010278B (en) 2021-02-19 2021-02-19 Batch processing method and system for financial insurance core system

Publications (2)

Publication Number Publication Date
CN113010278A CN113010278A (en) 2021-06-22
CN113010278B true CN113010278B (en) 2023-03-28

Family

ID=76403075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110189239.8A Active CN113010278B (en) 2021-02-19 2021-02-19 Batch processing method and system for financial insurance core system

Country Status (1)

Country Link
CN (1) CN113010278B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113742381B (en) * 2021-08-30 2023-07-25 欧电云信息科技(江苏)有限公司 Cache acquisition method, device and computer readable medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108829506A (en) * 2018-07-04 2018-11-16 中国建设银行股份有限公司 Batch tasks processing method, device and service system
CN109144731A (en) * 2018-08-31 2019-01-04 中国平安人寿保险股份有限公司 Data processing method, device, computer equipment and storage medium
CN110727539A (en) * 2019-12-19 2020-01-24 北京江融信科技有限公司 Method and system for processing exception in batch processing task and electronic equipment
CN111400011A (en) * 2020-03-19 2020-07-10 中国建设银行股份有限公司 Real-time task scheduling method, system, equipment and readable storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20170293980A1 (en) * 2011-04-04 2017-10-12 Aon Securities, Inc. System and method for managing processing resources of a computing system
US11169846B2 (en) * 2018-08-29 2021-11-09 Tibco Software Inc. System and method for managing tasks and task workload items between address spaces and logical partitions

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN108829506A (en) * 2018-07-04 2018-11-16 中国建设银行股份有限公司 Batch tasks processing method, device and service system
CN109144731A (en) * 2018-08-31 2019-01-04 中国平安人寿保险股份有限公司 Data processing method, device, computer equipment and storage medium
CN110727539A (en) * 2019-12-19 2020-01-24 北京江融信科技有限公司 Method and system for processing exception in batch processing task and electronic equipment
CN111400011A (en) * 2020-03-19 2020-07-10 中国建设银行股份有限公司 Real-time task scheduling method, system, equipment and readable storage medium

Non-Patent Citations (3)

Title
Aprigio Bezerra; Porfidio Hernández; Antonio Espinosa. Job scheduling in Hadoop with Shared Input Policy and RAMDISK. IEEE Xplore, 2014, full text. *
Design and Implementation of an Operations Automation Platform for the PICC P&C Core Business System; Yang Yang et al.; 《财经界(学术版)》; 2015-09-20; Vol. 2015, No. 18; full text *
A Survey of Task Management Techniques for Big-Data Stream Computing; Liang Yi et al.; 《计算机工程与科学》; 2017-02-15; Vol. 39, No. 02; full text *

Also Published As

Publication number Publication date
CN113010278A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
Garg et al. Analysis of preventive maintenance in transactions based software systems
CN105357038B (en) Monitor the method and system of cluster virtual machine
US8458712B2 (en) System and method for multi-level preemption scheduling in high performance processing
US8108878B1 (en) Method and apparatus for detecting indeterminate dependencies in a distributed computing environment
US7340646B2 (en) Apparatus, system, and method for resource group backup
US8826286B2 (en) Monitoring performance of workload scheduling systems based on plurality of test jobs
US9319281B2 (en) Resource management method, resource management device, and program product
US7870424B2 (en) Parallel computer system
EP2357559A1 (en) Performing a workflow having a set of dependancy-related predefined activities on a plurality of task servers
US8020046B2 (en) Transaction log management
WO2010062423A1 (en) Method and apparatus for enforcing a resource-usage policy in a compute farm
US9244719B2 (en) Batch processing system
CN110928655A (en) Task processing method and device
EP3018581B1 (en) Data staging management system
CN113010278B (en) Batch processing method and system for financial insurance core system
US9128754B2 (en) Resource starvation management in a computer system
US8336053B2 (en) Transaction management
US8276150B2 (en) Methods, systems and computer program products for spreadsheet-based autonomic management of computer systems
CN114564281A (en) Container scheduling method, device, equipment and storage medium
CN116680055A (en) Asynchronous task processing method and device, computer equipment and storage medium
CN115454718A (en) Automatic database backup file validity detection method
CN112612604B (en) Task scheduling method and device based on Actor model
CN115480924A (en) Method and device for processing job data, storage medium and electronic equipment
CN112685334A (en) Method, device and storage medium for block caching of data
WO2011121681A1 (en) Job schedule system, job schedule management method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant