CN113807924A - Business processing distribution method, system, storage medium and equipment based on batch processing algorithm - Google Patents

Business processing distribution method, system, storage medium and equipment based on batch processing algorithm

Info

Publication number
CN113807924A
Authority
CN
China
Prior art keywords
processing
order
batch
node
orders
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111123588.6A
Other languages
Chinese (zh)
Inventor
贾信明
林昱洲
杨宏
夏明月
雷华春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hua Analysis Technology Shanghai Co ltd
Original Assignee
Hua Analysis Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hua Analysis Technology Shanghai Co ltd filed Critical Hua Analysis Technology Shanghai Co ltd
Priority to CN202111123588.6A
Publication of CN113807924A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0633Lists, e.g. purchase orders, compilation or processing
    • G06Q30/0635Processing of requisition or of purchase orders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a service processing allocation method, system, storage medium and device based on a batch processing algorithm. The method comprises the following steps: starting a central master node so that the central master node acquires the master-node working state; the central master node queries, according to the batch deduction time, the order information that requires batch deduction processing, and queries the slave nodes that are currently able to receive batch deduction orders; allocating the orders to be processed according to the number of currently available slave nodes, and sending the order codes to each slave node respectively; and each slave node distributes the received orders to be processed to its threads for order deduction processing, and feeds back the processing result to the central master node after all orders of the current node have been processed. Through the technical scheme of the invention, the overall computing resources are effectively used to the greatest extent for order processing, the disaster tolerance of the system is increased, the scalability and robustness of the system are ensured, and the efficiency of large-scale batch deduction processing is improved.

Description

Business processing distribution method, system, storage medium and equipment based on batch processing algorithm
Technical Field
The present invention relates to the field of business order processing technologies, and in particular, to a business processing allocation method based on a batch processing algorithm, a business processing allocation system based on a batch processing algorithm, a computer-readable storage medium, and an electronic device.
Background
In various staged trading scenarios, a large volume of order data must be deducted within a limited time (within one day), and the number of orders to be processed keeps growing substantially over time, which challenges the processing capacity of the system.
In existing order deduction processing technology, a single node suffers from insufficient computing power, multiple nodes suffer from data conflicts caused by repeated processing, and the efficiency of large-scale batch deduction processing is low.
Disclosure of Invention
To address these problems, the present invention provides a service processing allocation method, system, storage medium and device based on a batch processing algorithm. Batch orders are allocated and processed in a batch-synchronous manner between a central master node and a plurality of slave nodes, so that the overall computing resources are effectively used to the greatest extent for order processing, the disaster tolerance of the system is increased, the scalability and robustness of the system are ensured, and the efficiency of large-scale batch deduction processing is improved.
In order to achieve the above object, the present invention provides a service processing allocation method based on a batch processing algorithm, which comprises: starting a central master node so that the central master node acquires the master-node working state; the central master node queries, according to the batch deduction time, the order information that requires batch deduction processing, and queries the slave nodes that are currently able to receive batch deduction orders; allocating the orders to be processed according to the number of currently available slave nodes, and sending the order codes to each slave node respectively; and each slave node distributes the received orders to be processed to its threads for order deduction processing, and feeds back the processing result to the central master node after all orders of the current node have been processed.
In the above technical solution, preferably, while a slave node performs order deduction processing on the received orders to be processed, the processing progress information of the current slave node is counted at preset time intervals and fed back to the central master node; and the central master node reallocates the orders to be processed among the allocated slave nodes according to the processing progress information.
In the above technical solution, preferably, a plurality of central master nodes are started simultaneously, and only one central master node acquires the node working state according to a preset start-valid flag bit; and the plurality of central master nodes realize the allocation of the orders to be processed in a batch-synchronous manner.
In the above technical solution, preferably, the states of the slave nodes are divided into a waiting state, an order-appendable state, a busy state and a processing-completed state, where the waiting state and the order-appendable state belong to the state capable of receiving batch deduction orders, a slave node in the busy state is executing deduction processing and cannot receive new batch deduction orders, and a slave node in the processing-completed state has completed deduction processing but has not yet fed back the processing result to the central master node.
In the above technical solution, preferably, the central master node evenly allocates the orders to be processed to the slave nodes currently able to receive batch deduction orders according to the number of currently available slave nodes; and each slave node evenly distributes the orders allocated to the current node to its threads for order deduction processing.
The invention also provides a service processing allocation system based on a batch processing algorithm, which applies the service processing allocation method based on a batch processing algorithm in any one of the above technical solutions and comprises: a master node starting module, configured to start a central master node so that the central master node acquires the master-node working state; an information query module, configured to query, according to the batch deduction time, the order information that requires batch deduction processing, and to query the slave nodes that are currently able to receive batch deduction orders; an order distribution module, configured to allocate the orders to be processed according to the number of currently available slave nodes and to send the order codes to each slave node respectively; and an order processing module, configured to distribute the received orders to be processed to the threads of each slave node for order deduction processing, and to feed back the processing result to the central master node after all orders of the current node have been processed.
In the above technical solution, preferably, the order processing module counts the processing progress information of the current slave node at preset time intervals during order deduction processing and feeds the processing progress information back to the central master node; and the order distribution module reallocates the orders to be processed among the allocated slave nodes according to the processing progress information received by the central master node.
In the above technical solution, preferably, the order distribution module evenly allocates the orders to be processed to the slave nodes currently able to receive batch deduction orders according to the number of currently available slave nodes; and the order processing module evenly distributes the orders allocated to the current slave node to all threads of that node for order deduction processing.
The present invention further provides a computer-readable storage medium storing at least one instruction, where the at least one instruction is executed by a processor to implement the service processing allocation method based on a batch processing algorithm disclosed in any one of the above technical solutions.
The invention further provides an electronic device, which includes a memory and a processor, where the memory is used to store at least one instruction and the processor is used to execute the at least one instruction, so as to implement the service processing allocation method based on a batch processing algorithm disclosed in any one of the above technical solutions.
Compared with the prior art, the invention has the following beneficial effects: batch orders are allocated and processed in a batch-synchronous manner between the central master node and the plurality of slave nodes, the overall computing resources are effectively used to the greatest extent for order processing, the disaster tolerance of the system is improved, the scalability and robustness of the system are ensured, and the efficiency of large-scale batch deduction processing is improved.
Drawings
FIG. 1 is a schematic flow chart of a service processing allocation method based on a batch processing algorithm according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a service processing allocation system based on a batch processing algorithm according to an embodiment of the present invention.
In the drawings, the correspondence between the components and the reference numerals is as follows:
1. master node starting module; 2. information query module; 3. order distribution module; 4. order processing module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
The invention is described in further detail below with reference to the attached drawing figures:
As shown in FIG. 1, a service processing allocation method based on a batch processing algorithm according to the present invention includes: starting a central master node so that the central master node acquires the master-node working state; the central master node queries, according to the batch deduction time, the order information that requires batch deduction processing, and queries the slave nodes that are currently able to receive batch deduction orders; allocating the orders to be processed according to the number of currently available slave nodes, and sending the order codes to each slave node respectively; and each slave node distributes the received orders to be processed to its threads for order deduction processing, and feeds back the processing result to the central master node after all orders of the current node have been processed.
In this embodiment, batch orders are allocated and processed in a batch-synchronous manner between the central master node and the plurality of slave nodes, so that the overall computing resources are effectively used to the greatest extent for order processing, the disaster tolerance of the system is increased, the scalability and robustness of the system are ensured, and the efficiency of large-scale batch deduction processing is improved.
Specifically, the central master node (i.e., the master) queries the total amount of orders to be processed according to the batch deduction time and acquires the order IDs, and then queries the slave nodes that are able to receive batch deduction orders to form a slave-node list. The batch deduction time is the monthly deduction time automatically generated by the system when the periodic deduction order data is generated.
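For illustration only, the master-side flow just described can be sketched as follows in Python: query the pending orders by the batch deduction time, query the slave nodes that can currently receive batch deduction orders, split the order codes into near-equal batches, and dispatch one batch to each slave. The helper names `query_pending_orders`, `query_receivable_slaves` and `dispatch_batch` are hypothetical placeholders for the storage and messaging layers; they are not part of the original disclosure.

```python
from datetime import date
from typing import Callable, Dict, List

def split_evenly(order_ids: List[str], n: int) -> List[List[str]]:
    """Split order IDs into n near-equal batches (round-robin)."""
    batches: List[List[str]] = [[] for _ in range(n)]
    for i, oid in enumerate(order_ids):
        batches[i % n].append(oid)
    return batches

def allocate_orders(
    deduction_date: date,
    query_pending_orders: Callable[[date], List[str]],  # hypothetical DB query
    query_receivable_slaves: Callable[[], List[str]],    # slaves in waiting/appendable state
    dispatch_batch: Callable[[str, List[str]], None],    # hypothetical RPC/message send
) -> Dict[str, List[str]]:
    """Master side: one evenly sized batch of order codes per available slave node."""
    order_ids = query_pending_orders(deduction_date)
    slaves = query_receivable_slaves()
    if not slaves:
        return {}
    assignment = dict(zip(slaves, split_evenly(order_ids, len(slaves))))
    for slave, batch in assignment.items():
        if batch:
            dispatch_batch(slave, batch)
    return assignment
```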
Preferably, the states of the slave nodes are divided into a waiting state, an order-appendable state, a busy state and a processing-completed state, where the waiting state and the order-appendable state belong to the state capable of receiving batch deduction orders, a slave node in the busy state is executing deduction processing and cannot receive new batch deduction orders, and a slave node in the processing-completed state has completed deduction processing but has not yet fed back the processing result to the central master node.
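For illustration only, the slave-node states described above can be modeled as a small enumeration; grouping the waiting and order-appendable states into a "receivable" set is an assumption about how the master might filter candidate nodes, not a requirement of the disclosure.

```python
from enum import Enum, auto

class SlaveState(Enum):
    WAITING = auto()               # idle, able to receive a batch deduction order
    ORDER_APPENDABLE = auto()      # processing, but can still accept additional orders
    BUSY = auto()                  # executing deduction, cannot receive new orders
    PROCESSING_COMPLETED = auto()  # finished, result not yet fed back to the master

# States in which a slave node may be assigned (further) batch deduction orders.
RECEIVABLE_STATES = {SlaveState.WAITING, SlaveState.ORDER_APPENDABLE}

def can_receive(state: SlaveState) -> bool:
    return state in RECEIVABLE_STATES
```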
In the above embodiment, preferably, while a slave node performs order deduction processing on the received orders to be processed, the processing progress information of the current slave node is counted at preset time intervals, preferably once every 10 minutes, and fed back to the central master node; and the central master node reallocates the orders to be processed among the allocated slave nodes according to the processing progress information.
Preferably, after the orders have been allocated to the slave nodes, each slave node feeds its state back to the master node; for a slave node in the order-appendable state, the master node allocates an appropriate number of additional orders according to the appendable capacity reported by that slave node, so that every computing resource is effectively used to the greatest extent for order processing.
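Below is a minimal sketch of the slave-side behaviour, assuming the node processes its batch with a thread pool and reports progress back to the master every 10 minutes (the preferred interval above). `deduct_order` and `report_progress` are hypothetical hooks for the actual deduction logic and the feedback channel.

```python
import threading
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def process_batch_on_slave(
    node_id: str,
    order_ids: List[str],
    deduct_order: Callable[[str], bool],                # hypothetical per-order deduction
    report_progress: Callable[[str, int, int], None],   # hypothetical feedback to the master
    workers: int = 8,
    report_interval_s: float = 600.0,                   # "once every 10 minutes"
) -> None:
    done = 0
    lock = threading.Lock()
    stop = threading.Event()

    def reporter() -> None:
        # Periodically count and feed back the processing progress to the master.
        while not stop.wait(report_interval_s):
            with lock:
                report_progress(node_id, done, len(order_ids))

    threading.Thread(target=reporter, daemon=True).start()

    def handle(oid: str) -> None:
        nonlocal done
        deduct_order(oid)
        with lock:
            done += 1

    # Distribute the orders of this node evenly across the thread pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle, order_ids))

    stop.set()
    report_progress(node_id, done, len(order_ids))  # final result back to the master
```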
In the above embodiment, preferably, a plurality of central master nodes are started simultaneously, and only one central master node acquires the master-node working state according to a preset start-valid flag bit; the plurality of central master nodes realize the allocation of the orders to be processed in a batch-synchronous manner. Setting up multiple master nodes prevents task failure caused by master-node downtime, system exceptions and the like, ensures the robustness and reliability of the system, and at the same time improves the efficiency of large-scale batch deduction processing. The start-valid flag bit ensures that only one master node acquires the master working state, which avoids the data conflicts caused by repeated processing across multiple nodes.
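One common way to realize such a start-valid flag is an atomic compare-and-set on a shared record, so that exactly one of the simultaneously started master nodes wins. The sketch below is only an assumption about a possible implementation; the table name `master_flag` and the use of SQLite are illustrative stand-ins for whatever shared store the deployment actually uses.

```python
import sqlite3

def try_acquire_master_flag(conn: sqlite3.Connection, node_id: str) -> bool:
    """Atomically claim the start-valid flag; only one master node succeeds."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS master_flag "
        "(id INTEGER PRIMARY KEY CHECK (id = 1), active INTEGER, owner TEXT)"
    )
    conn.execute(
        "INSERT OR IGNORE INTO master_flag (id, active, owner) VALUES (1, 0, NULL)"
    )
    # Compare-and-set: take the flag only if it has not been taken yet.
    cur = conn.execute(
        "UPDATE master_flag SET active = 1, owner = ? WHERE id = 1 AND active = 0",
        (node_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # True only for the single winning master node
```

Every master candidate would call `try_acquire_master_flag` at start-up; the one that receives True proceeds to query and allocate orders, while the others remain on standby.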
In the above embodiment, preferably, the central master node evenly allocates the orders to be processed to the slave nodes currently able to receive batch deduction orders according to the number of currently available slave nodes; and each slave node evenly distributes the orders allocated to the current node to its threads for order deduction processing.
Through this even allocation of orders across nodes and even distribution across processing threads, the whole batch deduction order processing system stays in a balanced computing-load state, which ensures the normal operation and the robustness of the system.
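As a brief illustration of this two-level balancing (the figures are made-up example numbers, reusing the `split_evenly` helper from the earlier sketch), the same even split is applied first across the slave nodes and then across the threads of one node:

```python
# Hypothetical example: 10 pending orders, 3 receivable slave nodes, 4 threads per node.
orders = [f"ORD-{i:03d}" for i in range(10)]

node_batches = split_evenly(orders, 3)              # batch sizes 4, 3, 3 across the slaves
thread_batches = split_evenly(node_batches[0], 4)   # first node's batch across 4 threads

print([len(b) for b in node_batches])    # [4, 3, 3]
print([len(b) for b in thread_batches])  # [1, 1, 1, 1]
```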
As shown in fig. 2, the present invention further provides a service processing distribution system based on batch processing algorithm, which applies any one of the service processing distribution methods based on batch processing algorithm in the above embodiments, including:
the master node starting module 1 is used for starting the central master node and enabling the central master node to acquire the working state of the master node;
the information inquiry module 2 is used for inquiring and obtaining order information needing batch deduction processing according to the batch deduction time and inquiring the slave node which is in a state capable of receiving the batch deduction order currently;
the order distribution module 3 is used for distributing the orders to be processed according to the number of the currently available slave nodes and respectively sending the order codes to each slave node;
and the order processing module 4 is used for distributing the received orders to be processed to each thread of the slave nodes for order deduction processing, and feeding back processing results to the central master node after the orders of the current nodes are completely processed.
In the embodiment, the batch processing orders are distributed and processed in a batch synchronous processing mode between the central master node and the plurality of slave nodes, so that the whole computing resource is effectively utilized to the maximum extent to process the orders, the disaster tolerance capability of the system is increased, the expandability and the robustness of the system are ensured, and the efficiency of processing the large-data batch deduction is improved.
In the above embodiment, preferably, the order processing module 4 counts the processing progress information of the current slave node at preset time intervals, preferably once every 10 minutes, during order deduction processing, and feeds the processing progress information back to the central master node;
and the order distribution module 3 redistributes the orders to be processed to the distributed slave nodes according to the processing progress information received by the central master node.
In the above embodiment, preferably, the master node starting module 1 starts a plurality of central master nodes simultaneously and, according to a preset start-valid flag bit, enables only one central master node to acquire the master-node working state; for the plurality of central master nodes, the allocation of the orders to be processed is realized in a batch-synchronous manner.
In the above embodiment, preferably, the states of the slave nodes are divided into a waiting state, an order appendable state, a busy state and a processing completion state, where the waiting state and the order appendable state belong to a batch deduction order receivable state, the slave node in the busy state is executing deduction processing and cannot receive a new batch deduction order, and the slave node in the processing completion state has completed deduction processing and has not fed back a processing result to the central master node.
In the above embodiment, preferably, the order distribution module 3 uniformly distributes the order to be processed to the slave nodes currently in the state capable of receiving the batch deduction order according to the number of currently available slave nodes;
the order processing module 4 is uniformly distributed to all threads of the current slave node for order deduction processing according to the order to be processed distributed to the slave node.
According to the service processing allocation system based on the batch processing algorithm disclosed in the above embodiment, the functions implemented by the modules correspond to the steps of the service processing allocation method based on the batch processing algorithm in the above embodiment, and are not described herein again.
The present invention also provides a computer-readable storage medium storing at least one instruction, which is executed by a processor, and is capable of implementing the business process allocating method based on batch processing algorithm as disclosed in any one of the above embodiments.
The invention further provides an electronic device, which includes a memory and a processor, wherein the memory is used for storing at least one instruction, and the processor is used for executing the at least one instruction, so as to implement the business processing allocation method based on the batch processing algorithm disclosed in any one of the above embodiments.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A service processing distribution method based on batch processing algorithm is characterized by comprising the following steps:
starting a central main node, and enabling the central main node to acquire the working state of the main node;
the central main node inquires and obtains order information needing batch deduction processing according to the batch deduction time, and inquires a slave node which is in a state capable of receiving batch deduction orders currently;
distributing orders to be processed according to the number of the currently available slave nodes, and respectively sending order codes to each slave node;
and the slave nodes distribute the orders to all threads to carry out order deduction processing according to the received orders to be processed, and feed back processing results to the central master node after the orders of the current nodes are completely processed.
2. The batch processing algorithm-based business processing allocation method according to claim 1, wherein in the process of order deduction processing by the slave node according to the received order to be processed, the processing progress information of the current slave node is counted at preset time intervals, and the processing progress information is fed back to the central master node;
and the central main node redistributes the orders to be processed to the distributed slave nodes according to the processing progress information.
3. The batch processing algorithm-based service processing allocation method according to claim 1, wherein a plurality of central master nodes are started simultaneously, and only one central master node obtains the node working state according to a preset start-valid flag bit;
and the central main nodes adopt a batch synchronous processing mode to realize the distribution of the orders to be processed.
4. The service processing allocation method based on the batch processing algorithm according to claim 1, wherein the states of the slave nodes are divided into a waiting state, an order-appendable state, a busy state and a processing-completed state, wherein the waiting state and the order-appendable state belong to the state capable of receiving batch deduction orders, a slave node in the busy state is executing deduction processing and cannot receive new batch deduction orders, and a slave node in the processing-completed state has completed deduction processing and has not yet fed back the processing result to the central master node.
5. The batch processing algorithm-based business processing allocation method according to claim 1, wherein the central master node evenly allocates the order to be processed to the slave nodes currently in a state of being able to receive the batch deduction order according to the number of currently available slave nodes;
and each slave node evenly distributes the orders to be processed allocated to the current node to all of its threads for order deduction processing.
6. A business process distributing system based on batch processing algorithm, which applies the business process distributing method based on batch processing algorithm according to any one of claims 1 to 5, and is characterized by comprising:
the master node starting module is used for starting a central master node and enabling the central master node to acquire the working state of the master node;
the information inquiry module is used for inquiring and obtaining order information needing batch deduction processing according to the batch deduction time and inquiring the slave node which is in a state capable of receiving the batch deduction order currently;
the order distribution module is used for distributing the orders to be processed according to the number of the currently available slave nodes and respectively sending the order codes to each slave node;
and the order processing module is used for distributing the received orders to be processed to all threads of the slave nodes to carry out order deduction processing, and feeding back processing results to the central main node after the orders of the current nodes are completely processed.
7. The batch processing algorithm-based business processing distribution system according to claim 6, wherein the order processing module counts processing progress information of the current slave node at preset time intervals during order deduction processing, and feeds the processing progress information back to the central master node;
and the order distribution module redistributes the orders to be processed to the distributed slave nodes according to the processing progress information received by the central master node.
8. The batch processing algorithm-based business process distribution system according to claim 6, wherein the order distribution module is configured to evenly distribute the orders to be processed to the slave nodes currently in a state capable of receiving batch deduction orders according to the number of currently available slave nodes;
and the order processing module evenly distributes the orders to be processed allocated to the slave node to all threads of the current slave node for order deduction processing.
9. A computer-readable storage medium storing at least one instruction which is executable by a processor to implement a batch processing algorithm based business process allocation method according to any one of claims 1 to 5.
10. An electronic device, comprising a memory and a processor, wherein the memory is configured to store at least one instruction, and the processor is configured to execute the at least one instruction to implement the business process allocation method based on batch processing algorithm according to any one of claims 1 to 5.
CN202111123588.6A 2021-09-24 2021-09-24 Business processing distribution method, system, storage medium and equipment based on batch processing algorithm Pending CN113807924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111123588.6A CN113807924A (en) 2021-09-24 2021-09-24 Business processing distribution method, system, storage medium and equipment based on batch processing algorithm

Publications (1)

Publication Number Publication Date
CN113807924A (en) 2021-12-17

Family

ID=78940399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111123588.6A Pending CN113807924A (en) 2021-09-24 2021-09-24 Business processing distribution method, system, storage medium and equipment based on batch processing algorithm

Country Status (1)

Country Link
CN (1) CN113807924A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013067893A1 (en) * 2011-11-11 2013-05-16 青岛海信传媒网络技术有限公司 Slave node maintenance method, service processing method and master node of cluster system
CN105227616A (en) * 2014-07-03 2016-01-06 航天恒星科技有限公司 A kind of method of remote sensing satellite Ground Processing System task dynamic creation and distribution
CN109472372A (en) * 2018-10-17 2019-03-15 平安国际融资租赁有限公司 Resource data distribution method, device and computer equipment based on leased equipment
US20190108069A1 (en) * 2016-09-30 2019-04-11 Tencent Technology (Shenzhen) Company Limited Distributed resource allocation method, allocation node, and access node
CN112286669A (en) * 2020-11-23 2021-01-29 上海商汤智能科技有限公司 Task processing method and device
WO2021068850A1 (en) * 2019-10-11 2021-04-15 中兴通讯股份有限公司 Transaction management method and system, network device and readable storage medium


Similar Documents

Publication Publication Date Title
CN113238838B (en) Task scheduling method and device and computer readable storage medium
CN108804545B (en) Distributed global unique ID generation method and device
WO2022001136A1 (en) Inode number distribution management method for distributed storage system and related component
CN108829523A (en) Memory source distribution method, device, electronic equipment and readable storage medium storing program for executing
CN105354147A (en) Memory pool management method and management system
CN111858055A (en) Task processing method, server and storage medium
CN110659124A (en) Message processing method and device
CN111541762B (en) Data processing method, management server, device and storage medium
US8316367B2 (en) System and method for optimizing batch resource allocation
CN105824699A (en) Distributed task scheduling apparatus and method
CN111260253A (en) Information sending method and device, computer equipment and storage medium
CN112653746B (en) Distributed storage method and system for concurrently creating object storage equipment
CN113807924A (en) Business processing distribution method, system, storage medium and equipment based on batch processing algorithm
US10168937B2 (en) Storage space allocation
CN112286688B (en) Memory management and use method, device, equipment and medium
CN107239328B (en) Task allocation method and device
KR102124897B1 (en) Distributed Messaging System and Method for Dynamic Partitioning in Distributed Messaging System
US9110823B2 (en) Adaptive and prioritized replication scheduling in storage clusters
CN111124751A (en) Data recovery method and system, data storage node and database management node
CN108984105B (en) Method and device for distributing replication tasks in network storage device
CN113158173B (en) Account number allocation method, medium, device and computing equipment
CN112395063B (en) Dynamic multithreading scheduling method and system
CN111343152B (en) Data processing method and device, electronic equipment and storage medium
CN115426361A (en) Distributed client packaging method and device, main server and storage medium
CN107590003B (en) Spark task allocation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination