CN113283803B - Method for making material demand plan, related device and storage medium

Method for making material demand plan, related device and storage medium

Info

Publication number
CN113283803B
CN113283803B
Authority
CN
China
Prior art keywords
control server
server
task
calculation
subtasks
Prior art date
Legal status
Active
Application number
CN202110674495.6A
Other languages
Chinese (zh)
Other versions
CN113283803A (en)
Inventor
李佳
冯玉春
王正
曾朝辉
蒋松
Current Assignee
Kingdee Software China Co Ltd
Original Assignee
Kingdee Software China Co Ltd
Priority date
Filing date
Publication date
Application filed by Kingdee Software China Co Ltd
Priority to CN202110674495.6A
Publication of CN113283803A
Application granted
Publication of CN113283803B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06315 - Needs-based resource requirements planning or analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5038 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Abstract

An embodiment of the present application discloses a method for formulating a material demand plan (MRP, material requirement planning), a related device and a storage medium. The method comprises the following steps: after receiving an MRP formulation request initiated by a user, the control server generates a calculation task and sends the calculation task to the transit server. The transit server analyzes the received calculation task to generate N subtasks and sends the N subtasks to N computing servers. The N computing servers process the N subtasks respectively, generate N task data and send them to the transit server; the transit server forwards the N task data to the control server, and the control server calculates the MRP from the N task data. In the present application, the transit server relays data between the computing servers and the control server, so the control server does not need to establish a direct communication connection with the computing servers. This reduces the logic-code coupling between the control server and the computing servers, improves the efficiency of expanding the computing servers, and lowers the code maintenance cost.

Description

Method for making material demand plan, related device and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular to a method for formulating a material demand plan, a related device, and a storage medium.
Background
The material demand plan (MRP, material requirement planning) is a practical technique in which a computer first draws up a production plan for the products according to market demand forecasts and customer orders, and then, based on the products, the material structure table (bill of materials) of the constituent products and the inventory status, calculates the required quantity and required time of each material, thereby determining the processing schedule and ordering schedule of the materials.
The basic data needed to formulate an MRP include, but are not limited to: the master production schedule, inventory records, lead times and the bill of materials (BOM). In practice, data from other aspects also need to be added, according to the production characteristics of each enterprise, to participate in the formulation of the MRP. Each type of data requires its own logic code and a corresponding subtask, through which the data is obtained by calculation. With the rapid development of enterprise big data, massive data is growing explosively, and each subtask generally has to be deployed on an independent computing device (such as a server) so that the data can be calculated and obtained. As shown in fig. 1, after a user initiates an MRP request, the control server configures each type of subtask and distributes the subtasks to independent computing servers for execution.
In the MRP formulation process shown in fig. 1, the control server is required to act as the central node that configures and distributes the subtasks and processes the task results, so the logic code in the control server is complex. Because the business logic between the computing servers and the control server is also hard-coded in the control server, expanding to a new computing server requires modifying a large amount of logic code in the control server; the expansion efficiency is low and the code maintenance cost is high.
Disclosure of Invention
In view of the foregoing, the present application provides a method for formulating a material demand plan, a related device and a storage medium, which are used to improve the efficiency of expanding computing servers.
In one aspect, the present application provides a method for formulating a material demand plan, including:
The transit server receives a calculation task from the control server, wherein the calculation task is generated by the control server according to a material demand plan (MRP) formulation request initiated by a user;
the transit server generates N subtasks according to the calculation task, wherein N is an integer greater than or equal to 1;
the transit server sends the N subtasks to N computing servers;
the transit server receives N task data from the N computing servers;
and the transit server sends the N task data to the control server so that the control server calculates MRP according to the N task data.
In one possible implementation, the method further includes:
The transit server receives adjustment requests for the N computing servers;
and the transit server adjusts the N computing servers according to the adjustment request to obtain M computing servers, wherein M is an integer greater than or equal to 1.
Another aspect of the present application provides a method for formulating a material demand plan, the method being applied to a control server cluster, the control server cluster including a first control server and a second control server, the method including:
The first control server receives a material demand plan MRP making request from a user;
the first control server generates a calculation task according to the MRP formulation request;
The first control server sends the calculation task to a target database, so that when the first control server fails, the second control server obtains the calculation task from the target database and sends the calculation task to a transit server.
Another aspect of the present application provides a transit server, including:
The receiving unit is used for receiving a calculation task from the control server, wherein the calculation task is generated by the control server according to a material demand plan (MRP) making request initiated by a user;
The generating unit is used for generating N subtasks according to the calculation task, wherein N is an integer greater than or equal to 1;
A sending unit, configured to send the N subtasks to N computing servers;
the receiving unit is further used for receiving N task data from the N computing servers;
the sending unit is further configured to send the N task data to the control server, so that the control server calculates MRP according to the N task data.
In one possible implementation, the relay server further comprises an adjustment unit,
The receiving unit is used for receiving adjustment requests for the N computing servers;
And the adjusting unit is used for adjusting the N computing servers according to the adjusting request to obtain M computing servers, wherein M is an integer greater than or equal to 1.
Another aspect of the present application provides a first control server, the first control server being from a control server cluster, the control server cluster including the first control server and a second control server, the first control server including:
The receiving unit is used for receiving a material demand plan MRP making request from a user;
The generating unit is used for generating a calculation task according to the MRP formulation request;
And the sending unit is used for sending the calculation task to a target database, so that when the first control server fails, the second control server acquires the calculation task from the target database and sends the calculation task to a transit server.
Another aspect of the present application provides a system for planning a material demand plan, including the transit server of any one of the above aspects, and the first control server of any one of the above aspects.
Another aspect of the present application provides a computer apparatus comprising: a memory, a processor, and a bus system; the memory is used for storing program codes; the processor is configured to execute the method for planning a material demand plan according to any one of the above aspects according to instructions in the program code.
Another aspect of the application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of formulating a material demand plan as described in any of the above aspects.
According to another aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of formulating a material demand plan as described in any one of the above aspects.
From the above technical solutions, the embodiment of the present application has the following advantages:
In the embodiment of the present application, a method for formulating a material demand plan is provided. After the control server receives an MRP formulation request initiated by a user, it generates a corresponding calculation task and sends the calculation task to the transit server. The transit server analyzes the received calculation task to generate N subtasks and sends the N subtasks to N computing servers, where N is an integer greater than or equal to 1. The N computing servers process the N subtasks respectively, generate N task data and send them to the transit server; the transit server forwards the N task data to the control server, and the control server can then calculate the MRP from the N task data. In this way, the transit server relays data between the computing servers and the control server, so the control server does not need to establish a direct communication connection with the computing servers; the logic-code coupling between the control server and the computing servers is reduced, the efficiency of expanding the computing servers is improved, and the code maintenance cost is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from the provided drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of a conventional MRP formulation process;
FIG. 2 is a flow chart of a method for planning a material demand plan according to an embodiment of the present application;
FIG. 3 is a flowchart of the transit server managing the computing servers;
FIG. 4 is a flowchart of another method for planning a material demand plan according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for planning a material demand plan according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a transit server according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a first control server according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The embodiment of the present application provides a method for formulating a material demand plan, a related device and a storage medium, which are used to improve the efficiency of expanding computing servers.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
The present application provides a method for formulating a material demand plan. Referring to fig. 2, fig. 2 is a flowchart of a method for formulating a material demand plan according to an embodiment of the present application; the embodiment includes the following steps:
101. The transit server receives a calculation task from the control server, wherein the calculation task is generated by the control server according to a material demand plan formulation request initiated by a user;
The material demand plan (MRP, material requirement planning) is a practical technique in which a computer first draws up a production plan for the products according to market demand forecasts and customer orders, and then, based on the products, the material structure table (bill of materials) of the constituent products and the inventory status, calculates the required quantity and required time of each material, thereby determining the processing schedule and ordering schedule of the materials.
The user can initiate an MRP formulation request to the control server according to the actual business requirement. After receiving the MRP formulation request, the control server generates a corresponding calculation task. In the conventional MRP formulation process, after the control server generates the calculation task, it analyzes and configures a plurality of subtasks and distributes them to the computing servers for processing. In the embodiment of the present application, after the control server generates the calculation task, it can send the calculation task directly to the transit server without analyzing and configuring the subtasks itself.
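Purely for illustration, the following Python sketch shows one way the control-server side of step 101 could be organized. The class, field and function names (CalculationTask, build_calculation_task, send_to_transit) and the request fields are assumptions made for this example; the application itself does not prescribe any data structure or transport protocol.

    import json
    import uuid
    from dataclasses import dataclass, asdict


    @dataclass
    class CalculationTask:
        """Illustrative wrapper for one MRP calculation task (all names assumed)."""
        task_id: str
        plan_scope: str       # e.g. which plant or product range the MRP run covers
        request_params: dict  # raw parameters carried over from the user's request


    def build_calculation_task(mrp_request: dict) -> CalculationTask:
        # Step 101: the control server only wraps the user's MRP formulation request;
        # it does not analyse or configure any subtasks here.
        return CalculationTask(
            task_id=str(uuid.uuid4()),
            plan_scope=mrp_request.get("scope", "all"),
            request_params=mrp_request,
        )


    def send_to_transit(task: CalculationTask) -> None:
        # Placeholder for the real transport (HTTP, RPC or a message queue);
        # the application does not fix a concrete protocol.
        print("sending to transit server:", json.dumps(asdict(task)))


    if __name__ == "__main__":
        send_to_transit(build_calculation_task({"scope": "plant-01", "plan_horizon_days": 30}))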
102. The transit server generates N subtasks according to the calculation task;
In the process of formulating an MRP, various types of business data are required, including but not limited to: the master production schedule, inventory records, lead times and the bill of materials (BOM). In practice, data from other aspects also need to be added, according to the production characteristics of each enterprise, to participate in the formulation of the MRP.
In the present application, for each type of data that needs to be obtained in the MRP formulation process, the transit server is configured with N subtasks used to obtain the data of each type (for example a master production schedule task, an inventory record task, a lead time task and a bill of materials task), where N is an integer greater than or equal to 1.
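As a non-authoritative sketch of step 102, the transit server might map each data type needed for the MRP run to one subtask. The subtask types below mirror the examples given in the text (master production schedule, inventory records, lead time, bill of materials); the function and field names are illustrative assumptions.

    from dataclasses import dataclass

    # Assumed registry of data types; extending it only touches the transit server,
    # not the control server.
    SUBTASK_TYPES = [
        "master_production_schedule",
        "inventory_records",
        "lead_time",
        "bill_of_materials",
    ]


    @dataclass
    class SubTask:
        parent_task_id: str
        subtask_type: str
        params: dict


    def split_into_subtasks(task_id: str, request_params: dict) -> list:
        # One subtask per required data type, so N equals the number of
        # configured data types for this MRP run.
        return [SubTask(task_id, t, request_params) for t in SUBTASK_TYPES]


    if __name__ == "__main__":
        for sub in split_into_subtasks("task-001", {"plan_horizon_days": 30}):
            print(sub)

Because the list of subtask types lives entirely on the transit side in this sketch, adding a new data type does not require any change to the control server, which is the point of the architecture described here.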
103. The transit server sends the N subtasks to N computing servers;
The transit server itself executes neither the calculation task nor the subtasks. After the N subtasks are configured, the transit server sends them to the N computing servers, and each computing server executes the subtask it receives.
For ease of understanding, referring to fig. 3, fig. 3 is a flowchart of the transit server managing the computing servers. Each computing server used to execute subtasks needs to register with the transit server in advance, so that the transit server can manage the computing servers. After a computing server registers successfully, it enters a waiting state, and the transit server distributes subtasks to the computing servers in the waiting state.
In practical applications, a message queue component may be deployed in the transit server in the form of a software module or a hardware module to perform the operations of the embodiments shown in fig. 2 or fig. 3, such as distributing subtasks, registering computing servers, or managing task data.
It should be understood that the embodiment of the present application does not limit the number of computing servers registered with the transit server; in practice, this number may be greater than N. After the transit server configures the N subtasks, if the number of registered computing servers is greater than N, N of them may be selected to execute the subtasks.
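Under the same assumptions, the following sketch illustrates how the transit server could keep a registry of computing servers and hand one subtask to each idle server. The in-memory queue stands in for the message queue component mentioned above; the application does not name a concrete message queue product.

    import queue


    class TransitRegistry:
        """Illustrative registry: computing servers register, then wait for subtasks."""

        def __init__(self):
            self.idle_servers = queue.Queue()  # stand-in for the message queue component

        def register(self, server_id: str) -> None:
            # A newly registered computing server enters the waiting state.
            self.idle_servers.put(server_id)

        def dispatch(self, subtasks: list) -> dict:
            # Select exactly as many waiting servers as there are subtasks;
            # surplus registered servers simply stay in the waiting state.
            assignment = {}
            for sub in subtasks:
                server_id = self.idle_servers.get()  # blocks until a server is free
                assignment[server_id] = sub
            return assignment


    if __name__ == "__main__":
        registry = TransitRegistry()
        for sid in ["calc-1", "calc-2", "calc-3", "calc-4", "calc-5"]:
            registry.register(sid)
        # N = 4 subtasks, 5 registered servers: only 4 are selected.
        print(registry.dispatch(["mps", "inventory", "lead_time", "bom"]))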
104. The transit server receives N task data from the N computing servers;
After the N computing servers execute the N subtasks, the execution result of each subtask, i.e. the task data (for example master production schedule data, inventory record data, lead time data and bill of materials data), is obtained. Therefore, after the N computing servers execute the N subtasks, N task data are generated in total, and these task data can be applied to the formulation of the MRP. In the embodiment of the present application, the N computing servers do not send the task data directly to the control server; instead, each sends its own task data (N task data in total) to the transit server.
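As an assumed sketch of step 104, each computing server would run the handler for its own data type and send the result back to the transit server rather than to the control server. The handlers and the returned payload below are placeholders; a real implementation would query the enterprise planning data.

    def fetch_master_production_schedule(params: dict) -> dict:
        # Placeholder query for master production schedule data.
        return {"product-A": 120}


    def fetch_inventory_records(params: dict) -> dict:
        # Placeholder query for on-hand inventory data.
        return {"product-A": 30, "part-B": 500}


    HANDLERS = {
        "master_production_schedule": fetch_master_production_schedule,
        "inventory_records": fetch_inventory_records,
    }


    def execute_subtask(subtask_type: str, params: dict) -> dict:
        # The computing server returns task data to the transit server,
        # not directly to the control server.
        return {"subtask_type": subtask_type, "data": HANDLERS[subtask_type](params)}


    if __name__ == "__main__":
        print(execute_subtask("inventory_records", {"plant": "plant-01"}))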
105. The transit server sends the N task data to the control server so that the control server calculates MRP according to the N task data;
After receiving the N task data from the N computing servers, the transit server sends the N task data to the control server, and the control server can calculate the MRP from the received N task data.
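The application does not spell out the MRP algorithm itself. Purely as an illustration of step 105, the sketch below combines the N task data into a simple net-requirements calculation (net requirement = gross requirement from the master production schedule minus on-hand inventory), which is one common MRP building block and not necessarily the calculation used by the control server.

    def calculate_mrp(task_data: list) -> dict:
        """Combine task data relayed by the transit server into net requirements."""
        merged = {item["subtask_type"]: item["data"] for item in task_data}
        gross = merged.get("master_production_schedule", {})
        on_hand = merged.get("inventory_records", {})

        net_requirements = {}
        for product, demand in gross.items():
            # Never plan a negative order quantity.
            net_requirements[product] = max(demand - on_hand.get(product, 0), 0)
        return net_requirements


    if __name__ == "__main__":
        data = [
            {"subtask_type": "master_production_schedule", "data": {"product-A": 120}},
            {"subtask_type": "inventory_records", "data": {"product-A": 30}},
        ]
        print(calculate_mrp(data))  # {'product-A': 90}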
In the embodiment of the present application, after receiving an MRP formulation request initiated by a user, the control server generates a corresponding calculation task and sends it to the transit server. The transit server analyzes the received calculation task to generate N subtasks and sends the N subtasks to N computing servers, where N is an integer greater than or equal to 1. The N computing servers process the N subtasks respectively, generate N task data and send them to the transit server; the transit server forwards the N task data to the control server, and the control server can then calculate the MRP from the N task data. In this way, the transit server relays data between the computing servers and the control server, so the control server does not need to establish a direct communication connection with the computing servers; the logic-code coupling between the control server and the computing servers is reduced, the efficiency of expanding the computing servers is improved, and the code maintenance cost is reduced.
Further, in practical applications, the requirements of enterprises for the MRP often change as the business develops. Accordingly, the types of task data required for formulating the MRP may also change, and the subtasks used to obtain each type of task data may change as well; for example, the number of subtasks may increase or decrease. When the number of subtasks increases or decreases, the number of corresponding computing servers also needs to be adjusted so that an equal number of subtasks can be executed.
Specifically, the transit server receives an adjustment request for the N computing servers. It should be understood that the adjustment request may increase or decrease the number of the N computing servers, which is not limited here. After receiving the adjustment request, the transit server may adjust the N computing servers, for example by increasing or decreasing their number, to obtain M computing servers, which can be used to execute subsequent subtasks.
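A minimal sketch, assuming the adjustment request simply names computing servers to add to or remove from the transit server's registry; the request format is an assumption for this example, since the text only requires that the request is sent to the transit server rather than to the control server.

    def apply_adjustment(registered: set, adjustment: dict) -> set:
        """Return the adjusted set of M computing servers.

        The adjustment request is assumed to look like
        {"add": ["calc-5"], "remove": ["calc-2"]}.
        """
        adjusted = set(registered)
        adjusted.update(adjustment.get("add", []))
        adjusted.difference_update(adjustment.get("remove", []))
        return adjusted


    if __name__ == "__main__":
        n_servers = {"calc-1", "calc-2", "calc-3", "calc-4"}  # N = 4
        m_servers = apply_adjustment(
            n_servers, {"add": ["calc-5", "calc-6"], "remove": ["calc-2"]}
        )
        print(len(m_servers), sorted(m_servers))  # M = 5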
In this embodiment, when the computing servers used to execute the subtasks need to be adjusted or expanded, the corresponding adjustment request can be initiated directly to the transit server without modifying the logic code of the control server; that is, the control server does not perceive the whole adjustment and expansion process, which improves the efficiency of expanding the computing servers.
On the other hand, the present application proposes another method for formulating a material demand plan. Referring to fig. 4, fig. 4 is a flowchart of another method for formulating a material demand plan provided by an embodiment of the present application. The method is applied to a control server cluster that includes a first control server and a second control server, and the embodiment includes the following steps:
201. The first control server receives an MRP formulation request from a user;
In the conventional MRP formulation process, a single control server acts as the central node that processes the calculation tasks. However, when the control server, as the only central node, fails (for example, goes down), the entire MRP formulation process cannot proceed normally.
To solve the above problem, in this embodiment, a control server cluster composed of a plurality of control servers may be used as the central node for processing calculation tasks. After the user initiates the MRP formulation request, load balancing is performed by a proxy server (e.g., Nginx); that is, the proxy server distributes the load among the control servers according to the number of control servers in the control server cluster. A control server for processing the MRP formulation request is then selected based on the load fed back by the proxy server.
It should be understood that the embodiment of the present application does not limit the number of control servers in the control server cluster. For ease of understanding, this embodiment is described using an example in which the control server cluster includes the first control server and the second control server; in practice, the cluster may also include other control servers, for example a third control server or a fourth control server, which is not limited here. In this embodiment, the proxy server therefore sends the allocated MRP formulation request to the first control server and the second control server.
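The text names Nginx only as an example of the proxy. As a rough stand-in, the sketch below imitates the load-balancing idea in Python with a simple round-robin choice among the control servers of the cluster; the addresses and the round-robin policy are assumptions made for this illustration.

    import itertools


    class ControlServerProxy:
        """Round-robin stand-in for the load-balancing proxy in front of the cluster."""

        def __init__(self, control_servers: list):
            self._cycle = itertools.cycle(control_servers)

        def route(self, mrp_request: dict) -> str:
            target = next(self._cycle)
            # A real proxy would forward the request; here we only report the choice.
            print("forwarding MRP formulation request to", target)
            return target


    if __name__ == "__main__":
        proxy = ControlServerProxy(["control-1:8080", "control-2:8080"])
        for _ in range(3):
            proxy.route({"scope": "plant-01"})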
202. The first control server generates a calculation task according to the MRP formulation request;
In this embodiment, step 202 is similar to step 102 shown in fig. 2; refer to step 102 for details, which are not repeated here.
203. The first control server sends a calculation task to a target database;
In this embodiment, after the first control server generates the calculation task, the calculation task may be sent to a target database for storage and backup. The target database may be a database deployed outside the first control server, for example a cloud database (e.g., Redis) or another physical server, which is not limited here.
After the calculation task is saved in the target database, even if the first control server fails (for example, goes down), the calculation task being processed by the first control server is still preserved in the target database.
204. If the first control server fails, the second control server acquires a calculation task from the target database;
If the first control server fails, it can no longer execute its calculation task. At this time, the second control server, which belongs to the same control server cluster, can obtain the calculation task previously stored by the first control server from the target database; the second control server then takes over the calculation task of the first control server and sends it to the transit server. The task data subsequently fed back by the transit server is also processed by the second control server.
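Steps 203 and 204 can be pictured with the following sketch. A Python dict stands in for the external target database, and the failure detection and takeover logic are assumptions made only for this example; a real deployment would of course use the external target database described above rather than an in-memory object.

    class TargetDatabase:
        """Stand-in for the target database that backs up calculation tasks."""

        def __init__(self):
            self._tasks = {}

        def save(self, owner: str, task: dict) -> None:
            # Step 203: the first control server backs up its calculation task.
            self._tasks[task["task_id"]] = {"owner": owner, "task": task, "done": False}

        def pending_tasks_of(self, owner: str) -> list:
            return [entry["task"] for entry in self._tasks.values()
                    if entry["owner"] == owner and not entry["done"]]

        def reassign(self, task_id: str, new_owner: str) -> None:
            self._tasks[task_id]["owner"] = new_owner


    def take_over(db: TargetDatabase, failed: str, successor: str, send_to_transit) -> None:
        # Step 204: when the first control server fails, the second control server
        # pulls its pending calculation tasks from the target database and forwards
        # them to the transit server; later task data is then handled by the successor.
        for task in db.pending_tasks_of(failed):
            db.reassign(task["task_id"], successor)
            send_to_transit(task)


    if __name__ == "__main__":
        db = TargetDatabase()
        db.save("control-1", {"task_id": "task-001", "scope": "plant-01"})
        take_over(db, "control-1", "control-2", lambda t: print("resending", t))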
Further, in this embodiment, after the transit server receives the calculation task from the second control server, the operations of the embodiment shown in fig. 2 may be performed. For ease of understanding, referring to fig. 5, fig. 5 is a flowchart of another method for formulating a material demand plan according to an embodiment of the present application. In this embodiment, after the user initiates the MRP formulation request, load balancing is performed by the proxy server for the control server cluster. The control server cluster generates a corresponding calculation task according to the MRP formulation request and then sends the calculation task to the transit server. The transit server generates N subtasks according to the calculation task and distributes them to the N computing servers for execution.
In this embodiment, after the first control server generates the calculation task, the calculation task can be saved to the target database. When the first control server fails, the second control server, which belongs to the same control server cluster, can obtain the calculation task from the target database and send it to the transit server for processing. In this way, the problem that the MRP formulation process fails when a single control server fails is avoided, and the reliability of the solution is improved.
In order to better implement the above solutions of the embodiments of the present application, related apparatuses for implementing the above solutions are provided below. Referring to fig. 6, fig. 6 is a schematic structural diagram of a transit server according to an embodiment of the present application, where the transit server includes:
A receiving unit 301, configured to receive a calculation task from a control server, where the calculation task is generated by the control server according to a material requirement planning MRP making request initiated by a user;
a generating unit 302, configured to generate N subtasks according to the computing task, where N is an integer greater than or equal to 1;
a sending unit 303, configured to send the N subtasks to N computing servers;
the receiving unit 301 is further configured to receive N task data from the N computing servers;
the sending unit 303 is further configured to send the N task data to the control server, so that the control server calculates MRP according to the N task data.
Optionally, on the basis of the embodiment corresponding to fig. 6, the transit server further includes an adjustment unit 304, where the receiving unit 301 is configured to receive adjustment requests for the N computing servers;
And the adjusting unit 304 is configured to adjust the N computing servers according to the adjustment request, so as to obtain M computing servers, where M is an integer greater than or equal to 1.
In this embodiment, the transit server may perform the operations of any of the foregoing embodiments shown in fig. 2, 3 or 5, which are not described herein in detail.
In order to better implement the above-described aspects of the embodiments of the present application, the following provides related apparatuses for implementing the above-described aspects. Referring to fig. 7, fig. 7 is a schematic structural diagram of a first control server according to an embodiment of the present application, where the first control server is from a control server cluster, and the control server cluster includes the first control server and a second control server, and the first control server includes:
A receiving unit 401, configured to receive a material requirement planning MRP making request from a user;
a generating unit 402, configured to generate a computing task according to the MRP formulation request;
And a sending unit 403, configured to send the calculation task to a target database, so that when the first control server fails, the second control server obtains the calculation task from the target database and sends the calculation task to a transit server.
In this embodiment, the first control server may perform the operations of any one of the foregoing embodiments shown in fig. 2, fig. 4 or fig. 5, which are not described herein in detail.
An embodiment of the present application also provides a computer device for performing the operations of any one of the embodiments shown in fig. 2 to 5. Referring to fig. 8, fig. 8 is a schematic diagram of a computer device 800 according to an embodiment of the application. As shown in the figure, the computer device 800 may vary considerably in configuration or performance and may include one or more central processing units (CPU) 822 (e.g., one or more processors), memory 832, and one or more storage media 830 (e.g., one or more mass storage devices) storing application programs 842 or data 844. The memory 832 and the storage medium 830 may provide transitory or persistent storage. The program stored on the storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations on the computer device. Further, the central processing unit 822 may be configured to communicate with the storage medium 830 and execute, on the computer device 800, the series of instruction operations stored in the storage medium 830.
The computer device 800 may also include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input/output interfaces 858, and/or one or more operating systems 841, such as Windows Server™, Mac OS X™, Unix™, Linux™ or FreeBSD™. It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
The steps performed in the above embodiments may be based on the structure of the computer device shown in fig. 8.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or partly in the form of a software product, or all or part of the technical solution, which is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, an interactive video management device, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for formulating a material demand plan, comprising:
The transit server receives a calculation task from the control server, wherein the calculation task is generated by the control server according to a material demand plan formulation request initiated by a user;
the transit server generates N subtasks according to the calculation task, wherein N is an integer greater than or equal to 1;
the transit server sends the N subtasks to N computing servers;
the transit server receives N task data from the N computing servers;
and the transit server sends the N task data to the control server, so that the control server calculates the material demand plan according to the N task data.
2. The method according to claim 1, wherein the method further comprises:
The transit server receives adjustment requests for the N computing servers;
and the transit server adjusts the N computing servers according to the adjustment request to obtain M computing servers, wherein M is an integer greater than or equal to 1.
3. A method for formulating a material demand plan, the method being applied to a control server cluster, the control server cluster comprising a first control server and a second control server, the method comprising:
The first control server receives a material demand planning request from a user;
The first control server generates a calculation task according to the material demand planning request, wherein the calculation task is to be sent to a transit server, so that the transit server generates N subtasks according to the calculation task, where N is an integer greater than or equal to 1, sends the N subtasks to N computing servers, receives N task data from the N computing servers, and sends the N task data to the control server cluster;
The first control server sends the calculation task to a target database, so that when the first control server fails, the second control server obtains the calculation task from the target database and sends the calculation task to the transit server.
4. A transit server, comprising:
The receiving unit is used for receiving a calculation task from the control server, wherein the calculation task is generated by the control server according to a material demand planning request initiated by a user;
The generating unit is used for generating N subtasks according to the calculation task, wherein N is an integer greater than or equal to 1;
A sending unit, configured to send the N subtasks to N computing servers;
the receiving unit is further used for receiving N task data from the N computing servers;
The sending unit is further configured to send the N task data to the control server, so that the control server calculates the material demand plan according to the N task data.
5. The transit server according to claim 4, further comprising an adjusting unit,
The receiving unit is used for receiving adjustment requests for the N computing servers;
the adjusting unit is configured to adjust the N computing servers according to the adjusting request to obtain M computing servers, where M is an integer greater than or equal to 1.
6. A first control server, wherein the first control server is from a control server cluster, the control server cluster comprising the first control server and a second control server, the first control server comprising:
The receiving unit is used for receiving a material demand plan MRP making request from a user;
The generating unit is used for generating a calculation task according to the MRP formulation request, wherein the calculation task is to be sent to a transit server, so that the transit server generates N subtasks according to the calculation task, where N is an integer greater than or equal to 1, sends the N subtasks to N computing servers, receives N task data from the N computing servers, and sends the N task data to the control server cluster;
And the sending unit is used for sending the calculation task to a target database, so that when the first control server fails, the second control server acquires the calculation task from the target database and sends the calculation task to the transit server.
7. A material demand planning system comprising the transit server of any one of claims 4 to 5, and the first control server of claim 6.
8. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing program codes; the processor is configured to execute the method of formulating the material demand plan according to any one of claims 1 to 2 according to instructions in the program code.
9. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing program codes; the processor is configured to execute the method for formulating the material demand plan of claim 3 according to instructions in the program code.
10. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of formulating a material demand plan as claimed in any one of claims 1 to 2, or to perform the method of formulating a material demand plan as claimed in claim 3.
CN202110674495.6A 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium Active CN113283803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674495.6A CN113283803B (en) 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110674495.6A CN113283803B (en) 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium

Publications (2)

Publication Number Publication Date
CN113283803A CN113283803A (en) 2021-08-20
CN113283803B (en) 2024-04-23

Family

ID=77284839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674495.6A Active CN113283803B (en) 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium

Country Status (1)

Country Link
CN (1) CN113283803B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071003B (en) * 2023-01-28 2023-07-18 广州智造家网络科技有限公司 Material demand plan calculation method, device, electronic equipment and storage medium
CN117314354B (en) * 2023-10-07 2024-04-16 广州石伏软件科技有限公司 Cross-system collaboration method and system based on flow engine

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7716077B1 (en) * 1999-11-22 2010-05-11 Accenture Global Services Gmbh Scheduling and planning maintenance and service in a network-based supply chain environment
US6625616B1 (en) * 2000-07-05 2003-09-23 Paul Dragon Method and apparatus for material requirements planning
WO2005004024A1 (en) * 2003-07-02 2005-01-13 Chin Kok Yap Method and system for automating inventory management in a supply chain
GB0410150D0 (en) * 2004-05-07 2004-06-09 Ibm Methods,apparatus and computer programs for recovery from failures in a computing environment
KR20090063688A (en) * 2007-12-14 2009-06-18 현대중공업 주식회사 System and method for managing materials requirement
CN101365213A (en) * 2008-09-24 2009-02-11 金蝶软件(中国)有限公司 Method, apparatus and system for remote data submission
CN101877712A (en) * 2009-04-29 2010-11-03 美商定谊科技公司 Data transmission-controlling method, server and terminal equipment
CN101719239A (en) * 2009-12-28 2010-06-02 金蝶软件(中国)有限公司 MRP data processing method and device and MRP system
KR20120076603A (en) * 2010-12-06 2012-07-09 현대중공업 주식회사 Apparatus and method for managing material requirement planning in vessel construction process
CN102307233A (en) * 2011-08-24 2012-01-04 无锡中科方德软件有限公司 Cloud computing method for cloud computing server
JP2015162099A (en) * 2014-02-27 2015-09-07 日本電信電話株式会社 Server resource management device
CN105678484A (en) * 2014-11-18 2016-06-15 金蝶软件(中国)有限公司 MRP calculation process control method and system
CN105740293A (en) * 2014-12-12 2016-07-06 金蝶软件(中国)有限公司 Data export method and device
WO2018072618A1 (en) * 2016-10-18 2018-04-26 阿里巴巴集团控股有限公司 Method for allocating stream computing task and control server
WO2018217273A1 (en) * 2017-05-25 2018-11-29 Western Digital Technonologies, Inc. Parity offload for multiple data storage devices
CN109815002A (en) * 2017-11-21 2019-05-28 中国电力科学研究院有限公司 A kind of distributed paralleling calculation platform and its method based on in-circuit emulation
WO2020140369A1 (en) * 2019-01-04 2020-07-09 平安科技(深圳)有限公司 Data recovery control method, server and storage medium
CN111510493A (en) * 2020-04-15 2020-08-07 中国工商银行股份有限公司 Distributed data transmission method and device
CN111787066A (en) * 2020-06-06 2020-10-16 王科特 Internet of things data platform based on big data and AI
CN114444751A (en) * 2020-11-04 2022-05-06 顺丰科技有限公司 Material demand prediction method and device, computer equipment and storage medium
CN112381485A (en) * 2020-11-24 2021-02-19 金蝶软件(中国)有限公司 Material demand plan calculation method and related equipment
CN112905338A (en) * 2021-02-05 2021-06-04 中国工商银行股份有限公司 Automatic allocation method and device for computing resources
CN116126935A (en) * 2022-12-16 2023-05-16 西安航天动力试验技术研究所 Distributed test data storage system and storage method
CN116192927A (en) * 2023-02-21 2023-05-30 金蝶软件(中国)有限公司 Data transmission method and device based on SaaS service, computer equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on the application scheme of the inventory management system in MRP II; Li Hao, Qin Zhiqiang, Li Tao, Tan Jianrong; Computer Engineering (Issue 01); full text *
Analysis of MRP application in Sinopec and research on optimization strategies; Xu Xiaofang; Petroleum & Petrochemical Material Procurement (Issue 05); full text *

Also Published As

Publication number Publication date
CN113283803A (en) 2021-08-20

Similar Documents

Publication Publication Date Title
CN113283803B (en) Method for making material demand plan, related device and storage medium
US10652319B2 (en) Method and system for forming compute clusters using block chains
US8612615B2 (en) Systems and methods for identifying usage histories for producing optimized cloud utilization
US8938416B1 (en) Distributed storage of aggregated data
US8046767B2 (en) Systems and methods for providing capacity management of resource pools for servicing workloads
US11290528B2 (en) System for optimizing distribution of processing an automated process
US9075659B2 (en) Task allocation in a computer network
CN109814997B (en) Distributed autonomous balanced artificial intelligence task scheduling method and system
US20080127191A1 (en) Request type grid computing
US20130318527A1 (en) Virtual server control system and program
US8214327B2 (en) Optimization and staging method and system
CN110928655A (en) Task processing method and device
JP5841240B2 (en) Method and system for an improved reservation system that optimizes repeated search requests
Delamare et al. SpeQuloS: a QoS service for BoT applications using best effort distributed computing infrastructures
CN111459641B (en) Method and device for task scheduling and task processing across machine room
Park et al. A multi-class closed queueing maintenance network model with a parts inventory system
CN110275764A (en) Call timeout treatment method, apparatus and system
US7925755B2 (en) Peer to peer resource negotiation and coordination to satisfy a service level objective
WO2017074320A1 (en) Service scaling for batch processing
CN113312359B (en) Distributed job progress calculation method and device and storage medium
CN116401024A (en) Cluster capacity expansion and contraction method, device, equipment and medium based on cloud computing
Delamare et al. SpeQuloS: a QoS service for hybrid and elastic computing infrastructures
US11461147B2 (en) Liaison system and method for cloud computing environment
US20200028739A1 (en) Method and apparatus for closed-loop and dynamic capacity management in a web-scale data center
Malathi et al. Energy Aware Load Balancing Algorithm for Upgraded Effectiveness in Green Cloud Computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant