CN113283803A - Material demand plan making method, related device and storage medium - Google Patents

Material demand plan making method, related device and storage medium

Info

Publication number
CN113283803A
Authority
CN
China
Prior art keywords
control server
server
task
computing
material demand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110674495.6A
Other languages
Chinese (zh)
Other versions
CN113283803B (en)
Inventor
李佳 (Li Jia)
冯玉春 (Feng Yuchun)
王正 (Wang Zheng)
曾朝辉 (Zeng Zhaohui)
蒋松 (Jiang Song)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kingdee Software China Co Ltd
Original Assignee
Kingdee Software China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kingdee Software China Co Ltd
Priority to CN202110674495.6A
Publication of CN113283803A
Application granted
Publication of CN113283803B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q 10/06315 Needs-based resource requirements planning or analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the application disclose a method for making a material demand plan, a related device, and a storage medium. After receiving a material demand planning (MRP) formulation request from a user, a control server generates a computing task and sends it to a transit server. The transit server analyzes the received computing task to generate N subtasks and dispatches them to N computing servers. The N computing servers process the N subtasks, producing N pieces of task data, which they return to the transit server; the transit server forwards the N pieces of task data to the control server, and the control server calculates the MRP from them. Because the transit server relays data between the computing servers and the control server, the control server never needs to establish a direct communication connection with any computing server. This reduces the logic-code coupling between the control server and the computing servers, improves the efficiency of expanding the computing servers, and reduces code maintenance costs.

Description

Material demand plan making method, related device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for making a material demand plan, a related device, and a storage medium.
Background
Material demand planning (MRP) is a practical technique in which a production plan for a product is drawn up from market demand forecasts and customer orders, a schedule is then derived from that plan together with the product's bill of materials and inventory status, and a computer calculates the quantity and timing of each required material, thereby determining the processing schedule and purchase-order schedule for the materials.
The basic data required to formulate an MRP include, but are not limited to, the master production schedule, inventory records, lead times, and the bill of materials (BOM). In practice, further categories of data must be added to suit each enterprise's production characteristics so that they can also feed into the MRP. Each type of data needs its own logic code and its own subtask in order to be computed and obtained. As large enterprises' data grows explosively, each subtask generally has to be deployed on an independent computing device (e.g., a server) to compute and obtain its data. As shown in fig. 1, after a user initiates an MRP formulation request, the control server configures the various types of subtasks and distributes them to individual computing servers for execution.
In the MRP formulation flow shown in fig. 1, a control server is required as the central node that configures and distributes the subtasks and processes the task results, so the logic code in the control server is complex. Because the business logic between the computing servers and the control server is also hard-coded in the control server, expanding to a new computing server requires extensive modification of the control server's logic code; expansion is therefore inefficient and code maintenance is costly.
Disclosure of Invention
In view of the above, the present application provides a method for making a material demand plan, a related apparatus, and a storage medium, which are used to improve the efficiency of expanding the computing servers.
One aspect of the present application provides a method for making a material demand plan, including:
a transit server receives a computing task from a control server, where the computing task is generated by the control server according to a material demand plan (MRP) formulation request initiated by a user;
the transit server generates N subtasks according to the computing task, where N is an integer greater than or equal to 1;
the transit server sends the N subtasks to N computing servers;
the transit server receives N pieces of task data from the N computing servers;
and the transit server sends the N pieces of task data to the control server, so that the control server calculates the MRP according to the N pieces of task data.
In one possible implementation, the method further includes:
the transit server receives an adjustment request for the N computing servers;
and the transit server adjusts the N computing servers according to the adjustment request to obtain M computing servers, where M is an integer greater than or equal to 1.
Another aspect of the present application provides a method for making a material demand plan, applied to a control server cluster that includes a first control server and a second control server, the method including:
the first control server receives a material demand plan (MRP) formulation request from a user;
the first control server generates a computing task according to the MRP formulation request;
and the first control server sends the computing task to a target database, so that when the first control server fails, the second control server acquires the computing task from the target database and sends the computing task to a transit server.
Another aspect of the present application provides a transit server, including:
a receiving unit, configured to receive a computing task from a control server, where the computing task is generated by the control server according to a material demand plan (MRP) formulation request initiated by a user;
a generating unit, configured to generate N subtasks according to the computing task, where N is an integer greater than or equal to 1;
a sending unit, configured to send the N subtasks to N computing servers;
the receiving unit is further configured to receive N pieces of task data from the N computing servers;
the sending unit is further configured to send the N pieces of task data to the control server, so that the control server calculates the MRP according to the N pieces of task data.
In one possible implementation manner, the transit server further includes an adjusting unit,
the receiving unit is configured to receive an adjustment request for the N computing servers;
and the adjusting unit is configured to adjust the N computing servers according to the adjustment request to obtain M computing servers, where M is an integer greater than or equal to 1.
Another aspect of the present application provides a first control server, where the first control server belongs to a control server cluster that includes the first control server and a second control server, and the first control server includes:
a receiving unit, configured to receive a material demand plan (MRP) formulation request from a user;
a generating unit, configured to generate a computing task according to the MRP formulation request;
and a sending unit, configured to send the computing task to a target database, so that when the first control server fails, the second control server acquires the computing task from the target database and sends the computing task to a transit server.
Another aspect of the present application provides a system for making a material demand plan, including the transfer server according to any one of the above aspects, and the first control server according to any one of the above aspects.
Another aspect of the present application provides a computer device, comprising: a memory, a processor, and a bus system; the memory is used for storing program codes; the processor is used for executing the method for making the material demand plan according to any one of the aspects.
In another aspect, the present application provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform the method for planning material demand according to any one of the above aspects.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to make the computer device execute the method for making the material demand plan according to any one of the above aspects.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
In the embodiments of the present application, a method for making a material demand plan is provided. After receiving an MRP formulation request initiated by a user, the control server generates a corresponding computing task and sends it to a transit server. The transit server analyzes the received computing task to generate N subtasks and sends them to N computing servers, where N is an integer greater than or equal to 1. The N computing servers process the N subtasks, produce N pieces of task data, and send them to the transit server; the transit server forwards the N pieces of task data to the control server, which can then calculate the MRP from them. Because the transit server relays data between the computing servers and the control server, the control server does not need to establish a direct communication connection with any computing server; this reduces the logic-code coupling between the control server and the computing servers, improves the efficiency of expanding the computing servers, and reduces code maintenance costs.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a conventional MRP formulation process;
FIG. 2 is a flowchart of a method for making a material demand plan according to an embodiment of the present application;
FIG. 3 is a flowchart of the transit server managing the computing servers;
FIG. 4 is a flowchart of another method for making a material demand plan according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for making a material demand plan according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a transit server according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a first control server according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a method for making a material demand plan, a related device, and a storage medium, which are used to improve the efficiency of expanding the computing servers.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 2, fig. 2 is a flowchart of a method for making a material demand plan according to an embodiment of the present application, where the embodiment of the present application includes the following steps:
101. A transit server receives a computing task from a control server, where the computing task is generated by the control server according to a material demand plan formulation request initiated by a user.
Material demand planning (MRP) is a practical technique in which a production plan for a product is drawn up from market demand forecasts and customer orders, a schedule is then derived from that plan together with the product's bill of materials and inventory status, and a computer calculates the quantity and timing of each required material, thereby determining the processing schedule and purchase-order schedule for the materials.
A user can initiate an MRP formulation request to the control server according to actual business requirements. After receiving the MRP formulation request, the control server generates a corresponding computing task. In the traditional MRP formulation flow, the control server itself would then analyze and configure a number of subtasks and distribute them to the computing servers for processing. In this embodiment of the application, after the control server generates the computing task, it sends the task directly to the transit server, without analyzing or configuring any subtasks.
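By way of illustration only, the following Python sketch shows this handoff: a control server wraps the user's request into a computing task and passes it to the transit server. The endpoint URL, field names, and the use of HTTP via the requests library are assumptions made for the example; the application itself does not specify a transport or message format.

```python
import uuid

import requests  # assumed HTTP transport; the application does not mandate a protocol

TRANSIT_SERVER_URL = "http://transit.example.internal/tasks"  # hypothetical endpoint

def build_computing_task(mrp_request: dict) -> dict:
    """Wrap a user's MRP formulation request into a computing task (illustrative fields only)."""
    return {
        "task_id": str(uuid.uuid4()),
        "plan_scope": mrp_request.get("plan_scope"),
        "planning_horizon": mrp_request.get("planning_horizon"),
        # Data types the MRP run needs; the transit server later derives one subtask per type.
        "required_data": ["master_production_schedule", "inventory_records",
                          "lead_times", "bill_of_materials"],
    }

def submit_to_transit_server(task: dict) -> None:
    """Hand the computing task to the transit server; no subtask analysis happens here."""
    requests.post(TRANSIT_SERVER_URL, json=task, timeout=10).raise_for_status()

if __name__ == "__main__":
    request = {"plan_scope": "plant-01", "planning_horizon": 12}  # hypothetical request
    submit_to_transit_server(build_computing_task(request))
```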
102. The transit server generates N subtasks according to the computing task.
Formulating the MRP requires various types of business data, including but not limited to the master production schedule, inventory records, lead times, and the bill of materials (BOM). In practice, further categories of data must be added to suit each enterprise's production characteristics so that they can also feed into the MRP formulation.
In the present application, the transit server configures N subtasks, one for each type of data the MRP process needs to obtain (for example, a master production schedule task, an inventory records task, a lead time task, a bill of materials task, and so on), where N is an integer greater than or equal to 1.
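A minimal sketch of how the transit server might derive these subtasks from the computing task is given below; the one-subtask-per-data-type mapping and the field names are assumptions used only to illustrate this step.

```python
import uuid

def generate_subtasks(computing_task: dict) -> list[dict]:
    """Derive one subtask per data type required by the MRP run (N = number of data types)."""
    return [
        {
            "subtask_id": str(uuid.uuid4()),
            "parent_task_id": computing_task["task_id"],
            "data_type": data_type,   # e.g. master_production_schedule, inventory_records
            "plan_scope": computing_task.get("plan_scope"),
        }
        for data_type in computing_task["required_data"]
    ]
```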
103. The transit server sends the N subtasks to N computing servers.
The transit server itself executes neither the computing task nor the individual subtasks. After configuring the N subtasks, it sends them to the N computing servers, and each computing server executes the subtask it receives.
For ease of understanding, refer to fig. 3, which is a flowchart of the transit server managing the computing servers. Each computing server that executes subtasks must register with the transit server in advance so that the transit server can manage it. Once registered, a computing server is in a waiting state, and the transit server dispatches subtasks to computing servers in the waiting state.
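The register-then-dispatch flow of fig. 3 could be modelled as in the sketch below; the registry structure, the two states, and the method names are assumptions made for illustration, not elements defined by this application.

```python
class ComputeServerRegistry:
    """Tracks computing servers registered with the transit server (fig. 3, simplified)."""

    WAITING, BUSY = "waiting", "busy"

    def __init__(self) -> None:
        self._servers: dict[str, str] = {}  # server address -> state

    def register(self, address: str) -> None:
        # A newly registered computing server starts in the waiting state.
        self._servers[address] = self.WAITING

    def dispatch(self, subtasks: list[dict]) -> dict[str, dict]:
        """Assign each subtask to a distinct computing server in the waiting state."""
        waiting = [addr for addr, state in self._servers.items() if state == self.WAITING]
        if len(waiting) < len(subtasks):
            raise RuntimeError("not enough waiting computing servers for all subtasks")
        assignment = {}
        for address, subtask in zip(waiting, subtasks):
            self._servers[address] = self.BUSY
            assignment[address] = subtask
        return assignment

    def release(self, address: str) -> None:
        # Called once a computing server has returned its task data.
        self._servers[address] = self.WAITING
```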
In practical applications, a message queue component may be configured in the transit server, as either a software module or a hardware module, to carry out the operations of the embodiments shown in fig. 2 or fig. 3, such as distributing subtasks, registering computing servers, and managing task data.
It should be understood that this embodiment of the application does not limit the number of computing servers registered with the transit server; in practice it may be greater than N. After the transit server has configured the N subtasks, if more than N computing servers are registered, N of them can be selected to receive the subtasks.
104. The transit server receives N pieces of task data from the N computing servers.
When the N computing servers execute their respective subtasks, each produces an execution result, that is, one piece of task data (for example, master production schedule data, inventory record data, lead time data, or bill of materials data). Executing the N subtasks therefore yields N pieces of task data in total, which can be applied to the formulation of the MRP. In this embodiment of the application, the N computing servers do not send their task data directly to the control server; instead, each sends its task data (N pieces in total) to the transit server.
105. The transit server sends the N pieces of task data to the control server, so that the control server calculates the MRP according to the N pieces of task data.
After receiving the N pieces of task data from the N computing servers, the transit server forwards them to the control server, which can then calculate the MRP from the received task data.
In this embodiment of the application, after the control server receives the MRP formulation request initiated by the user, it generates a corresponding computing task and sends it to the transit server. The transit server analyzes the received computing task to generate N subtasks and sends them to N computing servers, where N is an integer greater than or equal to 1. The N computing servers process the N subtasks, produce N pieces of task data, and send them to the transit server; the transit server forwards the N pieces of task data to the control server, which can then calculate the MRP from them. In this way, the transit server relays data between the computing servers and the control server, so the control server does not need to establish a direct communication connection with any computing server; this reduces the logic-code coupling between the control server and the computing servers, improves the efficiency of expanding the computing servers, and reduces code maintenance costs.
Further, an enterprise's MRP formulation requirements often change as its business develops. The types of task data required to formulate the MRP may change accordingly, and so may the subtasks that obtain them; for example, the number of subtasks may increase or decrease. When it does, the number of computing servers must be adjusted correspondingly so that there is one server for each subtask.
Specifically, the transit server receives an adjustment request for the N computing servers; the request may increase or decrease their number, and this is not limited here. After receiving the adjustment request, the transit server adjusts the N computing servers, for example by adding or removing servers, to obtain M computing servers, which can then execute subsequent subtasks.
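As a sketch of how such an adjustment might look on the transit server, assuming the pool is tracked as a simple address-to-state mapping (the addresses and the request format below are hypothetical):

```python
def apply_adjustment(servers: dict[str, str], adjustment: dict) -> dict[str, str]:
    """Grow or shrink the computing-server pool; the control server never sees this change."""
    updated = dict(servers)
    for address in adjustment.get("add", []):
        updated[address] = "waiting"   # newly registered servers join in the waiting state
    for address in adjustment.get("remove", []):
        updated.pop(address, None)     # retired servers simply leave the pool
    return updated

if __name__ == "__main__":
    pool = {"10.0.0.1:9000": "waiting", "10.0.0.2:9000": "busy"}              # N = 2
    pool = apply_adjustment(pool, {"add": ["10.0.0.3:9000"], "remove": []})   # M = 3
    print(pool)
```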
In this embodiment, when the computing servers that execute subtasks need to be adjusted or expanded, the corresponding adjustment request is sent directly to the transit server and no logic code in the control server has to be modified; the control server is unaware of the entire adjustment and expansion process, which improves the efficiency of expanding the computing servers.
In another aspect, the present application provides another method for making a material demand plan. Refer to fig. 4, which is a flowchart of another material demand plan making method according to an embodiment of the present application. The method is applied to a control server cluster that includes a first control server and a second control server, and the embodiment includes the following steps:
201. A first control server receives an MRP formulation request from a user.
In the conventional MRP process, a single control server acts as the central node that processes the computing tasks. When that sole central node fails (for example, goes down), the entire MRP formulation process can no longer proceed normally.
To solve this problem, in this embodiment a control server cluster containing multiple control servers serves as the central node for processing computing tasks. After the user initiates the MRP formulation request, a proxy server (e.g., Nginx) performs load balancing, that is, it distributes the load across the control servers according to the number of control servers in the cluster. The control server that will process the MRP formulation request is then selected according to the load information fed back by the proxy server.
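For illustration, the sketch below models the proxy's distribution step as simple round-robin selection over the cluster, which is also Nginx's default load-balancing policy; it is a conceptual stand-in written in Python rather than an actual proxy configuration, and the server addresses are hypothetical.

```python
import itertools

class RoundRobinProxy:
    """Conceptual stand-in for the proxy server (e.g. Nginx with its default round-robin
    policy) spreading MRP formulation requests over the control server cluster."""

    def __init__(self, control_servers: list[str]) -> None:
        self._cycle = itertools.cycle(control_servers)

    def next_control_server(self) -> str:
        # Each incoming formulation request goes to the next control server in turn.
        return next(self._cycle)

if __name__ == "__main__":
    proxy = RoundRobinProxy(["control-1:8080", "control-2:8080"])  # hypothetical addresses
    for i in range(4):
        print(i, "->", proxy.next_control_server())
```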
It should be understood that this embodiment of the application does not limit the number of control servers in the control server cluster. For ease of understanding, this embodiment takes a cluster containing the first control server and the second control server as an example; in practice the cluster may contain additional control servers, such as a third or a fourth control server, which is not limited here. Accordingly, in this embodiment the proxy server distributes the allocated MRP formulation requests to the first control server and the second control server.
202. The first control server generates a computing task according to the MRP formulation request.
In this embodiment, step 202 is similar to step 102 shown in fig. 2; refer to the description of step 102 for details, which are not repeated here.
203. The first control server sends the computing task to a target database.
In this embodiment, after the first control server generates the computing task, it may send the task to a target database for storage and backup. Specifically, the target database may be a database deployed outside the first control server, for example on a cloud service (e.g., Redis) or on another physical server, which is not limited here.
Once the computing task is stored in the target database, it remains there even if the first control server fails (for example, goes down).
204. If the first control server fails, the second control server acquires the computing task from the target database.
If the first control server fails, it can no longer execute its computing task. At this point the second control server, which belongs to the same control server cluster, can acquire from the target database the computing task the first control server stored there in advance, take that task over, and send it to the transit server. The task data subsequently fed back by the transit server is likewise processed by the second control server.
Further, once the transit server receives the computing task from the second control server, the operations of the embodiment shown in fig. 2 can be performed. For ease of understanding, refer to fig. 5, which is a flowchart of another material demand plan making method according to an embodiment of the present application. In this embodiment, after the user initiates the MRP formulation request, the proxy server performs load balancing for the control server cluster. The control server cluster generates a corresponding computing task according to the MRP formulation request and sends it to the transit server, and the transit server generates N subtasks from the computing task and distributes them to the N computing servers for execution.
In this embodiment, after the first control server generates the computing task, it saves the task to the target database. When the first control server fails, the second control server belonging to the same control server cluster can obtain the computing task from the target database and send it to the transit server for processing. In this way, the failure of the MRP formulation flow that would otherwise be caused by the failure of a single control server is avoided, and the reliability of the scheme is improved.
In order to better implement the above-mentioned aspects of the embodiments of the present application, the following also provides related apparatuses for implementing the above-mentioned aspects. Referring to fig. 6, fig. 6 is a schematic structural diagram of a transit server according to an embodiment of the present application, where the transit server includes:
a receiving unit 301, configured to receive a computing task from a control server, where the computing task is generated by the control server according to a material demand plan (MRP) formulation request initiated by a user;
a generating unit 302, configured to generate N subtasks according to the computing task, where N is an integer greater than or equal to 1;
a sending unit 303, configured to send the N subtasks to N computing servers;
the receiving unit 301 is further configured to receive N pieces of task data from the N computing servers;
the sending unit 303 is further configured to send the N pieces of task data to the control server, so that the control server calculates the MRP according to the N pieces of task data.
Optionally, on the basis of the embodiment corresponding to fig. 6, the transit server further includes an adjusting unit 304, and the receiving unit 301 is configured to receive an adjustment request for the N computing servers;
the adjusting unit 304 is configured to adjust the N computing servers according to the adjustment request to obtain M computing servers, where M is an integer greater than or equal to 1.
In this embodiment, the transit server may perform the operations of any one of the embodiments shown in fig. 2, fig. 3, or fig. 5, which will not be described herein again in detail.
In order to better implement the above-mentioned aspects of the embodiments of the present application, a related apparatus for implementing them is also provided below. Referring to fig. 7, fig. 7 is a schematic structural diagram of a first control server provided in an embodiment of the present application, where the first control server belongs to a control server cluster that includes the first control server and a second control server, and the first control server includes:
a receiving unit 401, configured to receive a material demand plan (MRP) formulation request from a user;
a generating unit 402, configured to generate a computing task according to the MRP formulation request;
a sending unit 403, configured to send the computing task to a target database, so that when the first control server fails, the second control server obtains the computing task from the target database and sends it to a transit server.
In this embodiment, the first control server may perform the operations of the embodiments shown in any one of fig. 2, fig. 4, or fig. 5, which will not be described herein again in detail.
The embodiment of the present application further provides a computer device, configured to perform the operations of any one of the embodiments shown in fig. 2 to 5. Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer apparatus 800 according to an embodiment of the present application. As shown, the computer device 800 may vary widely in configuration or performance, and may include one or more Central Processing Units (CPUs) 822 (e.g., one or more processors) and memory 832, one or more storage media 830 (e.g., one or more mass storage devices) storing applications 842 or data 844. Memory 832 and storage medium 830 may be, among other things, transient or persistent storage. The program stored in the storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations for the computer device. Still further, a central processor 822 may be provided in communication with the storage medium 830 for executing a series of instruction operations in the storage medium 830 on the computer device 800.
The computer device 800 may also include one or more power supplies 826, one or more wired or wireless network interface ports 850, one or more input/output interfaces 858, and/or one or more operating systems 841, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The steps performed in the above-described embodiment may be based on the structure of the computer apparatus shown in fig. 8.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a management apparatus for interactive video, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for making a material demand plan, characterized by comprising the following steps:
a transit server receives a computing task from a control server, wherein the computing task is generated by the control server according to a material demand plan making request initiated by a user;
the transit server generates N subtasks according to the computing task, wherein N is an integer greater than or equal to 1;
the transit server sends the N subtasks to N computing servers;
the transit server receives N pieces of task data from the N computing servers; and
the transit server sends the N pieces of task data to the control server, so that the control server calculates the material demand plan according to the N pieces of task data.
2. The method of claim 1, further comprising:
the transit server receives an adjustment request for the N computing servers; and
the transit server adjusts the N computing servers according to the adjustment request to obtain M computing servers, wherein M is an integer greater than or equal to 1.
3. A method for making a material demand plan, applied to a control server cluster, wherein the control server cluster comprises a first control server and a second control server, the method comprising the following steps:
the first control server receives a material demand plan making request from a user;
the first control server generates a computing task according to the material demand plan making request; and
the first control server sends the computing task to a target database, so that when the first control server fails, the second control server acquires the computing task from the target database and sends the computing task to a transit server.
4. A transit server, comprising:
a receiving unit, configured to receive a computing task from a control server, wherein the computing task is generated by the control server according to a material demand plan making request initiated by a user;
a generating unit, configured to generate N subtasks according to the computing task, wherein N is an integer greater than or equal to 1; and
a sending unit, configured to send the N subtasks to N computing servers;
wherein the receiving unit is further configured to receive N pieces of task data from the N computing servers; and
the sending unit is further configured to send the N pieces of task data to the control server, so that the control server calculates the material demand plan according to the N pieces of task data.
5. The transit server of claim 4, further comprising an adjusting unit, wherein
the receiving unit is configured to receive an adjustment request for the N computing servers; and
the adjusting unit is configured to adjust the N computing servers according to the adjustment request to obtain M computing servers, wherein M is an integer greater than or equal to 1.
6. A first control server, wherein the first control server belongs to a control server cluster comprising the first control server and a second control server, and the first control server comprises:
a receiving unit, configured to receive a material demand plan (MRP) formulation request from a user;
a generating unit, configured to generate a computing task according to the MRP formulation request; and
a sending unit, configured to send the computing task to a target database, so that when the first control server fails, the second control server acquires the computing task from the target database and sends the computing task to a transit server.
7. A material demand planning system, comprising the transit server according to either of claims 4 and 5 and the first control server according to claim 6.
8. A computer device, comprising a processor and a memory, wherein the memory is configured to store program code, and the processor is configured to execute, according to instructions in the program code, the method for making a material demand plan according to either of claims 1 and 2.
9. A computer device, comprising a processor and a memory, wherein the memory is configured to store program code, and the processor is configured to execute, according to instructions in the program code, the method for making a material demand plan according to claim 3.
10. A computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method for making a material demand plan according to either of claims 1 and 2, or the method for making a material demand plan according to claim 3.
CN202110674495.6A 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium Active CN113283803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674495.6A CN113283803B (en) 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110674495.6A CN113283803B (en) 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium

Publications (2)

Publication Number Publication Date
CN113283803A (en) 2021-08-20
CN113283803B CN113283803B (en) 2024-04-23

Family

ID=77284839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674495.6A Active CN113283803B (en) 2021-06-17 2021-06-17 Method for making material demand plan, related device and storage medium

Country Status (1)

Country Link
CN (1) CN113283803B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071003A (en) * 2023-01-28 2023-05-05 广州智造家网络科技有限公司 Material demand plan calculation method, device, electronic equipment and storage medium
CN117314354A (en) * 2023-10-07 2023-12-29 广州石伏软件科技有限公司 Cross-system collaboration method and system based on flow engine

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7716077B1 (en) * 1999-11-22 2010-05-11 Accenture Global Services Gmbh Scheduling and planning maintenance and service in a network-based supply chain environment
US6625616B1 (en) * 2000-07-05 2003-09-23 Paul Dragon Method and apparatus for material requirements planning
EP1639531A1 (en) * 2003-07-02 2006-03-29 Chin Kok Yap Method and system for automating inventory managment in a supply chain
GB0410150D0 (en) * 2004-05-07 2004-06-09 Ibm Methods,apparatus and computer programs for recovery from failures in a computing environment
KR101072739B1 (en) * 2007-12-14 2011-10-11 현대중공업 주식회사 System and method for managing materials requirement
CN101365213A (en) * 2008-09-24 2009-02-11 金蝶软件(中国)有限公司 Method, apparatus and system for remote data submission
CN101877712B (en) * 2009-04-29 2013-11-20 美商定谊科技公司 Data transmission-controlling method, server and terminal equipment
CN101719239A (en) * 2009-12-28 2010-06-02 金蝶软件(中国)有限公司 MRP data processing method and device and MRP system
KR20120076603A (en) * 2010-12-06 2012-07-09 현대중공업 주식회사 Apparatus and method for managing material requirement planning in vessel construction process
CN102307233A (en) * 2011-08-24 2012-01-04 无锡中科方德软件有限公司 Cloud computing method for cloud computing server
JP6085266B2 (en) * 2014-02-27 2017-02-22 日本電信電話株式会社 Server resource management device
CN105678484A (en) * 2014-11-18 2016-06-15 金蝶软件(中国)有限公司 MRP calculation process control method and system
CN105740293B (en) * 2014-12-12 2019-07-23 金蝶软件(中国)有限公司 Data export method and device
CN107959705B (en) * 2016-10-18 2021-08-20 阿里巴巴集团控股有限公司 Distribution method of streaming computing task and control server
US10635529B2 (en) * 2017-05-25 2020-04-28 Western Digital Technologies, Inc. Parity offload for multiple data storage devices
CN109815002A (en) * 2017-11-21 2019-05-28 中国电力科学研究院有限公司 A kind of distributed paralleling calculation platform and its method based on in-circuit emulation
CN109857592B (en) * 2019-01-04 2023-09-15 平安科技(深圳)有限公司 Data recovery control method, server and storage medium
CN111510493B (en) * 2020-04-15 2023-09-26 中国工商银行股份有限公司 Distributed data transmission method and device
CN111787066B (en) * 2020-06-06 2023-07-28 王科特 Internet of things data platform based on big data and AI
CN114444751A (en) * 2020-11-04 2022-05-06 顺丰科技有限公司 Material demand prediction method and device, computer equipment and storage medium
CN112381485A (en) * 2020-11-24 2021-02-19 金蝶软件(中国)有限公司 Material demand plan calculation method and related equipment
CN112905338B (en) * 2021-02-05 2024-02-09 中国工商银行股份有限公司 Automatic computing resource allocation method and device
CN116126935A (en) * 2022-12-16 2023-05-16 西安航天动力试验技术研究所 Distributed test data storage system and storage method
CN116192927A (en) * 2023-02-21 2023-05-30 金蝶软件(中国)有限公司 Data transmission method and device based on SaaS service, computer equipment and medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071003A (en) * 2023-01-28 2023-05-05 广州智造家网络科技有限公司 Material demand plan calculation method, device, electronic equipment and storage medium
CN116071003B (en) * 2023-01-28 2023-07-18 广州智造家网络科技有限公司 Material demand plan calculation method, device, electronic equipment and storage medium
CN117314354A (en) * 2023-10-07 2023-12-29 广州石伏软件科技有限公司 Cross-system collaboration method and system based on flow engine
CN117314354B (en) * 2023-10-07 2024-04-16 广州石伏软件科技有限公司 Cross-system collaboration method and system based on flow engine

Also Published As

Publication number Publication date
CN113283803B (en) 2024-04-23

Similar Documents

Publication Publication Date Title
JP5206674B2 (en) Virtual machine management apparatus, virtual machine management method, and virtual machine management program
US20190258506A1 (en) Systems and methods of host-aware resource management involving cluster-based resource pools
EP2535810B1 (en) System and method for performing distributed parallel processing tasks in a spot market
US8566835B2 (en) Dynamically resizing a virtual machine container
JP4621087B2 (en) System and method for operating load balancer for multiple instance applications
CN115328663B (en) Method, device, equipment and storage medium for scheduling resources based on PaaS platform
US20170180469A1 (en) Method and system for forming compute clusters using block chains
US20100131959A1 (en) Proactive application workload management
US10108458B2 (en) System and method for scheduling jobs in distributed datacenters
CN109814997B (en) Distributed autonomous balanced artificial intelligence task scheduling method and system
CN113283803A (en) Material demand plan making method, related device and storage medium
KR20170029263A (en) Apparatus and method for load balancing
US11237862B2 (en) Virtualized network function deployment
CN110928655A (en) Task processing method and device
CN111459641B (en) Method and device for task scheduling and task processing across machine room
Delamare et al. SpeQuloS: a QoS service for BoT applications using best effort distributed computing infrastructures
EP1880286A1 (en) Data processing network
Hu et al. Multi-objective container deployment on heterogeneous clusters
CN111160873A (en) Batch processing device and method based on distributed architecture
CN113687956A (en) Message routing distribution method and device, computer equipment and storage medium
CN111240848A (en) Task allocation processing method and system
US7925755B2 (en) Peer to peer resource negotiation and coordination to satisfy a service level objective
CN112565391A (en) Method, apparatus, device and medium for adjusting instances in an industrial internet platform
CN113312359B (en) Distributed job progress calculation method and device and storage medium
CN115421920A (en) Task management method and device for financial product, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant