CN116777182A - Task dispatch method for semiconductor wafer manufacturing - Google Patents


Info

Publication number
CN116777182A
Authority
CN
China
Prior art keywords
dispatching
data
task
database
message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311020974.1A
Other languages
Chinese (zh)
Other versions
CN116777182B (en)
Inventor
Zhang Lei (张磊)
Wang Zengchao (王增超)
Wu Zhao (吴钊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Keyyang Technology Co ltd
Original Assignee
Beijing Keyyang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Keyyang Technology Co ltd
Priority to CN202311020974.1A
Publication of CN116777182A
Application granted
Publication of CN116777182B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing

Abstract

The invention provides a dispatching method for semiconductor wafer manufacturing execution tasks. Data in the manufacturing execution system database are synchronized to the dispatching system database, ensuring data consistency and accuracy. Dispatch task messages are generated according to port state changes and distributed to one or more dispatch systems through message middleware and a load balancing algorithm, ensuring efficient and reliable message delivery. Within each dispatch system, dispatch task logic is applied according to the task message and the data information, optimizing dispatch efficiency and quality. A distributed in-memory database is used to improve data access speed and reliability. By combining data synchronization, message middleware, load balancing and an in-memory database, the invention achieves efficient, reliable and intelligent task dispatch for the lithography machine, which can effectively improve the efficiency and quality of lithography machine task dispatch, reduce manufacturing cost and improve production benefit.

Description

Task dispatch method for semiconductor wafer manufacturing
Technical Field
The invention relates to the technical field of semiconductor wafer manufacturing, in particular to a method for synchronizing task data of a wafer manufacturing execution system database to a dispatching system for task dispatching.
Background
Semiconductor wafers are circular silicon substrates used to fabricate integrated circuits, onto which hundreds of millions of tiny circuit elements can be patterned. Wafer fabrication is a complex multi-step process involving a variety of equipment, materials, parameters and procedures, and the entire manufacturing process must be effectively monitored, controlled and optimized to guarantee wafer quality and efficiency. The lithography machine is one of the core pieces of equipment in wafer fabrication: it exposes photoresist using light sources of different wavelengths (e.g., ultraviolet light or X-rays) to form the desired circuit pattern on the wafer. The performance and stability of the lithography machine directly affect the quality and throughput of the wafers. Reasonable task dispatch for the lithography machine, that is, transferring the appropriate wafers and masks to the appropriate machine for the lithography operation, is therefore an important step in the semiconductor wafer manufacturing process. To monitor, control and optimize the wafer manufacturing process, a manufacturing execution system (MES, Manufacturing Execution System) is typically employed throughout production to track the flow of product from work order to finished-product delivery and to feed back information for optimizing production activities. The MES collects, analyzes and feeds back various kinds of data in the production process in real time, such as orders, materials, equipment, personnel, quality and energy consumption, thereby monitoring, controlling and optimizing production. An MES comprises a data acquisition layer, a database layer, a general business layer, a data display layer, a business analysis layer and other functional parts. The general business layer is responsible for the core functions of the MES, such as resource management, procedure management, unit management, production tracking, performance analysis, document management, human resource management, equipment maintenance management, process management, quality management and data acquisition. The dispatch module is an important component of the general business layer and is mainly responsible for assigning the production plan and scheduling results to specific personnel, equipment and stations so that production proceeds smoothly. The main functions of the dispatch module include: generating a dispatch list, i.e., automatically generating a dispatch list from the production plan and scheduling results, including work order information, process information, material information, personnel information, equipment information and working-hour information; and issuing the dispatch list, i.e., delivering the dispatch list to the corresponding personnel, equipment and stations by various means (such as printing, e-mail or short message) and requiring the receiving party to confirm receipt.
Executing the dispatch list: the execution of the dispatch list is monitored, recording the actual start time, actual end time, actual completed quantity, actual defective quantity, actual materials consumed and actual time consumed, comparing them with the plan and analyzing the causes of any deviation. Adjusting the dispatch list: according to changes during production (such as urgent order insertion, equipment failure or personnel absence), the dispatch list is adjusted, resources are reallocated and the schedule is optimized to guarantee delivery time and quality.
The prior art discloses an automatic photomask dispatch control method and control system, publication number CN104166317A, published on November 26, 2014. That document describes that, owing to the specificity of the product (typically a wafer blank) in the semiconductor manufacturing process, the product must be sent into the machine holding the corresponding mask. Because the throughput of the corresponding photomask machine is limited, manually delivering the product into the machine may lead to capacity imbalance or process stagnation that affects production. The document proposes an automatic photomask dispatch control system comprising a photomask and machine information acquisition unit, a photomask distribution unit, an automatic dispatch control table generation unit and a machine information modification unit. According to paragraph 0056 of that document, the photomask and machine information acquisition unit obtains from the manufacturing execution system (MES) data such as the number of machines available for each photomask, the number of products using each photomask and the machines available to each lithography machine. Based on this data and on the matching of product photomask priority with machine working capacity, the system balances capacity by generating a matching table.
Publication number CN1287419C, entitled dispatch control method for a semiconductor manufacturing process and method for manufacturing semiconductor components, provides a method applied to a computer and a conveying system for determining which of a plurality of wafer products is conveyed to the manufacturing system. First, the computer obtains the relative throughput and waiting-time constraints of each wafer product. A priority is then calculated for each wafer product based on the relative throughput and waiting-time constraints. The computer then selects the wafer product with the highest priority and sends it to the manufacturing system; if multiple wafer products share the same priority, one of them is selected at random. The advantage of that invention is that the dispatching strategy can be adjusted dynamically according to the requirements and urgency of different wafer products, so as to improve the utilization and production efficiency of the manufacturing system.
Publication number CN101996359A, entitled dispatching method for a semiconductor manufacturing process, provides a method comprising: judging whether the work in process includes items whose maximum allowable waiting time is smaller than a set value; if so, and the manufacturing system is running a secondary product or is triggered, sending the item with the smallest maximum allowable waiting time into the manufacturing system; and if the work in process contains no item whose maximum allowable waiting time is smaller than the set value, sending the item with the largest maximum allowable waiting time into the manufacturing system. The advantage of that invention is that the dispatching strategy can be adjusted dynamically according to the maximum allowable waiting time of different products, so as to reduce product waiting time and improve production efficiency.
From the above documents it can be seen that, in the prior art, smooth acquisition of data from the MES is a precondition for the subsequent dispatch matching aimed at capacity balancing.
The applicant's research has found that the prior art has the following technical defects:
The first drawback is that the prior art reads data from the manufacturing execution system database by polling, which increases the burden on that database and reduces the working efficiency of the manufacturing execution system. Because a large amount of wafer-processing data is stored in the manufacturing execution system database, frequent read and write operations degrade database performance; the resulting data delay and increased load make the whole manufacturing execution system less efficient and less stable, affecting the entire production process.
The second drawback is that the prior art cannot emit an execution signal in real time in response to changes in port state data, which makes the task allocation of the dispatching system inaccurate. Port state data is a status flag indicating whether each port on a machine holds a wafer group. Changes in port state data reflect the movement of wafer groups onto and off the machine and are an important basis for triggering dispatch tasks. If changes in port state data are not monitored and compared in time, and corresponding execution signals are not generated accordingly, dispatch tasks may be delayed or duplicated, affecting the efficiency and quality of the wafer manufacturing process.
The third drawback is that, when multiple dispatch systems run in parallel, the processing capacity and load of each dispatch system are unbalanced, and dispatch cannot be balanced according to the load capacity of each system. Because of the variety of equipment, materials, parameters and processes involved in wafer manufacturing, multiple dispatch systems must work cooperatively to meet dispatch task requirements of different types and scales. Without an effective message allocation mechanism, some dispatch systems may be overloaded, degrading performance, while others sit idle and waste resources. A method is therefore needed that uses a suitable load balancing algorithm to distribute messages evenly across multiple dispatch systems according to the weight of each message and the load of each dispatch system.
The fourth drawback is that the prior art uses a conventional relational database as the dispatch system database, which cannot meet requirements such as high concurrency and high performance. Dispatching involves a large number of data read and write operations, and dispatch task requests of various types and scales must be responded to and processed quickly; a conventional relational database often cannot meet these requirements, and problems such as slow data access may arise. A method is therefore needed that uses an in-memory database as the dispatch system database to improve data access speed and reliability.
In order to improve the execution efficiency and stability of the dispatch system and ensure the processing efficiency and quality of the wafer groups, the applicant proposes a method that synchronizes task data from the wafer manufacturing execution system database to a database in the dispatch system, generates dispatch task messages and distributes the messages to one or more dispatch systems.
Disclosure of Invention
To solve the above technical problems, the invention provides a task dispatch method for semiconductor wafer manufacturing that uses data synchronization, message middleware, load balancing and an in-memory database to achieve efficient, reliable and intelligent task dispatch for the lithography machine.
A task dispatch method for semiconductor wafer manufacturing comprises:
a manufacturing execution system, which records the data of the lithography machine wafer processing process in the manufacturing execution system database, including at least the port state data within the machine data;
a data synchronization unit, used to read and process dispatch task data in the manufacturing execution system;
message middleware, used to store the dispatch task data held in the manufacturing execution system database and the dispatch task messages;
and a dispatch system, which applies the dispatch task data processed by the data synchronization unit and executes dispatch tasks according to the task dispatch method, the method comprising the following steps:
Step one: the data synchronization unit synchronizes dispatch task data from the manufacturing execution system database to the database in the dispatch system; the data synchronization unit synchronizes the data as follows:
configuring information, i.e., performing information configuration on the dispatch task data in the manufacturing execution system and generating a mapping file for all dispatch task data;
synchronizing data, i.e., synchronizing the dispatch task data of the manufacturing execution system database to the dispatch system database;
Step two: writing cache data, i.e., writing the port state data within the machine data into the cache of the data synchronization unit;
Step three: generating a dispatch task message, i.e., comparing the port state data written to the cache by the data synchronization unit in step two with the port state data already stored in the history cache; when the comparison shows that a port state has become empty, a message is generated and distributed to one or more dispatch systems;
Step four: the dispatch system executes the dispatch task logic according to the task message of step three.
Wherein:
manufacturing execution system database: a database for storing various data information in the semiconductor wafer manufacturing process, such as orders, materials, equipment, personnel, quality, energy consumption, etc.;
dispatch system database: a database for storing data information for dispatch task logic, such as wafer data, tool data, mask data, process time data, production limit data, and production index data;
a data synchronization unit: a unit for synchronizing the data obtained in the manufacturing execution system database to the dispatch system database; each data synchronization unit comprises one or more reading modules and one or more writing modules; the reading module is responsible for reading data information in a corresponding type or range from the manufacturing execution system database and converting the data information into a data format suitable for the dispatching system database; the writing module is in charge of writing the data information converted by the reading module into a dispatching system database and updating the corresponding data version and the corresponding time stamp;
cache: a memory space for temporarily storing data, characterized by high-speed reading and writing; in the cache, each writing module compares the currently written message with the historically written messages to determine whether a port state has become empty; if the comparison shows that a certain port has changed from holding a wafer group to holding no wafer group, the port state has changed and a dispatch task needs to be executed; the writing module then generates a dispatch task message according to the port state change and sends it to the message middleware;
message middleware: a software component for transferring messages between different applications, characterized by efficiency, reliability, asynchrony and decoupling; in the message middleware, a suitable load balancing algorithm is adopted, with different algorithm strategies selected according to different scenarios and requirements, so that task messages are distributed evenly to one or more dispatch systems;
dispatch system: a software component for executing the dispatch task logic, characterized by efficiency, reliability and intelligence; in each dispatch system, suitable dispatch task logic is applied according to the task message, the wafer data, the machine data, the photomask data, the processing time data, the production limit data and the production index data, and wafer groups meeting the dispatch conditions are assigned to machines meeting the dispatch conditions, so that overall dispatch efficiency and quality are optimized.
Further, the method for configuring information in step one is as follows:
step S1: listing all tables in the manufacturing execution system database;
step S2: selecting, according to the dispatch task data, one or more of the tables listed in step S1;
step S3: selecting one or more trigger types for the tables corresponding to the dispatch task data in step S2;
step S4: generating the corresponding mapping file;
step S5: repeating steps S2 to S4 until all information required by the dispatch task data has been configured.
The trigger types include insert triggers, update triggers and delete triggers, which respectively trigger the corresponding actions when data is inserted, updated or deleted in the manufacturing execution system database. The mapping file describes the correspondence and conversion rules between tables in the manufacturing execution system database and tables in the dispatch system database, and contains information such as source table name, target table name, source field name, target field name, field type and field length.
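For ease of understanding, the following is a minimal illustrative sketch of what a single mapping-file entry might contain, expressed as a Python data structure; the table and field names are hypothetical and not prescribed by the patent:

# Hypothetical mapping entry for synchronizing the machine table (names are illustrative).
machine_mapping = {
    "source_table": "machine",            # table in the manufacturing execution system database
    "target_table": "machine",            # table in the dispatch system database
    "triggers": ["INSERT", "UPDATE", "DELETE"],
    "fields": [
        {"source_field": "machine_id", "target_field": "machine_id",
         "field_type": "VARCHAR", "field_length": 32},
        {"source_field": "port_status", "target_field": "port_status",
         "field_type": "VARCHAR", "field_length": 8},
    ],
}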
Further, the method for synchronizing data in step one is as follows:
step S6: creating triggers in the tables selected in step S2 according to the mapping file generated in step S4;
step S7: creating, in the dispatch system database, one or more tables for storing the dispatch task data according to the mapping file generated in step S4;
step S8: the triggers write the dispatch task data of the manufacturing execution system database into a queue table in the manufacturing execution system;
step S9: a data synchronization program on the manufacturing execution system database side reads the data in the queue table of step S8 and writes it into the message middleware;
step S10: a data synchronization program on the dispatch system database side reads the dispatch task data from the message middleware of step S9 and writes it into the dispatch system database.
The queue table is a table in the manufacturing execution system database for temporarily storing dispatch task data, with first-in-first-out behavior; it contains fields such as message number, message content and message state, where the message number identifies the order of each message in the queue table, the message content carries the specific dispatch task data, and the message state indicates whether the message has been read or written.
The message generated in step three is distributed through the message middleware so that messages are distributed evenly to multiple dispatch task systems, as follows (a sketch of this distribution loop is given after the list):
step S11: the message middleware applies a load balancing algorithm according to the weight of each dispatch system and the load of each dispatch task system, and selects one or more suitable dispatch task systems;
step S12: the message middleware sends the message to the selected dispatch task systems and receives their feedback;
step S13: the message middleware updates the load status of the dispatch task systems according to the feedback and adjusts the parameters of the load balancing algorithm;
step S14: steps S11 to S13 are repeated until all messages have been distributed and processed.
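For ease of understanding, the following is a minimal illustrative sketch of the distribution loop of steps S11 to S14, assuming a pluggable selection strategy; the class and method names are assumptions, not part of the patent:

# Sketch of the S11-S14 distribution loop (names are illustrative assumptions).
class MessageDistributor:
    def __init__(self, dispatch_systems, strategy):
        self.systems = dispatch_systems      # back-end dispatch task systems
        self.strategy = strategy             # e.g. weighted polling, hash matching, least connections

    def distribute(self, messages):
        for message in messages:
            system = self.strategy.select(self.systems)   # S11: pick a suitable system
            feedback = system.send(message)               # S12: deliver and collect feedback
            system.update_load(feedback)                  # S13: update load bookkeeping
            self.strategy.adjust(system, feedback)        # S13: tune algorithm parameters
        # S14: loop ends once every message has been distributed and processed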
Preferably, the load balancing algorithm in step S13 is a weighted polling method, specifically comprising the following steps: a weight value is computed for each dispatch system from the number of threads in its request pool and the length of its waiting queue, representing its capacity for processing dispatch tasks; the larger the request-pool thread count and waiting-queue length, the larger the weight. The message middleware records, for each dispatch system, its weight value, the number of messages received and the number of messages processed. When a message is to be dispatched, a distribution weight is computed for each dispatch system, representing its priority for being assigned the message, where distribution weight = dispatch system weight / (number of messages received − number of messages processed); the higher the distribution weight, the higher the priority. The message middleware selects one or more suitable dispatch systems according to the distribution weight, sends the message to them and updates their received-message counts; it then updates the processed-message counts according to the feedback from the dispatch systems. These steps are repeated until all messages have been distributed and processed. The advantage of this dispatch algorithm is that different weight values can be assigned to each server according to the performance and load of the back-end servers, achieving fairer and more even distribution, preventing some servers from being overloaded or idle, and improving their efficiency and quality.
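For ease of understanding, a minimal illustrative sketch of the distribution-weight selection above; the max(backlog, 1) guard is an added assumption to avoid dividing by zero when the received and processed counts are equal:

# Weighted polling by distribution weight (illustrative sketch).
def select_by_distribution_weight(systems):
    """systems: list of dicts with 'weight', 'received' and 'processed' counters."""
    best, best_dw = None, float("-inf")
    for s in systems:
        backlog = s["received"] - s["processed"]
        dw = s["weight"] / max(backlog, 1)   # distribution weight; max(..., 1) avoids division by zero
        if dw > best_dw:
            best, best_dw = s, dw
    best["received"] += 1                    # the middleware updates the received-message count
    return best

# Example: three systems whose request-pool/queue-derived weights are 4, 3 and 2.
systems = [{"name": n, "weight": w, "received": 0, "processed": 0}
           for n, w in (("A", 4), ("B", 3), ("C", 2))]
chosen = select_by_distribution_weight(systems)   # picks "A" first, since its backlog-adjusted weight is largest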
Preferably, the load balancing algorithm in step S13 is a hash matching method, specifically:
each dispatch system is assigned a weight value representing its capacity for processing dispatch tasks, and when a new message arrives the following steps are executed (a sketch is given after this list):
search clockwise on the hash ring for the next dispatch system: add one to the current index and take the result modulo the number of dispatch systems to obtain the index of the next dispatch system, where the current index is the index of the dispatch system selected last time (its initial value is negative), the number of dispatch systems is the total number of back-end dispatch systems, and taking the modulo means taking the remainder;
if the current index equals zero, subtract the greatest common divisor of all dispatch system weights from the current weight; if the current weight is then less than or equal to zero, reset it to the maximum of all dispatch system weights; here the current weight refers to the weight value carried over from the dispatch system selected last time (initial value zero), and the greatest common divisor of all dispatch system weights is the largest positive integer that divides every weight value;
if the weight of the next dispatch system is greater than or equal to the current weight, select that dispatch system to process the message and update the current index and current weight, where the weight of the next dispatch system is the weight value of the dispatch system about to be selected;
if the weight of the next dispatch system is smaller than the current weight, skip that dispatch system and return to the first step to continue searching for a suitable dispatch system. The advantage of this dispatch algorithm is that when server nodes are added or removed, only a small number of messages need to be redistributed and most messages can still hit their original server nodes, reducing the cost of cache invalidation and data migration.
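For ease of understanding, the following is a minimal illustrative sketch of the ring-based weighted selection steps above in their classic form (current index starting at −1 and current weight at 0); the class and variable names are assumptions, not part of the patent:

from math import gcd
from functools import reduce

class WeightedRingSelector:
    """Illustrative sketch of the weighted selection loop described above."""
    def __init__(self, weights):
        self.weights = weights                # weight per back-end dispatch system
        self.n = len(weights)
        self.gcd = reduce(gcd, weights)       # greatest common divisor of all weights
        self.max_w = max(weights)
        self.index = -1                       # index of the dispatch system selected last time
        self.current_weight = 0               # weight level carried between selections

    def select(self):
        while True:
            self.index = (self.index + 1) % self.n          # next position on the ring
            if self.index == 0:
                self.current_weight -= self.gcd
                if self.current_weight <= 0:
                    self.current_weight = self.max_w        # reset to the maximum weight
            if self.weights[self.index] >= self.current_weight:
                return self.index                           # this dispatch system handles the message

selector = WeightedRingSelector([4, 3, 2])
order = [selector.select() for _ in range(9)]   # nine selections: indices 0,0,1,0,1,2,0,1,2, i.e. 4:3:2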
Preferably, the load balancing algorithm in step S13 is a minimum connection method, specifically:
the message middleware maintains a current-active-connections variable and a minimum-active-connections variable;
when a new message arrives, the following steps are performed (a sketch is given after this list):
all dispatch task systems are traversed, the number of active connections of each is obtained and compared with the minimum number of active connections,
where the minimum number of active connections is the smallest number of active connections among all dispatch task systems; the load balancer monitors the active connections of each dispatch task system in real time and compares them against this minimum;
if the number of active connections of a dispatch task system is smaller than or equal to the minimum number of active connections, that dispatch task system is added to the candidate list and the minimum is updated to its number of active connections,
where the candidate list is the set of dispatch task systems that may be chosen to process the message; if a system's active connections are smaller than the minimum, its load is lower, so it may be chosen to process the message and is added to the candidate list while the minimum is updated to its number of active connections, and if a system's active connections equal the minimum it is likewise added to the candidate list;
if the number of active connections of a dispatch task system is larger than the minimum number of active connections, that dispatch task system is skipped;
one dispatch task system is then chosen at random from the candidate list to process the message, and the current number of active connections is updated to the sum of the active connections of all dispatch task systems in the candidate list. The advantage of this algorithm is that requests can be distributed dynamically according to the load of the cluster nodes: nodes with good performance, fast processing and fewer backlogged requests receive more requests, which prevents a node from going down or responding too slowly because it has been given more requests than it can sustain.
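For ease of understanding, a minimal illustrative sketch of the candidate-list selection above; the dictionary layout and the choice to restart the candidate list when a strictly lower count is found are assumptions about the described steps:

import random

# Least-connections selection with a candidate list (illustrative sketch).
def select_least_connections(systems):
    """systems: list of dicts with an 'active' count of active connections."""
    minimum = float("inf")
    candidates = []
    for s in systems:                      # traverse all dispatch task systems
        if s["active"] < minimum:          # strictly lower load: restart the candidate list
            minimum = s["active"]
            candidates = [s]
        elif s["active"] == minimum:       # equal load: also a candidate
            candidates.append(s)
        # systems with more active connections than the minimum are skipped
    chosen = random.choice(candidates)     # pick one candidate at random to process the message
    chosen["active"] += 1                  # assumed bookkeeping: the chosen system gains a connection
    return chosen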
In each dispatch system, wafer groups meeting the dispatch conditions are assigned to machines meeting the dispatch conditions, so that overall dispatch efficiency and quality are optimal or near-optimal.
Preferably, the in-memory database is a distributed in-memory database, i.e., the dispatch task data is stored across multiple memory nodes, and the dispatch task data is mapped to memory nodes through a consistent hashing algorithm,
wherein
the consistent hashing algorithm includes the following steps (a sketch is given after this list):
all memory nodes and all dispatch task data are mapped onto a hash ring, i.e., a hash value is calculated for each memory node and each piece of dispatch task data and used as a point on the ring;
for each piece of dispatch task data, the nearest memory node clockwise on the hash ring is found and the data is stored in that node;
for each memory node, a number of the nearest memory nodes clockwise on the hash ring are found and used as its backup nodes, so that data can be recovered from the backup nodes if the node fails. The method spreads dispatch task data across multiple memory nodes, and the hash ring determines on which node each piece of data is stored and which nodes back up each node. The advantage is that the overhead of data migration and cache invalidation is reduced and the efficiency and reliability of data access are improved, while memory and disk costs are also reduced.
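For ease of understanding, a minimal illustrative sketch of the consistent hashing placement above; the hash function, node names and data key are assumptions:

import bisect
import hashlib

def ring_hash(key: str) -> int:
    """Map a key to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, backups=1):
        self.backups = backups
        self.ring = sorted((ring_hash(n), n) for n in nodes)   # memory nodes placed on the ring
        self.points = [p for p, _ in self.ring]

    def locate(self, data_key):
        """Return the primary node and its backup nodes for a piece of dispatch task data."""
        i = bisect.bisect(self.points, ring_hash(data_key)) % len(self.ring)  # first node clockwise
        primary = self.ring[i][1]
        backup = [self.ring[(i + k) % len(self.ring)][1] for k in range(1, self.backups + 1)]
        return primary, backup

ring = ConsistentHashRing(["node-1", "node-2", "node-3"], backups=1)
primary, backup = ring.locate("wafer-group-42")   # hypothetical dispatch task data key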
Preferably, the in-memory database is a distributed in-memory database comprising a primary node and several standby nodes. The primary node and the standby nodes synchronize data with one another, guaranteeing data consistency across all data nodes, so the dispatch system can query the complete data set from any node. The data synchronization unit writes dispatch task data to the primary node, the primary node synchronizes the task data to the standby nodes through a data synchronization mechanism, and the dispatch system reads data from the standby nodes. The data synchronization mechanism is as follows: when data is written to a data node, the data change of that node is recorded in a redo log file; a sender program on that node synchronizes the redo log to the other data nodes in real time, and after receiving the redo log the receiver programs of the other nodes apply the data changes in the log to themselves and record them in their own redo logs. The client driver is configured with the database node information, specifying one or more nodes to connect to; the data synchronization unit is configured to write through the primary node, and the dispatch system reads through the standby nodes. When the primary node goes down or cannot work normally, the standby nodes detect the failure through heartbeat detection or client notification and elect a new primary node through a distributed consensus protocol. Dispatch task data is thus stored synchronously on a primary node and several standby nodes, and read-write separation together with primary-standby high availability guarantees data consistency and availability. The advantage is that the performance of data writing and the concurrency of data reading are improved, and when the primary node fails the system can switch quickly to a standby node and continue providing service.
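For ease of understanding, a minimal illustrative sketch of the read-write separation above, in which writes go through the primary node and reads go through the standby nodes; the client class and the put/get node interface are hypothetical, not an actual database driver API:

# Hypothetical client-side configuration for primary/standby read-write separation.
class MemoryDbClient:
    def __init__(self, primary, standbys):
        self.primary = primary         # node that accepts writes from the data synchronization unit
        self.standbys = standbys       # nodes the dispatch system reads from

    def write(self, key, value):
        return self.primary.put(key, value)     # redo-log shipping to the standbys happens on the server side

    def read(self, key):
        for node in self.standbys:              # fall back across standby nodes if one is unavailable
            try:
                return node.get(key)
            except ConnectionError:
                continue
        return self.primary.get(key)            # last resort: read from the primary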
The task dispatch method for semiconductor wafer manufacturing has the following beneficial effects:
The efficiency and stability of the manufacturing execution system are improved. By synchronizing the dispatch task data from the manufacturing execution system to the dispatch system, the access pressure on the manufacturing execution system database is reduced, risks such as database performance degradation and increased data delay are avoided, and the working efficiency and stability of the manufacturing execution system are improved.
Dispatch task messages are generated in real time according to port state changes. During data synchronization, the messages used to judge whether a dispatch task must be executed are written into the cache and compared with historical data to determine whether a port state has become empty, and a dispatch task message is generated accordingly; task messages are thus generated in real time as port states change, improving the timeliness and accuracy of task allocation in the dispatch system.
Balanced dispatch according to the load capacity of the dispatch systems is achieved. Even when dispatch messages are generated rapidly, the load balancing algorithm of the message middleware sends them to the dispatch systems in a balanced manner for execution, achieving dispatch balanced by the load capacity of each dispatch system and improving the cooperative efficiency and resource utilization of multiple dispatch systems.
By storing dispatch task data across multiple in-memory database nodes, the cost of data migration and cache invalidation is reduced, the efficiency and reliability of data access are improved, write performance and read concurrency are improved, and data consistency and availability are guaranteed; efficient dispatch based on distributed in-memory data storage nodes is realized, improving the efficiency and stability of the dispatch system. Dispatch is balanced according to the load capacity of the dispatch systems, improving the cooperative efficiency and resource utilization of multiple dispatch systems running in parallel.
The method synchronizes the data in the manufacturing execution system database to the dispatch system database, ensuring data consistency and accuracy; it generates dispatch task messages according to port state changes and distributes them to one or more dispatch systems through the message middleware and the load balancing algorithm, ensuring efficient and reliable message delivery; within each dispatch system, dispatch task logic is applied according to the task message and the data information, optimizing dispatch efficiency and quality; and a distributed in-memory database is used to improve data access speed and reliability. By combining data synchronization, message middleware, load balancing and an in-memory database, the invention achieves efficient, reliable and intelligent task dispatch for the lithography machine, effectively improving the efficiency and quality of lithography machine task dispatch, reducing manufacturing cost and improving production benefit.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flow chart of a method for performing task dispatch in semiconductor wafer fabrication in accordance with the present invention;
FIG. 2 is a system configuration diagram of a method for performing task dispatch in semiconductor wafer fabrication in accordance with the present invention;
fig. 3 is a logic diagram of task message generation and dispatch in the semiconductor wafer fabrication execution task dispatch method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
As shown in fig. 1 and 2, the present invention provides a method for dispatching tasks in semiconductor wafer manufacturing, which includes four parts, namely a manufacturing execution system 100, a data synchronization unit 200, a message middleware 300 and a dispatching system 400.
The manufacturing execution system 100 refers to a system for managing various equipment, materials, parameters, and processes in a wafer manufacturing process, wherein various data information related to the wafer processing process is stored in a database, including port status data in machine data. The port status data refers to a status identifier indicating whether each port on the machine has a wafer set. The change of the port state data reflects the in-out condition of the wafer group on the machine, and is an important basis for triggering the dispatching task.
The data synchronization unit 200 is a unit for reading and processing dispatch task data from the manufacturing execution system database, and covers three steps: configuring information, synchronizing data and writing cache data. Configuring information means performing information configuration on the dispatch task data stored in the manufacturing execution system database and generating a mapping file for all the dispatch task data; synchronizing data means synchronizing the dispatch task data of the manufacturing execution system database to the dispatch system database; writing cache data means writing the port state data within the machine data into the cache of the data synchronization unit 200, which is performed at the same time as the data synchronization. Generating a dispatch task message means comparing the port state data newly written to the cache with the port state data stored in the history cache and generating a message when the comparison shows that a port state has become empty.
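For ease of understanding, a minimal illustrative sketch of the cache comparison that drives message generation, in which a dispatch task message is emitted when a port has just become empty; the dictionary layout and message fields are assumptions:

# Compare newly written port states against the history cache (illustrative sketch).
def detect_dispatch_tasks(current_ports, history_cache):
    """current_ports / history_cache: dicts mapping (machine_id, port_id) -> 'EMPTY' or 'OCCUPIED'."""
    messages = []
    for port, state in current_ports.items():
        previous = history_cache.get(port)
        if state == "EMPTY" and previous != "EMPTY":
            # the port has just released its wafer group, so a dispatch task is required
            machine_id, port_id = port
            messages.append({"machine_id": machine_id, "port_id": port_id, "event": "PORT_EMPTY"})
        history_cache[port] = state       # update the history cache with the latest state
    return messages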
Message middleware 300 is used to store dispatch task data stored in a manufacturing execution system database and distribute it to one or more dispatch systems 400, and includes two steps, namely, storing dispatch task data and message distribution. Storing dispatch task data is writing the data read by data synchronization unit 200 into message middleware. Message distribution refers to distributing the generated message to one or more dispatch systems 400.
The dispatch system 400 is used for distributing wafer groups meeting dispatch conditions to each machine meeting dispatch conditions by adopting proper dispatch task logic according to task information and various data information, and comprises two steps of dispatch task logic decomposition and parallel processing. The dispatch task logic decomposition refers to screening out wafer groups needing dispatch according to dispatch task data in a dispatch system database; the parallel processing refers to decomposing the dispatch task logic into a plurality of dispatch task data after receiving the task message, and distributing each dispatch task data to a plurality of threads for parallel processing, so that the execution speed and the execution efficiency of dispatch rules are improved.
Step one, data synchronization
Specifically, the present invention can realize data synchronization in the following manner:
firstly, in the manufacturing execution system 100, a data synchronization program is set, which is used for reading dispatch task data from a database of the manufacturing execution system 100 and writing the dispatch task data into the message middleware 300;
secondly, in the dispatching system 400, a data synchronization program is set, which is used for reading the dispatching task data from the message middleware 300 and writing the dispatching task data into a database of the dispatching system 400;
Message middleware 300 is a software component for transferring messages between different applications to implement asynchronous communication and decoupling. Any commonly used message middleware, such as RabbitMQ, Kafka, ActiveMQ, etc., may be employed in the present invention.
In the present invention, in order to ensure the correctness and integrity of data synchronization, it is necessary to perform information configuration on the dispatch task data stored in the database in the manufacturing execution system 100 and generate a mapping file regarding all the dispatch task data. Specifically, the present invention can realize information configuration in the following manner:
first, in the data synchronization unit 200, all tables of the manufacturing execution system database are listed;
secondly, in the data synchronization unit 200, one or more of the tables listed in the previous step, together with columns within them, are selected according to the dispatch task data;
again, in the data synchronization unit 200, one or more trigger types are selected for the tables selected in the previous step according to the dispatch task data;
finally, in the data synchronization unit 200, a corresponding mapping file is generated;
the trigger is a special stored procedure, and is automatically executed when a certain table is subjected to an insert, update or delete operation. Any of the commonly used trigger types may be used in the present invention, such as INSERT, UPDATE, DELETE, etc.
In the present invention, in order to ensure timeliness and efficiency of data synchronization, triggers need to be established in the manufacturing execution system 100. Specifically, the present invention may implement the creation of a trigger in the following manner:
first, in the manufacturing execution system 100, a trigger is created in the table selected in the previous step from the map file generated in the previous step;
secondly, in the dispatch system 400, one or more tables for storing dispatch task data are created according to the mapping file generated in the previous step;
when the trigger and the table are established between the manufacturing execution system 100 and the dispatch system 400, the attributes such as the field name, the field type, the field length, etc. between the trigger and the table can be kept consistent.
In the present invention, in order to ensure reliability and security of data synchronization, it is necessary to establish a queue table in the manufacturing execution system 100 and transfer data through the queue table. In particular, the present invention may implement the establishment and transfer of a queue table in the following manner:
first, in the manufacturing execution system 100, the trigger writes dispatch task data in the manufacturing execution system 100 database into a queue table in the manufacturing execution system 100;
secondly, the data synchronization program of the data synchronization unit 200 reads and writes the data in the previous queue table into the message middleware 300;
Thirdly, the data synchronization program of the data synchronization unit 200 reads and writes the data in the message middleware 300 of the previous step into the database of the dispatch system 400;
the queue table is a special table for storing data to be transferred, and implements the first-in first-out principle. Any of the commonly used queue tables may be used in the present invention.
Step two: writing cache data, namely synchronously writing port state data in machine data into a cache of the data synchronization unit 200;
as shown in fig. 3, specifically, the present invention may implement the cache data writing in the following manner:
first, in the manufacturing execution system 100, the trigger writes port state data in the machine data in the database of the manufacturing execution system 100 into a queue table in the manufacturing execution system 100;
next, in the data synchronization unit 200, the data synchronization program reads and writes port status data in the previous queue table into the message middleware 300;
again, in the data synchronization unit 200, the data synchronization program reads and writes the port status data in the message middleware 300 of the previous step into the database of the dispatch system 400;
finally, the data synchronization unit 200 writes the port status data in the database of the dispatching system 400 of the previous step into the cache;
The cache is a high-speed storage for temporarily storing frequently accessed or recently accessed data, so that the data access speed and efficiency are improved. Any common caching technique may be employed in the present invention.
Step three: generating a dispatch task message, comparing the port state data of the buffer memory written in the step two by the data synchronization unit 200 with the port state data in the history buffer memory which is already stored, generating a message when the comparison result is that the port state data is empty, and distributing the message to one or more dispatch systems 400;
specifically, the invention can realize the generation and distribution of the dispatch task message in the following way:
firstly, in the data synchronization unit 200, a message generating program is set for reading port state data from the cache and comparing the port state data with port state data in the history cache;
secondly, in the data synchronization unit 200, when the comparison result indicates that the machine port status data is empty, which means that the machine needs to perform a dispatching task, a message containing the machine information is generated and written into the message middleware 300;
again, in message middleware 300, a message distribution program is set and distributed to one or more dispatch systems 400 according to a load balancing algorithm;
The load balancing algorithm is an algorithm for distributing requests among a plurality of dispatch systems 400, so as to realize load balancing and high availability. The invention can adopt any common load balancing algorithm, such as a weighted polling method, a minimum connection method and the like.
Step four: and (3) executing dispatch task logic by the dispatch system 400 according to the task message in the step three.
Specifically, the present invention may implement dispatch task logic in the following manner:
first, in each dispatching system 400 that receives the task message, a wafer set that needs to be dispatched is screened out according to the machine information in the task message.
In the present invention, in order to increase the speed and efficiency of execution of the dispatch task logic, the dispatch task logic may be executed concurrently. Specifically, the present invention may implement concurrent execution in the following manner (a sketch follows these steps):
firstly, a thread pool is set in each dispatching system 400 receiving task information and is used for managing operations such as creation, destruction, scheduling and the like of threads;
secondly, in each dispatching system 400 receiving the task message, after receiving the task message, decomposing the dispatching task logic into a plurality of dispatching task data, and distributing each dispatching task data to one or a plurality of idle threads;
Again, in each dispatch system 400 that receives task messages, the dispatch task data assigned to them is executed and the results returned to the main thread;
finally, in each dispatch system 400 that receives the task message, the results of the respective dispatch task data are summarized in the main thread, and dispatch results are generated.
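For ease of understanding, a minimal illustrative sketch of the concurrent execution described above, using a thread pool to process the decomposed dispatch task data and summarize the results in the main thread; the callables decompose, process_one and summarize are hypothetical placeholders supplied by the dispatch system:

from concurrent.futures import ThreadPoolExecutor

# Decompose the dispatch task logic and process the pieces in parallel (illustrative sketch).
def run_dispatch_task(task_message, decompose, process_one, summarize, max_workers=4):
    """decompose / process_one / summarize are hypothetical callables supplied by the dispatch system."""
    task_data_items = decompose(task_message)                   # split the logic into dispatch task data
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(process_one, task_data_items))  # each item is handled by an idle thread
    return summarize(results)                                   # the main thread aggregates the results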
As shown in the drawings, configuring information in this embodiment means performing information configuration on the dispatch task data stored in the manufacturing execution system database and generating a mapping file for all the dispatch task data. The method comprises the following steps:
step S1: listing all tables in the manufacturing execution system database, which includes a machine table, a wafer group table, a photomask table, a process table and the like;
step S2: selecting one or more of the tables listed in step S1 according to the dispatch task data; for example, if the dispatch task data includes machine data, wafer group data, mask data and process data, the machine table, wafer group table, mask table and process table are selected;
step S3: selecting one or more trigger types for the tables selected in step S2; for example, if the dispatch task data requires real-time synchronization, an insert trigger, an update trigger and a delete trigger are selected;
step S4: generating the corresponding mapping file. One or more SQL statements for creating triggers are generated from the selected tables and trigger types, e.g., the following SQL statements are generated:
CREATE TRIGGER insert_machine AFTER INSERT ON machine FOR EACH ROW BEGIN INSERT INTO queue (table_name, operation, data) VALUES ('machine', 'insert', NEW.machine_id); END;
CREATE TRIGGER update_machine AFTER UPDATE ON machine FOR EACH ROW BEGIN INSERT INTO queue (table_name, operation, data) VALUES ('machine', 'update', NEW.machine_id); END;
CREATE TRIGGER delete_machine AFTER DELETE ON machine FOR EACH ROW BEGIN INSERT INTO queue (table_name, operation, data) VALUES ('machine', 'delete', OLD.machine_id); END;
Here machine is the name of the table, queue is the name of the queue table, machine_id is the primary key field of the table, NEW represents the record after an insert or update, and OLD represents the record before an update or delete.
Step S5: repeating steps S2 to S4 until all information required by the dispatch task data has been configured, that is, performing similar operations on the wafer group table, the photomask table and the process table.
In this embodiment, synchronizing data means synchronizing the dispatch task data of the manufacturing execution system database to the dispatch system database. The method comprises the following steps:
step S6: creating triggers in the tables selected in step S2 according to the mapping file generated in step S4, that is, creating the corresponding insert, update and delete triggers in the machine table, wafer group table, photomask table, process table and other tables of the manufacturing execution system database according to the SQL statements above;
step S7: creating, in the dispatch system database, one or more tables for storing the dispatch task data according to the mapping file generated in step S4, that is, creating tables and fields in the dispatch system database corresponding to those referenced in the mapping file, for example a machine table, a wafer group table, a photomask table and a process table;
step S8: the triggers write the dispatch task data of the manufacturing execution system database into a queue table in the manufacturing execution system, that is, when an insert, update or delete operation is performed on the machine table, wafer group table, photomask table, process table or other tables of the manufacturing execution system database, the triggers automatically write the related dispatch task data into the queue table, where the queue table temporarily stores the data to be synchronized and contains three fields: table_name, operation and data, representing the table name, the operation type and the primary key value involved in the operation;
step S9: the data synchronization unit 200 reads the data in the queue table of step S8 and writes it into the message middleware through its data synchronization program, that is, a background program running on the manufacturing execution system database side reads data from the queue table in real time, converts it into a message format and sends the message to the message middleware over a network connection;
step S10: the data synchronization unit 200 reads the dispatch task data from the message middleware of step S9 and writes it into the dispatch system database through a synchronization program, that is, a background program running on the dispatch system database side receives messages from the message middleware in real time, converts them into SQL statements and executes them in the dispatch system database, thereby achieving data synchronization. A sketch of these two background programs follows.
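For ease of understanding, a minimal illustrative sketch of the two background programs described above: one on the manufacturing execution system side that drains the queue table into the message middleware, and one on the dispatch system side that consumes the middleware and applies the changes to the dispatch system database. The connection objects, the middleware publish/consume interface and the SQL dialect are assumptions, not the actual programs of the patent:

import json
import time

# Manufacturing-execution-system side: drain the queue table into the message middleware.
def mes_sync_loop(mes_conn, middleware, poll_interval=1.0):
    while True:
        rows = mes_conn.execute(
            "SELECT id, table_name, operation, data FROM queue WHERE state = 'NEW' ORDER BY id"
        ).fetchall()
        for msg_id, table_name, operation, key in rows:
            middleware.publish("dispatch_sync",
                               json.dumps({"table": table_name, "operation": operation, "key": key}))
            mes_conn.execute("UPDATE queue SET state = 'SENT' WHERE id = ?", (msg_id,))
        mes_conn.commit()
        time.sleep(poll_interval)

# Dispatch-system side: consume the middleware and apply the changes to the dispatch database.
def dispatch_sync_loop(middleware, dispatch_conn, apply_change):
    for raw in middleware.consume("dispatch_sync"):    # assumed blocking iterator over messages
        change = json.loads(raw)
        apply_change(dispatch_conn, change)            # translate the change into SQL and execute it
        dispatch_conn.commit()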
In this embodiment, distributing messages means distributing the generated messages through the message middleware so that they are spread evenly across a plurality of dispatch task systems. The method comprises the following steps:
Step S11: the message middleware selects one or more suitable dispatch task systems using a load balancing algorithm, according to the weight of each dispatch system and the load condition of each dispatch task system. The weight is a value representing the processing capacity of a dispatch system, the load condition is a value representing the number of requests the dispatch system is currently handling, and the load balancing algorithm dynamically selects a suitable dispatch system from the weights and load conditions, for example a weighted round robin method or a least connections method.
Step S12: the message middleware sends the message to the selected dispatch task system and receives its feedback, where the feedback is a signal indicating whether the dispatch task system processed the message successfully.
Step S13: the message middleware updates the load condition of the dispatch task system according to the feedback and adjusts the parameters of the load balancing algorithm; updating the load condition means increasing or decreasing the active connection count of the corresponding dispatch task system according to the feedback, and adjusting the parameters means modifying the weight value or the current index of the corresponding dispatch task system according to the feedback.
Step S14: repeat steps S11 to S13 until all messages have been distributed and processed.
The load balancing algorithm in step S13 is a weighted round robin (weighted polling) method: each dispatch system is assigned a weight value that represents its capacity to process dispatch tasks. Suppose there are three back-end dispatch task systems A, B and C with weight values 4, 3 and 2; when new messages arrive, the weighted round robin method distributes them as follows:
the first time: adding one to the current index, and taking a model of 3 to obtain 0; subtracting 1 (the maximum positive integer which can be divided by all weight values in the weight values of the dispatching task system) from the current weight if the current index is equal to 0 to obtain-1; if the current weight is less than or equal to 0, resetting the current weight to 9 (the maximum value in the weight values of all dispatching task systems); if the weight 4 of the next dispatching task system A (index is 0) is greater than or equal to the current weight-1, selecting the dispatching task system A to process the message, and updating the current index and the current weight to be 0 and 4; second time: adding one to the current index, and taking a model from 3 to obtain 1; if the weight 3 of the next back-end dispatching task system B (index 1) is smaller than the current weight 4, skipping the back-end dispatching task system B; returning to the first step to continuously find a proper back-end dispatching task system; third time: adding one to the current index, and taking a model from the 3 to obtain 2; if the weight 2 of the next back-end dispatching task system C (index is 2) is smaller than the current weight 4, skipping the back-end dispatching task system C; returning to the first step to continuously find a proper back-end dispatching task system; fourth time: adding one to the current index, and taking a model of 3 to obtain 0; if the weight 4 of the next back-end dispatching task system A (index is 0) is greater than or equal to the current weight 4, selecting the back-end dispatching task system A to process the message, and updating the current index and the current weight to be 0 and 4; fifth time: adding one to the current index, and taking a model from 3 to obtain 1; if the weight 3 of the next back-end dispatching task system B (index 1) is greater than or equal to the current weight 4, selecting the back-end dispatching task system B to process the message, and updating the current index and the current weight to be 1 and 3; sixth time: adding one to the current index, and taking a model from the 3 to obtain 2; if the weight 2 of the next back-end dispatching task system C (index is 2) is more than or equal to the current weight 3, selecting the back-end dispatching task system C to process the message, and updating the current index and the current weight to be 2 and 2; and so on until all messages have been assigned and processed.
Step four, the dispatch task logic decomposition, is implemented as follows. The dispatch method for tasks executed in semiconductor wafer manufacturing provided by the invention includes a dispatch task logic decomposition method which, according to the task information and the various data, assigns wafer groups that satisfy the dispatch conditions to each machine that satisfies the dispatch conditions using suitable dispatch task logic. The method comprises the following steps:
Step S15: screen out the machines that need dispatching according to the dispatch task data in the dispatch system database. The port state data is a state flag indicating whether each port on a machine holds a wafer group: if the port state data is empty, the port has no wafer group and needs dispatching; if the port state data is not empty, the port already holds a wafer group and no dispatching is needed.
Step S16: calculate the priority between the current machine and all wafer groups according to the machine table data. The machine table data describes the performance, state, position and usage of the current machine, and the priority is a value representing how suitable it is to dispatch each wafer group to the current machine; it can be calculated from different objective functions, such as minimizing total processing time, maximizing resource utilization or minimizing energy consumption.
Step S17: assign a wafer group to the current machine according to the priority, that is, select an optimal or near-optimal wafer group in descending order of priority and lock it to prevent it from being assigned twice.
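Below is a minimal sketch of steps S15 to S17 under simplifying assumptions: machines and wafer groups are reduced to small records, the priority function is a stand-in objective (shortest estimated processing time), and locking is an in-memory flag; none of the field names come from the patent.

from dataclasses import dataclass

@dataclass
class Machine:
    machine_id: str
    port_state: str = ""     # empty means the port holds no wafer group
    speed: float = 1.0       # illustrative performance attribute

@dataclass
class WaferGroup:
    group_id: str
    workload: float          # illustrative amount of work
    locked: bool = False     # locked once assigned, to prevent double dispatch

def priority(machine: Machine, group: WaferGroup) -> float:
    # Stand-in objective: prefer the assignment with the shortest processing time,
    # so a higher priority value corresponds to a shorter estimated time.
    return -group.workload / machine.speed

def dispatch(machines: list[Machine], groups: list[WaferGroup]) -> dict[str, str]:
    assignment: dict[str, str] = {}
    # Step S15: keep only machines whose port state data is empty.
    idle_machines = [m for m in machines if not m.port_state]
    for machine in idle_machines:
        # Step S16: rank all unlocked wafer groups by priority for this machine.
        candidates = sorted(
            (g for g in groups if not g.locked),
            key=lambda g: priority(machine, g),
            reverse=True,
        )
        if candidates:
            # Step S17: assign the best wafer group and lock it.
            best = candidates[0]
            best.locked = True
            assignment[machine.machine_id] = best.group_id
    return assignment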
In the invention, the database of the dispatch system is a distributed memory database, that is, a database that stores data dispersed across a plurality of physical memory nodes and communicates and coordinates over a network, with the characteristics of high concurrency, high availability and high scalability.
The distributed memory database is used as a database of a dispatching system, and the following functions can be realized:
The dispatch task data processed by the data synchronization unit 200 is stored in the distributed memory database of the dispatch system 400. One memory node acts as the master node and is responsible for receiving and writing the dispatch task data; the other memory nodes act as standby nodes and are responsible for replicating and reading the dispatch task data. Data consistency between the master node and the standby nodes is guaranteed by a data synchronization mechanism: when data is written on a node, the data change of that node is recorded in a redo log file; a sending program on that node synchronizes the redo log to the other data nodes in real time; after receiving the redo log, the receiving program on each of the other data nodes applies the data changes in the log to its own data and records them in its own redo log. This provides strong consistency and fault tolerance for the data.
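Below is a minimal, single-process sketch of the redo-log synchronization idea described above, assuming an in-memory key-value store per node and synchronous shipping of log entries; real deployments ship the log over the network and handle ordering and acknowledgements.

from dataclasses import dataclass, field

@dataclass
class LogEntry:
    seq: int
    key: str
    value: str

@dataclass
class Node:
    name: str
    data: dict[str, str] = field(default_factory=dict)
    redo_log: list[LogEntry] = field(default_factory=list)

    def apply(self, entry: LogEntry) -> None:
        # Apply the change carried by the redo log entry and record it locally.
        self.data[entry.key] = entry.value
        self.redo_log.append(entry)

class MasterNode(Node):
    def __init__(self, name: str, standbys: list[Node]):
        super().__init__(name)
        self.standbys = standbys
        self.seq = 0

    def write(self, key: str, value: str) -> None:
        # Record the change in the master's redo log, then ship it to every standby.
        self.seq += 1
        entry = LogEntry(self.seq, key, value)
        self.apply(entry)
        for standby in self.standbys:
            standby.apply(entry)

standbys = [Node("standby-1"), Node("standby-2")]
master = MasterNode("master", standbys)
master.write("machine:M01:port", "empty")
assert all(n.data == master.data for n in standbys)   # all nodes hold the same data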
The dispatch trigger messages generated by the message middleware 300 are stored in the distributed memory database, and a publish-subscribe mode (publish-subscribe) is used to distribute and receive the messages: the message middleware 300 acts as the publisher and sends messages to one or more topics; each memory node acts as a subscriber and subscribes to the topics it is interested in; when the publisher publishes a message, the subscribers of the corresponding topic receive it. This decouples the messages asynchronously. The dispatch task logic is then executed according to the dispatch trigger messages: the corresponding dispatch task data, together with machine data, wafer group data, photomask data and process data, is read from the distributed memory database, screened, calculated and assigned, and the result is written back into the distributed memory database. The read-write separation mechanism (read-write separation) provided by the distributed memory database guarantees high performance and load balancing of the data: the client driver is configured with the information of the database nodes and with the node or nodes to connect to; the data synchronization unit 200 is configured to write data through the master node, and the dispatch system 400 reads data through the standby nodes. This allows fast writing and fast reading of the data.
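Below is a minimal in-process sketch of the publish-subscribe pattern described above; the broker class, topic name and callback are illustrative stand-ins and are not taken from the patent.

from collections import defaultdict
from typing import Callable

class MessageBroker:
    # Toy publish-subscribe broker: the message middleware publishes to topics,
    # and each memory node registers a callback for the topics it cares about.
    def __init__(self):
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        for callback in self.subscribers[topic]:
            callback(message)

broker = MessageBroker()
# A memory node subscribes to dispatch trigger messages for machines.
broker.subscribe("dispatch.machine", lambda msg: print("node received:", msg))
# The message middleware publishes a dispatch trigger message.
broker.publish("dispatch.machine", {"machine_id": "M01", "port_state": "empty"})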
The master-standby high availability mechanism (master-slave high availability) provided by the distributed memory database guarantees high availability and failure recovery of the data: when the master node goes down or cannot work normally, the standby nodes detect the failure of the master node through heartbeat detection or client notification, and elect a new master node through a distributed consensus protocol. This provides seamless switching and automatic recovery of the data.
In another embodiment of the present invention, the database of the dispatch system is likewise a distributed memory database, that is, a database that stores data dispersed across a plurality of physical memory nodes and communicates and coordinates over a network, with the characteristics of high concurrency, high availability and high scalability.
The distributed memory database is used as a database of a dispatching system, and the following functions can be realized:
The dispatch task data processed by the data synchronization unit 200 is stored dispersed across a plurality of memory nodes, and a consistent hashing algorithm maps the dispatch task data to the memory nodes: a hash value is computed for every memory node and every piece of dispatch task data and treated as a point on a ring; for each piece of dispatch task data, the nearest memory node clockwise on the hash ring is found and the data is stored on that node; for each memory node, several of the nearest memory nodes clockwise on the hash ring are used as its backup nodes, so that data can be recovered from the backup nodes if the memory node fails. This provides load balancing and fault tolerance for the data.
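Below is a minimal sketch of the consistent hashing placement described above, assuming MD5 as the hash function, one point per node and two clockwise successors as backup nodes; all of these choices are illustrative.

import bisect
import hashlib

def ring_hash(value: str) -> int:
    # Map a node name or a data key to a point on the hash ring.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes: list[str], backups: int = 2):
        self.backups = backups
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def _clockwise(self, start: int):
        # Yield nodes clockwise on the ring, starting from the first point >= start.
        idx = bisect.bisect_left(self.points, (start, ""))
        for i in range(len(self.points)):
            yield self.points[(idx + i) % len(self.points)][1]

    def locate(self, key: str) -> tuple[str, list[str]]:
        # The primary node is the nearest clockwise node; the next nodes
        # clockwise serve as its backup nodes.
        nodes = list(self._clockwise(ring_hash(key)))
        return nodes[0], nodes[1:1 + self.backups]

ring = ConsistentHashRing(["mem-node-1", "mem-node-2", "mem-node-3", "mem-node-4"])
primary, backups = ring.locate("wafer_group:WG-1024")
print(primary, backups)   # actual output depends on the hash values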
The dispatch trigger messages generated by the message middleware 300 are distributed to and received by the plurality of memory nodes, again using the publish-subscribe mode (publish-subscribe): the message middleware 300 acts as the publisher and sends messages to one or more topics; each memory node acts as a subscriber and subscribes to the topics it is interested in; when the publisher publishes a message, the subscribers of the corresponding topic receive it, which decouples the messages asynchronously. The dispatch task logic is executed according to the dispatch trigger messages: the corresponding dispatch task data, together with machine data, wafer group data, photomask data and process data, is read from the distributed memory database, screened, calculated and assigned, the result is written back into the distributed memory database, and key-value pairs (key-value) are used to represent the machine and photomask assigned to each wafer group.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. A method for dispatching tasks in semiconductor wafer manufacturing, characterized in that it comprises:
the manufacturing execution system records data information of a wafer process technology of the photoetching machine in a database of the manufacturing execution system and at least comprises port state data in machine table data;
the data synchronization unit is used for reading and processing dispatching task data stored in the manufacturing execution system database;
message middleware for storing dispatch task data stored in the manufacturing execution system database and dispatching task messages;
and a dispatch system for applying the dispatch task data processed by the data synchronization unit and executing the dispatch task according to the task dispatch method,
step one: the data synchronization unit is used for synchronizing dispatching task data from a database of the manufacturing execution system to a database in the dispatching system, and the data synchronization unit synchronizes data information by the following steps:
configuration information, which is to perform information configuration on dispatching task data stored in a database in a manufacturing execution system and generate a mapping file about all dispatching task data;
the synchronous data are used for synchronizing dispatching task data of a database in the manufacturing execution system to the database of the dispatching system;
Step two: writing cache data, namely writing port state data in machine data into a cache of a data synchronization unit;
step three: generating a dispatching task message, comparing the port state data of the cache written by the data synchronization unit in the second step with the port state data in the stored history cache, generating a message when the comparison result is that the port state data is empty, and distributing the message to one or more dispatching systems;
step four: and D, executing dispatching task logic by the dispatching system according to the task message in the step three.
2. The method for dispatching tasks in semiconductor wafer manufacturing according to claim 1, wherein the method for configuring information in the first step comprises:
step S1, listing all tables in a database according to a manufacturing execution system database;
step S2: selecting one or more tables corresponding to the step S1 according to the dispatching task data;
step S3, selecting one or more trigger types according to the table corresponding to the dispatching task data in the step S2;
s4, generating a corresponding mapping file;
and S5, repeating the steps S2 to S4 until all the information required by the dispatching task data is configured.
3. The method for dispatching tasks in semiconductor wafer manufacturing according to claim 1, wherein the method for synchronizing data in the first step comprises:
step S6: creating a trigger in the table selected in step S2 according to the mapping file generated in step S4;
step S7, one or more tables for storing dispatching task data are created in a database of the dispatching system according to the mapping file generated in the step S4;
step S8: the trigger writes dispatching task data in the manufacturing execution system database into a queue table in the manufacturing execution system;
step S9, reading and writing the data in the queue table in the step S8 into the message middleware through a database data synchronization program of the manufacturing execution system;
and step S10, reading and writing the dispatching task data in the message middleware in the step S9 into a database of the dispatching system through a dispatching system database data synchronization program.
4. The method for dispatching tasks in semiconductor wafer manufacturing according to claim 1, wherein the message generated in the third step is distributed through a message middleware, so that the message is distributed to a plurality of dispatching task systems in an equalizing manner, and the method comprises the following steps:
Step S11, the message middleware adopts a load balancing algorithm according to the weight of the dispatching system and the load condition of the dispatching task system, and selects one or more proper dispatching task systems;
step S12, the message middleware sends the message to the selected dispatching task system and receives feedback information thereof;
step S13, the message middleware updates the load condition of the dispatching task system according to the feedback information and adjusts the parameters of a load balancing algorithm;
and step S14, repeating the steps S11 to S13 until all the messages are distributed and processed.
5. The method for dispatching a task in semiconductor wafer manufacturing according to claim 4, wherein the load balancing algorithm in step S13 is a weighted polling method, specifically:
the following steps are performed: according to the thread number of a request pool and the length of a waiting queue in a dispatching system, calculating a weight value of each dispatching system, representing the capacity of the dispatching system for processing dispatching tasks, wherein the larger the thread number of the request pool and the length of the waiting queue, the larger the weight value; recording the weight value, the number of received messages and the number of processed messages of each dispatching system in a message middleware; when a message is dispatched, calculating a distribution weight value of each dispatching system, representing the priority of the assigned message, wherein the distribution weight value=the weight value of the dispatching system/(the number of received messages-the number of processed messages), and the higher the distribution weight value, the higher the priority; the message middleware selects one or more proper dispatching systems according to the distribution weight value, sends the messages to the dispatching systems and updates the number of the received messages; the message middleware updates the number of the processed messages according to the feedback information of the dispatching system; repeating the above steps until all the messages are distributed and processed.
6. The method for dispatching tasks in semiconductor wafer manufacturing according to claim 4, wherein the load balancing algorithm in step S13 is a hash-matching method, specifically: each dispatching system is distributed with a weight value to represent the capacity of the corresponding dispatching system to process dispatching tasks, and when a new message arrives, the following steps are executed:
the method comprises the steps of clockwise searching for the next dispatching system on a hash ring, and taking a module for the number of the dispatching systems to obtain an index of the next dispatching system, wherein the current index is the index of the dispatching system selected last time, the initial value is the negative number of the dispatching systems, the number of the dispatching systems is the total number of the rear-end dispatching systems, taking the module is the operation for obtaining a remainder, the index of the next dispatching system is the index of the dispatching system to be selected this time, and the index is equal to the result of taking the module for the number of the dispatching systems after the current index is added;
subtracting the greatest common divisor of all the dispatching system weights from the current weight if the current index is equal to zero, and resetting the greatest common divisor to the greatest value of all the dispatching system weights if the current weight is less than or equal to zero, wherein the current weight refers to the weight value of the dispatching system selected last time according to a certain proportion, the initial value is zero weight, and the greatest common divisor of all the dispatching system weights refers to the greatest positive integer which can be divided by all the weight values in all the dispatching system weight values;
If the weight of the next dispatching system is greater than or equal to the current weight, the dispatching system is selected to process the message, the current index and the current weight are updated, if the weight of the next dispatching system is smaller than the current weight, the dispatching system is skipped, and the first step is returned to continue to search for a proper dispatching system, wherein the weight of the next dispatching system refers to the weight value of the dispatching system to be selected at this time.
7. The method for dispatching a task in semiconductor wafer manufacturing according to claim 4, wherein the load balancing algorithm in step S13 is a minimum connection method, specifically:
maintaining a current number of active connections and a minimum number of active connections variable in the message middleware;
when a new message arrives, the following steps are performed:
traversing all dispatching task systems, acquiring the movable connection number of each dispatching task system, comparing the movable connection number with the minimum movable connection number,
the system comprises a load balancer, a task assigning system and a load balancer, wherein the movable connection number is the number of requests being processed by the task assigning system, the minimum movable connection number is the smallest movable connection number in all the task assigning systems, and the load balancer can monitor the movable connection number of each task assigning system in real time and compare the movable connection number with the minimum movable connection number;
If the number of the movable connections of a certain dispatch task system is smaller than or equal to the minimum number of the movable connections, the dispatch task system is added into the candidate list, the minimum number of the movable connections is updated to be the number of the movable connections of the dispatch task system,
the candidate list is a collection of dispatching task systems which can be selected to process the message, if the number of the movable connections of a certain dispatching task system is smaller than the minimum number of the movable connections, the dispatching task system is indicated to have lower load, the message can be selected to be processed and added into the candidate list, the minimum number of the movable connections is updated to be the number of the movable connections of the dispatching task system, and if the number of the movable connections of the certain dispatching task system is equal to the minimum number of the movable connections, the dispatching task system is added into the candidate list;
if the number of the movable connections of a certain dispatching task system is larger than the minimum number of the movable connections, skipping the dispatching task system;
and randomly selecting one dispatching task system from the candidate list to process the message, and updating the current active connection number to be the sum of the active connection numbers of all dispatching task systems in the candidate list.
8. The method for dispatching task in semiconductor wafer manufacturing according to claim 1, wherein the database of the dispatching system is a memory database, the dispatching system executes the dispatching task logic in the step four by adopting a concurrent execution mode, and after receiving the task message, the dispatching task logic is submitted to a plurality of threads for parallel processing, thereby improving the execution speed and efficiency of dispatching rules,
wherein,
and screening out wafer groups to be dispatched according to dispatching task data in a dispatching system database, wherein the dispatching task data at least comprises the following steps:
wafer data including wafer lattice data and priority data for wafer dispatch execution;
the machine data comprises machine related data which can be subjected to the subsequent photoetching process if the wafer is subjected to the first photoetching process and the wafer is not subjected to the first photoetching process;
mask data, wherein the wafer enters a mask corresponding to a task executed by the photoetching machine;
process time data, a sum of task times experienced by the wafer in performing tasks over time, and a sum of times of tasks expected to be performed by the wafer in the future;
production limit data, the number of events detected by the wafer and the accumulated time checked;
production index data, the number of wafers for executing tasks specified in a predicted time period;
the dispatching task data parallel processing method comprises the following steps:
creating a thread pool in a dispatch system for managing operations such as thread creation, destruction, scheduling and the like;
after receiving the task message, according to the number and complexity of dispatching task data, acquiring one or more idle threads from the thread pool, and distributing the dispatching task data to the idle threads;
Executing dispatch task data assigned to each thread and returning the result to the main thread;
and in the main thread, summarizing the results of the dispatching task data and generating dispatching results.
9. The method for performing task dispatch in semiconductor wafer fabrication as recited in claim 8, wherein,
the memory database is a distributed memory database, namely, dispatching task data is stored in a plurality of memory nodes in a scattered way, the dispatching task data and the memory nodes are mapped through a consistent hash algorithm,
wherein,
the consistent hashing algorithm includes the steps of:
mapping all the memory nodes and all the dispatching task data onto a hash ring, namely calculating a hash value for each memory node and each dispatching task data, and taking the hash value as a point on the ring;
for each dispatching task data, searching a memory node nearest to the dispatching task data on the hash ring clockwise, and storing the dispatching task data in the memory node;
for each memory node, a number of memory nodes closest to it are found clockwise on the hash ring and used as backup nodes for the memory node to recover data from the backup nodes in the event of a failure of the memory node.
10. The method for dispatching tasks in semiconductor wafer manufacturing according to claim 8, wherein the memory database is a distributed memory database comprising a master node and a plurality of standby nodes, data synchronization between the master node and the standby nodes guarantees the data consistency of all data nodes, the dispatch system can query all of the data from any node, the data synchronization unit writes dispatch task data into the master node, the master node synchronizes the task data to the standby nodes through a data synchronization mechanism, and the dispatch system reads the data from the standby nodes;
the data synchronization mechanism is as follows: when a certain data node has data writing, the data change of the current node is recorded in a redo log file, a sending program in the current data node synchronizes the redo log to other data nodes in real time, and a receiving program of the other data nodes applies the data change in the log to the receiving program after receiving the redo log, and meanwhile, the data change is recorded in the redo log; the client driver configures information of the database nodes, one or more nodes to be connected are configured, the configuration data synchronization unit writes data through the master node, and the dispatching system reads the data through the slave nodes;
When the main node is down or can not work normally, the standby node detects the fault of the main node through heartbeat detection or client notification, and the standby node selects a new main node through a distributed consistency protocol.
CN202311020974.1A 2023-08-15 2023-08-15 Task dispatch method for semiconductor wafer manufacturing Active CN116777182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311020974.1A CN116777182B (en) 2023-08-15 2023-08-15 Task dispatch method for semiconductor wafer manufacturing


Publications (2)

Publication Number Publication Date
CN116777182A true CN116777182A (en) 2023-09-19
CN116777182B CN116777182B (en) 2023-11-03

Family

ID=88013681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311020974.1A Active CN116777182B (en) 2023-08-15 2023-08-15 Task dispatch method for semiconductor wafer manufacturing

Country Status (1)

Country Link
CN (1) CN116777182B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996359A (en) * 2009-08-26 2011-03-30 中芯国际集成电路制造(上海)有限公司 Dispatching method of semiconductor manufacturing process
CN102013043A (en) * 2009-09-04 2011-04-13 中芯国际集成电路制造(上海)有限公司 Optimization method and system for dispatching in semiconductor manufacture
CN103227838A (en) * 2013-05-10 2013-07-31 中国工商银行股份有限公司 Multi-load equalization processing device and method
CN104977903A (en) * 2014-04-03 2015-10-14 中芯国际集成电路制造(上海)有限公司 Real time dispatch system-based method and system for wafer batch dispatch under machine set
CN104166317A (en) * 2014-08-27 2014-11-26 上海华力微电子有限公司 Method and system for controlling automatic dispatch of photo-masks
US20210239757A1 (en) * 2020-01-30 2021-08-05 Kla Corporation System and method for identifying latent reliability defects in semiconductor devices
CN114444948A (en) * 2022-01-28 2022-05-06 上海华力微电子有限公司 Control system and method for intelligently acquiring and dispatching WPH (WPH) in wafer production line
CN115982273A (en) * 2022-11-30 2023-04-18 中国农业银行股份有限公司 Data synchronization method, system, electronic equipment and storage medium
CN116525500A (en) * 2023-05-17 2023-08-01 北京燕东微电子科技有限公司 Wafer manufacturing dispatching method, dispatching device and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117236822A (en) * 2023-11-10 2023-12-15 合肥晶合集成电路股份有限公司 Intelligent goods delivery method, device, equipment and medium
CN117236822B (en) * 2023-11-10 2024-01-30 合肥晶合集成电路股份有限公司 Intelligent goods delivery method, device, equipment and medium
CN117557072A (en) * 2024-01-11 2024-02-13 上海朋熙半导体有限公司 Photomask scheduling and advanced scheduling algorithm, equipment and medium
CN117557072B (en) * 2024-01-11 2024-04-16 上海朋熙半导体有限公司 Photomask scheduling and advanced scheduling algorithm, equipment and medium

Also Published As

Publication number Publication date
CN116777182B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN116777182B (en) Task dispatch method for semiconductor wafer manufacturing
US9489443B1 (en) Scheduling of splits and moves of database partitions
EP1654683B1 (en) Automatic and dynamic provisioning of databases
US8738649B2 (en) Distributed processing of streaming data records
CN109933631A (en) Distributed parallel database system and data processing method based on Infiniband network
CN106528853B (en) Data interaction managing device, inter-library data interaction processing unit and method
CN109857558A (en) A kind of data flow processing method and system
CN108183961A (en) A kind of distributed caching method based on Redis
US20130117226A1 (en) Method and A System for Synchronizing Data
CN109669929A (en) Method for storing real-time data and system based on distributed parallel database
US20050071842A1 (en) Method and system for managing data using parallel processing in a clustered network
CN103905537A (en) System for managing industry real-time data storage in distributed environment
JP2003022209A (en) Distributed server system
CN112199427A (en) Data processing method and system
CN109901948A (en) Shared-nothing database cluster strange land dual-active disaster tolerance system
CN116739318B (en) Method, equipment and storage medium for realizing load balancing of semiconductor photoetching machine
CN111625414A (en) Method for realizing automatic scheduling monitoring system of data conversion integration software
CN114938376B (en) Industrial Internet of things based on priority processing data and control method thereof
CN115934748A (en) Switch distribution and metrics collection and summary system and method based on distributed SQL
CN116226067A (en) Log management method, log management device, processor and log platform
CN115934819A (en) Universal distributed expansion method for industrial time sequence database
CN112256202B (en) Distributed storage system and method for deleting volumes in distributed storage system
CN109298949A (en) A kind of resource scheduling system of distributed file system
CN113110935A (en) Distributed batch job processing system
CN112597173A (en) Distributed database cluster system peer-to-peer processing system and processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant