CN113536202A - Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem

Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem

Info

Publication number
CN113536202A
CN113536202A (Application CN202110548423.7A)
Authority
CN
China
Prior art keywords
sigma
task
max
vnd
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110548423.7A
Other languages
Chinese (zh)
Other versions
CN113536202B (en
Inventor
罗建超 (Luo Jianchao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University, Shenzhen Institute of Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110548423.7A priority Critical patent/CN113536202B/en
Publication of CN113536202A publication Critical patent/CN113536202A/en
Application granted granted Critical
Publication of CN113536202B publication Critical patent/CN113536202B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0637Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing


Abstract

The invention provides a variable neighborhood search method for minimizing the total delay time of the two-stage assembly scheduling problem. Three greedy strategies are proposed to obtain a good initial solution, and three neighborhood structures together with a variable neighborhood descent strategy are proposed to increase the probability of finding the optimal solution. A perturbation strategy adjusts the perturbation strength as the number of search generations grows, and the variable neighborhood search algorithm is built by combining the variable neighborhood descent algorithm with this perturbation strategy. The invention requires no time-consuming parameter tuning; when the allowed time is short a satisfactory solution can be obtained, and when more time is given a better solution can be obtained. The perturbation strength and the perturbation method are adjusted dynamically as the search proceeds, and the perturbation strategy also changes as the number of consecutive generations without improvement of the current best solution grows, so local optima are better avoided, the influence of random factors is reduced, and the algorithm is more stable.

Description

Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem
Technical Field
The invention relates to the field of two-stage assembly scheduling, in particular to a variable neighborhood searching method.
Background
Two-stage assembly scheduling problems (TASPs) arise widely in real factories such as motorcycle production, fertilizer production systems, computer production, and database systems. In recent years this problem has been studied extensively. Many researchers ignored the preparation (setup) time of a task on a machine, or assumed that the preparation time is independent of the processing sequence and can be included in the processing time. For many application scenarios, however, the preparation time and the processing time are independent of each other. Recently, more and more researchers have become aware of the importance of studying the two-stage assembly scheduling problem with independent preparation times. Aydilek et al. [1] (A. Aydilek, H. Aydilek and A. Allahverdi, "Minimizing maximum lateness in assembly flowshops with setup times", International Journal of Production Research, 2017, 55(24): 7541-) studied a two-stage assembly scheduling problem with one machine in each of the two stages. To minimize the maximum lateness, they proposed a scheduling algorithm combining simulated annealing with insertion operations and verified its effectiveness experimentally. Missing a due date may result in a penalty, so production managers are particularly concerned with minimizing the total delay time of the tasks when planning. However, to our knowledge, only document [2] (A. Allahverdi, H. Aydilek, and A. Aydilek, "Two-stage assembly scheduling problem for minimizing total tardiness with setup times", Applied Mathematical Modelling, 2016, 40(17-18): 7796-) studies this objective. Allahverdi et al. [2] proposed an N-SA algorithm and an N-PSA algorithm, and also improved the existing algorithm AA-SA [3] (A. Allahverdi and H. Aydilek, "The two stage assembly flowshop scheduling problem to minimize total tardiness", Journal of Intelligent Manufacturing, 2015, 26(2): 225-). Experimental results show that N-PSA outperforms N-SA and the improved algorithm.
The current prior art has the following disadvantages:
1) the parameters are difficult to tune: the scheduling performance depends on parameter tuning, and there is no rule for setting the parameters, so they cannot be adjusted easily;
2) the search performance needs to be improved: the search time of the existing algorithms is short, but the resulting scheduling performance is unstable, sometimes good and sometimes bad, so the quality of the result is hard to guarantee.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a variable neighborhood search method for minimizing the total delay time of the two-stage assembly scheduling problem; specifically, a variable neighborhood search algorithm is proposed for the two-stage assembly scheduling problem with independent preparation times, with the objective of minimizing the total delay time. First, three greedy strategies are proposed to obtain a good initial solution. Second, three neighborhood structures and a variable neighborhood descent strategy are proposed to increase the probability of finding the optimal solution. Third, a perturbation strategy whose strength is adjusted as the number of search generations grows is proposed. Fourth, the variable neighborhood descent algorithm and the perturbation strategy are combined to build the variable neighborhood search algorithm. The invention achieves better scheduling than the most advanced existing methods.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
Step 1: two-stage assembly scheduling modeling;
The two-stage assembly scheduling problem comprises (m+1) machines and n tasks to be processed; each task consists of (m+1) subtasks, the first m subtasks are processed in parallel on the m machines of the first stage, and the last subtask is processed on the assembly machine of the second stage after the first m subtasks are finished; all tasks are available for processing at the start; preemption is not allowed; the buffer space between the two stages is infinite; a machine can process at most one task at a time;
Let N_g = {1, 2, ..., g} denote the set of positive integers from 1 to g, where g is a positive integer; J = {J_i | i ∈ N_n} denotes the set of tasks; M = {M_j | j ∈ N_(m+1)} denotes the set of machines; p_ij denotes the processing time of J_i ∈ J on M_j ∈ M; s_ij denotes the preparation time of J_i ∈ J on M_j ∈ M; d_i denotes the due date of J_i; σ_0 denotes the initial schedule in which no task is scheduled; σ denotes a partial schedule in which |σ| tasks have been scheduled, where |σ| denotes the number of tasks in σ; σ_u denotes the set of unscheduled tasks of σ; σ' denotes a permutation of the tasks in σ_u; σ[i] denotes the index of the i-th task in σ; d_σ[i] denotes the due date of the i-th task in σ; p_σ[i]j denotes the processing time of the i-th task in σ on M_j ∈ M; s_σ[i]j denotes the preparation time of the i-th task in σ on M_j ∈ M; σ_[k] = {J_σ[1], J_σ[2], ..., J_σ[k]}; f(σ) denotes the completion time of σ; f_j(σ) denotes the completion time of σ on M_j ∈ M; Δ(σ, i) denotes the idle time added on M_(m+1) by placing task J_σ[i] on M_(m+1); TT(σ) denotes the total delay time of σ; σ* denotes the arrangement of the tasks in σ_u that minimizes the total delay time;
f_j(σ) (j ∈ N_(m+1)), f(σ) and Δ(σ, i) (i ∈ N_|σ|) are computed as follows:
f_j(σ) = Σ_(i=1..|σ|) (s_σ[i]j + p_σ[i]j), for j ∈ N_m
f_(m+1)(σ_[i]) = f_(m+1)(σ_[i-1]) + Δ(σ, i) + s_σ[i](m+1) + p_σ[i](m+1), where f_(m+1)(σ_[0]) = 0
f(σ) = max{f_j(σ) | j ∈ N_(m+1)} = f_(m+1)(σ)
Δ(σ, i) = max{0, max_(j∈N_m) f_j(σ_[i]) - f_(m+1)(σ_[i-1]) - s_σ[i](m+1)}
The delay time of task J_σ[j] is computed as follows:
TT(σ, j) = max{0, f_(m+1)(σ_[j]) - d_σ[j]}
Thus TT(σ) is computed as follows:
TT(σ) = Σ_(j=1..|σ|) TT(σ, j)
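For illustration, the following C++ sketch evaluates the recurrences above for a given schedule and returns TT(σ); the data layout (an Instance struct holding p, s and d) is an assumption of this example and not part of the patented method.

#include <vector>
#include <algorithm>

struct Instance {
    int n, m;                            // n tasks, m first-stage machines plus one assembly machine
    std::vector<std::vector<double>> p;  // p[i][j]: processing time of task i on machine j (j = 0..m)
    std::vector<std::vector<double>> s;  // s[i][j]: preparation time of task i on machine j
    std::vector<double> d;               // d[i]: due date of task i
};

// sigma is a permutation of task indices; returns the total delay time TT(sigma).
double totalTardiness(const Instance& in, const std::vector<int>& sigma) {
    std::vector<double> f(in.m, 0.0);    // f[j]: completion time of first-stage machine j
    double fAsm = 0.0;                   // completion time of the assembly machine M_(m+1)
    double tt = 0.0;
    for (std::size_t k = 0; k < sigma.size(); ++k) {
        int i = sigma[k];
        double ready = 0.0;              // max_j f_j(sigma_[k]) after appending task i
        for (int j = 0; j < in.m; ++j) {
            f[j] += in.s[i][j] + in.p[i][j];
            ready = std::max(ready, f[j]);
        }
        // Assembly starts once its preparation is done and all first-stage parts have arrived.
        fAsm = std::max(ready, fAsm + in.s[i][in.m]) + in.p[i][in.m];
        tt += std::max(0.0, fAsm - in.d[i]);   // TT(sigma, k) = max{0, f_(m+1)(sigma_[k]) - d_sigma[k]}
    }
    return tt;
}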
step 2: three greedy strategies
Tasks with large delays are scheduled first so that their delay time does not keep increasing; in addition, moving a task whose due date is much later than its completion time backward does not increase its delay. Based on this observation, the following greedy strategies are proposed:
Greedy strategy 1:
Input: a partial schedule σ;
Output: a permutation σ_u1 of the tasks in σ_u;
Step 2.1: let σ_u1 = ∅ (the empty sequence);
Step 2.2: if σ_u ≠ ∅, enter the loop of steps 2.2.1-2.2.3; otherwise end;
Step 2.2.1: for each J_i ∈ σ_u, let I(σ, J_i) = max{max_(j∈N_m){f_j(σ) + s_ij + p_ij}, f_(m+1)(σ) + s_i(m+1)} + p_i(m+1) - d_i;
Step 2.2.2: find J_k ∈ σ_u such that I(σ, J_k) = max{I(σ, J_i) | J_i ∈ σ_u};
Step 2.2.3: append J_k to the tail of σ_u1, append J_k to the tail of σ, and delete J_k from σ_u;
Step 2.3: output σ_u1.
In order to better arrange the unscheduled tasks, two further indicators are defined for each unscheduled task J_i ∈ σ_u:
(1) θ_1(σ, J_i);
(2) θ_2(σ, J_i);
Greedy strategy 2 sorts σ_u in ascending order of indicator (1) θ_1(σ, J_i) to obtain the sequence σ_u2; greedy strategy 3 sorts σ_u in ascending order of indicator (2) θ_2(σ, J_i) to obtain the sequence σ_u3.
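A minimal C++ sketch of greedy strategy 1 is given below; it assumes the Instance layout from the earlier sketch, starts from the empty schedule σ_0, and repeatedly appends the unscheduled task with the largest estimated lateness I(σ, J_i). The indicator follows the textual version of step 2.2.1; greedy strategies 2 and 3 are not shown because θ_1 and θ_2 are defined only in the original figures.

#include <vector>
#include <algorithm>
#include <numeric>

struct Instance { int n, m; std::vector<std::vector<double>> p, s; std::vector<double> d; };

// Builds sigma_u1 by repeatedly appending the task with the largest I(sigma, J_i).
std::vector<int> greedyStrategy1(const Instance& in) {
    std::vector<int> unscheduled(in.n), sigma;
    std::iota(unscheduled.begin(), unscheduled.end(), 0);
    std::vector<double> f(in.m, 0.0);    // first-stage machine completion times
    double fAsm = 0.0;                   // assembly machine completion time
    while (!unscheduled.empty()) {
        std::size_t bestPos = 0;
        double bestI = -1e300;
        for (std::size_t pos = 0; pos < unscheduled.size(); ++pos) {
            int i = unscheduled[pos];
            double stage1 = 0.0;
            for (int j = 0; j < in.m; ++j)
                stage1 = std::max(stage1, f[j] + in.s[i][j] + in.p[i][j]);
            // I(sigma, J_i): estimated lateness of J_i if it is appended next.
            double I = std::max(stage1, fAsm + in.s[i][in.m]) + in.p[i][in.m] - in.d[i];
            if (I > bestI) { bestI = I; bestPos = pos; }
        }
        int k = unscheduled[bestPos];
        for (int j = 0; j < in.m; ++j) f[j] += in.s[k][j] + in.p[k][j];
        double ready = *std::max_element(f.begin(), f.end());
        fAsm = std::max(ready, fAsm + in.s[k][in.m]) + in.p[k][in.m];
        sigma.push_back(k);
        unscheduled.erase(unscheduled.begin() + bestPos);
    }
    return sigma;
}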
Step 3: variable neighborhood search algorithm
Variable neighborhood search (VNS) is a meta-heuristic that systematically explores different neighborhood structures to escape local optima by invoking a variable neighborhood descent (VND) method; if the current best schedule is still not improved after all neighborhood structures have been tried once, a perturbation strategy is executed; VND and the perturbation strategy are executed alternately until the termination condition is met; the steps of VND, the perturbation strategy and VNS are as follows:
Step 3.1 VND
First, the three neighborhood structures used in VND are given, where σ denotes a schedule and x, y ∈ N_n denote two different positions in σ;
neighborhood 1: swap (σ, x, y), will task σ [ x ]]Move to position y, task σ [ y ]]Move to position x, order
Figure RE-GDA00032304702900000411
Figure RE-GDA00032304702900000412
Represents a set of neighborhoods 1;
neighborhood 2: insert (sigma)X, y) if x<y, will task σ [ x]Move to position y, task σ [ x +1 ]],σ[x+2],..., σ[y]Move to position x, x +1,.., y-1, respectively; if x>y, will task σ [ x]Move to position y, task σ [ x-1 ]],σ[x -2],...,σ[y]Move to position x, x-1, ·, y +1, respectively; order to
Figure RE-GDA0003230470290000041
Representing a neighborhood 2 set;
neighborhood 3: inverse (σ, x, y), if x<y, will task σ [ x]Move to position y, task σ [ y ]]Move to position x, let σ1Represents the schedule obtained after the move and executes Inverse (σ)1X +1, y-1); if x>y, return σ, order
Figure RE-GDA0003230470290000042
Figure RE-GDA0003230470290000043
Representing a set of neighborhoods 3.
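The three moves can be written compactly on a 0-based permutation of task indices; the following C++ sketch is an assumed illustration (positions x and y are indices into σ, and Inverse is implemented as a segment reversal, which is what the recursive definition above produces).

#include <vector>
#include <algorithm>

// Neighborhood 1: exchange the tasks at positions x and y.
std::vector<int> swapMove(std::vector<int> sigma, int x, int y) {
    std::swap(sigma[x], sigma[y]);
    return sigma;
}

// Neighborhood 2: remove the task at position x and reinsert it at position y.
std::vector<int> insertMove(std::vector<int> sigma, int x, int y) {
    int job = sigma[x];
    sigma.erase(sigma.begin() + x);
    sigma.insert(sigma.begin() + y, job);
    return sigma;
}

// Neighborhood 3: reverse the segment between positions x and y (x < y); otherwise return sigma unchanged.
std::vector<int> inverseMove(std::vector<int> sigma, int x, int y) {
    if (x < y) std::reverse(sigma.begin() + x, sigma.begin() + y + 1);
    return sigma;
}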
VND first explores N_1(σ); if the current best solution is not updated after exploring N_1(σ), VND begins to explore N_2(σ); if the current best solution is not updated after exploring N_2(σ), VND begins to explore N_3(σ); once the current best solution is updated, VND starts exploring again from N_1(σ); if the current best solution is still not updated after exploring N_3(σ), VND terminates; the best solution is the solution with the smallest total delay time found so far;
Step 3.2 Perturbation strategy (Sharking)
The role of the perturbation strategy is to jump out of local optima. The strength of a perturbation strategy is usually kept constant (Roshanaei V., B. Naderi, F. Jolai, and M. Khalili, "A variable neighborhood search for job shop scheduling with set-up times to minimize makespan," Future Generation Computer Systems, 2009, 25(6): 654-661). Since the current best solution becomes better as the search proceeds, jumping out of a local optimum becomes harder. Therefore, as the search progresses, the perturbation should become stronger and stronger.
Let q denote the number of consecutive times the current best solution has not been updated, let q_max denote the maximum allowed number of such times, and let σ denote the schedule to be perturbed. If q ≤ q_max/3, two positions x and y are repeatedly selected and Swap(σ, x, y) is performed, q times in total; if q_max/3 < q ≤ 2*q_max/3, two positions x and y are repeatedly selected and Insert(σ, x, y) is performed, q times in total; if 2*q_max/3 < q ≤ q_max, two positions x and y are repeatedly selected and Inverse(σ, x, y) is performed, q times in total;
Step 3.3 VNS
Let the outputs of the three greedy strategies with σ_0 as input be σ_u1, σ_u2 and σ_u3, respectively, and let Ω(σ_0) denote whichever of σ_u1, σ_u2 and σ_u3 has the smallest total delay time, i.e. TT(Ω(σ_0)) = min{TT(σ_u1), TT(σ_u2), TT(σ_u3)}. VNS alternately applies VND and the perturbation strategy to Ω(σ_0). Let q denote the number of consecutive times the current best solution has not been updated and q_max the maximum allowed number of such times. If the current best solution is updated after VND is executed, q is set to 1; otherwise q is increased by 1. When q reaches q_max, VNS restarts the search from q = 1. Once the CPU time used by VNS exceeds the maximum CPU time T_m, the execution of VNS terminates.
The specific steps of VND(σ) are as follows:
Input: a schedule σ;
Output: the current best schedule σ_1;
Step 3.1.1: let σ_1 = σ and t = 1;
Step 3.1.2: if t ≤ 3, execute step 3.1.2.1 in a loop; if t > 3, go to step 3.1.3;
Step 3.1.2.1: if there exists σ_2 ∈ N_t(σ_1) such that TT(σ_2) < TT(σ_1), let σ_1 = σ_2 and t = 1; otherwise increase t by 1;
Step 3.1.3: output σ_1.
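A C++ sketch of this VND loop is shown below; it reuses totalTardiness and the three move functions from the earlier sketches, enumerates each neighborhood exhaustively, and uses first-improvement acceptance, which is an assumption since the patent does not fix the exploration order inside a neighborhood.

#include <vector>
#include <algorithm>

// Variable neighborhood descent: try neighborhoods t = 1, 2, 3 and restart from
// t = 1 whenever an improving schedule is found.
std::vector<int> vnd(const Instance& in, std::vector<int> sigma) {
    const int n = static_cast<int>(sigma.size());
    int t = 1;
    while (t <= 3) {
        bool improved = false;
        double current = totalTardiness(in, sigma);
        for (int x = 0; x < n && !improved; ++x) {
            for (int y = 0; y < n && !improved; ++y) {
                if (x == y) continue;
                std::vector<int> cand =
                    (t == 1) ? swapMove(sigma, x, y)
                  : (t == 2) ? insertMove(sigma, x, y)
                             : inverseMove(sigma, std::min(x, y), std::max(x, y));
                if (totalTardiness(in, cand) < current) {
                    sigma = cand;        // improvement found: restart from neighborhood 1
                    t = 1;
                    improved = true;
                }
            }
        }
        if (!improved) ++t;              // no improvement in N_t: move on to the next neighborhood
    }
    return sigma;
}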
The specific steps of the perturbation strategy Sharking(q, σ, q_max) are as follows:
Input: the current iteration number q, a schedule σ, and q_max;
Output: the perturbed schedule σ_1;
Step 3.2.1: let σ_1 = σ and i = 1;
Step 3.2.2: if i ≤ q, execute steps 3.2.2.1-3.2.2.3 in a loop; otherwise go to step 3.2.3;
Step 3.2.2.1: randomly select two positions x and y from N_n;
Step 3.2.2.2: if q ≤ q_max/3, let σ_1 = Swap(σ_1, x, y); if q_max/3 < q ≤ 2*q_max/3, let σ_1 = Insert(σ_1, x, y); if 2*q_max/3 < q ≤ q_max, let σ_1 = Inverse(σ_1, x, y);
Step 3.2.2.3: increase i by 1;
Step 3.2.3: output σ_1.
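The following C++ sketch of Sharking reuses the move functions from the neighborhood sketch; the random number source and the handling of coinciding positions (simply skipped) are assumptions of this example.

#include <vector>
#include <random>
#include <algorithm>

// Apply q random moves to sigma; the move type depends on how long the best
// solution has stagnated (q relative to qmax), as in step 3.2.
std::vector<int> sharking(std::vector<int> sigma, int q, int qmax, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, static_cast<int>(sigma.size()) - 1);
    for (int i = 1; i <= q; ++i) {
        int x = pick(rng), y = pick(rng);
        if (x == y) continue;                          // coinciding positions: skip this draw
        if (q <= qmax / 3)            sigma = swapMove(sigma, x, y);
        else if (q <= 2 * qmax / 3)   sigma = insertMove(sigma, x, y);
        else                          sigma = inverseMove(sigma, std::min(x, y), std::max(x, y));
    }
    return sigma;
}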
The specific steps of VNS are as follows:
Input: Ω(σ_0) and T_m;
Output: an optimized schedule σ_1;
Step 3.3.1: let σ_1 = VND(Ω(σ_0)), q = 1, flag = true;
Step 3.3.2: if flag = true, execute steps 3.3.2.1 and 3.3.2.2 in a loop; otherwise go to step 3.3.3;
Step 3.3.2.1: let q = 1;
Step 3.3.2.2: if q ≤ q_max, execute steps 3.3.2.2.1-3.3.2.2.4 in a loop; otherwise return to step 3.3.2;
Step 3.3.2.2.1: let σ_2 = Sharking(q, σ_1, q_max);
Step 3.3.2.2.2: let σ_3 = VND(σ_2);
Step 3.3.2.2.3: if TT(σ_3) ≤ TT(σ_1), let σ_1 = σ_3 and q = 1; otherwise increase q by 1;
Step 3.3.2.2.4: if the CPU time exceeds T_m, go to step 3.3.3; if the CPU time is less than or equal to T_m, go to step 3.3.2.2;
Step 3.3.3: output σ_1.
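Putting the pieces together, a C++ sketch of the overall VNS loop is given below; it reuses vnd, sharking and totalTardiness from the earlier sketches, and the fixed random seed and the wall-clock timer used in place of CPU time are assumptions of this example.

#include <vector>
#include <chrono>
#include <random>

// Alternate Sharking and VND starting from the best greedy schedule omega0;
// reset q to 1 on improvement or when q exceeds qmax, and stop after Tm seconds.
std::vector<int> vns(const Instance& in, const std::vector<int>& omega0, int qmax, double TmSeconds) {
    const auto start = std::chrono::steady_clock::now();
    std::mt19937 rng(12345);                     // fixed seed, assumed for reproducibility
    std::vector<int> best = vnd(in, omega0);
    double bestTT = totalTardiness(in, best);
    int q = 1;
    while (true) {
        std::vector<int> cand = vnd(in, sharking(best, q, qmax, rng));
        double candTT = totalTardiness(in, cand);
        if (candTT <= bestTT) { best = cand; bestTT = candTT; q = 1; }
        else if (++q > qmax)  q = 1;             // restart the perturbation cycle from q = 1
        double elapsed = std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
        if (elapsed > TmSeconds) break;          // maximum CPU time Tm reached
    }
    return best;
}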
The invention has the beneficial effects that:
(1) the variable neighborhood search method provided by the invention has only one parameter (the maximum CPU time), which can be set according to actual requirements, so no time needs to be spent on parameter tuning; when the allowed time is short, a satisfactory solution can be obtained, and when more time is given, a better solution can be obtained;
(2) the perturbation strength grows with the number of consecutive generations in which the current best solution remains unchanged, so the perturbation strength and the perturbation method are adjusted dynamically during the search; in addition, the perturbation strategy itself changes as this number grows, so falling into local optima is better avoided;
(3) a variable neighborhood search framework is provided in which, when the iteration counter reaches q = q_max, the search is restarted from q = 1 until the specified running time is reached; this reduces the randomness of the perturbation strategy and eliminates the influence of random factors to a certain extent, making the algorithm more stable;
(4) three greedy strategies are proposed to obtain a better initial solution, and three neighborhood operations and a variable neighborhood descent algorithm are proposed to increase the probability of finding the optimal solution.
Detailed Description
The present invention will be further described with reference to the following examples.
Step 1: two-stage assembly, scheduling and modeling;
The two-stage assembly scheduling problem comprises (m+1) machines and n tasks to be processed; each task consists of (m+1) subtasks, the first m subtasks are processed in parallel on the m machines of the first stage, and the last subtask is processed on the assembly machine of the second stage after the first m subtasks are finished; all tasks are available for processing at the start; preemption is not allowed; the buffer space between the two stages is infinite; a machine can process at most one task at a time;
Let N_g = {1, 2, ..., g} denote the set of positive integers from 1 to g, where g is a positive integer; J = {J_i | i ∈ N_n} denotes the set of tasks; M = {M_j | j ∈ N_(m+1)} denotes the set of machines; p_ij denotes the processing time of J_i ∈ J on M_j ∈ M; s_ij denotes the preparation time of J_i ∈ J on M_j ∈ M; d_i denotes the due date of J_i; σ_0 denotes the initial schedule in which no task is scheduled; σ denotes a partial schedule in which |σ| tasks have been scheduled, where |σ| denotes the number of tasks in σ; σ_u denotes the set of unscheduled tasks of σ; σ' denotes a permutation of the tasks in σ_u; σ[i] denotes the index of the i-th task in σ; d_σ[i] denotes the due date of the i-th task in σ; p_σ[i]j denotes the processing time of the i-th task in σ on M_j ∈ M; s_σ[i]j denotes the preparation time of the i-th task in σ on M_j ∈ M; σ_[k] = {J_σ[1], J_σ[2], ..., J_σ[k]}; f(σ) denotes the completion time of σ; f_j(σ) denotes the completion time of σ on M_j ∈ M; Δ(σ, i) denotes the idle time added on M_(m+1) by placing task J_σ[i] on M_(m+1); TT(σ) denotes the total delay time of σ; σ* denotes the arrangement of the tasks in σ_u that minimizes the total delay time;
f_j(σ) (j ∈ N_(m+1)), f(σ) and Δ(σ, i) (i ∈ N_|σ|) are computed as follows:
f_j(σ) = Σ_(i=1..|σ|) (s_σ[i]j + p_σ[i]j), for j ∈ N_m
f_(m+1)(σ_[i]) = f_(m+1)(σ_[i-1]) + Δ(σ, i) + s_σ[i](m+1) + p_σ[i](m+1), where f_(m+1)(σ_[0]) = 0
f(σ) = max{f_j(σ) | j ∈ N_(m+1)} = f_(m+1)(σ)
Δ(σ, i) = max{0, max_(j∈N_m) f_j(σ_[i]) - f_(m+1)(σ_[i-1]) - s_σ[i](m+1)}
The delay time of task J_σ[j] is computed as follows:
TT(σ, j) = max{0, f_(m+1)(σ_[j]) - d_σ[j]}
Thus TT(σ) is computed as follows:
TT(σ) = Σ_(j=1..|σ|) TT(σ, j)
step 2: three greedy strategies
Tasks with large delays are scheduled first so that their delay time does not keep increasing; in addition, moving a task whose due date is much later than its completion time backward does not increase its delay. Based on this observation, the following greedy strategies are proposed:
Greedy strategy 1:
Input: a partial schedule σ;
Output: a permutation σ_u1 of the tasks in σ_u;
Step 2.1: let σ_u1 = ∅ (the empty sequence);
Step 2.2: if σ_u ≠ ∅, enter the loop of steps 2.2.1-2.2.3; otherwise end;
Step 2.2.1: for each J_i ∈ σ_u, let I(σ, J_i) = max{max_(j∈N_m){f_j(σ) + s_ij + p_ij}, f_(m+1)(σ) + s_i(m+1)} + p_i(m+1) - d_i;
Step 2.2.2: find J_k ∈ σ_u such that I(σ, J_k) = max{I(σ, J_i) | J_i ∈ σ_u};
Step 2.2.3: append J_k to the tail of σ_u1, append J_k to the tail of σ, and delete J_k from σ_u;
Step 2.3: output σ_u1.
In order to better arrange the unscheduled tasks, two further indicators are defined for each unscheduled task J_i ∈ σ_u:
(1) θ_1(σ, J_i);
(2) θ_2(σ, J_i);
Greedy strategy 2 sorts σ_u in ascending order of indicator (1) θ_1(σ, J_i) to obtain the sequence σ_u2; greedy strategy 3 sorts σ_u in ascending order of indicator (2) θ_2(σ, J_i) to obtain the sequence σ_u3.
Step 3: variable neighborhood search algorithm
Variable neighborhood search (VNS) is a meta-heuristic that systematically explores different neighborhood structures to escape local optima by invoking a variable neighborhood descent (VND) method; if the current best schedule is still not improved after all neighborhood structures have been tried once, a perturbation strategy is executed; VND and the perturbation strategy are executed alternately until the termination condition is met; the steps of VND, the perturbation strategy and VNS are as follows:
Step 3.1 VND
First, the three neighborhood structures used in VND are given, where σ denotes a schedule and x, y ∈ N_n denote two different positions in σ;
neighborhood 1: swap (σ, x, y), will task σ [ x ]]Move to position y, task σ [ y ]]Move to position x, order
Figure RE-GDA0003230470290000087
Figure RE-GDA0003230470290000088
Represents a set of neighborhoods 1;
neighborhood 2: insert (σ, x, y), if x<y, will task σ [ x]Move to position y, task σ [ x +1 ]],σ[x+2],..., σ[y]Move to position x, x +1,.., y-1, respectively; if x>y, will task σ [ x]Move to position y, task σ [ x-1 ]],σ[x -2],...,σ[y]Move to position x, x-1, ·, y +1, respectively; order to
Figure RE-GDA0003230470290000091
Representing a neighborhood 2 set;
neighborhood 3: inverse (σ, x, y), if x<y, will task σ [ x]Move to position y, task σ [ y ]]Move to position x, let σ1Represents the schedule obtained after the move and executes Inverse (σ)1X +1, y-1); if x>y, return σ, order
Figure RE-GDA0003230470290000092
Figure RE-GDA0003230470290000093
Representing a set of neighborhoods 3.
VND first explores N_1(σ); if the current best solution is not updated after exploring N_1(σ), VND begins to explore N_2(σ); if the current best solution is not updated after exploring N_2(σ), VND begins to explore N_3(σ); once the current best solution is updated, VND starts exploring again from N_1(σ); if the current best solution is still not updated after exploring N_3(σ), VND terminates; the best solution is the solution with the smallest total delay time found so far;
Step 3.2 Perturbation strategy (Sharking)
The role of the perturbation strategy is to jump out of local optima. The strength of a perturbation strategy is usually kept constant (Roshanaei V., B. Naderi, F. Jolai, and M. Khalili, "A variable neighborhood search for job shop scheduling with set-up times to minimize makespan," Future Generation Computer Systems, 2009, 25(6): 654-661). Since the current best solution becomes better as the search proceeds, jumping out of a local optimum becomes harder. Therefore, as the search progresses, the perturbation should become stronger and stronger.
Let q denote the number of consecutive times the current best solution has not been updated, let q_max denote the maximum allowed number of such times, and let σ denote the schedule to be perturbed. If q ≤ q_max/3, two positions x and y are repeatedly selected and Swap(σ, x, y) is performed, q times in total; if q_max/3 < q ≤ 2*q_max/3, two positions x and y are repeatedly selected and Insert(σ, x, y) is performed, q times in total; if 2*q_max/3 < q ≤ q_max, two positions x and y are repeatedly selected and Inverse(σ, x, y) is performed, q times in total;
Step 3.3 VNS
Let the outputs of the three greedy strategies with σ_0 as input be σ_u1, σ_u2 and σ_u3, respectively, and let Ω(σ_0) denote whichever of σ_u1, σ_u2 and σ_u3 has the smallest total delay time, i.e. TT(Ω(σ_0)) = min{TT(σ_u1), TT(σ_u2), TT(σ_u3)}. VNS alternately applies VND and the perturbation strategy to Ω(σ_0). Let q denote the number of consecutive times the current best solution has not been updated and q_max the maximum allowed number of such times. If the current best solution is updated after VND is executed, q is set to 1; otherwise q is increased by 1. When q reaches q_max, VNS restarts the search from q = 1. Once the CPU time used by VNS exceeds the maximum CPU time T_m, the execution of VNS terminates.
The experimental results are as follows:
VNS is implemented in C++, compiled with MSBuild 4.0, and runs on a personal computer with a 3.4-GHz CPU and 16 GB of memory. The operating system of the computer is Windows 7 Professional. Allahverdi et al. verified in [2] that the N-PSA algorithm is superior to all existing scheduling algorithms for minimizing the total delay time of the two-stage assembly scheduling problem. Therefore, to verify the effectiveness of VNS, it is only necessary to compare it with N-PSA.
To generate random two-stage assembly scheduling problems, the number of tasks n, the number of machines m, the processing times, the preparation times, and the due dates of the tasks must be specified. The numbers of tasks considered by Allahverdi et al. in [2] are 50, 60, 70 and 80. The invention considers a wider range of problem sizes, with n = 30, 50, 80, 120. The other parameter settings are the same as in document [2], i.e. m = 5, 10 and 12; p_ij (i ∈ N_n, j ∈ N_(m+1)) is drawn uniformly at random from [1, 100]; s_ij (i ∈ N_n, j ∈ N_(m+1)) is drawn uniformly at random from [1, 100k], where k = 0, 0.5, 1; d_i (i ∈ N_n) is drawn uniformly at random from [LC_max(1-T-R/2), LC_max(1-T+R/2)], where T = 0.2, 0.4, 0.6 and R = 0.2, 0.6, 1.
A total of 108 groups of two-stage assembly scheduling problems with distinct combinations of m, n, T and R are generated, with 10 two-stage assembly scheduling problems in each group. The 108 groups form one data set; with k = 0, 0.5 and 1 this yields 3 data sets in total. These two-stage assembly scheduling problems can be downloaded from GitHub at https://github.com/JianchaoLuo/Results-for-TASPs.
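A C++ sketch of the instance generator described above is shown below; the value LC_max is passed in by the caller because its defining formula appears only as an image in the original, and setting the preparation times to zero when k = 0 is an assumption of this example.

#include <vector>
#include <random>

struct Instance { int n, m; std::vector<std::vector<double>> p, s; std::vector<double> d; };

// n tasks, m first-stage machines, setup factor k, due-date parameters T and R.
Instance randomInstance(int n, int m, double k, double T, double R, double LCmax, std::mt19937& rng) {
    std::uniform_real_distribution<double> proc(1.0, 100.0);
    std::uniform_real_distribution<double> due(LCmax * (1.0 - T - R / 2.0), LCmax * (1.0 - T + R / 2.0));
    Instance in{n, m, {}, {}, {}};
    in.p.assign(n, std::vector<double>(m + 1));
    in.s.assign(n, std::vector<double>(m + 1));
    in.d.assign(n, 0.0);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= m; ++j) {
            in.p[i][j] = proc(rng);
            in.s[i][j] = (k > 0.0)
                ? std::uniform_real_distribution<double>(1.0, 100.0 * k)(rng)
                : 0.0;                           // k = 0: no preparation times (assumed)
        }
        in.d[i] = due(rng);
    }
    return in;
}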
The parameter settings of N-PSA are the same as in document [2]. Two VNS configurations with maximum run times of 10 s and 30 s were tested, denoted VNS10 and VNS30, respectively.
The numbers of best schedules obtained by N-PSA, VNS10 and VNS30 on the three test data sets are shown in Tables 1-3. To save space, only the numbers of best schedules are listed; the detailed schedules and total delay times can be downloaded from https://github.com/JianchaoLuo/Results-for-TASPs.
As can be seen from Table 1, VNS30 obtained the best solution for 992 of the 1080 random two-stage assembly scheduling problems in the k = 0 data set, while VNS10 and N-PSA obtained the best schedule for 764 and 413 problems, respectively. As can be seen from Table 2, VNS30 obtained the best solution for 979 of the 1080 problems in the k = 0.5 data set, while VNS10 and N-PSA obtained the best schedule for 791 and 421 problems, respectively. As can be seen from Table 3, VNS30 obtained the best solution for 997 of the 1080 problems in the k = 1 data set, while VNS10 and N-PSA obtained the best schedule for 802 and 440 problems, respectively. Clearly, VNS30 performs best, followed by VNS10. Furthermore, the performance of VNS30 is very stable over two-stage assembly scheduling problems with different parameters. Finally, for the problems with n = 80 and 120 and T = 0.6, VNS30 and VNS10 obtain all the best schedules, which indicates that VNS performs particularly well on large-scale problems with tight due dates.
Table 1 Number of optimal solutions obtained by N-PSA, VNS10 and VNS30 for the two-stage assembly scheduling problems with k = 0.
Table 2 Number of optimal solutions obtained by N-PSA, VNS10 and VNS30 for the two-stage assembly scheduling problems with k = 0.5.
Table 3 Number of optimal solutions obtained by N-PSA, VNS10 and VNS30 for the two-stage assembly scheduling problems with k = 1.
To further verify the performance of VNS30 and N-PSA on small-scale problems, 120 groups of two-stage assembly scheduling problems were randomly generated with parameters T = 0.4 and 0.6, R = 0.6 and 1.0, n = 6, 7, 8, 9 and 10, k = 0.5 and 1, and m = 5, 10 and 12. There are 10 two-stage assembly scheduling problems in each group. Note that the parameter settings of these problems are the same as in document [2]. The optimal schedules of the small-scale problems are obtained by brute force. The experimental results are shown in Tables 4 and 5, where only the average relative error is listed to save space. The relative error is defined as (total delay time of the schedule obtained by an algorithm - total delay time of the optimal schedule)/(total delay time of the optimal schedule + 0.1); 0.1 is added to the denominator to avoid division by zero. The detailed problems, the resulting schedules and their total delay times can be downloaded from https://github.com/JianchaoLuo/Results-for-TASPs.
As can be seen from Tables 4 and 5, N-PSA obtains the optimal solution for 73 of the 120 groups of two-stage assembly scheduling problems, while VNS30 obtains the optimal solution for 115 of the 120 groups. The average relative error of the schedules obtained by N-PSA is in the range 0-0.3802, and that of the schedules obtained by VNS30 is in the range 0-0.00079. Clearly, VNS30 still performs better than N-PSA on small-scale problems.
Table 4 Average relative error of the schedules obtained by N-PSA for the small-scale problems.
Table 5 Average relative error of the schedules obtained by VNS30 for the small-scale problems.
For most two-stage assembly scheduling problems, the run time of N-PSA is less than 1 s, which makes it the least time-consuming of the existing algorithms. To compare VNS and N-PSA under a very tight allowable CPU time, a VNS with a maximum CPU time of 1 s, denoted VNS1, was further tested on the three data sets mentioned above. A simple comparison is shown in Table 6. The detailed two-stage assembly scheduling problems, the resulting scheduling sequences and their total delay times can be downloaded from https://github.com/JianchaoLuo/Results-for-TASPs. As can be seen from Table 6, VNS1 performs much better than N-PSA.
Table 6 Comparison of VNS1 and N-PSA (number of problems for which VNS1 is better than, equal to, or worse than N-PSA)
          Better than   Equal to   Worse than
k=0       612           415        53
k=0.5     612           431        37
k=1.0     589           439        52
In summary, the present invention has the following advantages:
(1) a new perturbation strategy is proposed that dynamically adjusts the perturbation strength and the perturbation method, effectively avoiding getting trapped in local optima;
(2) a new variable neighborhood search framework is proposed, so that the algorithm can quickly update the current best solution and its performance is stable;
(3) three greedy strategies, three neighborhood structures and a variable neighborhood descent algorithm are proposed to increase the probability of finding the optimal solution.

Claims (4)

1. A variable neighborhood search method for minimizing the total delay time of a two-stage assembly scheduling problem is characterized by comprising the following steps:
Step 1: two-stage assembly scheduling modeling;
The two-stage assembly scheduling problem comprises (m+1) machines and n tasks to be processed; each task consists of (m+1) subtasks, the first m subtasks are processed in parallel on the m machines of the first stage, and the last subtask is processed on the assembly machine of the second stage after the first m subtasks are finished; all tasks are available for processing at the start; preemption is not allowed; the buffer space between the two stages is infinite; a machine can process at most one task at a time;
Let N_g = {1, 2, ..., g} denote the set of positive integers from 1 to g, where g is a positive integer; J = {J_i | i ∈ N_n} denotes the set of tasks; M = {M_j | j ∈ N_(m+1)} denotes the set of machines; p_ij denotes the processing time of J_i ∈ J on M_j ∈ M; s_ij denotes the preparation time of J_i ∈ J on M_j ∈ M; d_i denotes the due date of J_i; σ_0 denotes the initial schedule in which no task is scheduled; σ denotes a partial schedule in which |σ| tasks have been scheduled, where |σ| denotes the number of tasks in σ; σ_u denotes the set of unscheduled tasks of σ; σ' denotes a permutation of the tasks in σ_u; σ[i] denotes the index of the i-th task in σ; d_σ[i] denotes the due date of the i-th task in σ; p_σ[i]j denotes the processing time of the i-th task in σ on M_j ∈ M; s_σ[i]j denotes the preparation time of the i-th task in σ on M_j ∈ M; σ_[k] = {J_σ[1], J_σ[2], ..., J_σ[k]}; f(σ) denotes the completion time of σ; f_j(σ) denotes the completion time of σ on M_j ∈ M; Δ(σ, i) denotes the idle time added on M_(m+1) by placing task J_σ[i] on M_(m+1); TT(σ) denotes the total delay time of σ; σ* denotes the arrangement of the tasks in σ_u that minimizes the total delay time;
f_j(σ) (j ∈ N_(m+1)), f(σ) and Δ(σ, i) (i ∈ N_|σ|) are computed as follows:
f_j(σ) = Σ_(i=1..|σ|) (s_σ[i]j + p_σ[i]j), for j ∈ N_m
f_(m+1)(σ_[i]) = f_(m+1)(σ_[i-1]) + Δ(σ, i) + s_σ[i](m+1) + p_σ[i](m+1), where f_(m+1)(σ_[0]) = 0
f(σ) = max{f_j(σ) | j ∈ N_(m+1)} = f_(m+1)(σ)
Δ(σ, i) = max{0, max_(j∈N_m) f_j(σ_[i]) - f_(m+1)(σ_[i-1]) - s_σ[i](m+1)}
The delay time of task J_σ[j] is computed as follows:
TT(σ, j) = max{0, f_(m+1)(σ_[j]) - d_σ[j]}
Thus TT(σ) is computed as follows:
TT(σ) = Σ_(j=1..|σ|) TT(σ, j)
step 2: three greedy strategies
Tasks with large delays are scheduled first so that their delay time does not keep increasing; in addition, moving a task whose due date is much later than its completion time backward does not increase its delay. Based on this observation, the following greedy strategies are proposed:
Greedy strategy 1:
Input: a partial schedule σ;
Output: a permutation σ_u1 of the tasks in σ_u;
Step 2.1: let σ_u1 = ∅ (the empty sequence);
Step 2.2: if σ_u ≠ ∅, enter the loop of steps 2.2.1-2.2.3; otherwise end;
Step 2.2.1: for each J_i ∈ σ_u, let I(σ, J_i) = max{max_(j∈N_m){f_j(σ) + s_ij + p_ij}, f_(m+1)(σ) + s_i(m+1)} + p_i(m+1) - d_i;
Step 2.2.2: find J_k ∈ σ_u such that I(σ, J_k) = max{I(σ, J_i) | J_i ∈ σ_u};
Step 2.2.3: append J_k to the tail of σ_u1, append J_k to the tail of σ, and delete J_k from σ_u;
Step 2.3: output σ_u1.
In order to better arrange the unscheduled tasks, two further indicators are defined for each unscheduled task J_i ∈ σ_u:
(1) θ_1(σ, J_i);
(2) θ_2(σ, J_i);
Greedy strategy 2 sorts σ_u in ascending order of indicator (1) θ_1(σ, J_i) to obtain the sequence σ_u2; greedy strategy 3 sorts σ_u in ascending order of indicator (2) θ_2(σ, J_i) to obtain the sequence σ_u3.
Step 3: variable neighborhood search algorithm
Variable neighborhood search (VNS) is a meta-heuristic that systematically explores different neighborhood structures to escape local optima by invoking a variable neighborhood descent (VND) method; if the current best schedule is still not improved after all neighborhood structures have been tried once, a perturbation strategy is executed; VND and the perturbation strategy are executed alternately until the termination condition is met; the steps of VND, the perturbation strategy and VNS are as follows:
Step 3.1 VND
First, the three neighborhood structures used in VND are given, where σ denotes a schedule and x, y ∈ N_n denote two different positions in σ;
neighborhood 1: swap (σ, x, y), will task σ [ x ]]Move to position y, task σ [ y ]]Move to position x, order
Figure FDA00030744830600000310
Figure FDA00030744830600000311
Represents a set of neighborhoods 1;
neighborhood 2: insert (σ, x, y), if x<y, will task σ [ x]Move to position y, task σ [ x +1 ]],σ[x+2],...,σ[y]Move to position x, x +1,.., y-1, respectively; if x>y, will task σ [ x]Move to position y, task σ [ x-1 ]],σ[x-2],...,σ[y]Move to position x, x-1, ·, y +1, respectively; order to
Figure FDA00030744830600000312
Representing a neighborhood 2 set;
neighborhood 3: inverse (σ, x, y), if x<y, will task σ [ x]Move to position y, task σ [ y ]]Move to position x, let σ1Represents the schedule obtained after the move and executes Inverse (σ)1X +1, y-1); if x>y, return σ, order
Figure FDA0003074483060000031
Figure FDA0003074483060000032
Representing a set of neighborhoods 3.
VND first explores N_1(σ); if the current best solution is not updated after exploring N_1(σ), VND begins to explore N_2(σ); if the current best solution is not updated after exploring N_2(σ), VND begins to explore N_3(σ); once the current best solution is updated, VND starts exploring again from N_1(σ); if the current best solution is still not updated after exploring N_3(σ), VND terminates; the best solution is the solution with the smallest total delay time found so far;
Step 3.2 Perturbation strategy (Sharking)
The role of the perturbation strategy is to jump out of local optima. The strength of a perturbation strategy is usually kept constant (Roshanaei V., B. Naderi, F. Jolai, and M. Khalili, "A variable neighborhood search for job shop scheduling with set-up times to minimize makespan," Future Generation Computer Systems, 2009, 25(6): 654-661). Since the current best solution becomes better as the search proceeds, jumping out of a local optimum becomes harder. Therefore, as the search progresses, the perturbation should become stronger and stronger.
Let q denote the number of consecutive times the current best solution has not been updated, let q_max denote the maximum allowed number of such times, and let σ denote the schedule to be perturbed. If q ≤ q_max/3, two positions x and y are repeatedly selected and Swap(σ, x, y) is performed, q times in total; if q_max/3 < q ≤ 2*q_max/3, two positions x and y are repeatedly selected and Insert(σ, x, y) is performed, q times in total; if 2*q_max/3 < q ≤ q_max, two positions x and y are repeatedly selected and Inverse(σ, x, y) is performed, q times in total;
Step 3.3 VNS
Let the outputs of the three greedy strategies with σ_0 as input be σ_u1, σ_u2 and σ_u3, respectively, and let Ω(σ_0) denote whichever of σ_u1, σ_u2 and σ_u3 has the smallest total delay time, i.e. TT(Ω(σ_0)) = min{TT(σ_u1), TT(σ_u2), TT(σ_u3)}. VNS alternately applies VND and the perturbation strategy to Ω(σ_0). Let q denote the number of consecutive times the current best solution has not been updated and q_max the maximum allowed number of such times. If the current best solution is updated after VND is executed, q is set to 1; otherwise q is increased by 1. When q reaches q_max, VNS restarts the search from q = 1. Once the CPU time used by VNS exceeds the maximum CPU time T_m, the execution of VNS terminates.
2. The variable neighborhood search method for minimizing the total delay time of a two-stage assembly scheduling problem according to claim 1, wherein:
The specific steps of VND(σ) are as follows:
Input: a schedule σ;
Output: the current best schedule σ_1;
Step 3.1.1: let σ_1 = σ and t = 1;
Step 3.1.2: if t ≤ 3, execute step 3.1.2.1 in a loop; if t > 3, go to step 3.1.3;
Step 3.1.2.1: if there exists σ_2 ∈ N_t(σ_1) such that TT(σ_2) < TT(σ_1), let σ_1 = σ_2 and t = 1; otherwise increase t by 1;
Step 3.1.3: output σ_1.
3. The variable neighborhood search method for minimizing the total delay time of a two-stage assembly scheduling problem according to claim 1, wherein:
The specific steps of the perturbation strategy Sharking(q, σ, q_max) are as follows:
Input: the current iteration number q, a schedule σ, and q_max;
Output: the perturbed schedule σ_1;
Step 3.2.1: let σ_1 = σ and i = 1;
Step 3.2.2: if i ≤ q, execute steps 3.2.2.1-3.2.2.3 in a loop; otherwise go to step 3.2.3;
Step 3.2.2.1: randomly select two positions x and y from N_n;
Step 3.2.2.2: if q ≤ q_max/3, let σ_1 = Swap(σ_1, x, y); if q_max/3 < q ≤ 2*q_max/3, let σ_1 = Insert(σ_1, x, y); if 2*q_max/3 < q ≤ q_max, let σ_1 = Inverse(σ_1, x, y);
Step 3.2.2.3: increase i by 1;
Step 3.2.3: output σ_1.
4. The variable neighborhood search method for minimizing the total delay time of a two-stage assembly scheduling problem according to claim 1, wherein:
The specific steps of VNS are as follows:
Input: Ω(σ_0) and T_m;
Output: an optimized schedule σ_1;
Step 3.3.1: let σ_1 = VND(Ω(σ_0)), q = 1, flag = true;
Step 3.3.2: if flag = true, execute steps 3.3.2.1 and 3.3.2.2 in a loop; otherwise go to step 3.3.3;
Step 3.3.2.1: let q = 1;
Step 3.3.2.2: if q ≤ q_max, execute steps 3.3.2.2.1-3.3.2.2.4 in a loop; otherwise return to step 3.3.2;
Step 3.3.2.2.1: let σ_2 = Sharking(q, σ_1, q_max);
Step 3.3.2.2.2: let σ_3 = VND(σ_2);
Step 3.3.2.2.3: if TT(σ_3) ≤ TT(σ_1), let σ_1 = σ_3 and q = 1; otherwise increase q by 1;
Step 3.3.2.2.4: if the CPU time exceeds T_m, go to step 3.3.3; if the CPU time is less than or equal to T_m, go to step 3.3.2.2;
Step 3.3.3: output σ_1.
CN202110548423.7A 2021-05-19 2021-05-19 Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem Active CN113536202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110548423.7A CN113536202B (en) 2021-05-19 2021-05-19 Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110548423.7A CN113536202B (en) 2021-05-19 2021-05-19 Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem

Publications (2)

Publication Number Publication Date
CN113536202A true CN113536202A (en) 2021-10-22
CN113536202B CN113536202B (en) 2023-05-23

Family

ID=78094699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110548423.7A Active CN113536202B (en) 2021-05-19 2021-05-19 Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem

Country Status (1)

Country Link
CN (1) CN113536202B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080264159A1 (en) * 2004-09-22 2008-10-30 Volvo Lastvagnar Ab Method of Verifying the Strength of a Vehicle in Combination with Auxiliary Equipment Fitted
CN106611229A (en) * 2015-12-04 2017-05-03 四川用联信息技术有限公司 Iterated local search algorithm by employing improved perturbation mode for solving job-shop scheduling problem
CN110554429A (en) * 2019-07-23 2019-12-10 中国石油化工股份有限公司 Earthquake fault identification method based on variable neighborhood sliding window machine learning
US20200359227A1 (en) * 2019-05-09 2020-11-12 King Fahd University Of Petroleum And Minerals Search-based heuristic for fixed spectrum frequency assignment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080264159A1 (en) * 2004-09-22 2008-10-30 Volvo Lastvagnar Ab Method of Verifying the Strength of a Vehicle in Combination with Auxiliary Equipment Fitted
CN106611229A (en) * 2015-12-04 2017-05-03 四川用联信息技术有限公司 Iterated local search algorithm by employing improved perturbation mode for solving job-shop scheduling problem
US20200359227A1 (en) * 2019-05-09 2020-11-12 King Fahd University Of Petroleum And Minerals Search-based heuristic for fixed spectrum frequency assignment
CN110554429A (en) * 2019-07-23 2019-12-10 中国石油化工股份有限公司 Earthquake fault identification method based on variable neighborhood sliding window machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Weiwei; Ma Xueli; Liu Xiaobing: "Improved variable neighborhood search algorithm for the flexible job shop scheduling problem", Computer Applications and Software

Also Published As

Publication number Publication date
CN113536202B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
Byong-Hun et al. Single facility multi-class job scheduling
CN105117286B (en) The dispatching method of task and streamlined perform method in MapReduce
US8881158B2 (en) Schedule decision device, parallel execution device, schedule decision method, and program
Sanmarti et al. Combinatorial framework for effective scheduling of multipurpose batch plants
CN111079921A (en) Efficient neural network training and scheduling method based on heterogeneous distributed system
CN105912387A (en) Method and device for dispatching data processing operation
CN110458326B (en) Mixed group intelligent optimization method for distributed blocking type pipeline scheduling
CN106611270A (en) Hybrid heuristic shifting bottleneck procedure for solving parallel-machine job-shop scheduling
Vancheeswaran et al. Two-stage heuristic procedure for scheduling job shops
Orr et al. Integrating task duplication in optimal task scheduling with communication delays
Chamnanlor et al. Hybrid genetic algorithms for solving reentrant flow-shop scheduling with time windows
CN113536202A (en) Variable neighborhood search method for minimizing total delay time of two-stage assembly scheduling problem
CN110088730B (en) Task processing method, device, medium and equipment
CN116700173A (en) Dynamic scheduling method of production line based on graph representation learning
CN115619200B (en) Scheduling and multi-functional scheduling combination optimization method and device for split-type serum
CN113505910B (en) Mixed workshop production scheduling method containing multi-path limited continuous output inventory
CN113191662B (en) Intelligent cooperative scheduling method and system considering workpiece domination and energy consumption minimization
CN112766811B (en) Comprehensive scheduling method for dynamically adjusting leaf node process
CN109635328A (en) Integrated circuit layout method and distributed design approach
Mao et al. An Adaptive Population-based Iterative Greedy Algorithm for Optimizing the Maximum Completion Time of Hybrid Flow Shop
Crauwels et al. Branch and bound algorithms for single machine scheduling with batching to minimize the number of late jobs
CN113946424A (en) Software and hardware division and task scheduling model based on graph convolution network and method thereof
Balázs et al. Hybrid bacterial iterated greedy heuristics for the permutation flow shop problem
CN113010319A (en) Dynamic workflow scheduling optimization method based on hybrid heuristic rule and genetic algorithm
Huang et al. The application of improved hybrid particle swarm optimization algorithm in job shop scheduling problem

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant