
Performance Study of Dispatching Rules


Motivation

The purpose of this study is to provide a benchmark for comparing different approaches to the (automated) design of scheduling heuristics, and of dispatching rules in particular. By making the descriptions and Java classes of the test problems and dispatching rules publicly available, we want to encourage others to reproduce our results and to make different methods for designing dispatching rules easier to compare.

Benchmark Problems

As benchmark problems, we adopt the dynamic job shop and flow shop problems, defined and used in

C. Rajendran and O. Holthaus
A comparative study of dispatching rules in dynamic flowshops and jobshops
European Journal of Operational Research 116(1), 1999, pp. 156–170

The dynamic nature of these scheduling problems poses a particular challenge for solution methods, which must be able to adapt a solution quickly in response to unexpected changes such as newly arriving jobs. This ability is one of the main advantages of dispatching rules, which makes these problems particularly suitable for their evaluation. Moreover, job shop and flow shop problems are among the most widely studied scheduling problems in the literature, and thus serve well as benchmarks in our view. A more detailed description of the test problems can be found here.
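To make the dynamic setting concrete: jobs are not known in advance but arrive over time while the shop is running. The sketch below illustrates one common way to generate such a job stream, using exponentially distributed inter-arrival times and uniformly distributed processing times. All parameter values here are placeholders, not the actual benchmark settings, which are given in the detailed problem description.

```java
import java.util.Random;

/**
 * Illustrative generator for a dynamic job stream. Parameter values are
 * placeholders only, not the actual benchmark configuration.
 */
public class DynamicJobStream {

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double meanInterArrival = 10.0; // placeholder; in practice chosen to hit a target utilisation
        int numMachines = 10;           // placeholder machine count

        double t = 0.0;
        for (int i = 0; i < 5; i++) {
            // exponential inter-arrival times model a Poisson arrival process
            t += -meanInterArrival * Math.log(1.0 - rnd.nextDouble());

            int numOps = 1 + rnd.nextInt(numMachines); // number of operations of this job
            System.out.printf("job %d arrives at t=%.2f with %d operations%n", i, t, numOps);
            for (int op = 0; op < numOps; op++) {
                int procTime = 1 + rnd.nextInt(49); // uniform processing time (placeholder range)
                System.out.printf("  op %d: %d time units%n", op, procTime);
            }
        }
    }
}
```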

Tested Dispatching Rules

The set of dispatching rules tested in this study includes rules generated by various automated methods (also called hyper-heuristics), as well as several manually developed rules that have been specifically designed for the benchmark problems. The hyper-heuristics are based on different implementations of Genetic Programming (GP), and a Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A more detailed description of the dispatching rules and hyper-heuristics can be found here.
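Conceptually, each of these rules, whether hand-crafted or generated by a hyper-heuristic, reduces to a priority function that is evaluated for every waiting job whenever a machine becomes free. The following is a minimal sketch in plain Java (not the jasima API; the job attributes are illustrative) of the SPT rule and the 2PT+WINQ+NPT rule that appear in the results table below:

```java
/**
 * Minimal sketch of dispatching rules as priority functions (plain Java,
 * not the jasima API; job attributes are illustrative). Lower priority
 * values are better: the job with the smallest value is processed next.
 */
interface DispatchingRule {
    double priority(Job job);
}

class Job {
    double procTime;        // processing time of the imminent operation (PT)
    double nextProcTime;    // processing time of the job's next operation (NPT)
    double workInNextQueue; // total work currently queued at the next machine (WINQ)
}

class Spt implements DispatchingRule {
    public double priority(Job j) {
        return j.procTime; // SPT: shortest processing time first
    }
}

class TwoPtWinqNpt implements DispatchingRule {
    public double priority(Job j) {
        // 2PT+WINQ+NPT (Rajendran and Holthaus, 1999)
        return 2.0 * j.procTime + j.workInNextQueue + j.nextProcTime;
    }
}
```

A machine then simply processes the queued job with the smallest priority value; ties can be broken by a secondary criterion such as earliest arrival.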

Results

The performance of the dispatching rules is assessed by how well they minimise the mean flow time of jobs across the different test problems. In addition to the absolute mean flow time, by which the dispatching rules are ranked, we also measure performance using a normalised index that expresses a rule's mean flow time relative to that of the SPT rule, averaged over the test problems (values below 1.0 indicate an improvement over SPT).
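As a sketch of how such an index can be computed (assuming, consistent with SPT scoring exactly 1.0000 in the table below, that it is the rule's mean flow time divided by SPT's mean flow time on the same problem, averaged over all test problems):

```java
/**
 * Sketch of the normalised performance index: the average, over all test
 * problems, of a rule's mean flow time relative to SPT's mean flow time
 * on the same problem. This definition is an assumption consistent with
 * SPT itself scoring 1.0000.
 */
public class PerformanceIndex {

    static double performanceIndex(double[] ruleFlowTimes, double[] sptFlowTimes) {
        double sum = 0.0;
        for (int p = 0; p < ruleFlowTimes.length; p++)
            sum += ruleFlowTimes[p] / sptFlowTimes[p]; // relative performance on problem p
        return sum / ruleFlowTimes.length; // below 1.0 means better than SPT
    }

    public static void main(String[] args) {
        // made-up numbers for two test problems, for illustration only
        double[] rule = { 650.0, 700.0 };
        double[] spt  = { 740.0, 780.0 };
        System.out.printf("index = %.4f%n", performanceIndex(rule, spt));
    }
}
```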

The currently best dispatching rules are:

| Rank | Dispatching Rule | Hyper-Heuristic | Mean Flow Time | Performance Index |
|------|------------------|-----------------|----------------|-------------------|
| 1. | EC2014-TREE_BASE_NORM_ND-Rule | GP_NORM_ND | 665.74 | 0.9009 |
| 2. | EC2014-TREE_EXT_NORM_ND-Rule | GP_NORM_ND | 669.18 | 0.9069 |
| 3. | GECCO2010-genSeed-2reps | GP | 669.83 | 0.9057 |
| 4. | GECCO2010-genSeed-10reps | GP | 671.30 | 0.9050 |
| 5. | IFT−UIT+NPT | - | 677.14 | 0.9091 |
| 6. | EC2014-LIN_BASE-Rule | CMA-ES | 702.27 | 0.9394 |
| 7. | 2PT+WINQ+NPT | - | 712.88 | 0.9491 |
| 8. | ASP2013-Rule #6 | MOGP | 722.46 | 0.9611 |
| 9. | PT+WINQ | - | 723.47 | 0.9587 |
| 10. | SPT | - | 761.51 | 1.0000 |
| 11. | Tie-Breaker | - | 1014.62 | 1.2807 |

Detailed results, subdivided by test problem, can be found here.

Source Code

The source code needed to replicate these experiments can be downloaded from performanceStudy/src/ (you will also need to install jasima®; see here for instructions on how to get started).