|Session 1 - Foundations - Session Chair: John Woodward|
|9:20-9:50 | Paper 1 - Sean Harris, Travis Bueter, Daniel R. Tauritz "A Comparison of Genetic Programming Variants for Hyper-Heuristics"|
|9:50-10:20 | Paper 2 - Eric O. Scott, Jeffrey K. Bassett "Learning Genetic Representations for Classes of Real-Valued Optimization Problems"|
|10:20-10:50 | Paper 3 - Matthew A. Martin, Daniel R. Tauritz "Hyper-Heuristics: A Study On Increasing Primitive-Space"|
|Session 2 - Applications - Session Chair: Daniel Tauritz|
|11:10-11:40 | Paper 4 - Gopinath Chennupati, R. Muhammad Atif Azad, Conor Ryan "Synthesis of Parallel Iterative Sorts with Multi-Core Grammatical Evolution"|
|11:40-12:10 | Paper 5 - Patricia Ryser-Welch, Julian F. Miller, Shahriar Asta "Generating Human-readable Algorithms for the Travelling Salesman Problem using Hyper-Heuristics"|
|12:10-12:40 | Invited Talk by Holger Hoos - Invited Talk Chair: Manuel López-Ibáñez|
|Speaker: Holger H. Hoos
Title: From Programs to Program Spaces: Leveraging Machine Learning and Optimisation for Automated Algorithm Design
Abstract: In recent years, there has been a significant increase in the use of automated algorithm design methods, notably automated algorithm configuration and selection techniques, across many areas within artificial intelligence, operations research and beyond. These methods are based on cutting-edge machine learning and optimisation techniques and, in many cases, have prompted advances in those areas. In this presentation, I will give a brief overview of these automated algorithm design methods and introduce Programming by Optimisation (PbO), a principled approach for developing high-performance software based on them. I will explain how PbO can fundamentally change the nature of software development and give examples of its successful application to a range of prominent problems, including propositional satisfiability, mixed integer programming and machine learning classification. I will also discuss how PbO can be used not only to obtain performance-optimised software for particular use cases, but also to automatically create effective parallel algorithms from sequential sources.
Short Bio: Holger H. Hoos is a Professor of Computer Science and a Faculty Associate at the Peter Wall Institute for Advanced Studies at the University of British Columbia (Canada). His main research interests span empirical algorithmics, artificial intelligence, bioinformatics and computer music. He is known for his work on the automated design of high-performance algorithms and on stochastic local search methods. Holger is a co-author of the book "Stochastic Local Search: Foundations and Applications", and his research has been published in numerous book chapters and journals, and at major conferences in artificial intelligence, operations research, molecular biology and computer music. Holger was elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 2015 and won two prestigious IJCAI/JAIR best paper prizes in 2009 and 2010. He is a past president of the Canadian Artificial Intelligence Association / Association pour l'intelligence artificielle au Canada (CAIAC) and Associate Editor of the Journal of Artificial Intelligence Research (JAIR). Currently, his group is helping UBC to produce better exam timetables, Actenum Inc. to increase production efficiency in the oil and gas industry, and IBM to improve their CPLEX optimisation software, which is used by 50% of the world's largest companies and thousands of universities. (For further information, see Holger's web page at http://www.cs.ubc.ca/~hoos.)
How can we automatically generate algorithms on demand? While this was one of the original aims of Machine Learning and Artificial Intelligence in the early 1950s, and more recently of Genetic Programming in the early 1990s, existing techniques have fallen short of this elusive goal. This workshop will outline a number of steps in the right direction on the path to achieving this goal. In particular, this workshop will focus on the burgeoning field of hyper-heuristics, which are meta-heuristics applied to a space of algorithms, i.e., any method of sampling a set of candidate algorithms. Genetic Programming has most famously been employed to this end, but random search and iterative hill-climbing have also been employed successfully to automatically design novel (components of) algorithms. This approach is in contrast to standard Genetic Programming, which attempts to build programs from scratch from a typically small set of atomic functions. Instead, we take already existing programs and allow evolution to improve on them. When the automatic design of algorithms is done using a Genetic Programming system (effectively in vitro), the methodology is typically referred to as Genetic Programming as a Hyper-heuristic. When it is done directly on source code (effectively in situ), the methodology is typically referred to as Genetic Improvement [5, 7].
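To make the idea of "sampling a set of candidate algorithms" concrete, the following Python sketch hill-climbs over a space of algorithms, each represented as a sequence of primitive heuristics. Everything here (the OneMax toy problem, the primitive names, the sequence representation) is a hypothetical minimal example, not the method of any particular paper in this workshop.

```python
import random

# Toy target problem: maximise the number of 1s in a bit string (OneMax).
def onemax(bits):
    return sum(bits)

# Low-level primitive heuristics: each maps a candidate solution to a neighbour.
def flip_random(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def flip_first_zero(bits):
    if 0 in bits:
        i = bits.index(0)
        return bits[:i] + [1] + bits[i + 1:]
    return bits

def shuffle(bits):
    b = bits[:]
    random.shuffle(b)
    return b

PRIMITIVES = [flip_random, flip_first_zero, shuffle]

# An "algorithm" is a fixed-length sequence of primitive indices, applied
# greedily (accepting only non-worsening moves) to a problem instance.
def run_algorithm(seq, bits, fitness=onemax):
    for idx in seq:
        cand = PRIMITIVES[idx](bits)
        if fitness(cand) >= fitness(bits):
            bits = cand
    return fitness(bits)

# An algorithm's quality is its total performance over a set of training instances.
def evaluate(seq, instances):
    return sum(run_algorithm(seq, inst[:]) for inst in instances)

# The hyper-heuristic: random-mutation hill-climbing in the space of algorithms.
def hyper_heuristic(instances, seq_len=20, iters=200):
    best = [random.randrange(len(PRIMITIVES)) for _ in range(seq_len)]
    best_score = evaluate(best, instances)
    for _ in range(iters):
        mut = best[:]
        mut[random.randrange(seq_len)] = random.randrange(len(PRIMITIVES))
        score = evaluate(mut, instances)
        if score >= best_score:
            best, best_score = mut, score
    return best, best_score

random.seed(0)
train = [[random.randint(0, 1) for _ in range(16)] for _ in range(5)]
algo, score = hyper_heuristic(train)
print(score)
```

Note the two levels of search: the inner `run_algorithm` solves a problem instance, while the outer `hyper_heuristic` searches the space of such algorithms; replacing the outer hill-climber with Genetic Programming yields the approach discussed above.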
Although most Evolutionary Computation techniques are designed to generate specific solutions to a given instance of a problem, some of these techniques can be exploited to solve more generic problems. For instance, while there are many examples of Evolutionary Algorithms for evolving classification models in data mining and machine learning, prior work has employed a hyper-heuristic using Genetic Programming to create a generic classification algorithm which will, in turn, generate a specific classification model for any given classification dataset, in any given application domain. In other words, the hyper-heuristic operates at a higher level of abstraction than most search methodologies as they are currently employed; i.e., we are searching the space of algorithms rather than directly searching the problem solution space, raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm. For instance, a hyper-heuristic can generate a generic heuristic for solving any instance of the travelling salesman problem, involving any number of cities and any set of distances between them, whilst a conventional evolutionary algorithm would just evolve a solution to one particular instance of the problem, with a predefined set of cities and associated distances.
Hyper-heuristics differ from standard Genetic Programming in several respects. In essence, they consist of one stage in which an algorithmic framework or template is defined along with a set of algorithmic primitives, and another stage in which a meta-heuristic such as Genetic Programming searches the program space defined by that template and those primitives. This approach can be identified with the Template Method pattern from the Design Patterns literature associated with object-oriented programming. In short, the human provides the overall architecture of the algorithm (e.g., loops and conditional branching) and the meta-heuristic fills in the details (for example, the bodies of the loops, or the conditions and actions of conditional branching statements). While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of the approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refinement of the primitives then allows gradual, directed enlargement of the search space until convergence.
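The template-and-primitives split described above can be sketched in a few lines of Python. The iterated-improvement template, the primitive names, and the quadratic toy objective below are all hypothetical, chosen only to show the shape of the approach: the human fixes the loop structure, and an outer search fills the "holes" from primitive sets.

```python
import random

# Human-designed template: an improvement loop with two holes, a perturbation
# operator and an acceptance criterion, to be filled in by the outer search.
def template(perturb, accept, start, fitness, steps=100):
    current = start
    for _ in range(steps):
        cand = perturb(current)
        if accept(fitness(cand), fitness(current)):
            current = cand
    return current

# Primitive sets for each hole (illustrative names, minimising a scalar).
def small_step(x): return x + random.uniform(-0.1, 0.1)
def big_step(x):   return x + random.uniform(-1.0, 1.0)
def greedy(new, old):   return new < old
def tolerant(new, old): return new < old + 0.01

PERTURBATIONS = [small_step, big_step]
ACCEPTANCES = [greedy, tolerant]

# Toy objective: minimise f(x) = (x - 3)^2 over a few random training starts.
f = lambda x: (x - 3.0) ** 2

def score(perturb, accept, starts):
    return sum(f(template(perturb, accept, s, f)) for s in starts)

random.seed(1)
starts = [random.uniform(-10, 10) for _ in range(5)]

# Outer search: with tiny primitive sets, simply try every way of filling
# the template; larger primitive sets call for Genetic Programming instead.
best = min(
    ((p, a) for p in PERTURBATIONS for a in ACCEPTANCES),
    key=lambda pa: score(pa[0], pa[1], starts),
)
print(best[0].__name__, best[1].__name__)
```

Because the template constrains the search to well-formed improvement loops, the outer search only ranks a handful of hole-fillings here; enlarging the primitive sets grows the space in exactly the gradual, directed way described above.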
As meta-heuristics are themselves a type of algorithm, they too can be automatically designed by employing hyper-heuristics. For instance, in 2007, genetic programming was used to evolve mate selection in evolutionary algorithms; in 2011, linear genetic programming was used to evolve crossover operators; more recently, genetic programming was used to evolve complete black-box search algorithms [10, 11].
The main objective of this workshop is to discuss hyper-heuristics that employ evolutionary computation methods for generating algorithms. These methods have the advantage of producing solutions that are applicable to any instance of a problem domain, rather than a solution tailored to a single instance of the problem. The areas of application of these methods include, for instance, data mining, machine learning, and optimization [1-11].
To submit, e-mail papers to both email@example.com and firstname.lastname@example.org.
All accepted papers will be presented at the workshop and appear in the GECCO Conference Companion Proceedings.
E-mail: email@example.com
Daniel R. Tauritz is an Associate Professor in the Department of Computer Science at the Missouri University of Science and Technology (S&T), on sabbatical at Sandia National Laboratories for the 2014-2015 academic year, a former Guest Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. He served previously as GECCO 2010 Late Breaking Papers Chair, COMPSAC 2011 Doctoral Symposium Chair, GECCO 2012 GA Track Co-Chair, and GECCO 2013 GA Track Co-Chair. For several years he has served on the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and search-based software engineering. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.
E-mail: firstname.lastname@example.org
John R. Woodward is a Lecturer at the University of Stirling, within the CHORDS group, and is employed on the DAASE project; for the previous four years he was a lecturer at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, including both theoretical and empirical contributions, and has given over 100 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.
E-mail: email@example.com
Manuel López-Ibáñez is a Postdoctoral Researcher (Chargé de recherches) of the Belgian Fund for Scientific Research (F.R.S.-FNRS) working at the IRIDIA laboratory of Université libre de Bruxelles, Brussels, Belgium. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 13 journal papers, 6 book chapters and 33 papers in peer-reviewed proceedings of international conferences on diverse topics such as evolutionary algorithms, ant colony optimisation, multi-objective optimisation, and various combinatorial and real-world optimisation problems. His current research interests are experimental analysis, automatic configuration and automatic design of optimisation algorithms, for single and multi-objective optimisation. He is the lead developer and current maintainer of the irace automatic configuration method (http://iridia.ulb.ac.be/irace).