CPC 2018

20th Workshop on Compilers for Parallel Computing

The programme for CPC 2018 is now online.

April 16-18, 2018, Dublin, Ireland

The CPC (Compilers for Parallel Computing) workshop is a venue for researchers in the area of parallel compilation to meet and present their latest research activities and results. The workshop is held in an informal and relaxed atmosphere in order to exchange ideas and to foster collaboration. The scope encompasses all areas of parallelism and optimization: in principle, any topic that is of interest to a designer of parallel systems and/or compilers is of interest for this workshop.

CPC 2018 will be held in Trinity College Dublin in the centre of Dublin, Ireland.

About the workshop

CPC is unusual: it’s a true workshop, with no published proceedings. Instead, it’s a meeting of international research specialists, to present research and exchange ideas. There is no peer review – we simply aim to select talks that will make an interesting programme. Talks can cover work that is in progress, under review or already published. We also welcome researchers and practitioners from relevant fields who do not want to give a formal talk, but would like to participate in the workshop.

The CPC series started in Oxford, England (1989) and has continued, at roughly 18-month intervals, in Paris (1990), Wien (1992), Delft (1993), Malaga (1995), Aachen (1996), Linköping (1998), Aussois (2000), Edinburgh (2001), Amsterdam (2003), Chiemsee (2004), A Coruña (2006), Lisbon (2007), Zürich (2009), Wien (2010), Padova (2012), Lyon (2013), London (2015) and Valladolid (2016).

Topics of interest

The main goal of the workshop is to bring together researchers in compilation and associated areas in an informal and relaxed atmosphere, in order to exchange ideas and to foster collaboration. The scope encompasses all areas of parallelism and optimization, from embedded systems to large-scale parallel systems and computational grids. Here is a representative list of topics:

  • Parallel programming models and languages.
  • Parallelization techniques: user-directed, semi-automatic, and automatic.
  • Optimizations for exploiting the memory hierarchy.
  • Optimizations for exploiting instruction level parallelism.
  • Optimizations for power consumption.
  • Profile-directed and feedback-assisted compilation.
  • Program analysis and program understanding tools.
  • Compilation of high-level specifications and domain-specific languages.
  • Architectural models and performance prediction.
  • Just-in-time compilation.
  • Static and dynamic optimization techniques for performance and scalability.
  • Parallel runtime systems.
  • Continuous program optimization.
  • Program analysis frameworks and tools.
  • Back-end code generation and optimizations.
  • Compilation and optimization for multi-core systems.
  • Performance modeling and tools for performance tuning.
  • Architectural support for productive parallelization.