Keynotes

(all times are CEST)

June 16 (2 PM) - Compiler-enabled optimizations for restricted hardware and software-hardware co-designs

Alexandra Jimborean - University of Murcia

Aiming for ease of programmability to boost productivity, programming languages are evolving towards higher levels of abstraction and expressiveness, enabling programmers to write increasingly sophisticated software. At the other end, to deliver high performance, hardware continues to grow in complexity, heterogeneity, and diversity, towards platforms highly specialized for a particular task rather than one-size-fits-all solutions. The only bridge between programming languages that grow ever closer to natural language and the underlying hardware, ultra-specialized and increasingly complex, is the compiler. Thus, as software becomes too complex and hardware too diverse to be handled by programmers at design time, compilers take on the responsibility of automatically analyzing and optimizing applications. This talk will tell the story of compiler optimizations, starting from loop-oriented transformations and automatic parallelization, and reaching state-of-the-art optimizations co-designed between software and hardware. Whether the optimization goal is performance, energy efficiency, security, or all of them together, this talk will expose the science behind the spark and shed light on some of the most interesting aspects of compilation.

Short bio: Alexandra Jimborean received her PhD in France in 2012, with research on compiler-enabled automatic parallelization of loops. She continued with a postdoctoral fellowship at Uppsala University, Sweden, where she became Assistant Professor in 2015 and Associate Professor in 2018. In 2019, she was awarded the Ramón y Cajal grant and joined the University of Murcia, Spain, as a researcher in May 2020. Her main research interests are compile-time code analysis and optimization and software-hardware co-designs for performance, energy efficiency, and security. In particular, Alexandra's research focuses on compiler techniques that analyze memory access patterns, which she then leverages to optimize software for emerging architectures. She was awarded the prestigious Google Anita Borg Memorial Scholarship in recognition of excellent research and has received over 30 distinctions, awards, and grants, including the Starting Grant for outstanding young researchers in Sweden. She is actively involved in conference organization, most notably for the International Symposium on Code Generation and Optimization in 2019 and the International Conference on Compiler Construction in 2020; has acted as a reviewer for over 30 journals, conferences, and workshops in compilers and computer architecture; and was selected as the ACM featured member of the month in 2017.

June 17 (2 PM) - Testing compiler effectiveness

David Padua - University of Illinois

Testing software, including compilers, for correctness is an important part of the development life cycle and, because of its importance, has been studied extensively. Testing the quality of the output, on the other hand, has received less attention in the literature. For compilers, this means testing the quality of the generated code, especially its performance. Due to the complexity of compilers and target machines, today's compilers are brittle and produce unexpected outcomes, which makes them poor optimization tools. Better strategies for testing the effectiveness of compiler code generation should help improve compilers by identifying optimization weaknesses. This talk will present our recent work at Illinois on strategies to address this problem.

Short bio: David Padua received his PhD from the University of Illinois in 1980. In 1985, after a few years at the Universidad Simón Bolívar in Venezuela, he returned to the University of Illinois, where he is now Donald Biggar Willett Professor of Engineering. He has served as program committee member, program chair, or general chair for more than 70 conferences and workshops. He was the Editor-in-Chief of Springer-Verlag's Encyclopedia of Parallel Computing and is currently a member of the editorial boards of the Communications of the ACM, the Journal of Parallel and Distributed Computing, and the International Journal of Parallel Programming. Padua has supervised the dissertations of 30 PhD students. He has devoted much of his career to the study of languages, tools, and compilers for parallel computing and has authored or co-authored more than 170 papers in these areas. He received the 2015 IEEE Computer Society Harry H. Goode Award. In 2017, he was awarded an honorary doctorate from the University of Valladolid in Spain. He is a Fellow of the ACM and the IEEE.

June 18 (2 PM) - Some Humans Generate Better Code Than Compilers or Machines: Leveraging Their Expertise

Nelson Amaral - University of Alberta

For matrix multiplication, specialized libraries have traditionally outperformed even code written by knowledgeable programmers and compiled by sophisticated compilers. Thus, when the Matrix Multiply Assist (MMA) hardware support was integrated into the design of IBM's new Power10 processor, the strategic decision was to dedicate engineering effort to integrating its instructions into linear-algebra libraries. A way to leverage that engineering effort for application programs that implement matrix multiplication in native code is to raise the representation of these computations, recognize idioms that perform such computations, and replace each idiom with a call to a specialized library. Successful use of this approach requires data-dependence analysis and code rewriting. Introducing a new dependency on the availability of a library into the workflow of a system also has system implications that must be sorted out for deployment. Once we had a robust pattern-recognition system, we could then focus on improving the performance of code generated by a compiler-only path for code generation that is independent of libraries. To do so, we implemented the layered approach to code generation -- heretofore only available in libraries -- in the LLVM compiler framework. This talk describes our approach, which led to efficient use of the Power10 MMA and is also applicable to the hardware acceleration provided by Intel's AMX and Arm's SME.
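The idiom-replacement idea in the abstract can be illustrated with a minimal sketch (in Python for readability; the talk's actual mechanism operates on LLVM IR, not source code, and the library routine here is stood in for by NumPy's `dot` rather than a Power10 MMA-accelerated BLAS): a triple-loop matrix multiply is the idiom the compiler recognizes, and a single tuned library call is what it is replaced with.

```python
import numpy as np

def naive_matmul(A, B):
    """The triple-loop idiom a compiler pass would try to recognize."""
    n, m, k = len(A), len(B), len(B[0])
    C = [[0.0] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            for p in range(m):
                C[i][j] += A[i][p] * B[p][j]
    return C

def library_matmul(A, B):
    """What the recognized idiom is rewritten into: one call to a tuned
    library routine (np.dot stands in for a BLAS/MMA-backed kernel)."""
    return np.dot(np.array(A), np.array(B)).tolist()

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert naive_matmul(A, B) == library_matmul(A, B)  # same result either way
```

As the abstract notes, the rewrite is only legal if data-dependence analysis proves the loop nest really computes a matrix product (e.g., no aliasing between the output and the inputs), which is why the approach hinges on dependence analysis as much as on pattern matching.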

Short bio: J. Nelson Amaral, a Computing Science professor at the University of Alberta with a Ph.D. from The University of Texas at Austin in 2004, has published in optimizing compilers and high-performance computing. His service to the scientific community includes serving as general chair for the 23rd International Conference on Parallel Architectures and Compilation Techniques in 2014, the International Conference on Performance Engineering in 2020, and the International Conference on Parallel Processing in 2020. His accolades include ACM Distinguished Engineer, IBM Faculty Fellow, IBM Faculty Awards, IBM CAS "Team of the Year", awards for excellence in teaching, the GSA Award for Excellence in Graduate Student Supervision, and recent University of Alberta awards for Outstanding Mentorship in Undergraduate Research & Creative Activities and for Remote Teaching during the COVID-19 pandemic.