Context and advantages of the position
Compilers must restructure the application to make the best possible use of computing and storage resources. In general, parallel runtimes (interpreters) do a better job than compilers, as the availability of dynamic information (variable values, test outcomes, loop iteration counts) makes it possible to take the right decisions.
However, runtimes come with an overhead, which pushes them toward coarse-grain task scheduling, while compilers are usually in charge of mapping tasks to computation units (GPU, FPGA, etc.) and of extracting fine-grain parallelism.
We focus on programs from the polyhedral model, where the operations of the execution trace depend only on the input size and where the compilation schemes (schedule, resource allocation, etc.) are affine functions.
We believe this will make it possible to reach levels of optimization that are out of reach for purely static compilation.
In this PhD thesis, we focus on inferring high-performance compilation schemes through dynamic analysis of a selection of execution traces.
The PhD student will revisit the key ingredients of parallelizing/optimizing compilers: data placement, computation scheduling and partitioning, and code generation.
In particular, the PhD student will investigate how to select execution traces so as to ensure code coverage, and how to extrapolate the results of dynamic analysis to polyhedral compilation mappings.
A compilation infrastructure will be built on top of LLVM to validate the results on polyhedral compilation benchmarks.
Desired skills: basic notions of compilers, experience with C++.