For full conference details, see the website: http://llvm.org/devmtg/2019-10/


Tuesday, October 22
 

8:00am

Registration & Breakfast
Registration opens and breakfast is served.

Tuesday October 22, 2019 8:00am - 9:00am
Foyer

9:00am

Welcome
Welcome to the 2019 LLVM Developers' Meeting - Bay Area.

Speakers

Tanya Lattner

President, LLVM Foundation


Tuesday October 22, 2019 9:00am - 9:15am
General Session (LL20ABC)

9:15am

Generating Optimized Code with GlobalISel
So far, much of the focus of GlobalISel development has been on supporting targets with minimal optimization work. Recently, attention has turned towards optimization and bringing it to the point where it can take over from SelectionDAGISel. In this talk, we'll mainly focus on the combiner which is a key component of producing optimized code with GlobalISel. We'll talk about the overall design of the combiner, the components that support it, how it fits with the rest of GlobalISel, how to test it, and how to debug it. We'll also talk about the current and future work on the combiner to enhance it beyond SelectionDAGISel’s capabilities.

Speakers

Daniel Sanders

Compiler Engineer, Apple


Tuesday October 22, 2019 9:15am - 10:00am
General Session (LL20ABC)

10:00am

Break
Morning Break

Tuesday October 22, 2019 10:00am - 10:20am
Foyer

10:20am

arm64e: An ABI for Pointer Authentication
arm64e is a variant of Apple's arm64 ABI which supports pointer authentication using the ARMv8.3 PAC instructions. All code pointers and some data pointers are signed using a cryptographic hash, improving the security of the system by making Return-Oriented Programming and Jump-Oriented Programming attacks harder to carry out. In this talk, we go over the pointer authentication mechanisms, how they're represented at each level in the compiler, and how arm64e takes advantage of them in programming languages.


Tuesday October 22, 2019 10:20am - 10:55am
General Session (LL20ABC)

10:20am

Roundtables
Round Tables:

MLIR meets LLVM
MLIR (Multi-Level IR) has recently been accepted as a sub-project of LLVM. Let's talk about what it means, what the opportunities are for LLVM users, and the future evolution of the project.
Mehdi Amini, Jacques Pienaar, Tatiana Shpeisman

Well-defined C++ APIs for LLVM libraries
This Round Table is for users of the LLVM C++ APIs. We will discuss next steps for formalizing the C++ APIs within LLVM, how to reduce the number of symbols exported by LLVM shared libraries, the feasibility of stabilizing parts of the API, and other related topics.
Tom Stellard

Tuesday October 22, 2019 10:20am - 11:30am
Round Tables (LL21EF)

10:20am

An overview of LLVM
Abstract coming soon

Speakers

Eric Christopher

Google, Inc

Johannes Doerfert

Argonne National Laboratory


Tuesday October 22, 2019 10:20am - 11:30am
Breakout Room-2 (LL21CD)

10:20am

Student Research Competition
Cross-Translation Unit Optimization via Annotated Headers
William S. Moses
LLVM automatically derives facts about functions that are only used while the respective translation unit, or LLVM module, is processed (e.g. that a function is constant or may throw an error). This is true both in standard compilation and in link-time optimization (LTO), in which the module is (partially) merged with others in the same project at link time. LTO is able to take advantage of this to optimize function calls outside the translation unit. However, LTO doesn't solve the problem, for two practical reasons: it comes with a nontrivial compile-time investment, and many libraries upon which a program could depend do not ship with LTO information, only headers and binaries. In this extended abstract, we solve the problem by generating annotated versions of the source code that also include this derived information. Such an approach offers the best of both worlds: it enables optimizations previously limited to LTO without running LTO and while shipping only headers. These headers are created by modifying Clang to understand three custom attributes that represent arbitrary LLVM function attributes, and by modifying LLVM to emit C-compatible headers carrying those attributes. Finally, we test the approach experimentally on the DOE RSBench proxy application and verify that it provides the expected speedups.

Quantifying Dataflow Analysis with Gradients in LLVM
Abhishek Shah
Dataflow analysis has been used in the secure software development cycle for exploit analysis, identifying privacy violations, and guided fuzzing. One approach, dynamic taint tracking, tracks dataflow between a set of source and sink variables with binary taint labels, but suffers from high false positive/negative rates due to its imprecise propagation rules. To address this, we introduce a new, theoretically grounded approach, Proximal Gradient Analysis (PGA), to track more accurate and fine-grained dataflow information than dynamic taint tracking. This information is captured in the form of a gradient, which precisely quantifies dataflows (magnitude and direction) between program variables. Because programs contain many discontinuous operations (e.g. bitwise operations), we use proximal gradients, a mathematical generalization of gradients for discontinuous functions, to apply the chain rule from calculus and accurately compose and propagate gradients over a program's discontinuous functions with minimal error.
We implement PGA as a new type of code sanitizer in the LLVM Framework based on the existing dynamic taint tracking sanitizer DataFlowSanitizer. Our main contributions are to use non-smooth calculus for dataflow tracking in real-world programs with PGA and to implement our PGA framework for automatically computing and tracking proximal gradients as an LLVM Sanitizer.

Floating Point Consistency In the Wild: A practical evaluation of how compiler optimizations affect high performance floating point code
Jack J Garzella
Using the FLiT compilation tool, we evaluate physics-related high-performance codebases by recompiling each one with LLVM, GCC, and the Intel compiler at every optimization level and with a range of FP-related flags.


Static Analysis of OpenMP data mapping for target offloading
Prithayan Barua
OpenMP offers directives for offloading computations from CPU hosts to accelerator devices such as GPUs. A key underlying challenge is efficiently managing the movement of data between the host and the accelerator. We have developed a static analysis tool to address this challenge. The first component is an LLVM analysis pass that interprets the semantics of the OpenMP map clause and deduces the data transfers it introduces. The second component is an interprocedural data flow analysis, developed on top of LLVM memory SSA, that captures the use-def information of array variables. Our tool computes how the OpenMP data mapping clauses modify the use-def chains of the baseline program, and finally validates that the data mapping in the OpenMP program respects the original use-def chains of the baseline sequential program. The tool reports diagnostics to help the developer debug and understand the usage of map clauses in their application. We evaluated the tool on standard benchmarks and show its effectiveness by detecting commonly reported bugs.

Speakers

William S. Moses

Massachusetts Institute of Technology

Abhishek Shah

Columbia University

Jack J Garzella

University of Utah

Prithayan Barua

Georgia Institute of Technology


Tuesday October 22, 2019 10:20am - 12:15pm
Breakout Room-1 (LL21AB)

10:55am

Code-Generation for the Arm M-profile Vector Extension
In this talk we share design and implementation details of how the code-generation techniques of auto-vectorisation, hardware loops, and predication are combined to enable efficient code generation for the tail-predicated hardware loops introduced in Arm's new M-Profile Vector Extension.

Speakers

Sjoerd Meijer

ARM


Tuesday October 22, 2019 10:55am - 11:30am
General Session (LL20ABC)

11:30am

Ownership SSA
Reference-counted memory management is used by a number of programming languages, including Swift, Python, PHP, Perl, and Objective-C. Reference counting operations are usually introduced as part of lowering to the compiler's IR as, e.g., calls to builtins or library functions, but this approach leads to both missed optimization opportunities (if the presence of these calls inhibits optimizations) and correctness bugs (if optimizations reason about reference counting incorrectly). In Swift, we have mitigated these problems by changing the Swift Intermediate Language (SIL) to express statically verifiable ownership constraints on def-use chains, defining an augmented form of SSA called Ownership SSA (OSSA). OSSA has yielded many benefits, such as improved frontend correctness/verification and the implementation of safer, more aggressive reference counting optimizations. The improvements allowed by OSSA may be of interest to other developers of high-level languages that use reference-counted memory management.


Tuesday October 22, 2019 11:30am - 12:05pm
General Session (LL20ABC)

11:30am

Roundtables
Round Tables:

Discuss llvm-libc
A good number of people expressed interest in llvm-libc during the discussion on the RFC thread. We would like everyone interested in this new project to get together and discuss the direction in which to take it forward.
Siva Chandra

VPlan: Next Steps
There are a number of features people are working on and we need to make sure we don't duplicate efforts and have a common understanding of the overall design.

sourcekit-lsp and clangd
Discuss topics related to implementing the language server protocol, code indexing, editor support, build system integration, etc. in clangd and sourcekit-lsp.
Ben Langmuir

Debug Info for optimized code
Do you want to discuss implementation details for representing optimized code debug info? Do you have a question about LiveDebugValues? Do you wonder what's up with entry values? Or do you just wonder how to make your pass debug-info invariant?
Adrian Prantl

Tuesday October 22, 2019 11:30am - 12:40pm
Round Tables (LL21EF)

11:30am

An overview of Clang
Abstract coming soon.

Speakers

Anastasia Stulova

Senior Compiler Engineer, Arm
GPGPU, OpenCL, Parallel Programming, Frontend, SPIR-V


Tuesday October 22, 2019 11:30am - 12:40pm
Breakout Room-2 (LL21CD)

12:05pm

An MLIR Dialect for High-Level Optimization of Fortran
The LLVM-based Flang project is actively developing a standards-compliant compiler for Fortran, the world's first high-level programming language and still an important language for science and engineering today. While Fortran's core strength of writing computations on arrays remains, the standard language continues to add new facilities such as object-oriented programming. The Flang project has been exploring the use of MLIR, specifically the definition of Flang's Fortran IR (FIR) dialect, as a framework upon which to build a more comprehensive and regular set of optimizations for both better performance and overall reliability of Flang. This talk will explore what the FIR dialect is, how it builds upon and uses other aspects of MLIR, and some of the high-level optimizations achieved.


Tuesday October 22, 2019 12:05pm - 12:40pm
General Session (LL20ABC)

12:40pm

Lunch
Lunch is served.

Tuesday October 22, 2019 12:40pm - 2:00pm
Lunch Area - TBD

2:00pm

Link Time Optimization For Swift
The code size of iOS apps is critical due to the App Store's size limit, and more and more iOS apps are written in Swift. The Swift programming language provides many new language features, such as protocols, to facilitate software development. To support these features, the Swift compiler has to generate protocol-related code and data, but this generated code and data may not be used anywhere in the project. For example, a protocol definition may be declared public in one module yet never consumed by any other module. A preliminary experiment shows that the size of one commercial iOS app could potentially be reduced by 9% through aggressive dead code elimination. This unused code and data cannot easily be eliminated by compiler optimizations, since it is recorded in the llvm.used structure; in addition, the generated code and data might be used implicitly by the Swift runtime library. This calls for smarter, much more advanced static analysis and novel additions to the classic dead code elimination technique.
We introduce a novel build pipeline that eliminates unused protocol code from Swift classes by leveraging link-time optimization in the existing LLVM compiler. In this framework, the Swift files are first compiled to LLVM bitcode, and llvm-link is used to merge all the bitcode files into one. A new LLVM optimization is proposed to remove the protocol-conformance-related variables from the llvm.used array in this bitcode file. This opens up more opportunities for link-time optimization to turn global variables into local variables and then identify the dead ones. The subsequent dead code elimination is extended to update the protocol conformance tables as well as the llvm.used array. The experiment shows that this approach reduces the code size of a commercial iOS app by 2%.

Speakers

Jin Lin

Compiler Engineer, Uber


Tuesday October 22, 2019 2:00pm - 2:35pm
General Session (LL20ABC)

2:00pm

LLVM-Reduce for testcase reduction
LLVM-Reduce is a new and powerful tool that can reduce IR testcases in new and interesting ways, shrinking IR code to a fraction of its original size.
In this talk I will demonstrate how to use the tool and how to build a proper interestingness test, the key element llvm-reduce uses to minimize testcases. The more powerful the test, the better the testcase it will produce.


Tuesday October 22, 2019 2:00pm - 2:35pm
Breakout Room-1 (LL21AB)

2:00pm

Roundtables
Round Tables:


Flang
Discuss flang, old flang, and Fortran
Steve Scalpone

Tuesday October 22, 2019 2:00pm - 3:10pm
Round Tables (LL21EF)

2:00pm

How to Contribute to LLVM
Tutorial on how to contribute to LLVM.

Speakers

Kit Barton

Technical lead for LLVM on Power and XL Compilers, IBM Canada


Tuesday October 22, 2019 2:00pm - 3:10pm
Breakout Room-2 (LL21CD)

2:35pm

LLVM-Canon: Shooting for Clear Diffs
Comparing intermediate representation dumps after various transformations can be extremely laborious. This is especially true when reasoning through differences in shaders or compute modules which have undergone several optimization passes. Most of these differences tend to be semantically equivalent, a mere consequence of irregular instruction ordering and naming. To save time, we developed a tool called llvm-canon which transforms the code into a canonical form. In ideal conditions, semantically identical code, once canonicalized, should produce a clean diff in which the important semantic differences stand out.
The challenges we faced during the development of this project gave us many sleepless nights. Puzzling over the right point of reference for canonicalization, calculating the odds of similarity, and finding the golden mean between precision and human-friendliness resulted in a very useful tool, one with broad possibilities for further expansion and improvement.
In this talk I will go through the many ideas behind what is known today as llvm-canon, including the ones we ditched, and discuss the algorithms behind all the transformations, including instruction reordering and the magic behind naming values. More importantly, I will demonstrate the benefits of diffing canonical code and what we have learned from this interesting experiment.


Tuesday October 22, 2019 2:35pm - 3:10pm
Breakout Room-1 (LL21AB)

2:35pm

Panel: Inter-procedural Optimization (IPO)
Interprocedural optimizations (IPOs) have historically been weak in LLVM. The strong reliance on inlining can be seen as a consequence, or a cause. Since inlining is not always possible (recursion, parallel programs, ...) or beneficial (large functions), the effort to improve IPO has recently seen an upswing again. In order to capitalize on this momentum, we would like to talk about the current situation in LLVM and goals for the immediate, but also distant, future.

We will ask our expert panel questions such as the following:
- What are the current and potential problems with IPO?
- How does the new pass manager impact IPO?
- Can function cloning plus IPO serve as an alternative to inlining?
- How does the desired (new PM) pipeline differ from what we have right now?
- How does, and how should, IPO interact with (thin-)LTO and PGO?
- What are the most desirable IPO analyses and optimizations we lack today?

This guided panel discussion is a follow-up to the BoF at EuroLLVM'19. Both experts and newcomers are welcome to attend. Questions can be sent to the organizers prior to the conference to allow consideration.

Speakers

Teresa Johnson

Software Engineer, Google
Teresa Johnson works on compiler optimization at Google. Prior to joining Google in 2011, she developed compiler optimizations for the Itanium compiler at HP, and received a PhD from the University of Illinois at Urbana-Champaign.

Johannes Doerfert

Argonne National Laboratory

Philip Reames

Azul Systems
Contributor to LLVM since 2013. Work on the Falcon JIT, a Java bytecode to X86-64 compiler based on LLVM. Mostly contribute to loop transform passes and general infrastructure (most recently atomics in backend) as needed.


Tuesday October 22, 2019 2:35pm - 3:30pm
General Session (LL20ABC)

3:10pm

Porting by a 1000 Patches: Bringing Swift to Windows
Swift is a modern language based upon the LLVM compiler framework.  It takes advantage of Clang to provide seamless interoperability with C/C++.  The Swift compiler and language are designed to take advantage of modern Unix facilities to the fullest, and this made porting to Windows a particularly interesting task.  This talk covers the story of bringing Swift to Windows from the ground up through an unusual route: cross-compilation on Linux. The talk will cover interesting challenges in porting the Swift compiler, standard library, and core libraries that were overcome in the process of  bringing Swift to a platform that challenges the Unix design assumptions.


Tuesday October 22, 2019 3:10pm - 3:45pm
Breakout Room-1 (LL21AB)

3:10pm

Roundtables
Round Tables:

Improving LLVM's buildbot instance
http://lab.llvm.org:8011/console is a mess. It has too many builders, yet is missing builders for some platforms. Many of the builders are perma-red, many take hours to cycle, and the buildbot page itself is slow to load. Can we improve this?
Nico Weber, Hans Wennborg

OpenMP in LLVM
Opportunity to discuss anything OpenMP related that happens under the LLVM umbrella
Johannes Doerfert, Alexey Bataev, Ravi Narayanaswamy

Challenges using LLVM for GPU compilation
- Canonicalization vs. GPUs: Type mutation
- Control flow mutation (and that graphics shaders are more sensitive to this.)
- Divergence/reconvergence sensitivity
- Address-space awareness
Anastasia Stulova

End-to-end Testing
Discussion of end-to-end testing; what kinds of tests we want; logistics
David Greene

Security Hardening Features in clang
We will discuss the status of security hardening features in clang, (e.g. -fstack-clash-protection, -D_FORTIFY_SOURCE, and others) look at how clang compares to gcc and talk about future work in this area.
Tom Stellard

LLVM for Embedded Baremetal Targets
We will discuss how to build and test LLVM based toolchains that work for Baremetal targets.
Hafiz Abid Qadeer


Tuesday October 22, 2019 3:10pm - 4:20pm
Round Tables (LL21EF)

3:10pm

Writing an LLVM Pass: 101
This is a tutorial on writing LLVM passes, targeting beginners and newcomers. It focuses on out-of-tree development and is based on LLVM 9 (i.e. the latest release). You will learn how to register a pass (legacy vs new pass manager) and how to build standalone LLVM-based analysis and transformation tools. Testing will be set up using lit, LLVM's testing infrastructure. You will see how to configure the necessary scripts, implement tests, and fix them when they fail.

This tutorial is self-contained and no prior knowledge of LLVM or compilers is required. Some familiarity with CMake and LLVM IR (or any other assembly language) will be helpful, but is not essential. The presented examples can be tested with pre-built packages available online (i.e. building LLVM from source is optional). All examples will be available on GitHub.

Content:
* implementing a pass (including CMake set-up)
* registering a pass, new vs legacy pass manager
* examples of analysis and transformation passes (IR <-> IR)
* running and debugging an out-of-tree pass
* integrating a pass into an out-of-tree LLVM-based tool
* setting up and running LIT tests for out-of-tree passes
* integration with clang/opt/bugpoint


Tuesday October 22, 2019 3:10pm - 4:20pm
Breakout Room-2 (LL21CD)

3:30pm

The Loop Optimization Working Group
The Loop Optimization Working Group has been meeting bi-weekly since June 5, 2019. The primary focus of the group is to discuss loop optimizations within LLVM. This panel will contain several active members of the working group. It will begin with an overview of the working group and describe the topics currently being pursued, including status updates for loop optimizations under active development. It will then open up the discussion to more general topics of loop optimizations and the loop optimization pipeline. These discussions may include:
- Specific loop optimizations that are missing or need improvement
- General infrastructure for loop optimizations
- Organization of loop optimizations in the loop optimization pipeline (e.g., the loop optimization strategy)
- The advantage/necessity of a LoopPass in the NewPassManager

Speakers

Hal Finkel

Argonne National Laboratory

Kit Barton

Technical lead for LLVM on Power and XL Compilers, IBM Canada

Sjoerd Meijer

ARM

Michael Kruse

Argonne National Laboratory

Philip Reames

Azul Systems
Contributor to LLVM since 2013. Work on the Falcon JIT, a Java bytecode to X86-64 compiler based on LLVM. Mostly contribute to loop transform passes and general infrastructure (most recently atomics in backend) as needed.


Tuesday October 22, 2019 3:30pm - 4:20pm
General Session (LL20ABC)

3:45pm

Optimizing builds on Windows: some practical considerations
We will share our experience using Clang & LLD on large (50M LoC) video game codebases, and we will show some pitfalls and considerations for improving build times on Windows 10. Profile traces based on practical scenarios will be used to demonstrate our changes. Finally, we intend to present new ways of compiling code with Clang to ultimately improve iteration times.

Speakers

Alexandre Ganea

Ubisoft Inc.


Tuesday October 22, 2019 3:45pm - 4:20pm
Breakout Room-1 (LL21AB)

4:20pm

Break
Break with snack.

Tuesday October 22, 2019 4:20pm - 4:40pm
Foyer

4:40pm

Better C++ debugging using Clang Modules in LLDB
Expression evaluators in the C++ debuggers we use today still struggle to consistently support many language features. In this talk we show that, by using Clang's C++ Modules, LLDB can support most of the previously unsupported language features in its expression evaluator.


Tuesday October 22, 2019 4:40pm - 5:15pm
Breakout Room-1 (LL21AB)

4:40pm

Propeller: Profile Guided Large Scale Performance Enhancing Relinker
We discuss the design of Propeller, a framework for post-link optimizations, and we show how Propeller can optimize binaries beyond PGO and ThinLTO via basic block layout.

Speakers

Sriraman Tallam

Software Engineer, Google Inc.
Link time Optimizations, Scalability of builds, Linkers, Code Layout.


Tuesday October 22, 2019 4:40pm - 5:15pm
General Session (LL20ABC)

4:40pm

Roundtables
Round Tables:

SYCL
Let's talk about SYCL -- the programming model, the open standard, the effort to add SYCL support to clang, etc.
Alexey Bader

Upstreaming SYCL C++
SYCL is a single-source C++ standard from Khronos Group (OpenGL, Vulkan, OpenCL, SPIR...) for heterogeneous computing targeting various accelerators (CPU, GPU, DSP, FPGA...). There are several implementations available and one has even started to be upstreamed into Clang/LLVM: https://github.com/intel/llvm
This is a huge collaborative effort and a great opportunity to have wide & strong support for modern C++ on a wide range of accelerators from many vendors. Come learn about SYCL, how to help review the patches, and how it interacts with existing offloading support, OpenMP, OpenCL, CUDA, address spaces, SPIR-V...
Alexey Bader, Ronan Keryell

Loop Optimizations in LLVM
Discuss current problems, progress and future plans for optimizing loops, existing passes and new passes and analysis being worked on.
Kit Barton, Michael Kruse

full restrict support in llvm
Now that patches for full restrict are available, this roundtable gives a chance to discuss the design and the patches in person.
Jeroen Dobbelaere

Tuesday October 22, 2019 4:40pm - 5:50pm
Round Tables (LL21EF)

4:40pm

Getting Started with the LLVM Testing Infrastructure
A strong testing infrastructure is critical for compilers to maintain a high quality of correctness and performance. This tutorial will cover the various elements of the LLVM testing infrastructure. The focus will be for newcomers to learn to write and run the unit, regression, and whole-program tests in the LLVM infrastructure, as well as the integration of external suites into the LLVM test suite. We will additionally cover the various frameworks and tools used within the test suites, including using LNT to track performance data.

Speakers

Brian Homerding

Argonne National Laboratory

Michael Kruse

Argonne National Laboratory


Tuesday October 22, 2019 4:40pm - 5:50pm
Breakout Room-2 (LL21CD)

5:15pm

Hot Cold Splitting Optimization Pass In LLVM
Hot/cold splitting is an optimization to improve instruction locality by outlining basic blocks which execute infrequently. The hot/cold splitting pass identifies cold basic blocks and moves them into separate functions. The linker can then place the newly-created cold functions away from the rest of the program. The idea is to have these cold pages faulted in relatively infrequently, and to improve the memory locality of code outside the cold area.
The algorithm is novel in the sense that it is region-based and implemented at the IR level; because it works on IR, all backend targets benefit from the implementation. Other implementations of hot/cold splitting outline each basic block separately and are implemented at the RTL level.

Speakers

Aditya Kumar

Senior Compiler Engineer, Facebook
I've been working on LLVM since 2012. I've contributed to GVNHoist, Hot Cold Splitting, Hexagon-specific optimizations, the clang static analyzer, libcxx, libstdc++, and the Graphite framework of GCC.


Tuesday October 22, 2019 5:15pm - 5:50pm
General Session (LL20ABC)

5:15pm

Memoro: Scaling an LLVM-based Heap profiler
Memoro is a heap profiler built using the LLVM sanitizer infrastructure. It instruments your program during compilation, and its visualizer helps you navigate the collected profile by highlighting bad patterns, such as frequent allocation and wasted memory. Collecting data proved to be a challenge: instrumented programs didn't behave as we expected, and the run-time overhead made Memoro impractical to use on larger services. This talk presents our work to overcome those constraints, understand the source of the overhead, and reduce it, so that Memoro can be applied more easily to Facebook services.

Speakers

Thierry Treyer

Performance and Capacity Engineer, Facebook
I'm working on Memoro, a heap memory profiler, and want to better understand how memory impacts performance!


Tuesday October 22, 2019 5:15pm - 5:50pm
Breakout Room-1 (LL21AB)

5:50pm

LLDB BoF
LLDB has seen an influx of contributions over the past year, with the highest level of activity we've seen in the past 4 years. Let's use this BoF to discuss everybody's goals and identify places where we can synchronize our efforts. Some potential topics include breaking up dependencies in LLDB, supporting cross-module references, upstreaming language support (Swift, Rust), and improving Windows support.

Tuesday October 22, 2019 5:50pm - 6:20pm
Breakout Room-2 (LL21CD)

5:50pm

Roundtables
Round Tables TBD

Tuesday October 22, 2019 5:50pm - 6:20pm
Round Tables (LL21EF)

5:50pm

Maturing an LLVM backend: Lessons learned from the RISC-V target
The RISC-V backend will ship as an official target in the 9.0 release, due at the end of August. This talk will give a brief overview of the current status, but primarily focus on elaborating on the development and testing process, picking out lessons to be learned for other backends and for the LLVM community as a whole. Which aspects of our methodology should others adopt? Are there opportunities to improve LLVM to make it easier to bring up new backends? Or opportunities to better share tests? How can we make it easier for language frontends like Rust to support new targets?

Speakers

Alex Bradbury

Co-founder and Director, lowRISC CIC


Tuesday October 22, 2019 5:50pm - 6:20pm
Breakout Room-1 (LL21AB)

5:50pm

The Attributor: A Versatile Inter-procedural Fixpoint Iteration Framework
This is a technical talk on the Attributor. There is also a tutorial: link

The Attributor fixpoint iteration framework is a new addition to LLVM that, first and foremost, offers powerful inter-procedural attribute deduction. While it was initially designed as a replacement for the existing "function attribute deduction" pass, the Attributor framework is already more than that. The framework, as well as the deduced information that does not directly translate to LLVM-IR attributes, can be used for various other purposes where information about the code is required. In this talk we will give an overview of the design, showcase current and future use cases, discuss the interplay with other (inter-procedural) passes, highlight ongoing and future extensions, and finally present an evaluation. Actual deduction (and use) of attributes will be described, and also discussed in our lightning talk presentations and poster.

Speakers

Johannes Doerfert

Argonne National Laboratory


Tuesday October 22, 2019 5:50pm - 6:20pm
General Session (LL20ABC)

6:45pm

Reception
Reception at GlassHouse.

Address:
2 S Market St
San Jose, CA 95113



Tuesday October 22, 2019 6:45pm - 10:00pm
GlassHouse (Offsite)
 
Wednesday, October 23
 

8:00am

Registration & Breakfast
Registration and breakfast.

Wednesday October 23, 2019 8:00am - 9:00am
Foyer

9:00am

Even Better C++ Performance and Productivity: Enhancing Clang to Support Just-in-Time Compilation of Templates
Just-in-time (JIT) compilation can take advantage of information known only once an application starts running in order to produce very-high-performance code. LLVM is well known for supporting JIT compilation, and moreover Clang, LLVM's best-in-class C++ frontend, enables highly-optimized compilation of C++ code. Clang, however, uses a purely ahead-of-time compilation model, and so we leave on the table performance that might come from dynamic specialization.
In this talk, I'll describe ClangJIT, an enhancement to Clang, and an extension to the C++ language, which brings JIT-compilation capabilities to the C++ ecosystem. Critically, ClangJIT enables the dynamic, incremental creation of new template instantiations. This can provide important performance benefits, and in addition, can decrease overall application compile times. I'll describe how Clang was enhanced to support this feature - what I needed to do to turn Clang into an incremental C++ compilation library - and how LLVM's JIT infrastructure was leveraged. ClangJIT supports Clang's CUDA mode, and how that works will be described. Some application use cases will be highlighted and I'll discuss some future directions.

Speakers
avatar for Hal Finkel

Hal Finkel

Argonne National Laboratory


Wednesday October 23, 2019 9:00am - 9:45am
General Session (LL20ABC)

9:50am

LLVM Foundation BoF
LLVM Foundation BoF. More coming soon.

Speakers
avatar for Tanya Lattner

Tanya Lattner

President, LLVM Foundation
President, LLVM Foundation


Wednesday October 23, 2019 9:50am - 10:25am
Breakout Room-2 (LL21CD)

9:50am

Address spaces in LLVM
Address spaces have various uses in different languages and targets, but are commonly misunderstood. The rules for address spaces have not always been clear, and there are differing interpretations. I will describe the features address spaces currently have, the rules surrounding casting, aliasing, bit representation/non-integral pointers, dereferenceability, and intended uses.

Speakers

Wednesday October 23, 2019 9:50am - 10:25am
General Session (LL20ABC)

9:50am

LLVM Tutorials: How to write Beginner-Friendly, Inclusive Tutorials
As a beginner with no connection to the LLVM community, getting into contributing to LLVM is hard. To keep the LLVM developer community healthy with a steady stream of new developers coming in, we need tutorials that explain how to accomplish basic tasks in the real LLVM code base. Examples include writing/improving a Clang warning, and adding/improving an optimization pass. Those tutorials are not only helpful for unaffiliated beginners, but can also help onboard new employees, as well as give experienced LLVM developers insights into parts of the project they are less familiar with.
To start this effort, we wrote three new tutorials with supporting documentation: “My First Typo Fix” (explaining the end-to-end development workflow), “My First Clang Warning”, and “My First Clang/LLVM Tutorial” (showcasing the contents of this talk), with more tutorials to come. To scale this effort and cover most parts of the LLVM project, we need more members of the LLVM community to join us.
We will share our experience of writing and testing the tutorials we created and give recommendations on how to write beginner-friendly, inclusive tutorials for the LLVM project.

Speakers

Wednesday October 23, 2019 9:50am - 10:25am
Breakout Room-1 (LL21AB)

10:25am

Break
Break

Wednesday October 23, 2019 10:25am - 10:45am
Foyer

10:45am

The clang constexpr interpreter
Constexpr enables C++ to implement NP-complete solutions in constant time at execution time. In order to ensure that programmers do not grow old while such sources compile, C++ frontends should provide effective constexpr evaluators. In order to improve on the performance of the existing tree-walking evaluator and provide a mechanism that scales as the complexity of constexpr use cases increases, we present an interpreter, which we are upstreaming, that aims to completely replace the existing evaluator.

Speakers
NL

Nandor Licker

University of Cambridge


Wednesday October 23, 2019 10:45am - 11:20am
General Session (LL20ABC)

10:45am

Roundtables
Roundtables

Complex Types in LLVM
A discussion on supporting native complex types in LLVM IR for performance and diagnostics.
David Greene, Stephen Neuendorffer

Development practices to help avoid security vulnerabilities in llvm
The goal for this round table is to explore what development practices the LLVM community could consider adopting to reduce the probability of security vulnerabilities in some or all of its components.
Let's get together at this round table to discuss which development practices - and to which degree - could be good to adopt for which LLVM components. A few examples that could kick off the discussion are:
• Using static analyzers.
• Using code standard checkers (e.g. clang-tidy -checks=cert-*,bugprone-*,llvm-*).
• Using fuzzers more extensively (oss-fuzz already fuzzes some LLVM components, but are the resulting bug reports acted upon?).
• Requiring full code coverage by unit tests on code that is newly added.
• Making certain build options on by default (e.g. stack protection).
Many of these techniques require buy-in from the whole community to get accepted - a round-table discussion is hopefully a first step towards understanding cost/benefit tradeoffs.
Kristof Beyls

CMake in LLVM
Discuss the current state of CMake and future areas for improvement.
Tom Stellard, Chris Bieneman

Wednesday October 23, 2019 10:45am - 11:55am
Round Tables (LL21EF)

10:45am

Developing the Clang Static Analyzer
This tutorial is about getting around the internals of the static analyzer. You'll learn how to figure out what exactly the static analyzer is thinking when it analyzes any particular code. You'll learn how to debug false positives and other bugs in a methodical, principled manner. We'll show how the analyzer represents program behavior as a graph, walk through a few such graphs step by step, and then see how to debug further when we believe that something about these graphs is incorrect.
This tutorial will be useful to anybody who wants to get involved in the development of the static analyzer, a sub-project of LLVM that is both complex and friendly to newcomers. The tutorial is a complement to the talk “How to Write a Checker in 24 Hours” from LLVM DevMtg'2012; here we will focus on getting started contributing to the analyzer core.

Speakers

Wednesday October 23, 2019 10:45am - 11:55am
Breakout Room-2 (LL21CD)

10:45am

Getting Started With LLVM: Basics
This tutorial serves as a tour of LLVM, geared towards beginners interested in implementing LLVM passes. Both LLVM middle-end (IR) and back-end (MIR) passes are covered. At the end of this tutorial, newcomers will be armed with the tools necessary to create their own passes, and improve upon existing passes.
This tutorial contains
- A brief, high-level explanation of LLVM’s pass-based architecture.
- An explanation of analysis and transformation passes, and how they interact.
- Examples of important analysis passes, such as Dominator Trees and Target Transform Information.
- An introduction to fundamental data structures and APIs for LLVM pass development.
- A sample project which ties together the tutorial material, for use as a reference.

Speakers

Wednesday October 23, 2019 10:45am - 11:55am
Breakout Room-1 (LL21AB)

11:20am

From C++ for OpenCL to C++ for accelerator devices
In this talk we will describe the new language mode that has been added into Clang for using functionality of C++17 in the OpenCL kernel language - C++ for OpenCL. As this language mode is fully backwards compatible with OpenCL C 2.0, existing OpenCL applications can gradually switch to using C++ features without any major modifications.
During the implementation the strategy was chosen to generalize features that exist in a range of accelerator devices to C++. For example, address space support was improved in C++ to be used as a language extension and OpenCL functionality was built on top of it. This was done to take advantage of common logic in some language features among multiple C++ dialects and extensions that are available in Clang.
At the end of the talk we will describe the future roadmap. Some documentation has been started in Clang (https://clang.llvm.org/docs/LanguageExtensions.html#opencl-features). There is also discussion with the Khronos Group about wider adoption of this language mode and possibly more formal documentation to appear in the future. Additionally we would like to highlight our positive experience of community engagement and the help we have received with early testing and feature evaluation from the users of Clang.

Speakers
avatar for Anastasia Stulova

Anastasia Stulova

Senior Compiler Engineer, Arm
GPGPU, OpenCL, Parallel Programming, Frontend, SPIR-V


Wednesday October 23, 2019 11:20am - 11:55am
General Session (LL20ABC)

11:55am

Roundtables
Roundtables

Interpreting C++
C++ JITs and REPLs are useful in a number of areas such as education and science. LLDB's expression evaluator, Cling, and ClangJIT are based on Clang and its incremental processing facility. These C++ interpreters target different sets of issues, but they share common requirements.
We would like to use this round table to discuss the common requirements of interpreting C++, and we invite developers and users interested in the topic. In particular, we would like to discuss a possible way forward for implementing JIT and REPL support infrastructure in Clang and LLVM.
Vassil Vassilev, Hal Finkel

LLVM Releases
Discuss the current release process and possible areas for improvement.
Tom Stellard, Hans Wennborg

Vector Predication
Support for predicated SIMD/vector in LLVM is lacking. At this table, we are discussing concrete steps to change that. Topics include:

- LLVM-VP proposal (https://reviews.llvm.org/D57504)
- Intersection with constrained-fp intrinsics and backend support (also
complex arith).
- Design of predicated reduction intrinsics (intersection with
llvm.experimental.reduce[.v2].*).
- Compatibility with SVE LLVM extension.
- <Your topic here>
Simon Moll

Wednesday October 23, 2019 11:55am - 12:30pm
Round Tables (LL21EF)

11:55am

Mitigating Undefined Behavior: security mitigations through automatic variable initialization
clang recently started supporting automatic variable initialization, where it unconditionally initializes stack variables. It addresses concrete issues in security-related C and C++ applications, and serves as a last-defense guardrail against some stack use-after-free and memory disclosures. We’ll dive into how this removes sharp edges in C-based languages, what optimizations are required to make this option palatable, and what current overheads look like.

Speakers
avatar for JF Bastien

JF Bastien

Apple
JF is a compiler engineer. He leads C++ development at Apple.


Wednesday October 23, 2019 11:55am - 12:30pm
General Session (LL20ABC)

11:55am

Souper-Charging Peepholes with Target Machine Info
Souper, an LLVM-based superoptimization framework, has seen adoption in both academic research and industry projects. Given LLVM IR as input, Souper tries to generate peephole patterns by synthesizing semantically equivalent but shorter instruction sequences. However, as a platform-independent framework, it lacks a model of the actual cost of an instruction sequence on the target machine. This leads to missed optimization opportunities or to the generation of peephole patterns that degrade performance.
In this talk, we're going to demonstrate how Souper can benefit from target machine information. Then, we will explore some possible approaches to providing Souper with target machine info to steer the superoptimizer toward finding more patterns that are improvements rather than regressions. This will enable Souper to be used in a more automated way and reduce the manual intervention required.

Speakers
avatar for Min-Yih Hsu

Min-Yih Hsu

University of California, Irvine


Wednesday October 23, 2019 11:55am - 12:30pm
Breakout Room-2 (LL21CD)

11:55am

The Penultimate Challenge: Constructing bug reports in the Clang Static Analyzer
Static analysis is used to find errors and code smells without running the program. Since the highest cost factor in static analysis is the human effort an expert spends evaluating whether a report is a true positive, presenting our findings in an easy-to-understand manner is of the utmost importance.
This talk will explore the techniques and data structures used by the Clang Static Analyzer to construct bug reports. It will briefly explain the construction of the ExplodedGraph during symbolic execution, and how it is processed after the analysis. Using a combination of data- and control-dependency analysis, aided by inspection of the ExplodedGraph, the analyzer tries to construct user-friendly diagnostics. Since symbolic execution is a kind of path-sensitive analysis, the idea behind the analyzer's solution is general enough to create diagnostics for other kinds of analyses. We will also discuss the challenges the analyzer faces and future development possibilities.

Speakers
KU

Kristóf Umann

Eötvös Loránd University, Ericsson


Wednesday October 23, 2019 11:55am - 12:30pm
Breakout Room-1 (LL21AB)

12:30pm

Lunch
Lunch.

Wednesday October 23, 2019 12:30pm - 1:45pm
Lunch Area - TBD

1:45pm

Lightning Talks
NEC SX-Aurora as a Scalable Vector Playground    
Simon Moll

A unified debug server for deeply embedded systems and LLDB
Simon Cook

Speculative Compilation in ORC JIT
Praveen Velliengiri

Loom: Weaving Instrumentation for Program Analysis
Brian Kidney

Lowering tale: Supporting 64 bit pointers in RISCV 32 bit LLVM backend
Reshabh Sharma

Clang Interface Stubs: Syntax Directed Stub Library Generation.
Puyan Lotfi

Flang update
Steve Scalpone

Virtual Function Elimination in LLVM
Oliver Stannard

Making a Language Cross Platform: Libraries and Tooling
Gwen Mittertreiner

Optimization Remarks for Human Beings
William Bundy

Grafter - A use case to implement an embedded DSL in C++ and perform source to source traversal fusion transformation using Clang
Laith Sakka

Improving your TableGen Descriptions
Javed Absar

Speakers
avatar for Simon Cook

Simon Cook

Compiler Engineer, Embecosm
avatar for Puyan Lotfi

Puyan Lotfi

Compiler Engineer, Facebook
SS

Steve Scalpone

NVIDIA
Flang, F18, and NVIDIA C, C++, and Fortran for high-performance computing.
OS

Oliver Stannard

Software Engineer, Arm/Linaro
WB

William Bundy

Sony Interactive Entertainment
BK

Brian Kidney

Memorial University of Newfoundland


Wednesday October 23, 2019 1:45pm - 2:55pm
General Session (LL20ABC)

1:45pm

Roundtables
Roundtables

Floating point
Discussions related to FENV_ACCESS support, fast math, FMA, complex, etc.
Andy Kaylor

ClangBuiltLinux
Building the Linux kernel with Clang and LLVM utils
Nick Desaulniers

Wednesday October 23, 2019 1:45pm - 2:55pm
Round Tables (LL21EF)

1:45pm

ASTImporter: Merging Clang ASTs
ASTImporter is part of Clang's core library, the AST library. There are cases when we have to work with more than one AST context, but we would like to view the set of ASTs as if they were one big AST resulting from parsing all files together. ASTImporter imports nodes of one AST context into another AST context.
Existing clients of the ASTImporter library are Cross Translation Unit (CTU) static analysis and the LLDB expression parser. CTU static analysis imports the definition of a function if that definition is found in another translation unit (TU). This way the analysis can break out of the single-TU limitation. LLDB's "expr" command parses a user-defined expression, creates an ASTContext for it, and then imports the missing definitions from the AST that was built from the debug information (DWARF, etc.).

Speakers

Wednesday October 23, 2019 1:45pm - 2:55pm
Breakout Room-2 (LL21CD)

1:45pm

The Attributor: A Versatile Inter-procedural Fixpoint Iteration Framework
This is a tutorial on the Attributor. There is a technical talk as well: link


In this tutorial we will:
  1. create a new llvm::Attribute, including all the plumbing,
  2. deduce it with the Attributor, and
  3. use the new attribute to improve alias information :)

Please consider joining us in the Attributor talk prior to attending this tutorial, though it is not strictly required.

---

The Attributor fixpoint iteration framework is a new addition to LLVM that, first and foremost, offers powerful inter-procedural attribute deduction. While it was initially designed as a replacement for the existing “function attribute deduction” pass, the Attributor framework is already more than that. The framework, as well as the deduced information which does not directly translate to LLVM-IR attributes, can be used for various other purposes where information about the code is required. In this talk we will give an overview of the design, showcase current and future use cases, discuss the interplay with other (inter-procedural) passes, highlight ongoing and future extensions, and finally present an evaluation. Actual deduction (and use) of attributes will be described here, and is also discussed in our lightning talk presentations and poster.

Speakers
avatar for Johannes Doerfert

Johannes Doerfert

Argonne National Laboratory


Wednesday October 23, 2019 1:45pm - 2:55pm
Breakout Room-1 (LL21AB)

2:55pm

Lightning Talks
GWP-ASan: Zero-Cost Detection of Memory Safety Bugs in Production    
Matt Morehouse

When 3 Memory Models Aren’t Enough – OpenVMS on x86
John Reagan

FileCheck: learning arithmetic
Thomas Preud'homme

-Wall found programming errors and engineering effort to enable across a large codebase
Aditya Kumar

Handling 1000s of OpenCL builtin functions in Clang
Sven van Haastregt

Implementing Machine Code Optimizations for RISC-V
Lewis Revill

Optimization Remarks Update
Francis Visoiu Mistrih

Supporting Regular and Thin LTO with a Single LTO Bitcode Format
Matthew Voss

Transitioning Apple’s downstream llvm-project repositories to the monorepo
Alex Lorenz

State of LLDB and deeply embedded RISC-V
Simon Cook

Supporting a Vendor ABI Variant in Clang
Paul Robinson

Improving the optimized debugging experience
Orlando Cazalet-Hyams

Speakers
avatar for John Reagan

John Reagan

Principal Software Engineer, VMS Software Inc
Head of compiler development for OpenVMS at VMS Software Inc. John has been working on OpenVMS compilers since 1983 and has generated code for VAX, Alpha, Itanium, x86-64, MIPS, and others. John is leading the team to rehost LLVM to OpenVMS and attach most of the existing OpenVMS frontends…
PR

Paul Robinson

Sr Staff Compiler Engineer, Sony Interactive Entertainment
avatar for Simon Cook

Simon Cook

Compiler Engineer, Embecosm
avatar for Aditya Kumar

Aditya Kumar

Senior Compiler Engineer, Facebook
I've been working on LLVM since 2012. I've contributed to GVNHoist, Hot Cold Splitting, Hexagon specific optimizations, clang static analyzer, libcxx, libstdc++, and graphite framework of gcc.
MV

Matthew Voss

Sony Interactive Entertainment
OC

Orlando Cazalet-Hyams

Sony Interactive Entertainment
Improving the debugging experience for optimized builds


Wednesday October 23, 2019 2:55pm - 4:00pm
General Session (LL20ABC)

2:55pm

Roundtables
Roundtables

GitHub Next-step: bug-tracker (GitHub issues?) & workflows (pre-merge testing, pull-request, etc.)
We're almost on GitHub! It is a good opportunity to brainstorm on possible evolutions of the workflow and the infrastructure. There are opportunities around being able to run builds and tests on code before pushing, and around using pull requests either for pre-merge testing or for code reviews!
Also is bugzilla still providing what we want to track issues? Should we consider GitHub issues? Are there other options?
Christian Kühnel, Dmitri Gribenko, Kristof Beyls, Mehdi Amini

ASTImporter
Discuss future work and direction of ASTImporter
Gabor Marton

State of the GN build
LLVM's GN build is an (unsupported!) faster alternative to the CMake build. Try it out, learn about the current state, wallow in build system arcana.
Nico Weber

Clang Static Analyzer
Let's talk about where we are and how we want to move forward with the Static Analyzer.
Artem Dergachev

Scalable Vector Extension
Discuss current progress and forward plans for both the SVE and RISC-V implementations, focusing on agreed IR representations rather than looking at each back-end independently.

Wednesday October 23, 2019 2:55pm - 4:00pm
Round Tables (LL21EF)

2:55pm

My first clang warning
Abstract coming soon

Speakers

Wednesday October 23, 2019 2:55pm - 4:00pm
Breakout Room-2 (LL21CD)

2:55pm

Writing Loop Optimizations in LLVM
LLVM contains an evolving set of classes and tools specifically designed to interact with loops. The Loop and LoopInfo classes are being continually improved, as are supporting data structures such as the Data Dependence Graph (DDG) and Program Dependence Graph (PDG). The pass manager infrastructure (both New and Legacy pass managers) provide infrastructure to write both function passes and loop passes. However, putting all of these concepts together to write a functioning loop optimization pass can still be a somewhat daunting task. 
This tutorial will start by introducing basic terminology that is used within LLVM to describe loops (for example, many of the concepts introduced in https://reviews.llvm.org/D65164). It will then look at the Loop and LoopInfo classes, and go over the interfaces they have to work with loops. It will provide examples of how these classes can be used to implement different types of loop optimizations, using examples from both the Loop Fusion and Loop Distribution passes. It will discuss the differences between a function pass and a loop pass, including a discussion of the advantages and disadvantages of each one when writing loop optimizations. It will also provide guidance on when each type of pass should be used. Finally, it will go through many of the useful utility functions that need to be used in order to write a loop optimization efficiently (e.g., updating the dominator tree, updating Scalar Evolution, etc.).

Speakers
ET

Ettore Tiotto

Senior Developer, IBM Canada
avatar for Hal Finkel

Hal Finkel

Argonne National Laboratory
avatar for Johannes Doerfert

Johannes Doerfert

Argonne National Laboratory
MK

Michael Kruse

Argonne National Laboratory
avatar for Kit Barton

Kit Barton

Technical lead for LLVM on Power and XL Compilers, IBM Canada


Wednesday October 23, 2019 2:55pm - 4:00pm
Breakout Room-1 (LL21AB)

4:00pm

Poster Session
TON Labs Backend for TON blockchain    

LLVM build times using a Program Repository
Russell Gallop

LLVM build times using a Program Repository
Greg Bedwell

RISC-V bit manipulation support in the Clang/LLVM tool chain
Simon Cook

RISC-V bit manipulation support in the Clang/LLVM tool chain
Ed Jones

RISC-V bit manipulation support in the Clang/LLVM tool chain
Lewis Revill

Attributor, a Framework for Interprocedural Information Deduction
Johannes Doerfert

Attributor, a Framework for Interprocedural Information Deduction
Hideto Ueno

Attributor, a Framework for Interprocedural Information Deduction
Stefan Stipanovic

Overflows Be Gone: Checked C for Memory Safety
Mandeep Singh Grang

NEC SX-Aurora as a Scalable Vector Playground
Kazuhisa Ishizaka

A unified debug server for deeply embedded systems and LLDB
Simon Cook

Speculative Compilation in ORC JIT
Praveen Velliengiri

Loom: Weaving Instrumentation for Program Analysis
Brian Kidney

Lowering tale: Supporting 64 bit pointers in RISCV 32 bit LLVM backend
Reshabh Sharma

Cross-Translation Unit Optimization via Annotated Headers
William S. Moses

Quantifying Dataflow Analysis with Gradients in LLVM
Abhishek Shah

Floating Point Consistency in the Wild: A practical evaluation of how compiler optimizations affect high performance floating point code
Jack J Garzella

Static Analysis of OpenMP Data Mapping for Target Offloading
Prithayan Barua

Speakers
avatar for Simon Cook

Simon Cook

Compiler Engineer, Embecosm
avatar for Mandeep Singh Grang

Mandeep Singh Grang

Senior Engineer
I make LLVM deterministic.
GB

Greg Bedwell

Sony Interactive Entertainment
avatar for Johannes Doerfert

Johannes Doerfert

Argonne National Laboratory
RG

Russell Gallop

Software Engineer, Sony Interactive Entertainment
BK

Brian Kidney

Memorial University of Newfoundland


Wednesday October 23, 2019 4:00pm - 5:00pm
Foyer

5:00pm

Towards better code generator design and unification for a stack machine
By design, LLVM backend infrastructure is geared towards classical register-based architectures. Thus, adapting it to a stack machine implies additional LLVM passes that are likely to vary depending on a target. For instance, the Selection DAG cannot produce instructions that directly handle the stack. Instead, it selects a relevant instruction version designed to work with registers. Then, MIR passes are performed to insert stack manipulations (pushes, pops, exchanges) and to convert instructions handling virtual registers into those handling stack slots. The suggested logic seems quite generic and not limited to a specific stack-based virtual machine. It is similar to other optimizations and analytical approaches that can be applied to stack machines regardless of the specific instruction set.
Previously, WebAssembly was the only implementation that needed comprehensive stackification logic; now we have created an option for the TON virtual machine (TVM). Given that stack machines are great for binary-size minimization, stackification solutions are likely to face demand from other domains. So, we would love to discuss whether or not the community needs generic algorithms that can be integrated with various backends, and whether stack-machine support might benefit the target-independent code generator.

Speakers

Wednesday October 23, 2019 5:00pm - 5:35pm
Breakout Room-2 (LL21CD)

5:00pm

Roundtables
Roundtables

GSoC
LLVM Google Summer of Code: organization, ideas, projects, etc.
Anton Korobeynikov



Wednesday October 23, 2019 5:00pm - 5:35pm
Round Tables (LL21EF)

5:00pm

Alive2: Verifying Existing Optimizations

Alive is regularly used to verify InstCombine optimizations. However, it is limited mostly to InstCombine-style optimizations, and it can only verify optimizations written in Alive's own IR-like DSL.

Alive2 is a re-implementation of Alive that removes several limitations of the previous tool. It supports floating point operations and has better support for memory and loops. It handles optimizations beyond those found in InstCombine. It includes a standalone tool that can prove equivalence / refinement between two bitcode functions as well as an `opt` plugin that can prove that an LLVM optimization is correct. Neither of these new tools requires optimizations to be rewritten in the Alive DSL.

In this talk, we will give an overview on Alive2 and show how you can use it to 1) ensure your optimization is correct, and 2) to find that bug that is triggering a miscompilation.


Speakers
NL

Nuno Lopes

Microsoft Research


Wednesday October 23, 2019 5:00pm - 5:35pm
General Session (LL20ABC)

5:00pm

Loop-transformation #pragmas in the front-end
Code-transformation directives allow the programmer to specify which transformation the compiler should apply and in which order (e.g. tile the loop nest, then parallelize the outermost and vectorize the innermost loop) without impacting the source's maintainability. Currently, Clang only supports the "#pragma clang loop" directives, which do not reliably take a sequence of transformations into account.
We present the "#pragma clang transform" directive, which specifically supports chaining transformations. These directives must be parsed, represented in the AST, instantiated for templates, (de-)serialized, dumped, semantically verified, and have their LLVM IR generated.

Speakers
MK

Michael Kruse

Argonne National Laboratory


Wednesday October 23, 2019 5:00pm - 5:35pm
Breakout Room-1 (LL21AB)

5:35pm

Debug Info BoF
As evidenced by the debug info quality metrics introduced at last year's Debug Info BoF session, there have been significant improvements to LLVM's handling of debug info in optimized code throughout 2019. With a growing number of debug info contributors in the LLVM community, this session provides a forum to highlight recent improvements and areas that need attention. We will use the opportunity to summarize the current state of LLVM debug info quality and then open the floor to a discussion about future directions.

Speakers
avatar for Adrian Prantl

Adrian Prantl

Apple
Ask me about debug information in LLVM, Clang and Swift!


Wednesday October 23, 2019 5:35pm - 6:10pm
Breakout Room-2 (LL21CD)

5:35pm

Roundtables
Roundtables:




Wednesday October 23, 2019 5:35pm - 6:10pm
Round Tables (LL21EF)

5:35pm

Transitioning the Networking Software Toolchain to Clang/LLVM
In this talk we will share our experience transitioning Cisco Enterprise Networking software with a high market share to Clang/LLVM as the primary compiler. For performance and business reasons, our software stack must run on many different processors. We will describe several contributions to the MIPS and PPC backends that bring LLVM to parity with GCC for these processors. We will also summarize our contributions to debugging optimized code and to enabling LLVM on the Cisco data plane component, where code must be highly optimized with LTO to forward network packets in the correct byte order.

Speakers
IB

Ivan Baev

CIsco Systems
BS

Bharathi Seshadri

Cisco Systems
JS

Jeremy Stenglein

CIsco Systems


Wednesday October 23, 2019 5:35pm - 6:10pm
Breakout Room-1 (LL21AB)

5:35pm

Using LLVM's portable SIMD with Zig
While not every application that uses SIMD restricts itself to the portable subset that LLVM provides, writing to LLVM instead of assembly (or …, et cetera) offers more than portability across multiple platforms: it also allows your application to benefit from a rich library of LLVM optimizations. While the portable SIMD features of C (and Rust) are insufficient to write a full application, LLVM provides much more. In addition to exposing the full power of LLVM's SIMD functionality, Zig brings novel features such as comptime to vector intrinsics.
We show that LLVM and Zig enable a single new libmvec implementation, replacing the many currently in use and in development.

Speakers
SL

Shawn Landden



Wednesday October 23, 2019 5:35pm - 6:10pm
General Session (LL20ABC)

6:10pm

Closing
Closing.

Speakers
avatar for Tanya Lattner

Tanya Lattner

President, LLVM Foundation
President, LLVM Foundation


Wednesday October 23, 2019 6:10pm - 6:20pm
General Session (LL20ABC)