Leon Sixt

I am a Ph.D. student in the Landgraf Lab at the Freie Universität Berlin. I am interested in unsupervised learning, information theory, and the interpretability of machine learning.

My work so far has focused on evaluating interpretability methods and on developing interpretability algorithms that are both correct and understandable.

Short CV

11/2020-02/2021:  Internship at Google with Martin Maas and Been Kim.

since 04/2019:  Elsa-von-Neumann Scholarship.

since 2018:  Ph.D. student at FU Berlin.

2018:  Master's degree, Computer Science, FU Berlin.

2017/2018:  Master's thesis, Bethge Lab, Tübingen.

2016:  Bachelor's degree, Computer Science, FU Berlin.

Publications

When Explanations Lie: Why Many Modified BP Attributions Fail, Leon Sixt, Maximilian Granz, and Tim Landgraf. ICML (2020).

tl;dr: We examined the most prominent modified BP attribution methods and found that they do not faithfully explain the decisions of deep neural networks.


Restricting the Flow: Information Bottlenecks for Attribution, Karl Schulz*, Leon Sixt*, Federico Tombari, and Tim Landgraf. ICLR, oral (2020).

tl;dr: We applied noise to an intermediate feature map to measure which areas are unimportant for the network's prediction (in bits/pixel). (*equal contribution)


RenderGAN: Generating Realistic Labeled Data, Leon Sixt, Benjamin Wild, and Tim Landgraf. Frontiers in Robotics and AI 5 (2018).

tl;dr: We combined a 3D model and a GAN to generate realistic-looking labeled data for decoding honeybee tags.

Other Projects

emojicite is a fun project that brings emojis to scientific citations. Flag self-citations as in (Sixt et al., 2019 🤳), appreciate the hard work of others (Smith, 2014 ❤️), add some negativity (Wakefield et al., 1998 🤦), or mark how thoroughly you read (Van Wesel et al., 2014 🙈).

Another project is typeengine.js, which you see in action right now. It brings LaTeX-quality typesetting to the web and supports advanced microtypography features such as font stretching and margin protrusion.