Emerging Applications of Normalizing Flows in Reinforcement Learning

This is Part 2 of my series on Normalizing Flows.

Following the discussion in Part 1, I want to cover:

  • Requirements on the mapping
    • The map does not need to be bijective (see the change-of-variables note after this list)
    • Contractions -> potentially useful in high-dimensional cases, e.g. images?
  • Experiment (Mischa)
    • Decoder part is injective, i.e. bijective onto its image (see the training sketch after this list)
      1. Train the encoder (jointly with the decoder) as a VAE
      2. Fine-tune the decoder as a flow
  • Convolutional flow networks in more detail
    • How does Glow work? Is it bijective? Does it keep the same input/output dimensions? (no; see the coupling-layer sketch after this list)
  • Piecewise invertible transformations for flows
  • Dimensionality reduction flows
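
On the first point (maps that are injective but not bijective): the change of variables still applies, just with a rectangular Jacobian. Assuming a smooth injective decoder $g: \mathbb{R}^d \to \mathbb{R}^D$ with $d \le D$ and full-rank Jacobian $J_g$, the density induced on the image of $g$ is

$$
p_X(g(z)) = p_Z(z)\,\det\!\big(J_g(z)^\top J_g(z)\big)^{-1/2},
$$

which reduces to the familiar $p_Z(z)\,\lvert\det J_g(z)\rvert^{-1}$ when $d = D$.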
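
For the experiment, here is a minimal PyTorch sketch of the two-stage procedure. To be clear, this is my own illustration, not Mischa's actual setup: the toy dimensions, the architectures, and the stage-2 objective (the injective change-of-variables likelihood above, plus a reconstruction term so the decoder stays pinned to the data) are all assumptions.

```python
import torch
import torch.nn as nn

DATA_DIM, LATENT_DIM = 4, 2   # toy sizes; real images would be far larger

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(DATA_DIM, 16), nn.ReLU())
        self.mu = nn.Linear(16, LATENT_DIM)
        self.logvar = nn.Linear(16, LATENT_DIM)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

encoder = Encoder()
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 16), nn.Tanh(), nn.Linear(16, DATA_DIM))
x = torch.randn(256, DATA_DIM)   # stand-in for a real dataset

# ---- Stage 1: train encoder + decoder jointly as a plain VAE (ELBO) ----
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(200):
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    recon = ((decoder(z) - x) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    loss = recon + kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# ---- Stage 2: freeze the encoder, fine-tune the decoder as an injective flow ----
# Per sample: log p(x) ~= log p_Z(z) - 1/2 logdet(J^T J) at z = encoder mean,
# plus a reconstruction penalty so decoder(z) stays close to x.
prior = torch.distributions.Normal(0.0, 1.0)
opt2 = torch.optim.Adam(decoder.parameters(), lr=1e-4)
for _ in range(50):
    with torch.no_grad():
        z, _ = encoder(x[:32])   # posterior means as latents
    nll = 0.0
    for xi, zi in zip(x[:32], z):
        J = torch.autograd.functional.jacobian(decoder, zi, create_graph=True)
        logdet = 0.5 * torch.logdet(J.T @ J)   # rectangular change of variables
        recon = ((decoder(zi) - xi) ** 2).sum()
        nll = nll + logdet - prior.log_prob(zi).sum() + recon
    loss = nll / 32
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```

Computing a full per-sample Jacobian like this only scales to toy dimensions; at image scale the log-determinant would have to be estimated stochastically.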
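
On the Glow questions: Glow's core block is the affine coupling layer, which is bijective by construction and preserves dimensions exactly; the "(no)" above presumably refers to the multi-scale architecture, where half the channels are factored out at each level, so the tensors flowing through later blocks are smaller than the input. Below is a minimal coupling-layer sketch, my own simplification using linear layers rather than Glow's convolutions, actnorm, and invertible 1x1 convolutions:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """y1 = x1,  y2 = x2 * exp(s(x1)) + t(x1): invertible for any net s, t."""

    def __init__(self, dim):
        super().__init__()
        assert dim % 2 == 0
        self.net = nn.Sequential(
            nn.Linear(dim // 2, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                 # bound the log-scales for stability
        y2 = x2 * s.exp() + t
        log_det = s.sum(dim=-1)           # triangular Jacobian: log|det J| = sum of log-scales
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * (-s).exp()
        return torch.cat([y1, x2], dim=-1)

# Round trip: same input/output dimension, and inverse(forward(x)) == x.
layer = AffineCoupling(8)
x = torch.randn(4, 8)
y, log_det = layer(x)
assert y.shape == x.shape
assert torch.allclose(layer.inverse(y), x, atol=1e-5)
```

The triangular Jacobian is what makes the log-determinant a cheap sum of scales; stacking such layers with permutations (or Glow's 1x1 convolutions) between them gives an expressive, exactly invertible model.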
