Paper Pills October 2022

What we read in October

The TransferLab continuously monitors advances in artificial intelligence. We write concise summaries of papers, libraries, talks, and other events that we think our community will find interesting, and publish them as so-called paper pills. This blog post summarizes what we discovered in October.


Local calibration: metrics and recalibration

A calibration method that takes sample similarity into account, automatically providing group calibration even when groups are unknown.
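To make the idea concrete, here is a toy sketch (an assumed, simplified form, not the paper's actual estimator): weight calibration samples by feature similarity and correct a new prediction by the locally observed calibration gap, so that samples resembling a poorly calibrated group get a group-specific correction.

```python
import numpy as np

def local_recalibrate(x_new, p_new, X_cal, p_cal, y_cal, bandwidth=1.0):
    """Recalibrate a predicted probability using nearby calibration samples.

    Toy sketch: weight each calibration sample by an RBF kernel on feature
    distance, then shift the prediction by the locally observed gap between
    observed labels and predicted probabilities.
    """
    d2 = np.sum((X_cal - x_new) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w = w / w.sum()
    # Local calibration gap: how much the model under/over-predicts nearby.
    gap = np.sum(w * (y_cal - p_cal))
    return float(np.clip(p_new + gap, 0.0, 1.0))
```

Because the weights concentrate on similar samples, this corrects miscalibration per neighborhood rather than globally, which is what yields group calibration without ever naming the groups.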

Large language models

DictBERT: Dictionary Description Knowledge Enhanced Language Model Pre-training via Contrastive Learning

External knowledge can be injected into pre-trained language models (here specifically BERT) by training an additional language model on various dictionary-related tasks. This strategy may result in better representations and should be especially useful for sentences that use a lot of jargon.
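Contrastive objectives of this kind typically pull an entry's representation toward the embedding of its own dictionary definition and away from other definitions in the batch. A minimal InfoNCE-style sketch of such a loss (an assumed form for illustration, not the paper's exact objective):

```python
import numpy as np

def info_nce(word_emb, def_embs, pos_idx, temperature=0.1):
    """InfoNCE-style contrastive loss: make word_emb most similar to its own
    dictionary-definition embedding (index pos_idx) among a batch of
    candidate definition embeddings."""
    # Cosine similarities between the word and all candidate definitions.
    w = word_emb / np.linalg.norm(word_emb)
    d = def_embs / np.linalg.norm(def_embs, axis=1, keepdims=True)
    logits = d @ w / temperature
    logits = logits - logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[pos_idx]))
```

The loss is small when the word embedding already aligns with its definition and large otherwise, so minimizing it injects the dictionary knowledge into the representations.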

Decision-focused learning

Optimization Based Modelling

Optimization layers allow incorporating downstream optimization problems into neural network training. The authors study the performance gap between the first-learn-then-optimize and the end-to-end approach in the perfect-model setting, and find that it can be arbitrarily large for non-linear cost functions. They also identify several classes of practically relevant optimization problems for which the end-to-end approach yields optimal solutions in this setting.
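The gap is easy to see in a toy example (our own illustration, not taken from the paper): under an asymmetric, newsvendor-style cost, predicting the uncertain parameter first and then optimizing against the point prediction can be much worse than minimizing the expected decision cost directly.

```python
import numpy as np

thetas = np.array([0.0, 2.0])  # two demand scenarios, equally likely

def cost(z, theta):
    """Non-linear (piecewise-linear, asymmetric) decision cost."""
    under = np.maximum(theta - z, 0.0) * 10.0  # underage is expensive
    over = np.maximum(z - theta, 0.0) * 1.0    # overage is cheap
    return under + over

def expected_cost(z):
    return float(np.mean(cost(z, thetas)))

# First-learn-then-optimize: best point prediction (the mean), then plug in.
z_two_stage = float(thetas.mean())  # = 1.0

# End-to-end: directly minimize the expected decision cost over a grid.
grid = np.linspace(0.0, 2.0, 201)
z_e2e = float(grid[np.argmin([expected_cost(z) for z in grid])])

print(expected_cost(z_two_stage))  # 5.5
print(expected_cost(z_e2e))        # 1.0
```

Scaling the underage penalty scales the gap, which hints at why it can be made arbitrarily large for non-linear costs.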

Prompt-to-prompt image editing

Image Editing

Prompt-to-Prompt Image Editing with Cross Attention Control

Generating images from text is complicated by the fact that small changes in the prompt give very different results. This new study shows how to make minimal, localized changes to generated images by manipulating the neural network's cross-attention maps.
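The core mechanism can be sketched in a few lines of numpy (a toy illustration with assumed names and shapes; the actual method operates on the cross-attention layers inside a diffusion model's denoising network): compute the attention maps for the original prompt, then inject them when generating with the edited prompt, so the spatial layout is preserved while the content changes.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(Q, K, V, attn_override=None):
    """Cross-attention of image queries Q over prompt-token keys K / values V.
    If attn_override is given, those attention maps are injected in place of
    the freshly computed ones -- the prompt-to-prompt trick, in toy form."""
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[1]))
    if attn_override is not None:
        attn = attn_override
    return attn @ V, attn

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 image positions
K = rng.normal(size=(3, 8))  # 3 prompt tokens
V = rng.normal(size=(3, 8))

# Original prompt: compute and store the attention maps.
_, attn_orig = cross_attention(Q, K, V)

# Edited prompt: one token's value changes, but the original attention is
# injected, so each image position still attends to the same tokens.
V_edit = V.copy()
V_edit[1] = -V[1]
out_edit, attn_used = cross_attention(Q, K, V_edit, attn_override=attn_orig)
```

Where a token is attended to determines *where* it appears in the image; by pinning the attention maps, only *what* appears there changes.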