[Submitted on 7 Apr 2023]


Abstract: Humans have a powerful and mysterious capacity to reason. By working through
a series of purely mental steps, we can make inferences we would not be capable
of making directly, despite the fact that we get no additional data from the
world. Similarly, large language models can perform better at complex tasks
through chain-of-thought reasoning, where they generate intermediate steps
before answering a question. We use language models to investigate the
questions of when and why reasoning is helpful, testing the hypothesis that
reasoning is effective when training data consist of local clusters of
variables that influence each other strongly. These training conditions enable
the chaining of accurate local inferences in order to estimate relationships
between variables that were not seen together in training. We train an
autoregressive transformer on samples from joint distributions defined by Bayes
nets, but only include a subset of all the variables in each sample. We compare
language models’ ability to match conditional probabilities both with and
without intermediate reasoning steps, finding that intermediate steps help only
when the training data is locally structured with respect to dependencies
between variables. Furthermore, intermediate variables need to be relevant to
the relationship between observed information and target inferences. Our
results illustrate how the statistical structure of training data drives the
effectiveness of reasoning step by step.
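
The following is a minimal illustrative sketch, not the paper's actual training or evaluation code: a hypothetical three-variable chain Bayes net in which each training sample exposes only a neighboring pair of variables, so two variables never co-occur in training. It shows how accurate local conditional estimates can be chained through an intermediate variable to recover a conditional probability that cannot be estimated directly, which is the mechanism the abstract describes. All variable names and probabilities are invented for illustration.

```python
# Hypothetical chain Bayes net A -> B -> C with binary variables.
# Training data are "locally structured": each sample contains only (A, B) or (B, C),
# so A and C are never observed together.
import random

random.seed(0)

# Invented conditional probability tables.
P_A1 = 0.6                       # P(A = 1)
P_B1_given_A = {0: 0.2, 1: 0.8}  # P(B = 1 | A)
P_C1_given_B = {0: 0.3, 1: 0.7}  # P(C = 1 | B)

def sample_full():
    """Draw one full joint sample (A, B, C) from the chain."""
    a = int(random.random() < P_A1)
    b = int(random.random() < P_B1_given_A[a])
    c = int(random.random() < P_C1_given_B[b])
    return a, b, c

# Locally structured training set: each sample keeps only one adjacent pair.
train = []
for _ in range(100_000):
    a, b, c = sample_full()
    train.append({"A": a, "B": b} if random.random() < 0.5 else {"B": b, "C": c})

def empirical(target, cond, data):
    """Empirical P(target = 1 | cond) from samples containing the needed variables."""
    hits = [s for s in data
            if target in s and all(s.get(k) == v for k, v in cond.items())]
    return sum(s[target] for s in hits) / len(hits)

# Direct estimation of P(C = 1 | A = 1) is impossible here: no training sample
# contains both A and C. Chaining local estimates through the intermediate
# variable B recovers it, analogous to reasoning step by step:
#   P(C=1 | A=1) = sum_b P(C=1 | B=b) * P(B=b | A=1)
p_b1_a1 = empirical("B", {"A": 1}, train)
p_c1_b1 = empirical("C", {"B": 1}, train)
p_c1_b0 = empirical("C", {"B": 0}, train)
chained = p_c1_b1 * p_b1_a1 + p_c1_b0 * (1 - p_b1_a1)

true_value = (P_C1_given_B[1] * P_B1_given_A[1]
              + P_C1_given_B[0] * (1 - P_B1_given_A[1]))
print(f"chained estimate of P(C=1 | A=1): {chained:.3f}  (true: {true_value:.3f})")
```

Under these assumed parameters, the chained estimate converges to the true marginalized value (0.62), whereas no direct estimate exists because the observed and target variables never appear in the same training sample.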

Submission history

From: Ben Prystawski


[v1]
Fri, 7 Apr 2023 21:04:03 UTC (385 KB)