bridge entity: The intermediate entity (e2) that connects the two hops (e.g., 'John Lennon' in the query 'the spouse of the performer of Imagine')
Patchscopes: A framework that translates hidden representations into natural language descriptions by patching them into a separate prompt, used here to decode what entity is encoded in a vector
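The core mechanic of Patchscopes can be sketched without a real model: run a source prompt, grab a hidden state, and overwrite a position in a second (target) run with it, then let the target run continue. A minimal numpy sketch, where toy random linear layers stand in for Transformer blocks (all weights, shapes, and the `forward` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, seq_len, num_layers = 4, 3, 4
# Toy stand-ins for Transformer layers: fixed linear maps + nonlinearity.
Ws = [rng.standard_normal((d, d)) * 0.3 for _ in range(num_layers)]

def forward(X, patch=None):
    """Run the toy stack over X of shape (seq_len, d).
    If patch=(layer, pos, vec), overwrite position `pos` of the
    hidden states at `layer` with `vec` (the patched representation)."""
    H = X
    cache = []
    for i, W in enumerate(Ws):
        H = np.tanh(H @ W)
        if patch is not None and i == patch[0]:
            H = H.copy()
            H[patch[1]] = patch[2]
        cache.append(H)
    return H, cache

source = rng.standard_normal((seq_len, d))  # stand-in for the source prompt
target = rng.standard_normal((seq_len, d))  # stand-in for the decoding prompt
_, cache = forward(source)
h = cache[2][1]                             # hidden state to interpret: layer 2, position 1
out, _ = forward(target, patch=(2, 2, h))   # patch it into the target run
```

In the real framework the target prompt is chosen so that generation from the patched position verbalizes the encoded entity (e.g., a few-shot "describe this entity" prompt).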
vocabulary projection: A method to interpret a hidden state by multiplying it with the output embedding matrix to see which token it most strongly predicts
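Vocabulary projection amounts to one matrix product followed by an argmax. A minimal sketch with random placeholder tensors (the dimensions and matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size = 8, 5

W_U = rng.standard_normal((d_model, vocab_size))  # output (unembedding) matrix
h = rng.standard_normal(d_model)                  # a hidden state from some layer

# One logit per vocabulary token; the argmax is the token the
# hidden state most strongly predicts under this projection.
logits = h @ W_U
top_token_id = int(np.argmax(logits))
```

With a real model one would use its actual unembedding matrix and map `top_token_id` back to a token string via the tokenizer.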
back-patching: A proposed analysis method where a hidden state from a later layer is injected back into an earlier layer at the same position to simulate having more computation depth
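Back-patching can be sketched as two forward passes: a clean pass to cache hidden states, then a second pass that substitutes a late-layer state for the input to an earlier layer. A toy numpy sketch with random linear layers as stand-ins (layer indices and shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d, num_layers = 4, 6
Ws = [rng.standard_normal((d, d)) * 0.3 for _ in range(num_layers)]

def forward(x, patch=None):
    """Run the toy stack; if patch=(layer, vec), replace the input
    to `layer` with `vec` before that layer runs."""
    h = x
    states = []
    for i, W in enumerate(Ws):
        if patch is not None and i == patch[0]:
            h = patch[1]
        h = np.tanh(W @ h)
        states.append(h)
    return h, states

x = rng.standard_normal(d)
_, states = forward(x)                 # clean pass: cache all hidden states
late = states[4]                       # hidden state after layer 4
out, _ = forward(x, patch=(2, late))   # inject it back at earlier layer 2
```

The patched vector then flows through layers 2-5 again, which is why this simulates giving the position extra computation depth.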
latent reasoning: The internal process where a model computes intermediate steps (like finding the bridge entity) implicitly within its hidden states without outputting them as text
MLP sublayer: The Feed-Forward Network component of a Transformer block, often hypothesized to act as a key-value memory for factual knowledge
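The key-value memory view of the MLP sublayer is easy to make concrete: rows of the input projection act as keys matched against the residual stream, and the output is a nonnegatively weighted sum of the output projection's columns ("values"). A minimal sketch with random weights (ReLU and the dimensions are illustrative assumptions; real models vary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, d_ff = 4, 16
W_in = rng.standard_normal((d_ff, d))   # each row is a "key"
W_out = rng.standard_normal((d, d_ff))  # each column is a "value"

def relu(z):
    return np.maximum(z, 0)

x = rng.standard_normal(d)
coeffs = relu(W_in @ x)   # how strongly each key matches the input
y = W_out @ coeffs        # MLP output

# Equivalent memory view: a weighted sum of value vectors.
y_memory = sum(coeffs[i] * W_out[:, i] for i in range(d_ff))
```

The equivalence `y == y_memory` is what licenses reading individual value vectors as stored facts retrieved when their key fires.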
hop: A single step of reasoning in a knowledge graph, moving from one entity to another via a relation