I’m wondering - why is learning this way (i.e. by predicting things, like “why does this person think AOC is to blame for the Boulder attack?”) so much more… memorable? Like, I don’t need to make flashcards to remember the outcome, the learning, how the prediction cashed out, or what extra context ChatGPT gave me about the Boulder attack. The facts are just in my head now, somehow??
And my current working model is that it’s probably care. Care is the thing. I made a prediction, and then I had skin in the game: am I right, am I wrong? Am I missing something blindingly obvious? Am I about to discover a giant blind spot?
And then the answer delivers a eureka moment: it fills in the gaps, it provides insight into brand new parts of my world model. “Oh shit, I didn’t even think of that!” And that’s super fun and engaging!
And it can also be super humbling - “holy shit, I can’t believe I didn’t know that, how embarrassing”, which makes the new learning extra sticky, extra salient.
So, that’s my current working model. Learning by reading an economics textbook because it might be useful one day feels so different from posing a question and then solving it.
It feels the same with programming - you can learn all the syntax first, doing “bottom-up” learning, or you can get thrown in the deep end and try to make something work. And for sure, you still have to do bottom-up learning, there’s absolutely still a place for it, but I think my split has been 99.9% bottom-up learning and 0.1% learning by solving problems.