Learning to think the Defender of Basic way
Oh god there’s someone else like me
I’ve always had the strong sense that I never learned how to think.
What does this look like? For one thing (and I’ve gotten better at this, more opinionated, more grounded, but it’s still broadly true): I typically read books in a way where I’m like “I don’t know anything, so I’m just gonna make flashcards from the author’s key points, and one day I’ll know enough that I can form my own opinions”.
It feels like I need to learn a bunch of stuff first, and then I can have my own opinions once I have the requisite knowledge. This still feels fairly correct to me; the problem is that it takes so long to learn things!!!
E.g. I have a broad project to “improve my world model”, which so far has looked mostly like slowly reading a textbook on economics, because I figure that’s an important part of the world that I know nothing about. And once I have my economics knowledge, I can move on to learning the basics of the next thing, and then the next, etc.
In an ideal world, I’d be a kid or a teenager, and I think this’d be a totally viable strategy. A book-devouring autodidact. But I’m 28 and I’m also trying to get a full time job, and also I’d love to learn software engineering, and I’d love to meditate a bunch, have a great social life, etc. I’ve had a sense of “oh god I’m trying to learn way too much at once and it’s going to take so long before I get there”.
Yesterday I read this post by Defender of Basic where he describes learning how to think at 30, and how incredibly exciting/enriching it was. It also sounds like it was much, much more efficient than my plan.
I won’t summarise the post; I highly recommend you read it. Oh, actually, I can get ChatGPT to summarise it, but still, I hugely recommend reading the full thing:
🧠 Geoffrey Hinton on Developing Your Own Framework for Understanding Reality
1. Key insight
- You can’t outsource your worldview—no one can teach you to truly understand reality.
- But you can build your own framework from scratch, like Geoffrey Hinton did with neural networks.
2. The challenge
- Most learning relies on memorizing “what experts say.” That never teaches how to think.
- Hinton’s success came from creating a structure that learns, adapts, and reasons—a true internal model.
3. The Hinton method: meta-reasoning by construction
- Identify important patterns or problems in the world.
- Build a mechanism (e.g. a neural net) that can learn from data and detect patterns.
- Let it refine itself through feedback loops and emergent learning—don’t hard-code the answers.
4. The takeaway
- Bad news: no one can hand you an expert-level world model.
- Good news: with deliberate framework-building, you can bootstrap your own—just like Hinton did with AI.
Nice, so yeah, basically it sounds like “make predictions about other people’s world views, log them, talk to them on twitter”.
What’s the point in predicting the world view of other people?
At first I was like “hm, but I’m not trying to understand the political views of other people, maybe he’s just on a different kick than me”. Like, maybe he’s really interested in “cultural evolution” (which he is), and I’m just out here trying to (at first) know the basics of all the key fields, so that I have a better world model.
But it does make sense. Actually, let me make a prediction here. I predict that:
1. Predicting other people lets you discover your own blind spots (“huh, I thought they’d justify x with y, but actually they said z”).
2. The imaginative act of trying to predict something forces you to properly and actively engage with it. Rather than passively reading books and making flashcards to get someone else’s knowledge into your head first, you can think “ok, based on this chapter being called ‘Media as Epistemology’, I predict that his main point will be x” (I did this yesterday for the first time whilst reading Amusing Ourselves to Death and it immediately felt way more engaging and fun).
I imagine #2 is where the real juice is. Like Defender says, it turns anything from a passive activity into an active one. E.g. I watched The Sopranos recently and loved it, but I wish I’d done his active thing of pausing, predicting what a character was going to do, etc. Rather than being passive and along for the ride, you become an active participant.
Reminds me of Michelene Chi’s ICAP framework for learning, where interactive > constructive > active > passive (I > C > A > P) as forms of learning. Basically, co-creation is the best way to learn. And my way of learning has been “I will put a bunch of facts in my head, so that I can use them at a later date, and maybe they’ll be useful”, rather than, right out of the gate, doing the thing of “hm, interesting, what do I predict, how does this interact with my model, how does this inspire me to change my model?”
Using Fatebook.io
I have a sense that deep/thorough documentation of the model isn’t the thing, because a world model is this inherently very tacit and embodied thing. So e.g. I’m not planning on using Obsidian to thoroughly document the changes to my world model, I think that’d be quixotic.
But I think using Fatebook.io to log predictions and their outcomes is useful! And tbf maybe getting more explicit about priors and posterior updates is a good idea… idk, will see how it goes. I wanna stay lightweight at first for sure. I imagine a lot of this is more akin to Gendlin’s Focusing than it is to making a big spreadsheet of “updates”. That sounds very inert & like busywork.
However, I’ve already been using Fatebook.io to log predictions daily, so I’ll keep doing that.
I’ve been bored of it for a while: I set up a Beeminder to make sure I make 2 predictions a day, but since I haven’t been that inspired/engaged, I’ve mostly done pretty mundane ones like “I will wake up to a voice note tomorrow” or “I will get to the next round of this job application”. They’ve pretty much all been about my own experience. Whereas predictions like “the author of the book will make this point” or “the person I just tweeted asking for clarification re: their worldview will say x” feel like they’ll be much more data-rich/useful.
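To make that “data-rich” idea a bit more concrete: if I ever want to sanity-check my calibration outside of Fatebook, a tiny local log could look something like the sketch below. This is purely my own illustration (the `Prediction` class and `brier_score` helper are hypothetical, not Fatebook’s API), just to show what “log a probability, resolve it later, score it” means in practice.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str                       # e.g. "the author's main point in this chapter is x"
    p: float                         # my probability that the claim is true, 0..1
    outcome: Optional[bool] = None   # filled in once I find out

    def resolve(self, happened: bool) -> None:
        self.outcome = happened

def brier_score(predictions: list[Prediction]) -> float:
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    resolved = [q for q in predictions if q.outcome is not None]
    return sum((q.p - float(q.outcome)) ** 2 for q in resolved) / len(resolved)

# Example: two "predict the author / predict the person" style entries
log = [
    Prediction("'Media as Epistemology' argues TV reshapes what counts as truth", 0.7),
    Prediction("The person I tweeted will justify their view with x", 0.6),
]
log[0].resolve(True)
log[1].resolve(False)
print(f"Brier score so far: {brier_score(log):.2f}")
```

(For reference: a Brier score near 0 means confident, well-calibrated predictions, while always saying 50% gets you 0.25.)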
The end (well, the beginning!)
Ok I think that covers it. Maybe I’ll add a section to my learning log where I summarise learnings for the day, or intentions for the day (e.g. I want to talk to people on twitter about x topic). It definitely feels kinda overwhelming knowing where to begin…
Maybe I can take topics that I’ve already been learning about recently (like AI safety, Heidegger, John Boyd, Vervaeke) and find people who have been tweeting about those?
I wonder how Defender found people to tweet at…
Made a specific post
Also I tweeted about following in Defender’s footsteps and he follows me now, hell yeah