I’m finding it hard at the moment to carve out brain time for proper deep writing. So I thought I’d try sprinkling in another format. This is the first of what might become an ad hoc series of quote posts, where I highlight particular phrases that jumped out at me in something I’ve read. To stop it getting unwieldy (I read a lot!), I’m going to start out doing them as bundles. Here’s a first batch:
Paul Graham on X - February 7 2025
I have the nagging feeling that there's going to be something very obvious about AI once it crosses a certain threshold that I could foresee now if I tried harder. Not that it's going to enslave us. I already worry about that. I mean something subtler
He asked ChatGPT this and followed up with its output, and others chimed in in the comments… Some interesting thoughts emerged from the aggregate, e.g.:
AI will subtly rewire how we process ideas, make decisions, and interact with information. It won’t just provide answers, it will shape what we perceive as worth asking
AI will force human creators to become curators… it may not kill originality, but it might make originality feel less necessary
Just as microscopes didn’t just let us see small things, but fundamentally changed our understanding of life and disease, advanced AI might serve as a mirror that makes previously invisible aspects of human cognition suddenly apparent
Jack Clark on Import AI - January 27 2025
AI systems have got so useful that the thing that will set humans apart from one another is not specific hard-won skills for utilizing AI systems, but rather just having a high level of curiosity and agency… If we get it wrong, we're going to be dealing with inequality on steroids - a small caste of people will be getting a vast amount done, aided by ghostly superintelligences that work on their behalf, while a larger set of people watch the success of others and ask 'why not me?'.
I find this interesting as an AI-era update on the notion of a digital divide. I’m not sure it’s quite THAT new a thing: I suspect a high level of curiosity and agency has helped people progress in their careers and post-school learning journeys even before AI. But I agree the divide might suddenly become more pronounced.
I also have a hunch this impact is not going to be split by age, in the way that in past decades people talked about ‘digital natives’ having the advantage. I suspect that today’s education environment — with a focus on jumping exam hoops, and the ever-growing temptation to shortcut learning by outsourcing to genAI — is not preparing people well to thrive in such an environment, and it’s going to come down ultimately to people’s own innate talents and predispositions.
Dean Ball on Hyperdimensionality - December 26 2024
As these instruments improve, the questions we ask them will have to get harder, smarter, and more detailed. This isn’t to say, necessarily, that we will need to become better “prompt engineers.” Instead, it is to suggest that we will need to become more curious. These new instruments will demand that we formulate better questions, and formulating better questions, often, is at least the seed of formulating better answers.
There’s a similar reference to the importance of curiosity as in the previous quote… but what I like here is the emphasis on improving our ability to question. I think this is vital not just on the prompting input side, but also in evaluating the output. Even if you end up using an AI to help you expand and improve your prompt, you still review it before copying it across, and the suggestions it makes can spark explorations and add nuances you might not previously have thought of. The key is to keep a questioning mind, engaging in the spirit that the AI is a spark for YOUR thinking, helping you get further, faster, but not doing the thinking for you.
Ethan Mollick on One Useful Thing - December 9 2024
AI is often most useful where we're already expert enough to spot its mistakes, yet least helpful in the deep work that made us experts in the first place. It works best for tasks we could do ourselves but shouldn't waste time on, yet can actively harm our learning when we use it to skip necessary struggles. And perhaps most importantly, wisdom means knowing that these patterns will keep shifting as AI capabilities evolve, and as more research comes in, requiring us to keep questioning our assumptions about where it helps and where it hinders.
An apt note to finish this first post on. Things are changing so fast that I’m still trying to learn what AI works well for and where it doesn’t.
I had an experience recently where I wanted to produce a short but insightful summary of the key topics that different companies had been talking about in their last six months of blogging. I’d assumed this was a task AI would excel at, if I gave it all the links and a fully worked prototype to show the intended style, depth and format. But I tried ChatGPT, Claude and NotebookLM to no avail… each delivered SOMETHING, but when I checked, all had missed the wood for the trees. Rather than struggle on with AI, I ended up writing the summary painstakingly by hand. That turned out to be important: I don’t think I’d have fully grokked the gaps and evolving trends across companies if I hadn’t spent that time deep in the weeds.