The pitfalls of over-reliance on AI for self-directed learning
Overfit – the lessons I’ve learnt too well | Epoch 1.

“What’s the point in not using a calculator when I’m always going to have one with me?”
…misses a point that is fundamental to effective learning.
Getting the right answer was never the point. The real value was in exercising your cognitive muscles: holding multiple values in your head, manipulating them, and choosing the right technique or trick to get to the answer. If you don’t practice it, you’ll lose it. For mental arithmetic, that may already have happened. The question is: do we want it to happen to writing, coding, comprehension, or even critical thinking as well?
For myself, and many others, learning doesn’t stop when you leave full time education. I’ve spent a lot of my free time reading, learning and working on projects to stretch my understanding. AI has dramatically changed the way I approach this. In many ways, it has made self-directed learning far easier than ever before. But as with everything else, it’s not without its challenges.
Contents
- We are blessed with the ability to ask dumb questions completely judgement-free, but there are perhaps some trivial questions that shouldn’t be asked – at least not straight away.
- Over-reliance on AI tools to produce things for us at best removes any sense of satisfaction or accomplishment. At worst, it inhibits our ability to think, do, learn, and practice for ourselves.
- Self-directed learning inherently lacks a structured curriculum. I don’t believe AI is a good substitute for one.
As always, these are my own thoughts from my own experience – you’re allowed to disagree, and I’d love to hear your perspective.

Part 1. Finding the right balance of questions to ask
We are at a time in history like no other. This period started with the birth of the internet, but it has truly come into its own with the rise of LLM tools. Never before has humanity been able to ask so many dumb questions without looking stupid. It’s both a blessing and a curse.
The blessing is obvious. There’s a tendency for people in formal education not to ask questions when they get stuck. We’ve all been there: everyone else seems to be getting it while you’re sitting there absolutely clueless, watching the clock tick down, probably contemplating dropping out. You’re convinced the question you want to ask is ‘trivial’, and by now it’s been way too long. If you ask it now, everyone – including the lecturer – is going to realise you’ve been completely lost this whole time because you really should’ve just asked it last Tuesday.
I wish I’d had today’s AI tools when I was at university. So often it felt like once you got behind, it was impossible to catch up, because the next lecture wouldn’t make sense without understanding the previous one. AI tools remove that problem entirely. They let us clarify all the little confusions we’d rather not voice out loud – and in real time, rather than struggling for hours over what might turn out to be an unnecessary problem.
This is undoubtedly a step in the right direction and is, for me, perhaps the most beneficial use of AI tools as a learning aid: it helps us stay in the flow of a lesson without being sidetracked by simple, preventable misunderstandings.
But naturally, as humans, we exploit these tools to our own detriment. If we aren’t careful, we lose the ability to think critically and solve problems on our own. It’s very easy to say “I know I can do it, but it’s easier to get AI to give me the answer”. The problem isn’t just that it’s easier – it’s that over time, if we keep taking the easy path, we stop being able to do it ourselves at all.
Not all questions should be asked immediately, and not all struggle is unproductive. There’s a meaningful difference between going around in circles for hours on something trivial and wrestling with a genuinely difficult concept. This applies to any field, but I’ll use mathematics as an example. Being stuck on step 3 of a proof is fundamentally different from not knowing where to even begin.
If you’re stuck on step 3, you’ve engaged with the problem. You understand what you’re trying to prove and you’ve made progress. This might be a genuine point that needs clarification – for me, this is a perfect use case for AI.
But if you don’t know where to begin, that may suggest you need to spend more time with the source material. Asking AI to start the problem for you might get you unstuck in the moment, but it skips the foundational thinking that would help you start the next problem on your own.
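To make that distinction concrete, here’s a sketch built on the classic proof that √2 is irrational (my own illustrative example, not from any particular course):

```latex
% The classic proof by contradiction that sqrt(2) is irrational:
% Step 1: Assume \sqrt{2} = p/q for coprime integers p and q.
% Step 2: Squaring gives 2q^2 = p^2, so p^2 is even, hence p is even: p = 2k.
% Step 3: Substituting back gives 2q^2 = 4k^2, i.e. q^2 = 2k^2.
\[
  \sqrt{2} = \frac{p}{q}
  \;\Rightarrow\; 2q^2 = p^2
  \;\Rightarrow\; p = 2k
  \;\Rightarrow\; q^2 = 2k^2
\]
```

Being stuck at step 3 – you can see that q² = 2k², but not why that contradicts anything – is a sharp, answerable question, and a fair one for AI. Not understanding why we assumed √2 = p/q in the first place suggests proof by contradiction itself hasn’t sunk in yet, and that’s a signal to revisit the source material rather than ask for the next line.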
So before reaching for AI, give yourself some time with the problem. You’re a smart individual; there’s a good chance you can figure it out. But don’t let pride sidetrack you either – if you’ve genuinely tried and you’re still stuck, turn to the subject-matter expert waiting in your pocket.
Key Takeaway 1: Struggle productively, but don’t suffer needlessly. Learn to tell the difference.
For a more in-depth guide on getting more value out of AI tools, check out this article on prompt engineering tips and tricks:

Part 2. Don’t fall into the AI production trap
In the previous section I discussed when to ask AI for help with understanding something. But there’s an equally pertinent problem: using AI to produce the work itself.
Taking shortcuts feels efficient but undermines long-term learning. I think it’s important to remember that writing is learning; it’s a powerful way to internalise information. Since we can think faster than we can write, writing forces us to slow down, think deeply about the ideas at hand, and see how they link to other concepts.
AI-generated work comes with some real costs:
- There’s a reason it’s referred to as AI-generated slop: AI-generated text is very recognisable. It’s generic, rambling, and full of predictable sentence structures. And that’s when it’s not just hallucinating information entirely. It’s not good writing; it just requires less effort than doing it yourself.
- It doesn’t capture your ideas, which means your notes don’t reflect your understanding or what you thought was important to remember. These aren’t your notes; they’re a stochastic parody of what people on the internet thought about the topic.
- Finally, you lose the sense of satisfaction, accomplishment, or fulfilment that comes from producing the work yourself. Maybe that’s less important for our long-term learning outcomes, though it might dampen our motivation after a while.
Writing code is another great example. You’re not just producing a program: you’re practicing turning ideas and theory into working code; you’re building familiarity with specific libraries and functions; and you’re learning to spot the common mistakes you’ll undoubtedly make along the way. AI-generated code teaches you none of this. It has the added drawback of producing errors that can be hard to spot and that can have far-reaching consequences across the rest of your codebase, only coming back to bite you later. Without the skills learnt through practice, how can you even tell whether the AI-generated code is well written and accurately fulfils its purpose? It’s not even rewarding; there’s no joy in producing a program to solve a problem when you’re not the one who solved it.
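To illustrate, here’s a deliberately simple, hypothetical example of my own – the kind of subtle bug generated code can smuggle in. It runs without error and looks perfectly reasonable:

```python
# A classic Python pitfall: a mutable default argument.
# The default list is created once, when the function is defined,
# and then silently shared across every call.

def add_note(note, notes=[]):
    notes.append(note)
    return notes

monday = add_note("read chapter 1")
tuesday = add_note("do the exercises")

print(tuesday)  # ['read chapter 1', 'do the exercises'] – Monday's note leaked in
```

If you’ve never written enough Python to be bitten by this yourself, nothing about that code looks wrong – which is exactly the problem.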
There’s a balance to be struck, and for me it comes down to a time-value trade-off.
AI is great for automating simple tasks that would require your time, not your cognitive effort. Need to summarise your notes or turn them into flashcards? AI can handle it. Need to refactor your code and tidy up the documentation? AI’s got your back. Want someone to review your work and check your understanding? You’re in luck.
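As a rough sketch of what that can look like in practice – assuming the OpenAI Python SDK here, with an illustrative model name and prompt of my own choosing; any comparable tool works – turning notes into flashcards is only a few lines:

```python
# Minimal sketch: turn raw study notes into Q&A flashcards.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name, prompt, and file name are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

def notes_to_flashcards(notes: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Turn the user's notes into concise flashcards, "
                        "one per line, formatted as 'Q: ... | A: ...'."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

print(notes_to_flashcards(open("lecture_notes.txt").read()))
```

The point is that the thinking – what the notes say, what’s worth remembering – already happened when you wrote them; the flashcard formatting costs pure time, not effort.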
Sometimes we just need things done quickly, and in those cases your new AI sidekick can help. Just remember the difference: AI can help you produce something, but it won’t help you learn how to do it. It’s a slightly different story if you’ve already mastered a skill. If you possess the skillset to review AI-generated material, assess its quality, and adapt it to your needs, then it can be a genuine time-saver. But treating its word as gospel won’t teach you how to write or code better, and to anyone more experienced, it’ll be obvious you didn’t write it yourself.
Key Takeaway 2: Use AI to save time, not effort. If you can’t assess its output, you’re not ready to use it.

Part 3. Devising a Curriculum
When learning something without the guidance of a teacher or course, one of the hardest questions is simply: where do I start?
Learning a new topic can be overwhelming at the best of times. For me, the worst-case scenario is when it falls into the ‘known-unknown’ category: you know just enough to realise exactly how much you don’t know.
I think with the tools we have available now, it’s easy to reach this point too quickly. AI is particularly good at giving you sweeping overviews. Ask it about quantum mechanics or machine learning and it’ll happily lay out dozens of topics and adjacent ideas you might want to explore. Suddenly you’re aware of all these concepts you don’t understand, and the path forward feels paralysing rather than exciting.
Context can be useful to ground ideas and provide motivation for how things fit into the larger picture, but dwelling too much on the breadth of a subject is a surefire way to lose all motivation. You need a structured path through the material, not just a map of everything that exists.
I’m also not a fan of using AI tools to design a curriculum. In my experience, it doesn’t always cover topics in the ‘right’ order, or group together related topics that should be studied side by side. Good curricula aren’t just lists of topics; they’re carefully constructed sequences where each concept builds on the last, where examples are chosen to reinforce multiple ideas simultaneously, and where the order itself teaches you something about how experts think about the field.
That kind of design comes from lived, human experience of exploring, learning, and teaching a topic. A textbook author has wrestled with the material themselves, taught it to many cohorts of students, seen where people get stuck, and refined their approach over years. AI has done none of that. It’s pattern-matching from curricula that already exist, but it doesn’t understand why they work.
I’ve tried many times to use AI tools to produce a comprehensive curriculum or lesson plan, in the hope of getting a complete roadmap through a subject. The problem is that AI optimises for appearing comprehensive rather than being useful. Ask for a list of resources and you’ll get twenty textbooks when three good ones would do. It treats all sources as roughly equal rather than highlighting what will actually provide the most value. It can’t tell you to “read this chapter from this book, then follow up with this other resource”, because it doesn’t have the judgment to know what matters most at your stage of learning.
For me, the better approach is to use the contents and structure of a well-respected textbook to guide my learning. Not just any textbook – one that’s stood the test of time, that people consistently recommend, that has a clear progression. If you can access the book itself, even better. If not, using its table of contents as a roadmap still gives you that structured path.
That said, AI can be helpful for finding which textbooks or resources to use in the first place. It can tell you what’s most commonly recommended in a field and point you toward well-established materials. Just don’t ask it to reconstruct the curriculum those materials provide.
Key Takeaway 3: Use AI to find good resources, not to replace them. Trust human-designed curricula over AI-generated ones.
Metacognition and the bigger picture
There’s a theme running through all three sections that I haven’t explicitly named yet: metacognition, or learning how to learn more effectively.
AI tools have fundamentally changed this process. Jumping straight to asking AI for an answer skips the moment of reflection where you recognise why you’re stuck. When AI produces your work, you lose the feedback loop that tells you what you actually understand and where you can make progress. When it designs your curriculum, you never develop the judgment to know what should be learnt next and why it’s important.
I don’t believe these are just minor conveniences we’re trading away – they’re core metacognitive skills we’re slowly losing: the ability to recognise the difference between “I need a hint” and “I need to get my head down and learn this”; the awareness of when you’re genuinely learning versus just consuming information; and the judgment to know whether you’re ready to move forward or need more practice.
There’s definitely value in current AI tools, and choosing not to use them at all seems unwise to me. But we need to be thoughtful about how we adopt them. AI can be a powerful aid for self-directed learning – but only if we’re intentional about how we use it. Learning is a skill in itself; it requires practice and conscious thought to get right. Be cautious when adopting these tools rather than jumping in headfirst and dealing with the consequences later. Use them to enhance your learning process, not replace it.
The point was never to get to the right answer; it was to develop the capacity to think, reason, and solve problems for ourselves. AI can help you learn, but only if you stay in the driver’s seat.