
Generative AI Cope

Published at 10:00 AM

This topic has been widely discussed and has lingered in the back of my mind for a couple of years, but something recently compelled me to finally write about it. I was at a friend’s 40th birthday party when I encountered someone in tech who openly expressed disdain for people “in AI” who primarily use generative AI or LLMs. In his view, this wasn’t “real” tech. The conversation struck a nerve with me, perhaps because I’ve never considered myself a particularly strong coder (nor should I! LLMs do allow me to build production-grade apps on my own, so I’m really grateful for how it all turned out).

Before the rise of LLMs, I coded only when absolutely necessary. Some Python, some TypeScript, but nothing exceptional. It was super slow and I did not enjoy it. Yet back in 2016, I successfully built a production-grade system for grading X-ray images of dogs’ hips and elbows. I trained a machine learning model that classified these images according to assessments made by the leading expert in that field, and the results were excellent.

So it surprises me how many people still hold disdain towards generative AI, especially within tech circles. Honestly, it feels like pure cope. Perhaps this discomfort arises because their unique, hard-earned skills (coding, interacting with complex technologies the hard way) are suddenly becoming accessible to anyone through generative models. Now, people who never put in years of dedicated study can achieve goals previously reserved for technical experts. This democratization of skill unsettles many who once saw themselves as gatekeepers of specialized knowledge.

Another fascinating group includes those who dismiss generative AI outright, confidently declaring it overhyped without ever seriously engaging with frontier LLMs. These skeptics often argue points about AI limitations that have already been proven outdated or outright false. Notice how the goalposts for AGI keep shifting? Each advancement is dismissed with new skepticism, ignoring how quickly and fundamentally these technologies have already reshaped numerous domains.

In contrast, within tech itself, many people are experiencing a productivity and creativity explosion. Solo developers can now tackle tasks that previously required extensive teams. Building sophisticated apps, controlling physical devices (robots, smart home tech, 3D printing), and pioneering entirely new applications have become commonplace. Writing, coding, and research are permanently altered, yet the broader culture still struggles to fully adapt. This paradigm shift happened faster than any previous one, but we’re overdue to stop resisting and start integrating.

Take education, for example. We still require high-school and college students to produce essays, and they do, but not by thinking or writing in the traditional sense. Instead, they spend their time slightly editing text generated by their favorite LLM to hide its AI origins. Rather than clinging to old standards, it would be better to immediately raise expectations, openly allow AI, and leverage it as a personal, Socratic-style tutor.

Despite the persistent myth of diverse “learning styles,” the only intervention consistently shown to dramatically boost learning (around two standard deviations) is personalized one-on-one tutoring. Bloom’s seminal “2 Sigma” research (1984) highlights the effectiveness of individualized tutoring. Historically, personalized tutoring at scale was impractical. Now, with leading generative AI, we can offer every student their own infinitely patient, adaptable, personalized tutor. This could revolutionize education, making learning faster and deeper. Yet, progress toward this seems surprisingly slow (what a waste of potential).

Another example: PaperGraderPro, a tool I designed to alleviate the grading burden for educators. Teachers set grading criteria, and the AI generates comprehensive, nuanced feedback for each student. Unlike human graders, the AI does not experience fatigue or exhibit preferential treatment, ensuring consistency and fairness. The system effortlessly produces detailed, insightful feedback, writing a full page or two per student if required. Doing the same manually would be impractical for teachers handling large classes with 10-20 page essays.
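The workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual PaperGraderPro implementation: the function names and rubric format are my own, and the model call is left as a parameter so any frontier LLM API could be plugged in.

```python
# Hypothetical sketch of an LLM-grading loop in the spirit of PaperGraderPro.
# The model call is injected as `call_model`; in practice it would hit a
# frontier LLM API with the assembled prompt.

def build_grading_prompt(criteria: list[str], essay: str) -> str:
    """Combine the teacher's grading criteria and one student essay
    into a single prompt for the model."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are a patient, consistent grader. Evaluate the essay below "
        "against each criterion and write detailed, specific feedback.\n\n"
        f"Criteria:\n{rubric}\n\nEssay:\n{essay}"
    )

def grade_class(criteria: list[str], essays: dict[str, str], call_model) -> dict[str, str]:
    """Apply the same rubric to every essay; unlike a human grader,
    the model applies it identically to student 1 and student 100."""
    return {
        name: call_model(build_grading_prompt(criteria, text))
        for name, text in essays.items()
    }
```

Keeping the rubric in one shared prompt template is what gives the consistency the paragraph describes: every student is judged against the exact same instructions.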

Part of the resistance likely comes from significant portions of the humanities community who have had limited exposure to frontier LLMs (they don’t have an accurate read on what the capabilities and limitations are and how the models behave in different situations). Individual teachers can’t single-handedly overhaul education (they work within established frameworks). Yet, broadly speaking, every field stands to benefit enormously by fully embracing generative AI. Very few areas wouldn’t see immediate improvement with today’s frontier models.

What surprises me most is the vehement denial and active animosity towards the very idea that a computer program might now handle tasks once exclusive to human experts or expert teams. Instead of inventing flawed arguments against generative AI, it’s better to spend that energy exploring how to harness these powerful new tools. Focus first on your own expertise, experiment, innovate, and share your discoveries widely. The goal isn’t to protect outdated skills but to improve everyone’s capabilities.

