GPT-4 is here, and you’ve probably heard a good bit about it already. It’s a smarter, faster, more powerful engine for AI programs such as ChatGPT. It can turn a hand-sketched design into a functional website and help with your taxes. It got a 5 on the AP Art History test. There were already fears about AI coming for white-collar work, disrupting education, and so much else, and there was some healthy skepticism about those fears. So where does a more powerful AI leave us?
Perhaps overwhelmed or even tired, depending on your leanings. I feel both at once. It’s hard to argue that new large language models, or LLMs, aren’t a genuine engineering feat, and it’s exciting to experience advancements that feel magical, even if they’re just computational. But nonstop hype around a technology that is still nascent risks grinding people down: being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI’s newest model inevitably sidesteps crucial questions—ones that simply don’t fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we’re still grappling with their novel, though less powerful, predecessors, such as ChatGPT?
Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. Whether I’m reading headlines and academic papers or simply stumbling into discussions between researchers and boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory, becoming kaleidoscopic, broad, and vague. How could they not?