As California advances new AI regulations and companies continue to pour billions of dollars into building the most powerful systems yet, I hear a recurring grumble online: Why is AI being shoved down our throats? What is this good for? Does anyone actually want this??
In a recent Gallup poll, the percentage of Americans who think AI does more harm than good is twice that of those who think it does more good than harm. (However, "neutral" is the most popular answer.) Fed up with AI hype and AI-generated text everywhere, many of us feel like AI is something tech companies are forcing on people who were perfectly happy before, thank you very much.
Tech companies are, unequivocally, being careless. Only in AI would people claim that their work could plausibly lead to mass death or even human extinction, and then argue that they should be allowed to keep going completely unchecked. I see where people's skepticism is coming from, and I'm skeptical too.
But these real problems with AI don’t justify every complaint about AI, and something about the “it’s being shoved down our throats” complaint doesn’t sit right with me.
One thing that's easy to forget about generative AI is how new it is. Ten years ago, none of today's everyday tools existed at all. Most of them didn't exist five years ago either, or they were little more than useless party tricks. Two years ago, early versions of these tools had been developed, but almost nobody knew about them. Then OpenAI gave ChatGPT a friendly (not strictly scientific) interface. Two months after launch, the app had 100 million active users.
That burst of enthusiasm created the world that now seems overrun by AI: a new technology captured the public imagination overnight, and a lot of people started using it. It was in response to ChatGPT that competitors doubled down on their own AI programs and released their own chatbots.
You don’t have to like AI. Your doubts are deeply justified. But the world we live in today is a direct result of ChatGPT’s meteoric rise — and like it or not, that rise was driven by the sheer number of people who wanted to use it.
There are good reasons for generative AI enthusiasm – and very real drawbacks
I see a lot of frustration with AI bubbling up when people are trying to learn about something and stumble across a mass of AI-generated articles geared toward SEO. It is undoubtedly disturbing to have high-quality text replaced by low-quality AI text, especially when, as is often the case, it looks fine at first glance and only on closer reading do you realize it's incoherent. Many of us have had that experience, and it poses a serious threat to the culture of sharing authentic work that made the internet great.
While "people shoving AI slop in our faces" is a highly visible consequence of the AI boom, alongside cheating on exams and the alleged death of art, the valuable uses of AI are often less obvious. But they exist: it's ridiculously helpful for programmers, it enables great and imaginative new kinds of games, and it can act as a rough copy editor for people who could never otherwise afford one.
I find AI useless for writing, but I often use it to extract text from a screenshot or image that I previously would have had to retype or pay a service to handle. It's great for inventing fantasy character names for my weekend D&D games. It readily rewrites text at a simpler reading level, which helps me design activities for my kids. In the right niche, it feels like a tool for imagination, letting you jump from vague ideas to tangible results.
And, again, this technology is incredibly new. We're looking at the first light bulbs and debating whether electricity is really an improvement or just a party trick. Even if we stopped the unchecked race to build ever more powerful AI systems (and I really think we should), there would still be a lot to discover about how to use the systems we already have effectively.
AI chatbots are derided when they're mediocre, but if every small business could afford a functioning 24/7 customer service chat, that would make it easier for them to compete with larger businesses that already have those services; AI may make that happen in the next few years. If people can realize their ideas more easily, that's a good thing. If text they once found unreadably confusing is now accessible to them, that's a good thing.
AI can be used to check the quality of work, not just to produce mediocre work. We don't yet have high-quality automated review of scientific papers for statistical errors and misconduct, but it would be very valuable if we did. AI can be used to churn out cheap, crappy essays, but it can also provide fairly useful feedback on the first draft of a piece, something many writers wish they could get but can't.
And many of the disturbing things about AI are the product of a culture that has yet to adjust and respond to it, both with regulation and with further innovation. I wish AI companies were more careful about deploying technology that makes the internet less usable overnight, but I also believe in our ability to adapt. Facebook spent three months full of AI spam, but the company adjusted some content filters and now (in my feed, at least) the spam is mostly gone.
The ease with which meaningless marketing copy can now be pumped out is a challenge for search engines, which tend to assume that having lots of text makes a source more authoritative. But frankly, that was a bad assumption even before ChatGPT, and search engines will simply have to adapt and figure out how to surface high-quality work.
Over time, all the feedback and grumbling and consumer behavior will shape future AI — and we can collectively shape it for the better.
This is why the AI outcomes that concern me have to do with creating very powerful systems without oversight. Our society can adapt to many things, as long as we have time to react, adjust, regulate where appropriate, and learn new habits. AI may currently have more bad applications than good ones, but over time we can find and invest in the good ones. We only get into real trouble if human values cease to be a major input: if we stumble into handing more and more decision-making over to AI. We could do that! I'm nervous!
But I’m not too worried about the pitch emails I get for lots of bad AI content or unnecessary AI products. We’re in the early stages of figuring out how to make this tool useful, and that’s okay.