The past year has been a rollercoaster in the world of AI, and no doubt many people are dizzy from the pace of advances and setbacks, the constant hype, and the equally constant scaremongering. But let's take a step back: AI is a powerful and promising new technology, but the conversation around it isn't always genuine, and it's generating more heat than light.
AI is interesting to everyone, from doctors to elementary school kids, and for good reason. Not every new technology makes us question the fundamental nature of human intelligence and creativity while also letting us generate an infinite variety of dinosaurs fighting with lasers.
This broad appeal means that the debate over what AI is, isn't, could be, or shouldn't be has spread from industry conferences like NeurIPS to trade publications like this one, and on to the front pages of grocery store impulse-buy newsmagazines. The threat and/or promise of AI (in a general sense, the lack of specificity being part of the problem) has become a familiar topic seemingly overnight.
On one hand, it must be validating for researchers and engineers who have toiled in relative obscurity for decades on what they feel is important technology to see it so widely considered and discussed. But like the neuroscientist whose work yields a headline like "Scientists Have Pinpointed the Exact Center of Love," or the physicist whose ironically named "god particle" sparks a theological debate, surely it must also be frustrating to see one's work batted around like a beach ball among the hoi polloi (meaning here the unscrupulous experts, not the innocent laymen).
"AI can now…" is a very dangerous way to start any sentence (though I'm sure I've done it myself) because it's so hard to say for sure what AI is actually doing. It can certainly beat any human at chess or go, and it can predict the structure of proteins; it can answer any question confidently (if not correctly), and it can do a remarkably good impression of any artist, living or dead.
But it's hard to say which of these things matters, and to whom, and which will be remembered in five or ten years as briefly amusing parlor tricks, like so many other innovations we were told would change the world. AI's capabilities are widely misunderstood because they have been actively misrepresented, both by those who want to sell it or drive investment in it, and by those who fear or underestimate it.
Obviously there's a lot of potential in something like ChatGPT, but nothing would please those building products with it more than for you, potentially a customer or at least someone who will encounter it, to think it's more powerful and less error-prone than it actually is. Billions are being spent to put AI at the center of all sorts of services, and not necessarily to improve them, but to automate them the way so much else has been automated, with mixed results.
Not to invoke the scary "they," but they, meaning companies like Microsoft and Google that have a huge financial stake in the success of AI in their core businesses (having invested so much in it), aren't interested in changing the world for the better so much as in making more money. They're businesses, and AI is a product they sell or hope to sell. That's not a slander against them, just something to keep in mind when evaluating their claims.
On the other side are people who fear, with good reason, that their role will be phased out not because of actual obsolescence, but because some gullible manager swallowed the "AI revolution" hook, line, and sinker. People aren't reading ChatGPT transcripts and thinking, "oh no, this software does what I do." They're thinking, "this software appears to do what I do, to people who don't understand either of us."
It's very dangerous when your work is systematically misunderstood or undervalued, as many jobs are. But that's a problem with management styles, not with AI per se. Fortunately, we have bold experiments like CNET's attempt to automate financial advice columns: the graveyards of these ill-advised efforts will serve as grim signposts for those planning to make the same mistakes in the future.
But it is just as dangerous to dismiss AI as a toy, or to say it will never do this or that simply because it can’t now, or because one has seen an example of its failure. It’s the same mistake the other side makes, but in reverse: proponents see a good example and say, “this shows it’s over for concept artists”; opponents see a bad example (or maybe the same one!) and say “this shows you can never replace concept artists”.
Both build their houses on quicksand. But clicks and eyeballs are, of course, the fundamental currency of the online world.
And then you have the extreme takes that grab attention not for being thoughtful but for being reactive and extreme, which should surprise no one, since, as we've all learned over the last decade, conflict drives engagement. What feels like a cycle of hype and disillusionment is really just fluctuating visibility in an ongoing and not very helpful argument over whether AI is fundamentally this or that. It has the feel of people in the '50s arguing over whether we'd colonize Mars or Venus first.
The reality is that many of those concept artists, not to mention the novelists, musicians, tax preparers, lawyers, and everyone else in professions seeing AI encroach in one form or another, aren't actually all that worried. They know their jobs well enough to understand that even a very good imitation of what they do is fundamentally different from actually doing it.
Advances in AI happen more slowly than you might think, not because there aren't breakthroughs but because those breakthroughs are the product of years and years of work that isn't nearly as photogenic or shareable as stylized avatars. The biggest advance of the last decade was "Attention Is All You Need," but we didn't see that on the cover of Time. It's certainly notable that, as of this month or that, AI is good enough to do certain things, but think of it less as AI "crossing a line" and more as AI moving further along a long, long continuum, one that even its most gifted practitioners can't see more than a few months down.
All of which is just to say: don't get sucked in by either the hype or the doomsaying. What AI can or can't do is an open question, and if someone claims to know, check whether they're trying to sell you something. What people may choose to do with the AI we already have, however, is something we can and should talk about more. I can live with a model that mimics my writing style; after all, I'm just imitating a dozen other writers myself. But I'd rather not work for a company that algorithmically decides pay or who gets fired, because I wouldn't trust the people who put that system in place. As usual, the technology isn't the threat; the people using it are.