As is tradition this time of year, Apple announced a new line of iPhones last week. The promised centerpiece that would make us want to buy these new devices was artificial intelligence – or Apple Intelligence, as the company calls it. Yet the reaction from the world of consumer technology has been muted.
The lack of consumer enthusiasm was so evident that it immediately wiped over a hundred billion dollars off Apple’s market value. Even the Wired Gadget Lab podcast, passionate about all things tech, found nothing in the new features that would make its hosts upgrade to the iPhone 16.
The only thing that seemed to generate excitement wasn’t the AI features, but the addition of a new camera shutter button on the side of the phone. If a button is a better selling point than the most hyped technology of the last two years, something is clearly wrong.
The reason is that artificial intelligence has now passed what the technology blog The Media Copilot called its “wonder phase.” Two years ago, we were amazed that ChatGPT, DALL-E, and other generative AI systems were able to create coherent writing and lifelike images from just a few words in a text message.
But now AI must prove that it can actually be productive. Since their introduction, the models that drive these experiences have become much more powerful – and exponentially more expensive.
However, Google, Nvidia, Microsoft and OpenAI recently met at the White House to discuss AI infrastructure, suggesting that these companies are doubling down on the technology.
According to Forbes, the industry is $500 billion away from recouping massive investments in AI hardware and software, and the $100 billion in projected AI revenue in 2024 isn’t even close to that figure.
Yet Apple has now enthusiastically introduced AI features into its products for the same reason Google, Samsung and Microsoft have done so: to give consumers a reason to buy a new device.
Hard sell?
Before AI, the industry was trying to build hype around virtual reality and the Metaverse, an effort that likely peaked with the introduction of the Apple Vision Pro headset in 2023 (a product that, incidentally, was barely mentioned in last week’s announcement).
After the Metaverse failed to take off, tech companies needed something else to boost sales, and AI became the shiny new thing. But it remains to be seen whether consumers will appreciate the AI-powered features now built into phones, such as photo editing and writing assistants.
This is not to say that current artificial intelligence is not useful. AI technologies are being used in billion-dollar industrial applications, in everything from online advertising to healthcare and energy optimization.
Generative AI has also become a useful tool for professionals in many fields. According to one survey, 97% of software developers have used AI tools to support their work. Many journalists, visual artists, musicians and filmmakers have adopted AI tools to create content more quickly and efficiently.
Yet most of us aren’t actually willing to pay for a service that draws funny animated cats or summarizes texts, especially since attempts at AI-supported search have proven error-prone. Apple’s approach to implementing AI appears to be mostly a hodgepodge of existing features, many of which are already built into popular third-party apps.
Apple’s AI can help you create a custom emoji, transcribe a phone call, edit a photo, or write an email – neat things, but no longer innovative. There’s also a Reduce Interruptions mode that’s supposed to bother you less and only let important notifications through, but how well that will work in practice is anyone’s guess.
The only forward-thinking feature is called Visual Intelligence. It lets you point the camera at something nearby and get information without explicitly searching. For example, you could photograph a restaurant sign and the phone will show you the menu and reviews, and maybe even help you reserve a table.
While this is very reminiscent of what Google’s Pixel phones (and ChatGPT’s multimodal capabilities) already aim for, it points towards a future use of AI that is more real-time, interactive, and situated in real-world environments.
In the long run, Apple Intelligence and Reduce Interruptions mode could evolve into so-called “context-aware computing,” which has been predicted and demonstrated in research projects since the 1990s but has, in most cases, never become robust enough to form a real product category.
What’s interesting is that Apple Intelligence isn’t actually available for anyone to try: the new iPhones won’t include it at launch. Perhaps the features will turn out to be more valuable than the limited information so far suggests.
But Apple was long known for releasing a product only when it was truly ready – when the use case was crystal clear and the user experience had been honed to perfection.
This is what made the iPod and iPhone so much more attractive than all the MP3 players and smartphones released before them. No one knows whether Apple’s approach to AI will recover some of its lost market value, not to mention the hundreds of billions invested by Apple and the rest of the tech industry.
After all, AI still has amazing potential, but it might be time to slow down a bit and take a moment to consider where it will actually be most useful.
Lars Erik Holmquist is professor of design and innovation at Nottingham Trent University
This article is republished from The Conversation under a Creative Commons license. Read the original article.