Google is beating Apple at integrating AI into smartphones


Key Takeaways

  • Google’s AI-focused event highlighted the growing gap between Apple and its rivals in the AI race.
  • Google’s Pixel 9 offers advanced AI features now, while Apple Intelligence will trickle out over the coming year.
  • Despite the similarities, Google’s Gemini vision for phones is ultimately more ambitious.



Despite making a strong case for the Pixel 9 series launching in the next few weeks, it was hard not to think about Apple during Google’s event. Less because of the iPhone-like hardware Google showed off, and more because the real star of the event, Google’s decision to make AI the new centerpiece of Android, made it incredibly clear just how far away Apple Intelligence still is.

Google and Apple have a common problem to solve: turning AI models that seem so useful to computer engineers and medical researchers into consumer products the average person can comfortably use. The key difference between what Google showed during its event and what Apple announced at WWDC 2024 is timing. Apple may present a brighter, safer, and friendlier idea of how AI can work on the iPhone, but Google is ready to offer almost all the same features right now, without an extended beta program, plus several ideas Apple isn’t even attempting.


New smartphones and smartwatches may be the occasion for Google’s Made by Google event, but the main takeaway is a growing gap between Apple and competitors that are ready to capitalize on generative AI. One that no amount of shiny new iPhones can fix.


Google and Apple have similar ideas about AI in phones

An essential, contextual assistant for all your applications


While Google and Apple run very different businesses (Google focuses on services, Apple on hardware), the companies have finally arrived at similar ideas about how AI should work on smartphones. Both have an assistant (Gemini and Siri, respectively) that you can query directly for device information and general requests, and, when needed, the assistant can draw on context from other applications to handle more difficult questions and tasks.


Both companies are also pursuing a mix of on-device processing and sending requests to the cloud. Google has long relied on its servers for some of the most difficult tricks Pixels can do, such as Video Boost, which color-corrects and smooths even the worst video footage. On the Pixel 9, however, one of the best features, Pixel Screenshots, runs entirely on-device thanks to an updated version of Google’s smaller Gemini Nano model. The app, which catalogs any screenshots you take and makes them searchable in natural language, isn’t something Apple is currently attempting.

Google and Apple also spread transcription and summarization, two things AI is generally good at, across their apps. Google offers Call Notes, which records and summarizes calls; Apple similarly adds Call Recording and Transcripts in iOS 18. Gemini can summarize the contents of your Gmail inbox, while the Mail app in iOS 18 simply adds summaries to the top of emails. Both companies offer on-device image-generation tools for creating images you can use wherever you want on your phone, too.


The Pixel Screenshots app on the Pixel 9. (Google / Pocket-lint)

Gemini is more flexible than Siri in the kinds of questions it can answer, something Apple hopes to match by offering the option to hand complex requests to ChatGPT, but for the most part the companies are aligned on where AI belongs in smartphones. Google simply offers more depth, whether that’s combining image and text requests in a single prompt (something Siri can’t do) or holding a lifelike conversation with an AI assistant through Gemini Live.


Importantly, Google can do those things right now. The company hosted its event live and filled it with live demos of the new features. Not all of them worked, and some were awkward, but it made the point. Apple famously ran live keynotes before moving to polished, pre-recorded video presentations during the early stages of COVID-19, and it hasn’t looked back. Google “doing it live” was one of several ways the company tried to differentiate itself from Apple throughout the event. More importantly, it showed that these new AI features can ship now rather than months or years from now.


Apple Intelligence is still months away

It will be a while before we meet the new Siri

Apple Intelligence features running on macOS, iPadOS, and iOS. (Apple)

Go through Apple’s web page explaining the Apple Intelligence features, and you’ll find two important details that the company hasn’t talked about much:


  1. Apple Intelligence launches in “Beta” this fall with iOS 18, iPadOS 18, and macOS Sequoia.
  2. “More features, more languages, and platforms will come within the next year.”

The looseness of describing Apple Intelligence as a beta, and suggesting that not all features will be available until 2025, gives Apple flexibility to ship something that looks very different from the experience it showed in its video presentation. If the developer betas are anything to go by, several major Apple Intelligence features likely won’t be included when Apple’s new software launches later this year. Pocket-lint has been able to try the writing tools for text generation, Apple’s new shortcut and transcription features, and Siri’s visual redesign, but everything else Apple showed, like Image Playground and Siri’s ability to work across apps and draw on situational information about what’s on your screen, is missing.



Bloomberg reports that Apple plans to technically introduce Apple Intelligence with iOS and iPadOS 18.1, but features will be added over time, “with multiple iOS 18 updates in late 2024 and the first half of 2025.” Improved Siri features will reportedly be part of those 2025 updates, while a new look arrives in the 18.1 update. That means one of the selling points of Apple’s new iPhones won’t be available at launch, and the secret sauce that would bring Apple Intelligence closest to what Google’s Gemini does on the Pixel 9 is still up to a year away.


That’s not necessarily a disaster, but time is of the essence. Apple likes to take its time, but new iPhone 16 owners may grow impatient when the average flagship Android phone can do things their phones can’t.

It’s still early days for AI in smartphones

The jury is still out on whether Google’s new AI features are as essential or as effective as they need to be. A number of glitches during the live event suggest there may still be a fair number of rough edges to work out, but I can’t deny that I’m excited about what Google is showing. I’m primarily an iPhone user, but even so, Google’s vision of an AI assistant, the kind that’s been promised for years, sounds exciting. I’m not sure it won’t be clunky, but at least it looks like something I can actually use.



I’m not sure how much of Apple Intelligence’s slow rollout is motivated by (entirely justified) caution, or whether Apple is genuinely trailing its competitors, but the fact of the matter is that, at least through the end of 2024, the company is on its back foot. And with many of the Pixel’s core AI ideas being similar to or more ambitious than Apple’s, that’s not the best place to be.
