It was while I was inside an AI sandbox, a makeshift room to experience Google’s new AI virtual assistant, Project Astra, that the company’s co-founder Sergey Brin walked in. Sporting a windswept, messy hairdo, Brin looked like a typical Californian: casual, easygoing, and concerned about how hot it was in the room. “Why don’t you turn on the aircon?” he suggested to one of the Google employees in the sandbox, showing his concern for the eight journalists from across the world who were in the demo to witness Google’s latest AI capabilities.

Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View, Calif., Tuesday, May 14, 2024. (AP Photo/Jeff Chiu)

I was attending Google I/O, the company’s annual developer conference, in Mountain View’s Shoreline Amphitheatre, right next to the company’s headquarters. Security helicopters flew overhead as hundreds of developers, journalists, and employees from across the world headed to the conference, while millions joined online.

It was a pivotal I/O for the company. Last year, Google’s executives scrambled through I/O after OpenAI had changed internet search forever by launching its prompt-based chatbot, ChatGPT. Everyone who attended this I/O wanted to know one thing: What’s Google doing next in AI?

Three hours before I headed into the AI sandbox, the keynote opened with a hilarious act by TikTok celebrity DJ Marc Rebillet, wearing a robe, who created a song using MusicFX, Google’s experimental music-mixing AI software. A few minutes later, CEO Sundar Pichai came on stage to announce AI integration into all existing Google products like Search, Chrome and Workspace.

With the AI race heating up, announcing these integrations had become a survival issue for the company – get on board or get left behind as users leave for a prompt-based search. The writing was on the wall for Google, but they did add their own signature to it.

OpenAI might have had the first-mover advantage, but Google still has a loyal user base of over two billion people worldwide; that seemed to be Pichai’s message. He announced Gemini 1.5 Pro, a multimodal AI model which can reason across text, images, video, code and more, as well as a lighter AI model, Gemini 1.5 Flash, which is optimised for “narrow, high-frequency, low latency tasks.”

The next two hours were filled with a succession of announcements about AI integration. You can now dig deeper into Google Photos archives, point your phone’s live camera to decode or analyse anything you see in your life and talk to Search to get answers in multiple media (photos, videos and text).

With a paid Google Workspace account, you also get a live virtual assistant which can fetch any content from the depths of your Google account, turn it into a spreadsheet and work across Gmail, Drive, Docs and Chrome, using the content it has been indexing for a decade now. With Gemini Advanced, you also get to create Gems, customised AI virtual assistants which can be instructed to do any set of tasks, from serving as a virtual motivation coach to telling bedtime stories to your child.

Everyone who has tried to get work done with a virtual assistant (like Siri or Google Assistant) sighed with relief when Pichai announced Project Astra. An AI-driven assistant, Astra is a big upgrade over the last-generation virtual assistants we’ve been struggling with. I can’t wait to get my hands on it and show it my personal online life, so it can start sorting my stuff, creating automatic reminders, organising my work and personal life, and doing things right rather than calling the wrong person.

In the two hours of announcements about AI integrations and updates, there was a delightful little cameo by Google’s AR glasses. In a recorded video played at the venue, an employee from Google’s research arm, DeepMind, showed various capabilities of Project Astra. Towards the very end of the video, she picked up a pair of glasses, put them on, and continued to talk to Astra through them, discussing code written on a whiteboard.

“Did they just show us Google Glass?” exclaimed a journalist sitting beside me. I could understand his excitement. The video had quietly shown us an heir to the AR glasses from one of the most iconic Google I/Os ever. In 2012, Brin demonstrated Google Glass by asking two skydivers to jump right onto the Moscone Center, where the I/O was being held. The product, which cost $1,500, was launched with much fanfare but failed. I think it was ahead of its time.

Now, with the advances in AR/VR technology which I mentioned in an earlier column, and the launch of products like Ray-Ban Meta Smart Glasses, Meta Quest 3 and even Apple’s Vision Pro, we’re ready as users to add AR to our eyes and carry smartphones on our noses. So it’s exciting to see that Google’s going to merge AI with smart-glasses technology.

Of course, when Brin walked into the AI sandbox on his surprise visit, I completely forgot to ask him about Google’s AR glasses. Instead, I asked him what he thought about Gemini’s capabilities. Had they exceeded his expectations? After all, without an official designation in the company, board member Brin was regularly seen in Google’s research building all through last year, closely overseeing Gemini’s development. Yes, he answered.

“We developed the AI model internally for scaling but it’s gone way beyond our expectations. We keep discovering new use cases every day,” he said, adding it’s the same model that Google’s integrating into all its products. And it just works for all of them.

After a group selfie with Brin, I headed out in search of a demo that no one had heard of: Project Starline, a 3D holographic video-chat product by Google, where the person you’re talking to is projected as a hologram right out of your screen, making it feel like you’re sitting across from them.

Don’t confuse Project Starline with Elon Musk’s similar-sounding internet satellite company, Starlink. Project Starline is next-generation video chat in 3D. Right now, it’s expensive (no one told me the cost of the prototype I tested), uses a lot of bandwidth and needs three cameras to render your hologram, but it’s a cool piece of tech to experience. And thanks to advances in AI, one of the employees told me, it’s quite possible to add multiple people to the same video and stream lifelike detail across continents. Beam me up, Scotty, for I’m all for holograms! This year, Google has tied up with HP to bring the tech out of the lab and into Google Meet and Zoom by 2025.

As I walked out of Google I/O, I reflected on the oncoming change that technology companies are forcing on our everyday products, all thanks to advances in AI. Even the people who built AI don’t completely understand it, yet people like Demis Hassabis (who heads Google DeepMind) and Sam Altman (CEO of OpenAI) relentlessly pursue AGI as the pinnacle of AI technology: a machine that can reason and think like a human. But have they ever stopped to wonder whether we need it?

Shweta Taneja is an author and journalist based in the Bay Area. Her fortnightly column will reflect on how emerging tech and science is reshaping society in Silicon Valley and beyond. Find her online with @shwetawrites. The views expressed are personal.
