How Google is silently Killing OpenAI
A DEEP DIVE on the Google Gemini 2.0 AI Studio 🔍

Welcome back, AI Geeks!
While OpenAI grabs headlines with its back-to-back announcements over the past few months, Google has quietly dropped a bombshell: Gemini 2.0, AI Studio, and a couple of other research projects, a massive upgrade that might just change the AI landscape. 🚀
While everyone else is competing to be the next Deep Research, Google is thinking ten steps ahead with its game-changing Virtual Interactive AI. 🦾
And today’s deep dive is on how YOU can use Google AI Studio to do literally ANYTHING in the world – just keep away from the illegal stuff, please! 😜
🔍 GEMINI 2.0 - THE UNDERDOG OF AI MODELS
We thought Google was falling behind. We were dead wrong.
Google's latest AI update delivers major improvements in speed, accuracy, and context retention. Gemini 2.0 Flash supports a remarkable 1 million token context window—enough to process an entire book with perfect comprehension. 🤯
The experimental model is free during beta, with benchmark scores outperforming earlier models and competitors. What makes it stand out is its focus on real-time, dynamic interaction—not just answering questions but participating in conversations, anticipating needs, and adapting instantly.
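To put that 1-million-token figure in perspective, here's a minimal back-of-the-envelope sketch using the common rough heuristic of about 4 characters per token. The heuristic is our assumption for illustration, not Gemini's actual tokenizer, so treat the numbers as ballpark:

```python
# Rough check: does a document fit in a 1M-token context window?
# Uses the common ~4 chars/token heuristic, which only
# approximates what a real tokenizer would count.
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4  # rough heuristic, not Gemini's tokenizer

def fits_in_context(text: str, window: int = CONTEXT_WINDOW) -> bool:
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= window

# A typical novel runs ~500,000 characters (~125k tokens by this
# estimate), so it sits comfortably inside the window.
novel = "x" * 500_000
print(fits_in_context(novel))  # True
```

By this estimate even a stack of several novels fits in a single prompt, which is what makes "process an entire book" a realistic claim rather than marketing.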
Let’s see what it’s capable of:
Hire Ava, the Industry-Leading AI BDR
Your BDR team is wasting time on things AI can automate. Our AI BDR Ava automates your entire outbound demand generation so you can get leads delivered to your inbox on autopilot.
She operates within the Artisan platform, which consolidates every tool you need for outbound:
300M+ High-Quality B2B Prospects, including E-Commerce and Local Business Leads
Automated Lead Enrichment With 10+ Data Sources
Full Email Deliverability Management
Multi-Channel Outreach Across Email & LinkedIn
Human-Level Personalization
🩺 IT SAW A SPINAL CONDITION THAT DOCTORS MISSED 🤯
Perhaps the most groundbreaking feature of Gemini 2.0 is its screen sharing capability, turning it into a universal assistant for virtually any task on your computer.
Testing revealed impressive performance across diverse scenarios:
Medical diagnostics: Most surprisingly, Gemini correctly flagged conditions like scoliosis in spine CT scans without any specialized medical fine-tuning, arriving at the same conclusion as the expert diagnosis.
The implications are enormous: from helping students with homework to assisting professionals with specialized software, Gemini becomes your expert companion for virtually any screen-based task.
💰IT GAVE LIVE STOCK INSIGHTS AT YOUR FINGERTIPS 📈
When given NVIDIA's stock performance, Gemini provided a detailed breakdown of growth trends, highlighting potential green and red flags for investors. While not comprehensive enough for investment decisions alone, it offers valuable quick analysis through simple screen sharing.
Nvidia’s stock report
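To give a feel for the kind of green/red flag breakdown Gemini produced, here's a toy sketch of trend flagging on a price series. The rule (flag any day-over-day move larger than 5%) and the prices are entirely hypothetical, our own illustration rather than Gemini's actual method:

```python
# Toy "green/red flag" trend analysis, similar in spirit to what
# Gemini surfaced for NVIDIA. The 5% threshold and the price
# series below are hypothetical, for illustration only.
def flag_moves(prices, threshold=0.05):
    flags = []
    for prev, cur in zip(prices, prices[1:]):
        change = (cur - prev) / prev  # day-over-day fractional move
        if change >= threshold:
            flags.append(("green", round(change, 3)))
        elif change <= -threshold:
            flags.append(("red", round(change, 3)))
        else:
            flags.append(("neutral", round(change, 3)))
    return flags

prices = [100, 108, 107, 98, 103]  # hypothetical closing prices
flags = flag_moves(prices)
print(flags)
```

A real analysis would weigh far more than single-day moves, which is exactly why the newsletter's caveat about not using this alone for investment decisions matters.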
🔧 GEMINI 2.0’s Craziest Feature
Fine-tuning the model with your own data 📊
One of Gemini 2.0's most powerful features is its ability to be customized with your own data. Here's how it works:

The new AI Studio offers a tuned-model feature that lets you train Gemini on CSV files stored in your Google Drive. The process is surprisingly straightforward: you provide input and output examples for your specific use case, and the model learns to adjust its responses accordingly.
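As a rough sketch of what such a training file could look like, here's a tiny CSV of input/output pairs. The two-column layout and column names are our illustrative assumption, not AI Studio's documented schema, so check the upload screen for the exact format it expects:

```python
import csv

# Hypothetical tuning data: input/output example pairs.
# Column names are illustrative, not an official schema.
examples = [
    ("Summarize: Q3 revenue grew 12% year over year.",
     "Revenue up 12% YoY in Q3."),
    ("Summarize: Churn fell from 5% to 3% after the redesign.",
     "Churn down 2 points post-redesign."),
]

with open("tuning_examples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["input", "output"])  # one header row
    writer.writerows(examples)

# Sanity check that the file round-trips cleanly.
with open("tuning_examples.csv", newline="") as f:
    rows = list(csv.reader(f))
print(len(rows))  # header + 2 examples
```

The key idea is simply consistency: every row shows the model one input and the exact output you want for it.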
Real-world application: This opens up endless possibilities for businesses with specialized needs. Imagine a healthcare provider training Gemini on their documentation standards, a legal firm tuning it for contract analysis, or a marketing team customizing it to match their brand voice.
The days of generic AI may be numbered - Gemini 2.0 is bringing us personalized AI assistants tailored to specific industries and tasks.
🧠 SPATIAL UNDERSTANDING: SEEING THE WORLD DIFFERENTLY
Gemini 2.0's spatial understanding capabilities are nothing short of revolutionary, giving it an almost human-like ability to interpret visual scenes.
The Spatial Understanding app demonstrates this perfectly - it can scan an image, identify objects, tag them, and even map their positions in space with remarkable accuracy. In testing, it successfully analyzed an image of origami animals and their shadows, outputting a structured JSON with precise positional data.
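To make that structured output concrete, here's a small sketch that parses the kind of JSON the Spatial Understanding demo returns. The schema here (object labels plus 2D boxes normalized to a 0-1000 grid) is an assumption modeled on Google's published demo examples, not a guaranteed contract:

```python
import json

# Hypothetical response shaped like the Spatial Understanding
# demo's output: labeled boxes normalized to a 0-1000 grid,
# in [ymin, xmin, ymax, xmax] order (assumed, not guaranteed).
response = json.loads("""
[
  {"label": "origami crane", "box_2d": [120, 80, 420, 360]},
  {"label": "crane shadow",  "box_2d": [430, 90, 520, 380]}
]
""")

def to_pixels(box, width, height):
    """Convert a [ymin, xmin, ymax, xmax] box on a 0-1000 grid
    to (left, top, right, bottom) pixel coordinates."""
    ymin, xmin, ymax, xmax = box
    return (xmin * width // 1000, ymin * height // 1000,
            xmax * width // 1000, ymax * height // 1000)

# Map each detected object to pixel coordinates for an 800x600 image.
boxes = {obj["label"]: to_pixels(obj["box_2d"], 800, 600)
         for obj in response}
print(boxes["origami crane"])  # (64, 72, 288, 252)
```

Because the coordinates are normalized, the same response can be projected onto any image resolution, which is what makes this output directly usable for overlays and AR-style applications.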
Why this matters: This spatial awareness opens up applications ranging from augmented reality to autonomous navigation. Imagine an AI that can not just identify objects in your environment but understand how they relate to each other in physical space.
For designers, architects, and engineers, this feature could transform how they interact with 3D modeling and spatial planning tools.
Your daily AI dose
Mindstream is your one-stop shop for all things AI.
How good are we? Well, we became only the second newsletter ever (after The Hustle) to be acquired by HubSpot. Our small team of writers works hard to put out the most enjoyable and informative newsletter on AI around.
It’s completely free, and you’ll get a bunch of free AI resources when you subscribe.
🔮 GOOGLE’S FUTURE PROJECTS
Google’s future projects are set to redefine technology, with advancements in AI, quantum computing, and smart devices. These innovations promise to revolutionize industries and enhance our everyday digital experiences.
Project Astra: Your Eyes on the World 🌠
Beyond the features already available, Google's Project Astra shows where this technology is heading. By combining Gemini with camera input, it creates an AI assistant that can see and understand the physical world in real-time.
Point the camera at anything and ask questions - Astra can identify objects, read text, provide context, and even help you interact with your environment. The applications range from assistance for the visually impaired to industrial quality control and enhanced augmented reality experiences.
The technology essentially gives Gemini eyes to see the world, creating a more intuitive and natural way to interact with AI.
Project Mariner: Navigating the Web Autonomously 🌊
Another newly revealed project is Project Mariner, an ambitious effort to revolutionize how we use the internet. This technology leverages Gemini 2.0's advanced multimodal understanding to automate complex web tasks.
Mariner can process everything on a webpage - from text and images to code and forms - and use this knowledge to navigate websites, submit information, and handle repetitive actions while keeping the user in control.
This goes far beyond simple web scraping. Mariner understands context and intent, asking for clarification when needed, and could potentially transform how we interact with online services.
JULES: The Developer’s Dream Assistant 💻️
For developers, Google is preparing Jules - a specialized coding assistant powered by Gemini 2.0. Set for release in early 2025, Jules will suggest complete lines of code, help with debugging, and assist in refactoring.
This could dramatically increase programming productivity while reducing errors, making development more accessible to newcomers and more efficient for experts.
🌟 Use Gemini & See the Magic for Yourself
Click on the Show Gemini feature, hold any written instruction up to your webcam, ask Gemini to interpret and carry it out as best it can, and let us know what response you get.
💬 OUR TAKE: THE QUIET REVOLUTION
While most media attention has been focused on other players in the AI space, Google has been quietly building something potentially transformative with Gemini 2.0. The combination of massive context windows, multimodal understanding, and real-time interaction creates an AI assistant that feels more present and aware than anything we've seen before.
What's particularly notable is Google's decision to make the experimental model free during beta - a move that could accelerate adoption and steal thunder from competitors charging premium prices for their top models.
Is this the beginning of an AI power shift? It's too early to tell, but one thing is clear: the race is far from over, and Google has just reminded everyone why they shouldn't be counted out.
This is just the beginning. In the coming months & years, you can expect to see:
More specialized research tools for different domains
Better handling of conflicting information
Enhanced reasoning about expert disagreements
More interactive research experiences
The computational requirements behind these features are massive—processing dozens of sources in minutes requires significant parallel computing power. That 5-minute wait for ChatGPT's analysis? It's because the system is doing work that would take a human researcher hours.
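To make the parallelism point concrete, here's a minimal sketch of fanning out over dozens of sources at once with a thread pool. The `analyze_source` function is a stand-in for illustration; a real research agent would be doing network fetches and model calls, not string formatting:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for fetching and summarizing one source. A real
# pipeline would do network I/O and a model call here.
def analyze_source(url: str) -> str:
    return f"summary of {url}"

# Two dozen hypothetical sources for one research question.
sources = [f"https://example.com/doc{i}" for i in range(24)]

# Processing sources concurrently instead of one at a time is
# what keeps a multi-source analysis down to minutes.
with ThreadPoolExecutor(max_workers=8) as pool:
    summaries = list(pool.map(analyze_source, sources))

print(len(summaries))  # one summary per source
```

With 8 workers, the wall-clock time is roughly the slowest batch of fetches rather than the sum of all of them, which is where that "hours of human work in minutes" framing comes from.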
🧩 DAILY CHALLENGE
Test your knowledge: What's the maximum context window size for Gemini 2.0 Flash, and approximately how much content can it process?
Hint: Think about the length of an average book
Reply and tell us the answer!
What did you think of today’s issue?
Hit reply to this email and tell us what you think. We'll be back with more.
Also, share this with your friends & ask them to subscribe here.
Stay Curious! 🤖
Team What’s Up in AI