ai • Feb 4, 2026
We’re diving deep into the latest paradigms in AI development, starting with the difference between traditional context files (Gemini.md) and the new "Agent Skills" approach. We also share the story of using the Vertex AI Prompt Optimizer to automate our YouTube descriptions. It took 5 hours and nearly 100 million tokens, but the results were surprisingly consistent. Finally, we geek out on the Model Context Protocol (MCP), experimenting with exposing Flutter application state as local tools using Unix sockets.
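As a rough illustration of the MCP experiment mentioned above, here is a minimal sketch in Python (the episode's experiment involves a Flutter/Dart app; Python is used here only to keep the example self-contained). It uses the FastMCP helper from the MCP Python SDK, and it assumes the app listens on a hypothetical Unix socket at /tmp/flutter_app_state.sock and answers a one-line JSON query with a one-line JSON reply; the socket path, query format, and tool name are illustrative, not the actual setup from the episode.

```python
# Minimal sketch: expose a running app's state as a local MCP tool.
# Assumptions (not from the episode): the app listens on a Unix domain
# socket at SOCKET_PATH and replies to a one-line JSON request with a
# one-line JSON response.
import json
import socket

from mcp.server.fastmcp import FastMCP  # MCP Python SDK

SOCKET_PATH = "/tmp/flutter_app_state.sock"  # hypothetical path

mcp = FastMCP("flutter-app-state")


def _query_app(request: dict) -> dict:
    """Send one JSON request over the Unix socket and read one JSON reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall((json.dumps(request) + "\n").encode("utf-8"))
        raw = sock.makefile("r", encoding="utf-8").readline()
    return json.loads(raw)


@mcp.tool()
def get_app_state(key: str) -> str:
    """Return the value of a named piece of application state."""
    reply = _query_app({"op": "get_state", "key": key})
    return json.dumps(reply)


if __name__ == "__main__":
    # Run the MCP server over stdio so an agent can launch it as a
    # local tool server.
    mcp.run()
```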
Watch on YouTube

ai • Feb 1, 2026
AGENTS.md is dead weight. Discover the automated workflow for building lean, token-saving Agent Skills.
Dive in

ai • Jan 26, 2026
In this next episode of our "untitled" podcast, Nohe and Rody take a "tech walk" to discuss the evolving landscape of AI development tools. We dive deep into the differences between the linear workflows of Gemini CLI and the asynchronous, project-level capabilities of Anti-Gravity. We also geek out on home lab setups—discussing the shift from Docker Compose to Kubernetes (K3s) on Raspberry Pi clusters—and share a game-changing workflow using NotebookLM to generate context files for your AI agents. Finally, we explore Stitch for generative UI, including how to instantly create shaders and animations from simple screenshots.
Watch on YouTube

ai • Jan 20, 2026
Learn how to use the Vertex AI Prompt Optimizer to automatically tune your prompts: it iterates on candidate prompts, then runs an evaluation that scores the outputs so you can see whether quality has actually improved.
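To make the iterate-then-evaluate idea concrete, here is a hand-rolled sketch of that loop in Python. This is not the Vertex AI Prompt Optimizer's actual API; the generate() and score() callables and the candidate-prompt list are hypothetical placeholders standing in for your model call and quality metric.

```python
# Illustrative sketch of the iterate-and-evaluate loop (not the
# Prompt Optimizer API): try several prompt variants, score each
# variant's outputs on a small eval set, and keep the best one.
from statistics import mean
from typing import Callable


def pick_best_prompt(
    candidate_prompts: list[str],
    eval_inputs: list[str],
    generate: Callable[[str, str], str],  # (prompt, input) -> model output
    score: Callable[[str, str], float],   # (input, output) -> quality score
) -> tuple[str, float]:
    """Return the candidate prompt with the highest mean evaluation score."""
    best_prompt, best_score = candidate_prompts[0], float("-inf")
    for prompt in candidate_prompts:
        outputs = [generate(prompt, x) for x in eval_inputs]
        avg = mean(score(x, y) for x, y in zip(eval_inputs, outputs))
        if avg > best_score:
            best_prompt, best_score = prompt, avg
    return best_prompt, best_score
```

In the managed optimizer, candidate generation and scoring are handled for you; this sketch only shows the shape of the loop.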
Read at firebase.blog