firebase • Mar 6, 2026
Are your users staring at a frozen screen while your massive LLM processes a prompt? In AI UX, a frozen screen means a broken app. Today, we are fixing that by turning your AI's "black box" into a "glass box" using Operational Transparency. In this video, I’ll show you how to eliminate AI wait anxiety by streaming the model's internal "thoughts" directly to your UI in real time. Drawing on real-world transit psychology from the London bus system, we dive into the code to show you exactly how to intercept LLM thought signatures using Firebase AI Logic and the Gemini API. Keep your users engaged, build system credibility, and drastically improve your app's user experience.
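The intercept step described above can be sketched as a small router over the stream. This is a minimal sketch, assuming Gemini-style streamed chunks whose parts carry an optional boolean `thought` flag (as the Gemini API's thought summaries do); `onThought` and `onAnswer` are placeholder UI callbacks, and the exact chunk shape returned by Firebase AI Logic may differ.

```javascript
// Route streamed Gemini-style chunks into "thought" text and answer text.
// Assumes each chunk exposes candidates[0].content.parts, where reasoning
// parts are flagged with `thought: true` -- adapt to the exact shape your
// Firebase AI Logic / Gemini SDK returns.
function routeChunk(chunk, { onThought, onAnswer }) {
  const parts = chunk.candidates?.[0]?.content?.parts ?? [];
  for (const part of parts) {
    if (!part.text) continue;
    if (part.thought) onThought(part.text); // stream into the "glass box" panel
    else onAnswer(part.text);               // stream into the main response
  }
}

// Drain a streamed response, updating both UI regions as chunks arrive.
async function streamWithThoughts(stream, ui) {
  for await (const chunk of stream) routeChunk(chunk, ui);
}
```

Because the router is a pure function of each chunk, you can unit-test the "glass box" behavior without ever calling the model.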
Watch on YouTube

firebase • Mar 5, 2026
Leverage Chrome’s on-device Prompt API to deliver a private, infrastructure-free LLM experience. While on-device models like Gemini Nano offer improved privacy by keeping data local, hybrid AI experiences often require a fallback to a cloud model when local resources are unavailable. This post details how to show clear transparency warnings and give users the choice to proceed, ensuring reliable functionality while maintaining user trust and preserving privacy as your core value proposition.
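The fallback decision in that post boils down to a tiny pure function. A sketch under stated assumptions: the availability strings mirror what Chrome's Prompt API reports from `LanguageModel.availability()` ('available', 'downloadable', 'downloading', 'unavailable'), while the function name and return values are purely illustrative, not part of any SDK.

```javascript
// Decide which backend to use, given the on-device model's availability
// and whether the user has consented to a cloud fallback after seeing a
// transparency warning. Availability strings follow Chrome's Prompt API;
// everything else here is an illustrative convention, not an SDK name.
function chooseBackend(availability, cloudConsent) {
  if (availability === 'available') return 'on-device'; // private path: data stays local
  // Local model missing or still downloading: be transparent and let the
  // user decide whether their prompt may leave the device.
  return cloudConsent ? 'cloud' : 'declined';
}
```

Keeping the policy in one function makes the privacy behavior easy to audit: there is exactly one branch where a prompt can leave the device, and it is gated on explicit consent.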
Dive in

genkit • Mar 4, 2026
Ready to move beyond single-model agents? Dive into continuous improvement (Kaizen) loops using Genkit and Vertex AI Model Garden. We show you how to pair models from different providers as a "writer" and a "critic" to build AI agents that critique and refine their own outputs for higher-quality results.
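The writer/critic loop can be sketched independently of any SDK. In this sketch, `writer` and `critic` stand in for Genkit `generate()` calls against two different Model Garden models; the `APPROVED` stop convention and the function names are assumptions of this illustration, not Genkit APIs.

```javascript
// Kaizen-style refinement loop: a writer drafts, a critic reviews, and the
// writer revises until the critic approves or we hit maxRounds. `writer`
// and `critic` are injected async functions, standing in for calls to two
// different models; the 'APPROVED' convention is this sketch's assumption.
async function refine(task, writer, critic, maxRounds = 3) {
  let draft = await writer(task);
  for (let round = 0; round < maxRounds; round++) {
    const feedback = await critic(task, draft);
    if (feedback.trim() === 'APPROVED') break; // critic is satisfied, stop iterating
    draft = await writer(
      `${task}\n\nRevise this draft:\n${draft}\n\nCritique:\n${feedback}`
    );
  }
  return draft;
}
```

Injecting the two model calls as plain async functions keeps the loop testable with stubs and makes it trivial to swap which provider plays writer and which plays critic.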
Dive in

genkit • Mar 3, 2026
Simplify your LLM development by using one Genkit plugin to access models like Claude, Mistral, Gemini, and Llama from Vertex AI's Model Garden. Learn how to switch between large language models without the hassle of rotating API keys or tracking multiple quotas.
Dive in