ai • Feb 25, 2026
Tired of generic chatbots? Discover how Firebase AI Logic lets you build unique, streamed conversational AI experiences with Gemini's native audio models! From practicing presentations with AI feedback to building custom voice assistants, the possibilities are wide open. This video dives deep into Firebase AI Logic and shows how to create immersive audio interactions in your apps: you keep your API keys secure while offering custom personas, real-time feedback via tool calls, and a diverse selection of voices. We'll walk through a practical demo of an AI assistant that helps you practice public speaking, complete with live metrics and a distinct AI personality.
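The real-time feedback described above is driven by tool calls: the model asks the app to run a function and folds the result into its spoken feedback. A minimal, runnable sketch of that loop is below; the model call itself (which in the video goes through the Firebase AI Logic SDK, keeping the Gemini API key off the client) is stubbed out, and the `report_pace` tool and `wordsPerMinute` metric are illustrative assumptions, not the demo's actual code.

```typescript
// Sketch of the tool-call loop behind live speaking feedback. The model is
// stubbed; in the real app the call goes through the Firebase AI Logic SDK.

type ToolCall = { name: string; args: Record<string, unknown> };

// Pure metric the tool handler computes for the model (illustrative).
function wordsPerMinute(transcript: string, seconds: number): number {
  const words = transcript.trim().split(/\s+/).filter(Boolean).length;
  return seconds > 0 ? Math.round((words / seconds) * 60) : 0;
}

// Dispatch a tool call requested by the model ("report_pace" is hypothetical).
function handleToolCall(call: ToolCall): Record<string, unknown> {
  if (call.name === "report_pace") {
    const { transcript, seconds } = call.args as {
      transcript: string;
      seconds: number;
    };
    return { wpm: wordsPerMinute(transcript, seconds) };
  }
  throw new Error(`unknown tool: ${call.name}`);
}

console.log(
  handleToolCall({
    name: "report_pace",
    args: { transcript: "so today I want to talk about streams", seconds: 15 },
  }),
); // { wpm: 32 }
```

The tool result goes back to the model as a function response, so the coach persona can say "you're at 32 words per minute, slow down a touch" mid-conversation.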
Watch on YouTube

photography • Feb 21, 2026
My photos from a family trip to Disneyland in February 2026. I snapped as many photos as I could before the downpour hit.
Dive in

ai • Feb 12, 2026
Long LLM inference times can frustrate users. Learn how to use Operational Transparency and Firebase AI Logic to stream "thinking" steps, turning the black box into a glass box and keeping users engaged.
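The "glass box" pattern boils down to routing two kinds of streamed parts to two places in the UI. The sketch below mocks the stream, but the part shape mirrors Gemini's thought summaries (parts flagged `thought: true`); with Firebase AI Logic you would iterate the SDK's response stream the same way.

```typescript
// Sketch of routing a streamed Gemini response into "thinking" updates vs the
// final answer. The stream is mocked; part shape follows Gemini's thought
// summaries (parts flagged `thought: true`).

type Part = { text: string; thought?: boolean };

async function* mockStream(): AsyncGenerator<Part> {
  yield { text: "Comparing sorting strategies...", thought: true };
  yield { text: "Checking edge cases...", thought: true };
  yield { text: "Use merge sort for stable O(n log n) behavior." };
}

// Thoughts feed a progress panel; answer text accumulates for the reply.
async function consume(stream: AsyncGenerator<Part>) {
  const thoughts: string[] = [];
  let answer = "";
  for await (const part of stream) {
    if (part.thought) thoughts.push(part.text); // show in the "glass box" panel
    else answer += part.text;                   // render as the actual reply
  }
  return { thoughts, answer };
}

consume(mockStream()).then(console.log);
```

Showing the thought stream as it arrives, rather than a spinner, is what keeps users engaged through a long inference.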
Dive in

ai • Feb 4, 2026
We’re diving deep into the latest paradigms in AI development, starting with the difference between traditional context files (Gemini.md) and the new "Agent Skills" dynamic. We also share a story about using the Vertex AI Prompt Optimizer to automate our YouTube descriptions. It took 5 hours and nearly 100 million tokens, but the results were surprisingly consistent. Finally, we geek out on the Model Context Protocol (MCP), experimenting with exposing Flutter application state as local tools using Unix sockets.
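The Unix-socket experiment mentioned above can be sketched as a tiny local server speaking newline-delimited JSON-RPC, in the spirit of MCP's tool calls. The `get_app_state` tool name, the state shape, and the framing are illustrative assumptions (the episode's version exposes Flutter state, which is stubbed here as a plain object).

```typescript
// Sketch of exposing app state as a local tool over a Unix domain socket.
// Framing is newline-delimited JSON-RPC 2.0; tool name and state are made up.
import net from "node:net";
import os from "node:os";
import path from "node:path";

const SOCKET = path.join(os.tmpdir(), `app-state-${process.pid}.sock`);
const appState = { route: "/settings", counter: 42 }; // stand-in for app state

// Pure request handler: one JSON-RPC request line in, one response line out.
function handleRpc(line: string): string {
  const req = JSON.parse(line);
  if (req.method === "tools/call" && req.params?.name === "get_app_state") {
    return JSON.stringify({ jsonrpc: "2.0", id: req.id, result: appState });
  }
  return JSON.stringify({
    jsonrpc: "2.0",
    id: req.id,
    error: { code: -32601, message: "method not found" },
  });
}

const server = net.createServer((conn) => {
  conn.on("data", (buf) => {
    for (const line of buf.toString().split("\n").filter(Boolean)) {
      conn.write(handleRpc(line) + "\n");
    }
  });
});

server.listen(SOCKET, () => {
  // Minimal client: ask for the app state, print it, then shut down.
  const client = net.createConnection(SOCKET, () => {
    client.write(
      JSON.stringify({
        jsonrpc: "2.0",
        id: 1,
        method: "tools/call",
        params: { name: "get_app_state" },
      }) + "\n",
    );
  });
  client.on("data", (buf) => {
    console.log(JSON.parse(buf.toString()).result);
    client.end();
    server.close();
  });
});
```

A Unix socket keeps the tool local-only (no open TCP port), which is the appeal for exposing live application state to an agent on the same machine.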
Watch on YouTube