I like to tinker

Exploring the frontiers of AI, web development, and secure software architecture. Join me as I break things to see how they work.

ai • May 11, 2026

Google's New AI UI Generation Tool - Full Demo | The Agent Factory

If you've ever shipped an app that worked perfectly under the hood but looked like it was built in 2003, this episode is for you. Designing beautiful, responsive user interfaces is notoriously difficult, but what if you could outsource the heavy lifting to an AI? In this episode, we explore Stitch, Google Labs' AI-powered UI generation tool that acts as your personal creative director, building stunning interfaces on the fly without you writing a single line of initial CSS.

Host Nohe sits down with David East, DevRel Engineer at Google Labs, for a complete walkthrough of the Stitch platform. You'll see David design a fully custom Maryland crabbing tour website from scratch, establish non-negotiable constraints like color and theme, and dive deep into Design.md, the secret file that translates your creative intent into tokenized AI values. The episode also features a demo using the Gemini CLI and the Stitch MCP server to pull production-ready HTML and Tailwind CSS straight into your local environment, followed by a rapid-fire "hot takes" round on the future of frontend engineering.

Whether you're a backend developer avoiding Flexbox or a seasoned architect looking to speed up prototyping, you'll walk away from this episode knowing exactly how to prompt an AI for highly specific UI designs. You'll learn the concept of "semantic compression," how to leverage design variants, and why treating AI as your creative partner rather than just a code generator yields the best results.

Watch on YouTube
firebase • Apr 16, 2026

Implement hybrid inference in Android using Firebase AI Logic

In this deep dive, Nohe explores how to implement hybrid inference with the Firebase AI Logic SDK on Android. One of the biggest headaches in mobile AI is choosing between a cloud model (reliable but costly) and an on-device model (fast but fragmented). With hybrid inference, you don't have to choose: your app can prefer the local model already managed by Android's AICore and seamlessly fall back to Gemini 3.1 Flash in the cloud when the device isn't compatible.
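The prefer-local-then-fall-back behavior can be sketched as a generic pattern. This is a minimal sketch in plain Kotlin, not the actual Firebase AI Logic API: the `InferenceBackend`, `OnDeviceBackend`, and `CloudBackend` names are illustrative stand-ins for whatever the SDK manages for you.

```kotlin
// Sketch of the hybrid-inference fallback pattern. The types below are
// hypothetical stand-ins, NOT the Firebase AI Logic SDK surface.

interface InferenceBackend {
    // Whether this backend can serve requests on the current device.
    val isAvailable: Boolean
    fun generate(prompt: String): String
}

// Stands in for a model managed on-device (e.g. via AICore); availability
// depends on device compatibility, so it is injected here for illustration.
class OnDeviceBackend(override val isAvailable: Boolean) : InferenceBackend {
    override fun generate(prompt: String) = "on-device: $prompt"
}

// Stands in for a cloud-hosted model, assumed reachable.
class CloudBackend : InferenceBackend {
    override val isAvailable = true
    override fun generate(prompt: String) = "cloud: $prompt"
}

// Prefer the local model; fall back to the cloud when the device
// isn't compatible. This is the core of the hybrid approach.
fun hybridGenerate(
    local: InferenceBackend,
    cloud: InferenceBackend,
    prompt: String
): String =
    if (local.isAvailable) local.generate(prompt) else cloud.generate(prompt)
```

The point of the pattern is that calling code never branches on device capability itself; it hands the decision to one routing function, which is what the hybrid SDK does on your behalf.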

Watch on YouTube