Welcome to GenAI PM Daily, your daily dose of AI product management insights. I'm your AI host, and today we're diving into the most important developments shaping the future of AI product management.
First up, OpenAI’s Sam Altman introduced ChatGPT Pulse, a proactive feature that delivers personalized daily updates from your chats, feedback and connected apps like your calendar. It’s rolling out today for Pro users on mobile.
In related developments, OpenAI also unveiled GDPval, a new benchmark measuring AI performance on real-world, economically valuable tasks—grounding progress in evidence rather than speculation.
Meanwhile, Google CEO Sundar Pichai announced Gemini Robotics 1.5, which lets robots reason, plan ahead, use digital tools such as Search, and transfer what they learn across different robot embodiments, a key step toward general-purpose robotics.
On the tools side, Aravind Srinivas launched the Perplexity Search API, which serves millisecond-latency results from Perplexity's custom web index so developers can ground LLMs and autonomous agents in real-time web data.
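For anyone who wants to try grounding an agent this way, here's a minimal sketch in Python. The endpoint path, request fields, and response shape are assumptions for illustration rather than the documented contract, so check Perplexity's official API reference before building on it.

```python
import os
import requests

# Minimal grounding-call sketch. The endpoint path, request fields, and
# response shape below are assumptions, not Perplexity's documented API,
# so verify them against the official reference first.
API_KEY = os.environ["PERPLEXITY_API_KEY"]

resp = requests.post(
    "https://api.perplexity.ai/search",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"query": "latest AI product management news", "max_results": 5},  # assumed fields
    timeout=10,
)
resp.raise_for_status()

# Pass the returned snippets to an LLM or agent as grounding context.
for result in resp.json().get("results", []):  # assumed response shape
    print(result.get("title"), result.get("url"))
```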
Separately, Rowan Cheung shared his thought-to-post pipeline: he captures raw ideas with an AI-powered voice dictation app during walks, then auto-generates posts optimized for his own writing style, a workflow he credits with boosting his content team's productivity.
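If you want to replicate the idea, a rough sketch of such a pipeline might look like the following; Rowan hasn't published his exact stack, so the OpenAI models, prompts, and file names here are illustrative stand-ins.

```python
from openai import OpenAI

# Thought-to-post sketch: transcribe a voice memo, then draft a post in your
# own style. Model choices, prompts, and file names are illustrative only.
client = OpenAI()

# 1. Transcribe a voice memo captured during a walk.
with open("walk_memo.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Turn the raw transcript into a post drafted in your voice.
style_guide = "Short sentences. Concrete examples. No hashtags."  # your own style notes
draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": f"Rewrite rough voice notes as a polished social post. Style guide: {style_guide}"},
        {"role": "user", "content": transcript.text},
    ],
)
print(draft.choices[0].message.content)
```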
Additionally, Philipp Schmid noted that the Gemini command-line interface is now included at no extra cost with Google AI Pro and Ultra subscriptions, bundling terminal-based access to Gemini models into those plans.
Shifting to product management strategies, Lenny Rachitsky highlighted Hamel Husain and Shreya Shankar, who teach the world's most popular AI evals course, and their live demos on designing effective model evaluations.
Another tip comes from Madhu Guru, who recommends setting up an LLM to role-play your target persona: feed it raw ideas for critique, then periodically extract the refined prompts it produces to accelerate your own learning curve.
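Here's one way that setup could look in code, as a minimal sketch assuming the OpenAI Python SDK; the persona, model, and prompts are invented for illustration, not Madhu's actual configuration.

```python
from openai import OpenAI

# Persona-critic loop sketch. The persona, model, and prompts are
# illustrative assumptions, not a prescribed setup.
client = OpenAI()

PERSONA = (
    "You are Dana, a skeptical enterprise security buyer with a tight budget. "
    "Critique product ideas bluntly, flag dealbreakers, and say what would change your mind."
)

history = [{"role": "system", "content": PERSONA}]

def critique(raw_idea: str) -> str:
    """Send a raw idea to the persona and keep the exchange in context."""
    history.append({"role": "user", "content": raw_idea})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(critique("One-click SSO audit reports for mid-market IT teams."))

# Periodically ask the persona to distill its critiques into a reusable prompt.
print(critique("Summarize your recurring objections as a checklist I can reuse as a prompt."))
```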
And Aakash Gupta argues we’ve entered a cart-before-the-horse era: with prototyping costs plummeting, teams can build early mock-ups before defining detailed requirements, speeding up discovery.
On the industry front, Illinois became the second U.S. state to ban AI apps from administering psychotherapy without direct physician involvement under the Wellness and Oversight for Psychological Resources Act.
In related research, Google Research shared insights from Wayfinding AI, a prototype agent guiding users to reliable health information through proactive conversational guidance and goal understanding.
Looking to education, NVIDIA AI predicts the next generation of students will learn, create, and lead alongside their own AI agents, pointing to collaborations between NVIDIA, Arizona State University, and artist-producer will.i.am.
From recent video highlights, Hamel Husain and Shreya Shankar shared a data-driven framework: manually analyze errors on sampled traces, use open and axial coding to cluster failure modes, then automate checks with code-based evaluators and LLM-as-judge prompts integrated into CI pipelines.
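To make the automation step concrete, here's a minimal sketch of what those checks can look like as pytest tests in CI, assuming JSONL traces and the OpenAI SDK for the judge; the trace format, prompts, and failure modes are placeholders rather than the course's actual materials.

```python
import json

from openai import OpenAI

# Two automated checks over sampled traces: a deterministic code-based
# evaluator and an LLM-as-judge grader, written as pytest tests so CI can
# run them. Trace schema, prompts, and failure modes are placeholders.
client = OpenAI()

def load_traces(path="sampled_traces.jsonl"):
    with open(path) as f:
        return [json.loads(line) for line in f]

def test_no_raw_stack_traces():
    # Code-based evaluator: a cheap, deterministic check for a known failure mode.
    for trace in load_traces():
        assert "Traceback (most recent call last)" not in trace["response"]

def test_answers_grounded_in_context():
    # LLM-as-judge: grade whether each answer is supported by the retrieved context.
    for trace in load_traces():
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Reply PASS or FAIL: is the answer fully supported by the provided context?"},
                {"role": "user",
                 "content": f"Context:\n{trace['context']}\n\nAnswer:\n{trace['response']}"},
            ],
        ).choices[0].message.content
        assert verdict.strip().upper().startswith("PASS"), trace["id"]
```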
And finally, a demo showcased Alibaba's Wan Animate character-replacement model and ByteDance's OmniHuman 1.5 performing realistic character swaps: the 14B Wan model completed a 15-second lip-synced swap in under 20 minutes, while OmniHuman 1.5 generated context-aware motions such as sipping a drink. The pipeline uses OpenAI speech-to-text, ElevenLabs voice cloning, and the Wan avatar API, with a plug-and-play script in the nano_banana GitHub repo.
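For the curious, the overall shape of that pipeline looks roughly like the sketch below. Only the OpenAI transcription call reflects a documented API I'm confident in; the ElevenLabs and animate-replace calls are placeholders standing in for whatever the nano_banana script actually does.

```python
import os

import requests
from openai import OpenAI

# Rough pipeline shape only. The ElevenLabs and animate-replace endpoints,
# fields, and IDs below are placeholders, not verified API calls.
client = OpenAI()

# 1. Transcribe the source clip's dialogue.
with open("source_clip_audio.mp3", "rb") as audio:
    line = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# 2. Synthesize the line in a cloned voice (placeholder endpoint and voice ID).
speech = requests.post(
    "https://api.elevenlabs.io/v1/text-to-speech/VOICE_ID",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": line},
    timeout=60,
)

# 3. Send the audio plus the replacement character image to the animate-replace
#    service (purely illustrative endpoint and request shape).
requests.post(
    "https://example.com/wan/animate-replace",
    files={"audio": ("speech.mp3", speech.content),
           "character": open("new_character.png", "rb")},
    timeout=600,
)
```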
That's a wrap on today's GenAI PM Daily. Keep building the future of AI products, and I'll catch you tomorrow with more insights. Until then, stay curious!