GenAI PM

AI Concepts

43 entities tracked across daily AI PM newsletters

MCP18 mentions

A protocol for connecting tools to AI agents; the newsletter contrasts bulky MCP setups with lighter skill-based integrations.

MCP is a protocol for connecting external tools and services to AI agents through structured interfaces.

AI agents11 mentions

Autonomous or semi-autonomous systems used here in sales and coding workflows. The newsletter highlights their role in replacing human SDR tasks and orchestrating complex tasks.

AI agents shift product design from single prompts to goal-driven systems with tools, memory, and autonomy.

agentic coding11 mentions

A software-building pattern where AI agents generate, modify, and ship code with increasing autonomy. For PMs, it changes the economics of product development and accelerates prototyping.

Agentic coding refers to AI systems that can plan, write, modify, test, and iterate on software with growing autonomy.

vibe-coding8 mentions

A coding style where developers use AI to generate and iterate on code through conversational workflows. The newsletter frames it as reshaping developer workflows and increasing the importance of context management.

Vibe-coding describes building software through conversational AI workflows instead of manually writing all code.

context engineering5 mentions

An approach to structuring and supplying the right context to AI agents so they can behave reliably and perform complex tasks. It is especially relevant to agent product quality and tool use.

Context engineering focuses on structuring the full context stack for AI systems, not just writing better prompts.

RAG4 mentions

A common pattern for grounding model responses in retrieved documents. The newsletter contrasts LlamaIndex's newer agentic document processing approach against RAG.

RAG grounds LLM outputs in external data sources instead of relying only on model training.

coding agents4 mentions

AI agents that help write, analyze, and operate on codebases. The newsletter frames them as useful for documentation, maintainability, and terminal-based workflows.

Coding agents extend beyond autocomplete by reasoning over repositories, using tools, retaining memory, and completing multi-step tasks.

red/green TDD3 mentions

A test-driven development pattern adapted for coding agents. It emphasizes an iterative failure/success loop that can make agentic coding more reliable.

Red/green TDD adapts classic test-driven development into a structured prompt pattern for coding agents.

Retrieval-Augmented Generation3 mentions

A technique that combines retrieval with generation so models can ground responses in external information. It is cited here as one of the levers in agent and orchestration design.

RAG combines retrieval and generation so models can answer using trusted external information rather than training data alone.

lethal trifecta3 mentions

A security risk pattern where AI agents have private data access, ingest untrusted content, and can exfiltrate data. For AI PMs, it is a key framework for designing safe agent features.

The lethal trifecta describes the high-risk combination of private data access, untrusted content ingestion, and data exfiltration capability in AI agents.

agentic engineering3 mentions

The practice of building software systems where agents plan and execute tasks with autonomy. The newsletter uses it in the context of anti-patterns and agent behavior management.

Agentic engineering focuses on building software with agents that can plan, write, and execute tasks autonomously.

agentic AI3 mentions

An approach to AI systems where agents perform tasks autonomously with tools and browser interaction. The newsletter frames 2026 as a year focused less on novelty and more on trust in deployed agentic systems.

Agentic AI describes systems that can autonomously complete multi-step tasks using tools, APIs, and browser interaction.

LLM3 mentions

Large language models used in production systems, benchmarking, and agentic workflows. The newsletter emphasizes their failure modes, evaluation, and infrastructure sensitivity.

LLMs are powerful but probabilistic system components whose behavior depends heavily on prompts, context, and infrastructure.

Agentic Infrastructure3 mentions

A paradigm that treats cloud infrastructure as autonomous coding agents to automate deployment and operations. For AI PMs, it reframes infrastructure as an agentic workflow rather than a static system.

Agentic Infrastructure treats cloud operations as workflows executed by autonomous coding agents rather than static tooling alone.

agentic coding evals3 mentions

Evaluation setups for coding agents; the newsletter notes that infrastructure configuration can skew benchmark results significantly.

Agentic coding evals measure coding agents in full system settings, not just isolated model capability.

LLMs3 mentions

Large language models used for generation, summarization, and reasoning-like tasks. The newsletter contrasts their pattern-matching strengths with limits in true understanding and planning.

LLMs are strong at generation, summarization, and editing, but their apparent reasoning is often pattern matching rather than true understanding.

Agentic Engineering Patterns2 mentions

A collection of patterns for building and operating agentic systems. The newsletter highlights it as a reference hub for practical coding-agent workflows like red/green TDD.

Agentic Engineering Patterns is a practical reference hub for building and operating AI agent workflows.

LLM benchmarks2 mentions

A concept covering how organizations evaluate large language models consistently and meaningfully. The newsletter frames standardization of benchmarks as a major enterprise challenge.

LLM benchmarks give organizations a repeatable way to evaluate model quality against real business tasks instead of generic leaderboard scores.

Intent Engineering2 mentions

A framework for specifying goals, context, and guardrails in multi-agent systems. It helps PMs guide autonomous agents with explicit objectives and stop rules rather than rigid control.

Intent Engineering helps PMs specify objectives, context, and guardrails for autonomous agents.

Agent Skills2 mentions

Reusable capabilities or task-specific skills added to AI agents to extend what they can do. Here they are mentioned as part of Claude's healthcare and life sciences expansion.

Agent Skills are reusable capabilities that extend AI agents with more structured, task-specific behavior.

product-thinking2 mentions

A PM framework focused on user value, tradeoffs, and outcomes rather than just technical implementation. Mentioned here as a skill engineers should develop in AI product teams.

Product-thinking emphasizes user value, outcomes, and tradeoffs over pure technical execution.

Model Context Protocol2 mentions

A protocol for connecting AI models to external tools and servers. The newsletter references discovery of MCP servers and reducing MCP token usage.

Model Context Protocol standardizes how AI models and agents connect to external tools, data sources, and servers.

deepagents2 mentions

A component or pattern used in LangSmith Agent Builder to support more capable agent workflows.

Deepagents is positioned as a capability or pattern in LangSmith Agent Builder for more capable agent workflows.

anti-distillation poison pills2 mentions

A defensive technique mentioned as part of Claude Code's strategy to deter model distillation by misleading competitors' training runs.

Anti-distillation poison pills are defensive tactics intended to reduce the value of model outputs for unauthorized training.

Turing-AGI Test2 mentions

A test introduced by Andrew Ng for evaluating economic utility. It is framed as a way to assess whether AI systems provide meaningful real-world value.

The Turing-AGI Test evaluates AI progress based on economic utility rather than abstract intelligence claims.

layered memory2 mentions

A memory architecture pattern for AI agents that separates different memory layers to improve context retention and task performance. It is presented as part of the design of autonomous coding assistants.

Layered memory separates short-term, task-level, and longer-term memory to improve AI agent performance.

Large Memory Models2 mentions

A memory architecture that mimics human memory instead of relying on RAG or vector search. For PMs, it suggests alternative approaches to long-context recall and personalization.

Large Memory Models are positioned as a memory-native alternative to RAG and vector search.

frontier AI labs2 mentions

Leading AI labs that control high-demand model APIs and compute. The newsletter uses the term to describe vendors that might restrict API access to prioritize their own products and customers.

Frontier AI labs are leading model providers that control scarce compute and high-demand AI APIs.

agent-first software design2 mentions

A software architecture paradigm where engineers orchestrate agents instead of hard-coding decision trees. For PMs, it suggests product teams may design systems around LLM behavior rather than deterministic logic.

Agent-first software design shifts engineering from hard-coded decision trees to orchestrating AI agents.

Compound Engineering2 mentions

A practice of capturing learnings from prompts and agent interactions to steadily improve system behavior over time. For PMs, it is a feedback-loop mindset for iterative AI product improvement.

Compound Engineering treats each prompt or agent run as an opportunity to improve future system behavior.

prompt injection2 mentions

Attack technique where malicious prompts manipulate AI systems or agents. Here it is connected to a GitHub issue triage workflow exploit.

Prompt injection manipulates AI systems by embedding malicious instructions in untrusted inputs the model consumes.

task delegation2 mentions

An agent design pattern where work is split into sub-tasks and assigned dynamically. In the newsletter, it is one of the core ingredients for building autonomous coding agents.

Task delegation breaks complex agent objectives into smaller sub-tasks assigned dynamically across tools or specialized components.

COBOL modernization2 mentions

The process of updating legacy COBOL systems, often for enterprise migration and maintenance. AI agents are increasingly positioned as tools to accelerate this high-friction modernization work.

COBOL modernization focuses on updating legacy systems for maintainability, integration, migration, and modern operations.

cognitive debt2 mentions

A product and engineering concept describing the hidden cost of AI-accelerated development when teams lose shared understanding of the system. It reframes debt from code maintenance to team cognition and system comprehension.

Cognitive debt describes the hidden cost of AI-accelerated development when teams lose shared understanding of the system.

APIs2 mentions

Programmable interfaces that let AI agents and software systems access services and complete tasks. The newsletter positions APIs as one of the means for agents to act on behalf of users.

APIs let AI agents access services and take actions on behalf of users.

AGI2 mentions

AGI is referenced as the frontier toward which current AI development is moving. In PM terms, it frames long-term product strategy, governance, and risk discussions.

AGI represents a long-term strategic horizon that shapes AI roadmap, governance, and capability discussions.

BM252 mentions

Classic lexical retrieval scoring function referenced in the context of probabilistic framing and hybrid search calibration.

BM25 is a foundational lexical ranking function still widely used in modern search and RAG systems.

tool integration2 mentions

The practice of connecting agents to external developer tools such as linters and debuggers. It is highlighted here as a building block for effective coding agents.

Tool integration connects AI agents to external developer tools like linters, debuggers, and test runners.

skill.md2 mentions

A lightweight skills-based pattern for packaging agent capabilities in small context-efficient files.

skill.md packages agent capabilities into compact files that reduce context overhead and improve modularity.

COBOL2 mentions

A legacy programming language often targeted for modernization and migration efforts. For PMs, it represents enterprise technical debt and transformation risk.

COBOL remains embedded in mission-critical enterprise systems despite being a legacy programming language.

CRI2 mentions

A tool interface used with skill.md to reduce token usage and run MCP commands in a more efficient way.

CRI is a lightweight interface for running MCP commands with lower token overhead.

agent middleware2 mentions

A modular layer that adds tools, guardrails, and custom instructions to AI agents. It is described as a composable harness for production agent systems.

Agent middleware adds reusable tools, guardrails, and custom instructions to AI agents through a modular layer.

Agent Workflows2 mentions

A workflow framework for building customizable agentic systems. It is highlighted as integrating with ACP.

Agent Workflows is a framework for building customizable agentic systems within the LlamaIndex ecosystem.