An Agentic Future: Best Practices From Our Portfolio Roundtable

Written by Anne Gherini

Published on February 7, 2025

Agentic AI is transforming how startups design intelligent systems. Unlike traditional AI applications, which rely on a single large model, agentic AI architectures break tasks into modular agents that collaborate dynamically. This approach allows for greater flexibility, scalability, and adaptability—key factors for any founder building in this space.

To foster collaboration and shared learning, we hosted a technical workshop with several of Sierra’s technical founders, both in person and remotely from all over the world. Industry leaders like Tript Singh Lamba, VP of Product at Expedia, also joined to share their perspectives. The unconference format was designed to break traditional boundaries, enabling open, interactive discussions that added to the collective brain trust of Sierra founders. It also served as a platform to strengthen the network and build direct connections among the technical leaders driving agentic AI innovation.

During the unconference, participants explored fundamental questions shaping the future of AI systems, including:

  • How does one build durable competitive advantage in this fast-moving generative AI wave?
  • What activities are wasteful, and where is defensible intellectual property created?

These themes framed a series of in-depth discussions of best practices, challenges, and lessons learned in designing agentic AI systems.

 

Agentic Evolution

The discussion emphasized the evolving application stack for agentic AI, with a strong focus on agentic collaboration. While much innovation is concentrated on app development frameworks, deeper layers such as evaluation frameworks, agent orchestration, memory management, and state management are becoming increasingly critical. Founders must navigate key decisions, such as selecting agent frameworks, coordination mechanisms, and LLM combinations, while leveraging tools and APIs from the major labs to simplify development and focus on higher-value innovation.

Recent breakthroughs, such as the DeepSeek project, highlight the potential for global contributions to accelerate progress. With similar advancements expected from other regions, founders are encouraged to stay agile, adopt cutting-edge tools, and continuously innovate to remain competitive in this rapidly evolving landscape.

 
The Power of Modularity in AI Systems

One key theme that emerged was the importance of modularity in AI systems. Rather than building monolithic models that attempt to handle everything, many founders have found success by structuring AI as a system of specialized agents. These agents handle distinct tasks, interact dynamically, and can be iterated on independently.

 

"Design modules that are loosely coupled and composable. If one fails, you’re only taking a hit on that small module and can hot-swap it out. The open-closed principle from object-oriented design is more true in AI than ever—models should be closed for modification but open for extension."  - Tript Singh Lamba, VP of Product at Expedia

 

This approach improves adaptability, making it easier to introduce updates or integrate new functionalities without overhauling the entire system. It also enhances reliability, as failures in one agent do not necessarily compromise the entire workflow.
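
To make the principle concrete, here is a minimal sketch of loosely coupled, composable agent modules behind a shared interface, so any one module can be hot-swapped without touching the rest. The agent names and the Pipeline class are illustrative assumptions, not a specific framework's API.

```python
# A minimal sketch of loosely coupled agent modules behind a shared interface.
# Agent names and the Pipeline class are illustrative, not a specific framework.
from typing import Protocol


class Agent(Protocol):
    """Any module that turns a task string into a result string."""
    def run(self, task: str) -> str: ...


class RetrievalAgent:
    def run(self, task: str) -> str:
        return f"documents relevant to: {task}"


class SummarizerAgent:
    def run(self, task: str) -> str:
        return f"summary of: {task}"


class Pipeline:
    """Composes agents; any one can be swapped out without modifying the others."""
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def swap(self, name: str, replacement: Agent) -> None:
        self.agents[name] = replacement  # extension point: no other module changes

    def run(self, task: str) -> list[str]:
        return [agent.run(task) for agent in self.agents.values()]


pipeline = Pipeline({"retrieve": RetrievalAgent(), "summarize": SummarizerAgent()})
print(pipeline.run("Q3 planning notes"))
```

Because every module satisfies the same run interface, replacing a failing summarizer is a one-line swap rather than a rewrite of the whole system.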

Architectural Patterns for AI Agents

The roundtable identified three major architectural patterns for AI agents:

  1. Single-Agent Systems with Memory: Early-stage startups often begin with single agents designed for well-defined tasks. These agents retain context across interactions, improving effectiveness over time (a minimal sketch of this pattern follows the list).
  2. Multi-Agent Collaboration: As startups scale, multi-agent systems emerge, where agents specialize in specific functions and collaborate dynamically. This is particularly valuable for handling complex workflows.
  3. Human-in-the-Loop Oversight: While automation has advanced, human oversight remains essential in many applications to ensure quality control and compliance.
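
As an illustration of the first pattern, the sketch below shows a single agent that retains conversation context across turns; call_llm is a hypothetical placeholder for whichever model API a team actually uses, and the prompt format is an assumption for the example.

```python
# A minimal sketch of pattern 1: a single agent that retains context across turns.
# `call_llm` is a hypothetical stand-in for whatever model API is actually used.
def call_llm(prompt: str) -> str:
    return f"(model response to: {prompt[-80:]})"  # placeholder


class SingleAgentWithMemory:
    def __init__(self, system_prompt: str, max_turns: int = 20):
        self.system_prompt = system_prompt
        self.history: list[tuple[str, str]] = []   # (user, assistant) turns
        self.max_turns = max_turns

    def ask(self, user_message: str) -> str:
        # Keep only the most recent turns so the prompt stays within context limits.
        recent = self.history[-self.max_turns:]
        prompt = self.system_prompt + "\n"
        for user, assistant in recent:
            prompt += f"User: {user}\nAssistant: {assistant}\n"
        prompt += f"User: {user_message}\nAssistant:"
        reply = call_llm(prompt)
        self.history.append((user_message, reply))
        return reply


agent = SingleAgentWithMemory("You book travel for the user.")
agent.ask("Find flights to Lisbon in May.")
agent.ask("Same dates, but add a hotel.")  # context from the first turn carries over
```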

Latency is a significant technical challenge with multi-agent systems, as communication between agents can slow down response times. To address this issue, founders are exploring solutions such as caching, parallel processing, and optimizing data retrieval mechanisms.
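
As a rough illustration of two of those mitigations, the sketch below runs independent agents concurrently and caches repeated retrievals. The agent functions and the fetch_knowledge helper are assumptions made for the example, not any particular stack.

```python
# A minimal sketch of two latency mitigations: parallel agent calls and caching
# of repeated retrievals. All functions here are illustrative placeholders.
import asyncio
from functools import lru_cache


@lru_cache(maxsize=1024)
def fetch_knowledge(query: str) -> str:
    # Expensive retrieval (vector search, API call, ...); cached per unique query.
    return f"facts about {query}"


async def pricing_agent(query: str) -> str:
    await asyncio.sleep(0.2)  # stands in for a model or API round trip
    return f"pricing view using {fetch_knowledge(query)}"


async def policy_agent(query: str) -> str:
    await asyncio.sleep(0.2)
    return f"policy view using {fetch_knowledge(query)}"


async def answer(query: str) -> list[str]:
    # Independent agents run concurrently instead of one after another.
    return list(await asyncio.gather(pricing_agent(query), policy_agent(query)))


print(asyncio.run(answer("refund for a cancelled flight")))
```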

Overcoming Model Death

A critical challenge discussed was the problem of models degrading over time, referred to as "model death." This issue, central to the field of continual learning, arises as models are continuously trained on new data.


"When you continually train models, these models tend to die over time. You won’t observe this in the first couple of months, but later on, you’ll find that the model starts dying. This is an emergent area called continual learning, which is really important as you think about the efficiency of training." - Vivek Farias, co-founder of Cimulate and Professor of AI at MIT

 

To combat this, teams implement techniques like neuron resetting: identifying inactive neurons and reinitializing them to restore model performance. Continual learning is especially critical in industries like commerce, where fresh data is constantly generated and models must adapt dynamically without losing prior knowledge.
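
One way to picture neuron resetting, assuming a standard feed-forward layer in PyTorch, is to flag units whose activations have collapsed toward zero on recent data and reinitialize their incoming weights. The threshold and initializer below are illustrative choices, not the specific recipe discussed at the roundtable.

```python
# A minimal sketch of one neuron-resetting approach: find units in a hidden layer
# whose activations are near zero on recent data and reinitialize their incoming
# weights. Threshold and initializer are illustrative assumptions.
import torch
import torch.nn as nn

layer = nn.Linear(64, 128)
activation = nn.ReLU()


def reset_dormant_neurons(layer: nn.Linear, batch: torch.Tensor, threshold: float = 1e-3) -> int:
    with torch.no_grad():
        acts = activation(layer(batch))      # shape: (batch, out_features)
        mean_act = acts.abs().mean(dim=0)    # average activity per output unit
        dormant = mean_act < threshold       # units that have gone silent
        if dormant.any():
            fresh = torch.empty_like(layer.weight[dormant])
            nn.init.kaiming_uniform_(fresh)  # reinitialize incoming weights
            layer.weight[dormant] = fresh
            layer.bias[dormant] = 0.0
    return int(dormant.sum())


batch = torch.randn(256, 64)
print(reset_dormant_neurons(layer, batch), "neurons reset")
```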

 

Common Pitfalls and Lessons Learned

Founders shared several common pitfalls encountered while developing agentic AI systems:

  • Overcomplicating agent behavior: Attempts to make agents overly autonomous often led to unintended behaviors and debugging challenges.
  • Underestimating data retrieval complexity: Poorly structured knowledge bases or inefficient retrieval mechanisms resulted in inaccurate outputs.
  • Lack of feedback loops: Without continuous feedback from users, agents struggled to evolve and improve their effectiveness.

These challenges underscore the importance of starting with simple, well-defined agents and iterating based on real-world performance.

 

The Future of Agentic AI

Emerging trends in agentic AI suggest significant potential for innovation. As agents interact more, emergent behaviors could lead to unexpected capabilities. Standardizing communication protocols between agents could streamline collaboration across applications.
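
As a purely hypothetical example of what a standardized inter-agent message might look like, the sketch below defines a small JSON envelope; the field names are assumptions for illustration, not an existing protocol.

```python
# A hypothetical sketch of a standardized inter-agent message, serialized as JSON.
# The field names are illustrative assumptions, not an established standard.
import json
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class AgentMessage:
    sender: str
    recipient: str
    intent: str                      # e.g. "request", "result", "error"
    payload: dict
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        return json.dumps(asdict(self))


msg = AgentMessage(
    sender="planner",
    recipient="booking-agent",
    intent="request",
    payload={"task": "hold two seats on the 9am train"},
)
print(msg.to_json())
```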

Advancements in edge computing are also enabling AI agents to operate locally, reducing reliance on cloud infrastructure, improving real-time performance, and enhancing data privacy.

 

"It doesn’t feel like there’s a fundamental intelligence wall we’re about to hit. For instance, an LLM solving a routing problem like a traveling salesman is 10 orders of magnitude worse in performance compared to a CPU. That gap isn’t a physics limitation; it’s something we need to improve on, and there’s still a billion times improvement available in specific domains." - Justin Waugh, Founder of Approximate Labs

 

Undeniable Momentum 

Agentic AI is still evolving, but its potential is clear. Startups in this space should focus on modularity, iterative development, and strong feedback loops. By selecting the right frameworks and carefully designing AI architectures, founders can create systems that are not only intelligent but also adaptable and scalable.

As companies experiment with AI agents, new best practices and lessons will continue to emerge. For those actively developing in this space, the opportunity to shape the future of AI-driven automation has never been greater.