Navigating the AI Era: Evolving Architectures for a New Age 🚀
The world of technology is in constant flux, and with the meteoric rise of Artificial Intelligence, particularly Generative AI, the need for evolving architectures has never been more critical. This isn’t just about adopting new tools; it’s about a fundamental shift in how we design, build, and govern our systems.
In this deep dive, we’ll explore the challenges, new paradigms, and practical advice for architects and technical leaders navigating this exciting new landscape.
The AI Revolution Demands Generative Architectures 💡
We’re moving beyond simply automating existing processes. Generative AI introduces a new level of autonomy and emergent behavior that traditional, procedural architectures simply can’t handle. As Jesper, an Enterprise Architect Lead with DXC Technologies, aptly puts it, “If we’re introducing technical debt into AI into generative AI, it’s going to drift and it’s going to hallucinate.”
This means our mindset must shift. We can no longer treat AI as just another piece of software to be plugged into existing workflows. The core challenge lies in managing this newfound autonomy. When we “turn on the autonomy tap,” agents gain a degree of free will, leading to unpredictable, emergent behaviors. In traditional systems, failures can typically be enumerated and fixed one by one (say, a known list of 20 potential problems); emergent behavior is inherently less predictable. This necessitates a complete reevaluation of our architecture, design, guardrails, and governance.
The Pitfalls of Retrofitting: Why AI Needs Intentional Design 🛠️
A staggering 95% of AI proof-of-concepts fail, according to a recent MIT report. Why? A primary reason is the tendency to retrofit AI into existing procedural logic frameworks. This is like trying to mix oil and water. We get the costs associated with AI but none of the benefits.
The key is a mindset shift: instead of controlling logic at runtime, we focus on defining boundaries. Think of it as containing a “naughty AI agent” that wants to get out. We must meticulously define those boundaries, understanding every seam, hole, and interface. Within them, we empower the AI, which may reason in ways we cannot fully anticipate, by specifying what it must not do and what we want it to achieve: in other words, by setting goals.
The Seven Seams of Generative Architecture 🧵
Jesper outlines a framework of seven critical “seams” that define the boundary of an agent and help ensure containment. These include:
- Goals: Clearly defining what the AI should achieve.
- Scope: Understanding the interaction points and context with other systems.
- Risk: Identifying and managing potential risks within the agent and the system.
- Authority & Decision Rights: Explicitly defining the decisions an agent can make.
- Policy: Establishing rules and constraints for the AI’s behavior.
- Semantics: Ensuring a shared understanding and consistent interpretation of terms across agents.
- Evidence: Establishing how to verify information and retain records for auditability.
By meticulously defining these seven seams, we can build confidence that the AI will operate within its intended boundaries.
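One way to make the seven seams concrete is to express them as a declarative boundary specification that sits outside the agent's own reasoning. The sketch below is hypothetical (the field names simply mirror the seams above; the onboarding agent and its values are illustrative, not from the talk):

```python
from dataclasses import dataclass, field

@dataclass
class AgentBoundary:
    """Declarative spec for the seven seams of a single agent."""
    goals: list[str]                 # what the agent should achieve
    scope: list[str]                 # systems it may interact with
    risks: dict[str, str]            # risk -> mitigation
    decision_rights: list[str]       # decisions it may make on its own
    policies: list[str]              # rules constraining behavior
    semantics: dict[str, str]        # shared term -> agreed definition
    evidence: list[str] = field(default_factory=list)  # audit requirements

    def allows(self, system: str, decision: str) -> bool:
        """An action is in-bounds only if both the target system and
        the decision type are explicitly granted."""
        return system in self.scope and decision in self.decision_rights

# A hypothetical onboarding agent, bounded along all seven seams.
onboarding = AgentBoundary(
    goals=["provision accounts for new hires"],
    scope=["hr_system", "identity_provider"],
    risks={"over-provisioning": "require approval above standard role set"},
    decision_rights=["create_standard_account"],
    policies=["no admin rights without human sign-off"],
    semantics={"new hire": "employee with start date within 30 days"},
    evidence=["log every provisioning action with timestamp and rationale"],
)

print(onboarding.allows("identity_provider", "create_standard_account"))  # True
print(onboarding.allows("payroll_system", "create_standard_account"))     # False
```

The point of the structure is that containment is checked at the boundary, not inside the agent's prompt or logic.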
Innovation Meets Governance: A Fused Approach 🤝
The rapid pace of AI innovation often outstrips the speed of traditional governance. However, in the AI era, these cannot be separate. Jesper uses the analogy of a merry-go-round spinning faster and faster. We can’t afford to let go of governance and fly off; instead, we need to move towards the center, integrating strategy, architecture, design, and governance.
This means designing governance into the agent or system at design time. They are not separate entities; they must fuse. Innovation and governance need to be joined at the hip to prevent the mismatch that leads to AI systems drifting and behaving unpredictably.
Maturity Levels of AI Adoption: From Ad Hoc to Autonomy 📈
Understanding where an organization stands in its AI journey is crucial. Jesper outlines maturity levels, with Level 3 being a significant leap:
- Level 1 (Ad Hoc): Basic adoption, like using AI code assistants, yielding scattered benefits.
- Level 2 (Repeatable): Deploying single-purpose AI agents (e.g., an onboarding agent) that can repeat business processes. These are manageable with current systems but can be brittle and expensive to maintain.
- Level 2.5 (Multi-Agent, Non-Autonomous): A step towards complexity, involving multiple agents but without full autonomy.
- Level 3 (Multi-Agent with Autonomy): This level requires a new operating model, design language, architecture, and governance approach. It’s a significant jump, demanding a new way of thinking.
As we move through these levels, the guardrails must evolve. For instance, the concept of authority and decision rights becomes paramount when granting autonomy.
The Power of System Thinking: Beyond Individual Agents 🌳
While individual agents are powerful, the real magic (and complexity) emerges when multiple agents interact. This highlights the need for a strong system-thinking approach, a skill that enterprise architects are uniquely positioned to provide. They understand scale and build foundations, a stark contrast to teams that assemble pieces without a holistic view.
Consider the analogy of removing wolves from a forest. Eliminating aggressive predators seems logical at first, but over time the entire ecosystem degrades. Similarly, stripping out the inconvenient parts of an AI ecosystem, such as governance and the complex interdependencies it manages, can lead to unforeseen systemic problems. We need to understand how AI components interact and evolve within the larger ecosystem.
Guardrails in Action: Designing for the Unexpected 🛡️
When building AI systems, especially those with agentic components, we must implement robust guardrails. These aren’t just policy documents; they are concrete mechanisms for control.
Examples of guardrails include:
- Scope: Defining clear interaction points with external systems like ERPs or CRMs.
- Goals: Providing LLMs with guidance on how to balance competing objectives (e.g., profit maximization vs. margin maximization), often through policies that set minimum thresholds (like a 10% profit margin).
- Policy: Acting as the primary instrument for defining constraints and rules.
- Evidence: Establishing how to verify information and retain records for auditability.
These guardrails can be thought of as pulling procedural logic out of the core code and embedding it into the boundary of the AI system.
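The goals-plus-policy example above (balancing profit against a minimum margin) can be sketched as a boundary check. This is a minimal illustration under assumed names: the `within_policy` function and the specific numbers are hypothetical, with the 10% threshold taken from the example in the text:

```python
MIN_MARGIN = 0.10  # policy threshold from the example above: 10% profit margin

def within_policy(price: float, cost: float) -> bool:
    """Boundary check applied to an agent's proposed deal before it
    reaches an external system (e.g. an ERP). The agent optimizes
    freely inside this constraint; the check lives at the boundary,
    outside the agent's own reasoning."""
    if price <= 0:
        return False
    margin = (price - cost) / price
    return margin >= MIN_MARGIN

# A proposed quote is only forwarded if it clears the policy seam.
print(within_policy(price=120.0, cost=100.0))  # margin ~16.7% -> True
print(within_policy(price=104.0, cost=100.0))  # margin ~3.8%  -> False
```

This is exactly the "pull procedural logic out of the core code" idea: the rule is enforced at the seam, not woven into the agent.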
The Future of Architecture: Essential Roles in the AI Era 👨‍💻
The evolving landscape of AI necessitates a redefinition of architectural roles:
- Business Architects: Will become crucial in defining the policy anatomy and capturing essential business rules for AI systems. They will be the bridge between business needs and agentic AI.
- Data Architects: Will remain vital, as data quality and adherence to ontological and semantic layers are paramount for AI success.
Jesper emphasizes that roles like business architects and enterprise architects are no longer optional; they are becoming essential due to their focus on the entire ecosystem and the critical translation layers required for AI integration.
Agentic AI: Solving Real Problems or Creating New Ones? 🤔
The question arises: are we solving real problems with agentic AI, or are we simply applying a solution in search of a problem? The fundamental problem to solve first is mindset. We must shed the old world’s logic-driven approach and embrace autonomy, understanding that boundaries and governance are what keep us in control.
Jesper argues that if this core understanding sinks in, 75% of the problems are solved. This leads to a new design language and governance language, providing answers to complex questions about system behavior and brittleness.
The Role of Small Language Models (SLMs) 🤏
While frontier models like GPT-4 and Gemini are powerful, their cost can be prohibitive for every agent in a multi-agent system. The rise of Small Language Models (SLMs) is critical. By strategically using SLMs for specific tasks, organizations can significantly reduce costs and expand the use cases for AI.
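A common pattern for realizing this cost saving is a simple router that sends routine tasks to an SLM and reserves the frontier model for harder work. The sketch below is illustrative only: the model names, the complexity score, and the 0.5 threshold are placeholder assumptions, and real routing would also weigh latency, cost per token, and accuracy requirements:

```python
def pick_model(task: str, complexity: float) -> str:
    """Route a task to a small or frontier model by estimated complexity.

    `complexity` is assumed to be a 0..1 score produced upstream
    (e.g. by a cheap classifier); the threshold is a placeholder.
    """
    SLM = "small-model"          # cheap, fast, narrow
    FRONTIER = "frontier-model"  # expensive, general-purpose
    if complexity < 0.5:
        return SLM
    return FRONTIER

print(pick_model("classify support ticket", complexity=0.2))        # small-model
print(pick_model("draft multi-step migration plan", complexity=0.9))  # frontier-model
```

In a multi-agent system, routing most calls to SLMs is what makes running many agents economically viable.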
Trade-offs in the AI Age: Pace vs. Stability ⚖️
Navigating the AI era involves critical trade-offs:
- Pace vs. Stability: The speed of innovation must be balanced with the need for reliable and stable systems.
- Technical Debt in AI: Unlike traditional systems, technical debt in generative AI can lead to drift and hallucinations. The critical question becomes: how much drift can we afford?
- Business Impact: The acceptable level of drift depends on the criticality of the business problem being solved. For non-critical applications, a bit of drift might be acceptable. For critical systems (e.g., financial trading), the tolerance for error is near zero, demanding more stringent governance.
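The "how much drift can we afford?" question can be operationalized as a per-tier drift budget. The sketch below is hypothetical: the tier names and budget values are illustrative assumptions, and `observed_drift` stands in for whatever drift metric the organization monitors (e.g. divergence between current and baseline output distributions):

```python
# Illustrative drift budgets per business criticality (hypothetical values).
DRIFT_BUDGET = {
    "experimental": 0.15,  # some drift is acceptable
    "internal": 0.05,
    "critical": 0.001,     # e.g. financial trading: near-zero tolerance
}

def drift_ok(observed_drift: float, criticality: str) -> bool:
    """Compare an observed drift metric against the budget for the
    system's criticality tier; a breach should trigger governance
    action (review, rollback, or tightened guardrails)."""
    return observed_drift <= DRIFT_BUDGET[criticality]

print(drift_ok(0.04, "internal"))  # True: within budget
print(drift_ok(0.04, "critical"))  # False: breach for a critical system
```

The same measured drift is acceptable or not depending on the business stakes, which is the trade-off the bullet above describes.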
A Call to Action for Developers and Architects 📣
For developers, the advice is clear: get off the traditional development lifecycle. It’s becoming a race to the bottom. Instead, invest time in learning and experimenting with agentic systems. Consider architecting systems with infinite scalability and built-in governance from the outset.
For architects, it’s a fantastic time to be in the profession. The responsibility increases, requiring not only building the right systems but also upskilling engineers to think in systems and understand the broader impact of their work.
The Final Word: Remember the Boundary! 🚧
As we conclude, the most profound takeaway is Jesper’s repeated emphasis: “Remember the boundary.” Creating and maintaining these boundaries is paramount to staying in control of AI systems and preventing them from controlling us. The future of architecture is dynamic, challenging, and incredibly exciting. By embracing evolving architectures, intentional design, and robust governance, we can successfully navigate the AI era and harness its transformative power.