For 70+ years, the function has been the atom of software. You break a problem into pieces, write a function for each piece, and compose them into something that works. Deterministic input, deterministic output. Predictable, testable, debuggable. That contract between developer and code has held across the procedural, object-oriented, and functional programming paradigms.
That contract is now being renegotiated.
Across codebases large and small, developers are replacing function calls with agent calls, swapping if/elif/else blocks for LLM queries, and reaching for the Model Context Protocol (MCP) instead of traditional APIs. This isn't a new framework sitting on top of the old architecture. It's a change to the primitives developers use to think, design, and build software.
This post explores what that shift looks like in practice, why it's happening, what it costs, and how to start navigating it.
To understand what's changing, it helps to be precise about what's being replaced.
Traditional programming is built on determinism. A function takes inputs, executes a defined sequence of operations, and returns the same output every time. Given the same inputs, you get the same outputs. Control flow (if, else, switch, while) routes execution based on conditions you've explicitly defined. Error handling catches exceptions you've anticipated and responds with logic you've written.
This model is powerful precisely because it's predictable. You can test it, trace it, and reason about it. When something breaks, you can find the line of code responsible.
But determinism has a ceiling. The real world is full of edge cases, ambiguous inputs, and decisions that don't reduce cleanly to boolean logic. Developers have always known this. The traditional answer was to write more conditions, more handlers, more rules. The agentic answer is different: delegate the decision.
A function call says: execute this defined logic with these inputs. An agent call says: here is a goal and some context — figure out how to achieve it.
The if/elif/else block is the backbone of conditional logic. It's explicit, readable, and exhaustive (or at least, it tries to be). But what happens when the condition you need to evaluate isn't a boolean, when it's a judgment call? In practice, an agent is a system that perceives its environment, reasons about what to do, takes actions (often by calling tools or other agents), and iterates until a goal is met. Where a function is a recipe, an agent is a cook who can improvise.
Consider a customer support routing system. Traditionally:
```python
def route_ticket(ticket):
    if "billing" in ticket.keywords:
        return "billing_team"
    elif "technical" in ticket.keywords:
        return "tech_support"
    elif "account" in ticket.keywords:
        return "account_team"
    else:
        return "general_support"
```
This breaks the moment a customer writes: "I was charged twice and now I can't log in." The ticket touches billing and technical support simultaneously. Your keyword logic picks one or the other — or routes to general support and loses context entirely.
An LLM-driven router handles this differently:
```python
def route_ticket(ticket):
    prompt = f"""
    Analyze this support ticket and determine which team(s) should handle it.
    Teams: billing, tech_support, account_team, general_support.
    Ticket: {ticket.body}
    Return a JSON list of teams in priority order.
    """
    return llm.complete(prompt)
```
Now the routing decision incorporates the full semantic content of the ticket. It can identify multiple relevant teams, gauge urgency, and handle edge cases that no keyword list would anticipate. This pattern of replacing explicit conditional logic with LLM-based reasoning is appearing in production systems for intent classification, content moderation, data validation, and workflow routing.
The second version handles ambiguity, multilingual content, and novel ticket types without a single additional rule. It also introduces trade-offs we'll return to: non-determinism, latency, and cost.
The point isn't that one approach is universally better. It's that they represent fundamentally different primitives. The agent call is not a smarter function. It's a different kind of thing.
Alongside agent calls and LLM-driven logic, another shift is underway at the infrastructure level: the emergence of the Model Context Protocol (MCP) as a new abstraction layer.
Traditional software integrates systems through APIs. APIs are defined interfaces with documented endpoints, request formats, and response schemas. They are powerful but can be rigid. Every integration requires custom code to translate between systems.
MCP, developed by Anthropic, takes a different approach. It defines a standardized protocol through which AI models can discover and interact with external tools, data sources, and services. It does this dynamically, at runtime. Instead of hardcoding an integration, you expose a capability through MCP and let the model determine how and when to use it.
Think of it this way: a traditional API is a door with a specific key. An MCP is a door that can describe itself to anyone who approaches it, explain what's behind it, and negotiate how to open it.
For developers, this means:
Tools become composable. An agent can discover available tools at runtime and select the right one for the task, rather than following a hardcoded sequence.
Integrations become declarative. You describe what a tool does; the model decides when to use it.
Systems become more flexible. Adding a new capability doesn't require rewriting orchestration logic; instead, you expose it through MCP and the agent incorporates it.
MCP is still maturing, but it represents a meaningful shift in how developers think about system integration. The abstraction layer is moving up the stack, from "how do I call this API" to "what capabilities do I need, and how do I expose them to an agent."
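The "declarative tools" idea can be illustrated with a toy registry. To be clear, this is plain Python and not the real MCP SDK; the tool names and the word-overlap selector are invented stand-ins for model-driven selection:

```python
# Toy illustration of declarative tool exposure: each capability carries a
# machine-readable description, and a selector chooses among them at runtime
# instead of following a hardcoded sequence.

TOOLS = {}

def tool(description):
    """Register a function together with the description an agent would see."""
    def register(fn):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return register

@tool("Look up the current status of an order by its ID.")
def order_status(order_id):
    return f"order {order_id}: shipped"

@tool("Issue a refund for an order by its ID.")
def refund_order(order_id):
    return f"order {order_id}: refunded"

def pick_tool(goal):
    """Stand-in for the model: pick the tool whose description best overlaps
    the goal. A real agent would reason over the descriptions with an LLM."""
    words = set(goal.lower().split())
    name, _ = max(
        TOOLS.items(),
        key=lambda kv: len(words & set(kv[1]["description"].lower().split())),
    )
    return name

# Adding a new capability is just another @tool function; no orchestration
# logic changes.
```

The point of the pattern is the shape, not the selection heuristic: tools describe themselves, and the decision about which one to use moves from your code into the agent.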
These aren't theoretical constructs. Agentic patterns are showing up in production across industries.
Code review automation. Engineering teams are deploying agents that don't just lint code but reason about it. They can identify style violations, architectural concerns, security implications, and alignment with team conventions. The agent reads the diff, understands the context, and produces a review that a human engineer or agentic team can respond to.
Data pipeline orchestration. Traditional ETL pipelines are rigid: extract this, transform that, load here. Agentic pipelines can adapt. If a data source changes its schema, an agent can detect the change, reason about the impact, and either adjust the transformation or escalate to a human, rather than failing silently or crashing loudly.
Adaptive error handling. One of the more compelling agentic patterns is error recovery. Traditional error handlers catch specific exceptions and respond with predefined logic. An agentic error handler can reason about what went wrong, consult documentation or logs, attempt alternative approaches, and decide when to escalate. This is error handling that responds to context rather than to a fixed list of anticipated failures.
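The recovery loop described above can be sketched in a few lines. Here `llm_suggest_fix` is a stub standing in for a real model call that would read the error, logs, and documentation; the error strings and fixes are invented for illustration:

```python
# Sketch of an adaptive error handler: on failure, consult a reasoning step
# for an alternative, retry with the suggestion, and escalate when out of
# ideas. `llm_suggest_fix` is a stub for a real LLM call.

def llm_suggest_fix(error, attempt):
    """Stubbed reasoning step; a real version would call a model."""
    known_fixes = {"timeout": {"timeout_s": 30}, "rate_limit": {"delay_s": 5}}
    return known_fixes.get(str(error))

def run_with_recovery(task, max_attempts=3):
    params = {}
    for attempt in range(1, max_attempts + 1):
        try:
            return task(**params)
        except RuntimeError as err:
            suggestion = llm_suggest_fix(err, attempt)
            if suggestion is None:
                raise  # out of ideas: escalate to a human
            params.update(suggestion)  # retry with the suggested adjustment
    raise RuntimeError("exhausted all attempts")

def flaky(timeout_s=5):
    """Demo task: fails with 'timeout' until given a longer timeout."""
    if timeout_s < 30:
        raise RuntimeError("timeout")
    return "ok"
```

The key design choice is the escalation path: when the reasoning step has nothing to suggest, the original exception propagates instead of being swallowed.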
Research and synthesis workflows. Tasks that once required a human to gather information from multiple sources, synthesize it, and produce structured output are increasingly handled by agents. A legal team might deploy an agent that reviews contracts, flags non-standard clauses, cross-references precedents, and produces a summary. These agents provide first passes that make human review faster and more focused.
Useful non-determinism. Some problems benefit from behavior that isn't fully deterministic. Simulation systems, for example, can incorporate LLM responses that are unpredictable yet still reasoned and grounded in a given context.
The pattern, for the time being at least, is to delegate the judgment call to an agent, keep a human in the loop for decisions that matter, and let the system handle the volume.
The case for agentic programming is grounded in specific capabilities that traditional approaches genuinely struggle to provide.
Handling ambiguity at scale. Real-world data is messy. Natural language is ambiguous. User intent is rarely explicit. Agentic systems handle this gracefully in ways that rule-based systems cannot, without requiring developers to enumerate every possible edge case.
Reduced boilerplate for complex decisions. Writing exhaustive conditional logic for complex classification or routing problems is time-consuming and brittle. An LLM-based approach can replace hundreds of lines of rules with a well-crafted prompt, and handle cases the rules would have missed.
Composability and flexibility. Agents can be composed into multi-agent systems where each handles a specialized task. This mirrors how human organizations work: specialists collaborating on complex problems, with coordination handled by orchestration rather than rigid pipelines.
Adaptive behavior. Agentic systems can respond to novel situations without code changes. When the world changes, with new document formats, new customer intents, or new error conditions, the agent adapts. A rule-based system requires a developer to update the rules.
Faster development for certain problem classes. For tasks involving natural language understanding, complex reasoning, or high variability, agentic approaches can reduce development time considerably. What might take weeks of rule engineering can sometimes be prototyped in hours.
Agentic programming introduces real costs and risks that deserve honest treatment. Adopting these patterns without understanding the trade-offs is how teams end up with systems that perform well in demos and struggle in production.
Non-determinism. LLMs don't guarantee the same output for the same input. For many applications, this is acceptable. For others, such as financial calculations, legal document generation, and safety-critical systems, it's a fundamental problem. Know which category your use case falls into before you reach for an agent.
Latency and cost. An LLM call is significantly slower and more expensive than a function call. At scale, this matters. A classification task that runs millions of times per day may be economically viable as a rule-based system and prohibitively expensive as an LLM call. Cost modelling is not optional.
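The back-of-envelope arithmetic is worth doing explicitly. Every figure below is an illustrative assumption, not vendor pricing:

```python
# Back-of-envelope cost comparison for a classification task at volume.
# All figures are illustrative assumptions, not real vendor pricing.

CALLS_PER_DAY = 2_000_000

llm_cost_per_call = 0.002       # assumed dollars per LLM call
rule_cost_per_call = 0.000_001  # assumed compute cost per rule-based check

llm_daily = CALLS_PER_DAY * llm_cost_per_call    # dollars per day
rule_daily = CALLS_PER_DAY * rule_cost_per_call

print(f"LLM:   ${llm_daily:,.0f}/day, ~${llm_daily * 365:,.0f}/year")
print(f"Rules: ${rule_daily:,.2f}/day")
```

Even with generous assumptions, a three-orders-of-magnitude gap per call compounds into a seven-figure annual difference at this volume, which is why the hybrid designs discussed later route the easy cases deterministically.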
Debugging and observability. When a function returns the wrong value, you can trace the execution path and find the bug. When an agent produces the wrong output, the reasoning is often opaque. Debugging agentic systems requires new tools and a different mental model. "Why did it do that?" becomes a harder question to answer.
Reliability and failure modes. Agents can fail in ways that traditional software cannot. They can hallucinate. They can produce confident, plausible, but incorrect outputs. They can get stuck in loops. They can misinterpret instructions in ways that are difficult to anticipate. Robust agentic systems require guardrails, validation layers, and human oversight mechanisms.
Security surface area. Agents take actions. They can call APIs, write files, send messages, and even run shell scripts. This introduces new attack vectors. Prompt injection, where malicious content in the environment manipulates agent behavior, is a real and underappreciated risk. Security thinking needs to extend to the agent's decision-making, not just its inputs and outputs.
Engineering culture shift. Perhaps the most underestimated cost is organizational. Developers trained in deterministic systems need to build new intuitions for probabilistic ones. Testing strategies change. Code review changes. The skills that make someone effective in traditional engineering don't automatically transfer.
Given the benefits and the risks, the practical question is: when does it make sense to reach for agentic patterns?
Good candidates:
Tasks involving natural language understanding or generation
Decisions with high variability and ambiguous inputs
Workflows where flexibility matters more than strict determinism
Problems where the cost of writing and maintaining rules exceeds the cost of LLM calls
Situations where adaptive error handling would meaningfully improve reliability
Areas that can benefit from non-determinism
Poor candidates:
Calculations requiring exact, reproducible results
High-frequency, low-latency operations where LLM call overhead is prohibitive
Safety-critical systems where non-determinism is unacceptable
Simple, well-defined tasks where a function is cleaner and faster
Contexts where explainability and auditability are regulatory requirements
The honest answer is that most production systems will be hybrid: deterministic logic where precision matters, agentic patterns where flexibility matters. The skill is knowing which is which.
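A minimal sketch of that hybrid shape, with `agent_route` as a stub standing in for an LLM-backed router:

```python
# Hybrid routing sketch: deterministic rules handle unambiguous tickets
# cheaply; only ambiguous ones fall through to an agent.

KEYWORD_TEAMS = {"billing": "billing_team", "technical": "tech_support"}

def agent_route(body):
    """Placeholder for an LLM call; always defers to general support here."""
    return ["general_support"]

def route(body):
    matches = [team for kw, team in KEYWORD_TEAMS.items() if kw in body.lower()]
    if len(matches) == 1:      # exactly one clear signal: stay deterministic
        return matches
    return agent_route(body)   # zero or multiple signals: delegate judgment
```

The deterministic fast path keeps the common cases cheap, predictable, and testable, while the agent absorbs exactly the ambiguity that rules handle badly.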
The barrier to entry for agentic programming is lower than it might appear. Here's a practical path forward.
Start with a contained problem. Don't rearchitect your entire system. Find one decision point in your codebase where you're maintaining a long list of rules, handling a high volume of edge cases, or frequently updating logic to keep pace with changing inputs. That's your pilot.
Choose a framework. Several mature frameworks make it easier to build agentic systems without starting from scratch:
LangChain — broad ecosystem, good for getting started quickly
LlamaIndex — strong for document and data-heavy workflows
CrewAI — designed for multi-agent collaboration
AutoGen (Microsoft) — flexible multi-agent framework with solid research backing
Anthropic's Claude with MCP — a good entry point if you want to explore the MCP abstraction layer directly
Invest in observability from day one. Before you deploy anything, set up logging for every LLM call: the prompt, the response, the latency, the cost. You cannot debug what you cannot see. Tools like LangSmith, Helicone, and Weights & Biases offer tracing and monitoring for LLM-based systems.
Build evaluation into your workflow. Define what "correct" looks like for your use case and build a test set before you build the agent. Agentic systems are difficult to evaluate after the fact. Knowing your success criteria upfront shapes every design decision that follows.
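A test set plus a scoring loop can be this small. Here `classify` is a stand-in for the system under evaluation, and the labeled cases are invented for illustration:

```python
# Minimal evaluation harness: "correct" is defined by a labeled test set
# built before the agent, and any candidate can be scored against it.

TEST_SET = [
    ("I was charged twice", {"billing_team"}),
    ("App crashes on login", {"tech_support"}),
    ("Charged twice and can't log in", {"billing_team", "tech_support"}),
]

def classify(text):
    """Stub for the system under evaluation."""
    teams = set()
    if "charged" in text.lower():
        teams.add("billing_team")
    if "log in" in text.lower() or "crash" in text.lower():
        teams.add("tech_support")
    return teams or {"general_support"}

def evaluate(fn, cases):
    """Fraction of cases where the predicted teams exactly match expected."""
    hits = sum(fn(text) == expected for text, expected in cases)
    return hits / len(cases)

print(f"accuracy: {evaluate(classify, TEST_SET):.2f}")
```

Because `evaluate` takes the classifier as an argument, the same harness scores a rule-based baseline, a prompt variant, or a different model, which is what makes prompt iteration measurable rather than anecdotal.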
Add guardrails. Validate agent outputs before acting on them. For high-stakes actions, require human confirmation. Implement retry logic with fallbacks. Treat the agent's output as a proposal, not a command, until you've built confidence in its reliability.
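The proposal-not-command idea can be sketched as a validation layer around the ticket router from earlier. `ask_agent` is a stub for a real LLM call; the retry count and fallback team are illustrative choices:

```python
# Guardrail sketch: treat the agent's output as a proposal, validate it
# against an allowed set, retry on bad output, and fall back safely.
import json

ALLOWED_TEAMS = {"billing_team", "tech_support", "account_team", "general_support"}

def validate_routing(raw):
    """Accept only a JSON list of known team names; None means rejected."""
    try:
        teams = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if (isinstance(teams, list) and teams
            and all(isinstance(t, str) for t in teams)
            and set(teams) <= ALLOWED_TEAMS):
        return teams
    return None

def route_with_guardrails(ask_agent, ticket, retries=2):
    for _ in range(retries + 1):
        teams = validate_routing(ask_agent(ticket))
        if teams is not None:
            return teams
    return ["general_support"]  # safe fallback; flag for human review
```

Rejecting hallucinated team names at the boundary means downstream code only ever sees values it was written to handle, even when the model misbehaves.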
Read the documentation. Anthropic's guides on building effective agents, OpenAI's documentation on function calling and assistants, and the MCP specification are all worth reading carefully. The field is moving quickly, but the foundational concepts are stable enough to invest time in.
The shift to agentic programming isn't only a technical change — it's a cognitive one.
Traditional software development is about specification: you define exactly what the system should do in every case. Agentic development is about intention: you define what you want to achieve and give the system the tools and context to figure out how.
This requires a different kind of thinking. Instead of asking "what logic do I need to write," you ask "what goal do I need to express, and what context does the agent need to achieve it?" Instead of debugging execution paths, you evaluate outputs and refine prompts. Instead of writing tests that check exact outputs, you write evaluations that assess quality and correctness across a range of cases.
For many developers, this is uncomfortable at first. The loss of determinism can feel like a loss of control, and in some ways, it is. But it's also a trade: you give up some control over the how in exchange for meaningfully more flexibility in the what.
The developers who navigate this transition well will be those who can hold both models in mind. When to write a function and when to call an agent, when to trust the LLM and when to validate its output, when the flexibility is worth the cost and when it isn't.
The function isn't going away. Deterministic logic isn't obsolete. But the foundations of programming are expanding to include new primitives. Agent calls, LLM-driven control flow, MCP-based integrations handle a class of problems traditional code was never well-suited for.
This is a genuine paradigm shift. The question for developers isn't whether to engage with agentic programming, but when and how. The teams that work this out early, that develop the intuitions, the tooling, and the engineering culture to work effectively with agentic systems, will be better positioned as these patterns become standard.
The best place to start is a small experiment. Find one brittle rule-based system in your codebase, one decision point that's been a maintenance burden, and try replacing it with an agent. Measure the results honestly. Learn from what breaks.
The foundations are shifting. The developers who help lay the new ones will shape what software looks like for the next generation.
Ready to explore agentic programming? Start with the official documentation from Anthropic or OpenAI, or dive into frameworks like LangChain or CrewAI, and subscribe to be notified when we publish new content @ PsiSpark. If you're already building with agents, share what you're learning. The field advances fastest when practitioners share what actually works in production.