When words are not enough
So far, our agent can answer questions, stream responses, remember conversations, reduce chat history, and receive dynamic context.
But it still has one major limitation: it can only talk.
An LLM does not know your current application state by default. It cannot query your database, calculate values from your domain model, or place an order unless your application exposes that capability.
This is where tools come in.
A tool is a controlled C# function that the model can request during a run. The model does not execute arbitrary code. It can only call the functions you explicitly provide.
The flow looks like this:
- The user asks something that requires application logic.
- The model requests a tool call instead of producing a final answer.
- The framework invokes the matching C# method.
- The result is passed back to the model.
- The model uses that result to answer the user.
A small tool
Let’s stay with the barista agent from the previous article.
One useful tool is a brew recipe calculator. This does not need a database or external service. It is deterministic domain logic.
using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

public sealed record BrewRecipe(
    double CoffeeGrams,
    double WaterGrams,
    string Ratio);

[Description("Calculates water amount for a pour-over coffee recipe.")]
public static BrewRecipe CalculatePourOverRecipe(
    [Description("Coffee dose in grams.")]
    double coffeeGrams,
    [Description("Water per gram of coffee. Use 16 for a 1:16 ratio.")]
    double waterPerGram)
{
    return new BrewRecipe(
        CoffeeGrams: coffeeGrams,
        WaterGrams: Math.Round(coffeeGrams * waterPerGram, 1),
        Ratio: $"1:{waterPerGram}");
}
AIAgent agent = chatClient.AsAIAgent(
    instructions: "You are a barista assistant.",
    tools: [AIFunctionFactory.Create(CalculatePourOverRecipe)]);
The Description attributes are important. They become part of the function schema that the model sees when deciding whether and how to call the tool.
Keep in mind that they also cost tokens. Keep them short and concrete. The goal is not to document your whole domain but to help the model make the correct choice.
Now, a prompt like this can trigger the tool:
var response = await agent.RunAsync(
    "I want to brew 18 grams of coffee at 1:16. How much water should I use?");
The model can call CalculatePourOverRecipe, receive the BrewRecipe, and then explain the result in normal language.
Registering multiple tools
One tool is easy to register manually. More tools get noisy quickly:
- Calculate a pour-over recipe
- Calculate espresso yield
- Convert a ratio into grams
- Suggest a grind adjustment
Reflection can reduce boilerplate, but do not register every public method on a class. That would make every public method AI-callable. Use an explicit marker attribute or an allowlist.
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public sealed class BaristaToolAttribute : Attribute
{
}

var brewTools = new BrewTools();

IList<AITool> tools = typeof(BrewTools)
    .GetMethods(BindingFlags.Instance | BindingFlags.Public)
    .Where(method => method.GetCustomAttribute<BaristaToolAttribute>() is not null)
    .Select(method => (AITool)AIFunctionFactory.Create(method, brewTools))
    .ToList();
This keeps registration convenient without exposing the whole class as an execution surface.
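For illustration, the BrewTools class referenced in the registration snippet could look like this. The methods and their bodies are hypothetical examples, not part of the framework:

```csharp
using System;
using System.ComponentModel;

// Hypothetical tool class; only methods marked with [BaristaTool]
// are picked up by the reflection-based registration above.
public sealed class BrewTools
{
    [BaristaTool]
    [Description("Calculates espresso yield in grams from dose and brew ratio.")]
    public double CalculateEspressoYield(
        [Description("Coffee dose in grams.")]
        double doseGrams,
        [Description("Brew ratio, for example 2 for a 1:2 shot.")]
        double ratio)
        => Math.Round(doseGrams * ratio, 1);

    // Not marked with [BaristaTool], so it is never exposed to the model.
    public void ResetInternalState()
    {
    }
}
```

The unmarked method stays a normal C# method: reachable from your own code, invisible to the agent.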
Tools with dependency injection
Calculation tools are useful, but real applications usually need services.
For example, the barista agent might need to check which beans are currently available. That data belongs within your application boundary, perhaps in a repository, an API client, or a database context.
To bridge this gap, add an IServiceProvider parameter to the tool method. The framework resolves it from your application's services at runtime and hides it from the model's tool schema.
[Description("Finds available coffee beans by roast level and flavor note.")]
public static Task<IReadOnlyList<CoffeeBean>> FindBeansAsync(
    [Description("Roast level, for example light, medium, or dark.")]
    string roast,
    [Description("Flavor note, for example chocolate, citrus, or nutty.")]
    string flavorNote,
    IServiceProvider services,
    CancellationToken cancellationToken)
{
    var inventory = services.GetRequiredService<ICoffeeInventory>();
    return inventory.FindBeansAsync(roast, flavorNote, cancellationToken);
}
Then pass the service provider when creating the agent:
AIAgent agent = chatClient.AsAIAgent(
    instructions: "You help users choose coffee beans based on taste and brew method.",
    tools: [AIFunctionFactory.Create(FindBeansAsync)],
    services: app.Services);
The model only supplies roast and flavorNote. The inventory service still comes from your application.
This distinction is important because model-supplied arguments are untrusted input. Services resolved from DI are trusted application dependencies.
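FindBeansAsync assumes domain types along these lines. The names and shapes are illustrative; define whatever fits your application:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical domain types backing the FindBeansAsync tool.
public sealed record CoffeeBean(
    string Name,
    string Roast,
    string FlavorNote);

public interface ICoffeeInventory
{
    Task<IReadOnlyList<CoffeeBean>> FindBeansAsync(
        string roast,
        string flavorNote,
        CancellationToken cancellationToken);
}
```

The implementation is registered in DI like any other service, for example with `builder.Services.AddSingleton<ICoffeeInventory, YourInventoryImplementation>()` (the implementation class name here is a placeholder).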
Side effects need approval
Reading inventory is one thing. Placing an order is different.
Tools that spend money, delete data, send messages, or affect users should not run silently. For those cases, wrap the function in an approval-required tool.
[Description("Orders coffee beans from the supplier. Use only after explicit confirmation.")]
public static Task<string> OrderBeansAsync(
    string productCode,
    int bags,
    IServiceProvider services,
    CancellationToken cancellationToken)
{
    var orders = services.GetRequiredService<ICoffeeOrderService>();
    return orders.PlaceOrderAsync(productCode, bags, cancellationToken);
}

AIFunction orderBeans = AIFunctionFactory.Create(OrderBeansAsync);
AIFunction approvalRequiredOrderBeans = new ApprovalRequiredAIFunction(orderBeans);
With approval enabled, the agent can return a FunctionApprovalRequestContent instead of running the tool immediately. Your application then shows the function name and arguments to the user and sends the approval or rejection back into the same session.
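A sketch of that round trip, assuming the run response exposes pending approvals via `UserInputRequests` and that `FunctionApprovalRequestContent.CreateResponse` produces the reply content (check the exact API surface of your framework version; `AskUserForApproval` is a hypothetical UI helper):

```csharp
// Run the agent on a thread so the approval reply lands in the same session.
AgentThread thread = agent.GetNewThread();
AgentRunResponse response = await agent.RunAsync(
    "Order 2 bags of the Guji beans.", thread);

foreach (FunctionApprovalRequestContent request in
    response.UserInputRequests.OfType<FunctionApprovalRequestContent>())
{
    // Show the proposed call to the user before anything executes.
    Console.WriteLine($"Approve call to {request.FunctionCall.Name}?");
    bool approved = AskUserForApproval(request); // your UI, hypothetical helper

    // Send the decision back into the same thread to continue the run.
    response = await agent.RunAsync(
        new ChatMessage(ChatRole.User, [request.CreateResponse(approved)]),
        thread);
}
```

Only after an approval response arrives does the framework invoke OrderBeansAsync; a rejection is surfaced to the model instead, and it answers without placing the order.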
The exact support depends on the provider and client type. Function tools are broadly supported, but approval is not universal across every provider.
Middleware and monitoring
Approval is for high-risk actions. Middleware is for cross-cutting concerns such as logging, validation, metrics, or blocking suspicious arguments.
Function calling middleware lets you inspect the function name and arguments before the method runs:
async ValueTask<object?> LogToolCallAsync(
    AIAgent agent,
    FunctionInvocationContext context,
    Func<FunctionInvocationContext, CancellationToken, ValueTask<object?>> next,
    CancellationToken cancellationToken)
{
    Console.WriteLine($"Tool call: {context.Function.Name}");
    return await next(context, cancellationToken);
}
Do not treat logging as authorization. Middleware is useful for observing and validating calls, but normal application permissions still need to exist behind the tool. Additionally, for standard logging and metrics, consider using the framework’s native .UseOpenTelemetry() extension rather than writing custom logging middleware from scratch.
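Wiring this up might look like the following sketch, assuming the agent builder exposes a `Use` overload for the function-calling delegate above and that `UseOpenTelemetry` accepts a source name:

```csharp
// Wrap the base agent with the custom function-calling middleware.
AIAgent monitoredAgent = agent
    .AsBuilder()
    .Use(LogToolCallAsync)
    .Build();

// Alternative for standard logging and metrics: built-in telemetry.
AIAgent instrumentedAgent = agent
    .AsBuilder()
    .UseOpenTelemetry(sourceName: "BaristaAgent")
    .Build();
```

Both wrappers leave the underlying agent unchanged, so you can compose middleware and telemetry in the same builder chain if needed.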
When to use tools
Use tools when:
- The agent needs current application state
- The answer depends on deterministic business logic
- The result must come from your database, API, or domain model
- The action is narrow, describable, and easy to validate
Do not use tools when:
- A normal model answer is enough
- The function would become a broad “do anything” escape hatch
- The action cannot be validated before execution
- The tool would bypass existing authorization or business rules
- The side effect is too risky to run without approval
Tools should expose controlled capabilities, not bypass application design.
Conclusion
Tools turn an agent from a text generator into part of an application workflow.
For simple logic, AIFunctionFactory.Create is enough. For application behavior, pass services into the agent and keep dependencies behind your existing DI boundary. For state-changing actions, add approval and monitoring before letting the agent execute anything important.
Our agent can now use tools and request approval before executing them. But sometimes we do not want a conversational answer at all. We want a reliable C# object that can be validated, stored, or passed to the next workflow step.
In the next article, we will look at Structured Output.