Indirect Prompt Injection Is a Trust Boundary Problem

Indirect prompt injection is a trust-boundary failure; treat retrieved content as untrusted data, isolate it from instructions, and validate actions before execution.

March 23, 2026 · 6 min · 1171 words

Quick Tip 1 - Stop RAG Hallucinations with the Short-Circuit Pattern

How to reduce RAG hallucinations by short-circuiting generation when retrieval returns weak evidence, with a simple C# threshold check.

March 22, 2026 · 2 min · 316 words

RAG Is a Data Problem Before It’s a Prompt Problem

Why stale documents, weak chunking, and thin metadata usually break RAG before prompt tuning does.

March 9, 2026 · 6 min · 1272 words

Debugging LLM Timeouts in .NET

A repeatable local setup for timeout triage in .NET LLM workloads using Aspire, OpenTelemetry, and Ollama.

February 22, 2026 · 5 min · 1031 words

Local LLMs in .NET

A minimal .NET starter for running local LLMs with Ollama + OllamaSharp behind IChatClient: no API keys required, with streaming chat, system prompts, and capped conversation history.

February 8, 2026 · 3 min · 630 words

Eval-First: Why “It Worked Once” Is Not a Sign of Quality

Why eval-first matters for LLM apps and how to use datasets, scoring rubrics, and CI quality gates to catch regressions early.

February 7, 2026 · 3 min · 432 words