Indirect Prompt Injection Is a Trust Boundary Problem
Indirect prompt injection is a trust-boundary failure; treat retrieved content as untrusted data, isolate it from instructions, and validate actions before execution.
Quick Tip 1 - Stop RAG Hallucinations with the Short-Circuit Pattern
How to reduce RAG hallucinations by short-circuiting generation when retrieval returns weak evidence, with a simple C# threshold check.
RAG Is a Data Problem Before It’s a Prompt Problem
Why stale documents, weak chunking, and thin metadata usually break RAG before prompt tuning does.
Debugging LLM Timeouts in .NET
A repeatable local setup for timeout triage in .NET LLM workloads using Aspire, OpenTelemetry, and Ollama.
Local LLMs in .NET
A minimal .NET starter for running local LLMs with Ollama and OllamaSharp behind IChatClient: no API keys, with streaming chat, system prompts, and capped conversation history.
Eval-First: Why “It Worked Once” Is Not a Sign of Quality
Why eval-first matters for LLM apps and how to use datasets, scoring rubrics, and CI quality gates to catch regressions early.
