# GitHub Copilot
I spend most of my day inside VS Code, and GitHub Copilot has become the tool I reach for constantly — writing IaC, debugging KQL queries, scaffolding pipelines, reviewing pull requests. But the default experience, where Copilot just picks up whatever context it can find, leaves a lot on the table.
This section covers the techniques I've developed to get significantly better results from Copilot by steering the LLM deliberately. Not prompt engineering in the "write a better sentence" sense — I mean the structural tools GitHub has shipped that let you shape the system prompt, inject persistent context, and build reusable agent workflows that actually understand your codebase.
## What's covered here
- Prompting Guide — The three-layer architecture I use to customize Copilot: repository-level instructions, agent skills, and custom agents. How each one works, when to use which, and how they compose together.
## Why this isn't just "tips and tricks"
The default Copilot experience uses a built-in system prompt that's generic by design. It doesn't know your team uses Terraform with a specific module structure, or that your Python projects always use ruff and pytest, or that your pipelines run on Airflow with particular DAG conventions. Every time I start a Copilot session without custom instructions, I'm burning tokens re-explaining context the tool should already have.
The features I document here — custom instructions, skills, and custom agents — are the mechanisms GitHub provides to solve that problem. I think of them as layers of specificity, and getting the layering right is what makes Copilot feel like a team member instead of a generic autocomplete engine.
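To make the repository-level layer concrete, here is a minimal sketch of a `.github/copilot-instructions.md` file — the file GitHub Copilot automatically includes as context for chat requests in that repository. The specific conventions and paths below are hypothetical examples based on the tooling mentioned above, not a prescription:

```markdown
# Copilot instructions for this repository

<!-- The conventions below are illustrative; adapt them to your own repo. -->

- Infrastructure code is Terraform. Follow the existing module structure
  and prefer referencing existing modules over writing new resources inline.
- Python projects use ruff for linting and pytest for testing. Do not
  suggest flake8, pylint, or unittest-style tests.
- Data pipelines run on Airflow. New DAGs should follow the naming and
  scheduling conventions used by the existing DAG files.
```

Because this file lives in the repository, every teammate (and every Copilot session) picks up the same baseline context without anyone re-explaining it — which is exactly the token-burning problem described above.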