Trying out GitHub Copilot Agent for a Functional Task?


I've been using GitHub for a while (https://github.com/andywingate) but had never touched its agentic features. Working on the Nubimancy project gave me a good excuse. This is a practical breakdown of what I used, what it saved me, and where human judgement still mattered.

Full context is in the Nubimancy project log - this post is about the tools and the approach.

The Task

Validate ~75 CSV schema files against Business Central master data import requirements. Check field names, required fields, data types, and format compatibility. Important, methodical, and time-consuming if done manually. There are also a number of extensions in the mix on top of the standard BC tables. I gave it to a GitHub Copilot agent. Here's exactly what I did.
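To make the scale of the manual version concrete: the core check is comparing each CSV's header row against a required-field list for the BC entity it maps to, times ~75 files. A minimal sketch of that loop - the field lists and the entity-mapping function here are illustrative assumptions, not actual BC configuration package requirements:

```python
import csv
from pathlib import Path

# Illustrative required-field lists only; real BC requirements
# differ per table, per configuration package, and per extension.
REQUIRED_FIELDS = {
    "customer": ["No.", "Name", "Customer Posting Group", "Gen. Bus. Posting Group"],
    "item": ["No.", "Description", "Base Unit of Measure", "Inventory Posting Group"],
}

def missing_fields(header_row, entity):
    """Return the required fields absent from one CSV header row."""
    required = REQUIRED_FIELDS.get(entity, [])
    present = {h.strip() for h in header_row}
    return [f for f in required if f not in present]

def validate_folder(folder, entity_for_file):
    """Map each CSV file in a folder to its missing required fields.

    entity_for_file is a caller-supplied function that guesses the
    BC entity from the filename (e.g. "customers.csv" -> "customer").
    """
    report = {}
    for path in sorted(Path(folder).glob("*.csv")):
        with path.open(newline="", encoding="utf-8-sig") as fh:
            header = next(csv.reader(fh), [])
        report[path.name] = missing_fields(header, entity_for_file(path.name))
    return report
```

Trivial per file, tedious across dozens - which is exactly the shape of task the agent handled well.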

Cloud agent or local agent?

All of the code and notes for the Nubimancy project are in the GitHub organisation, much of it in the Knowledge repo, where this issue was raised.

The local option would be to clone the repo and carry out the work using the VS Code AI chat. For this task I used the web-based GitHub Copilot agent, after chatting with Claude:

I can see you're looking at a GitHub issue in the Nubimancy/Knowledge repo, with the "Assign Copilot to issue" dialogue open. You're being asked whether to work on the web interface or clone locally in VS Code. Let me walk you through the full picture.

The Two Approaches

Option 1: Web Interface (GitHub Copilot in the browser)

Pros:

Zero setup — works immediately in the browser
Good for well-scoped, single-file or read-heavy tasks
Copilot can browse the repo, read files, and produce a written analysis or report
Ideal for this specific issue since it's fundamentally a validation and documentation task, not code generation

Cons:

Limited ability to run code or execute scripts
Can't actually open and parse CSV files programmatically — it will reason about them but not execute
Less control over iteration; harder to review changes incrementally
No local tooling (linters, BC-specific validators, etc.)

Option 2: Clone and Work Locally in VS Code

Pros:

Full Copilot agent mode with access to your file system
Can actually read and process all 22 CSV files programmatically
You can run scripts, validate field mappings, and iterate
Better for producing structured outputs (e.g. a comparison matrix or amended CSVs)
More transparent — you see every change before it's committed

Cons:

Requires cloning the repo and having VS Code + GitHub Copilot extension set up
Slightly more overhead to get started
You'll need to manage the branch yourself locally

My Recommendation for This Issue

Use the web interface first, because Issue #1 is a review and validation task — Copilot needs to read 22 CSVs and assess them against BC master data requirements. It doesn't need to execute code; it needs BC domain knowledge and structured analysis. The web agent is well-suited to that.

What to Say to the Agent

Here's a prompt I'd suggest you paste into that "Optional prompt" box:

Please review all CSV files in the worldbuilding/data-planning/ folder. For each file, validate:

Field names against standard Business Central master data field references (e.g. Customer, Vendor, Item, G/L Account tables)

Whether required fields for BC configuration packages are present

Whether data types and formats are appropriate for BC import

Any fields that are missing but critical for BC functionality

Produce a structured summary table listing each CSV file, its apparent BC entity mapping, any missing required fields, any format concerns, and a recommended action (e.g. Add field / Rename field / No action required).

Work on a new branch. Do not modify the CSV files themselves — output your findings as a new markdown document at worldbuilding/data-planning/validation-report.md.

Here is how that looks:

On the issues page, the option to assign the issue to an agent shows on the right, where you assign issues to users.

Upon assigning the issue to a Copilot agent you have the option to provide an additional optional prompt, which is handy if the issue comment itself is not quite sufficient to explain how to resolve the issue.

GitHub Copilot creates a branch automatically, works through the task, and raises a Draft Pull Request when done. No branch setup needed. No manual file opening. The agent posts progress updates as PR comments as it works.

What the agent did that I didn't ask for is the part worth noting. Partway through, Jeremy Vyska commented on the PR asking whether the agent had cross-referenced the BC extension repo. It did - and found that an entire extension (18 AL tables) had zero CSV schema files. It created all 18 missing files, expanded the report scope from 73 to 91 files, and flagged a table ID conflict in the extension code that would cause a BC compile error.

Time saved: reviewing 91 files manually against BC table definitions and writing a structured report would have taken a good few hours, possibly the best part of a day. The agent did it in a couple of hours while I got on with other things.

Review the PR Properly

The agent created a draft PR (pull request) - that is, all the proposed changes to files in the repo, on a separate branch. This means no changes are made to the main branch without a human in the loop to review, approve, and then trigger the merge of the branch.

I worked through a simple checklist: was the report file there? Were the new CSV files correctly derived from the AL definitions? Had any existing files been modified? Did the critical findings make sense from a BC perspective?
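Part of that checklist can be scripted rather than eyeballed. "Had any existing files been modified?" falls straight out of a name-status diff between the base branch and the agent's branch. A sketch of parsing that output - branch names and paths here are assumptions, and the parser ignores rename edge cases:

```python
import subprocess

def changed_files(base="main", branch="HEAD"):
    """Return (status, path) pairs for everything the branch touches,
    using `git diff --name-status base...branch`."""
    out = subprocess.run(
        ["git", "diff", "--name-status", f"{base}...{branch}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_name_status(out)

def parse_name_status(diff_output):
    """Parse `git diff --name-status` text into (status, path) pairs.
    Status is the first letter: A=added, M=modified, D=deleted."""
    pairs = []
    for line in diff_output.splitlines():
        if not line.strip():
            continue
        status, path = line.split("\t", 1)
        pairs.append((status[0], path))
    return pairs

def modified_existing(pairs):
    """The review red flag for this task: files modified (M) rather
    than added (A) - the prompt said not to touch existing CSVs."""
    return [p for s, p in pairs if s == "M"]
```

If `modified_existing` comes back non-empty on a "do not modify the CSVs" task, that is the first thing to raise in the PR conversation.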

The agent's own PR write-up made this significantly faster than I expected. It summarised exactly what it had done, what was in scope, what wasn't, and what the recommended next steps were. As a reviewer, having that structure meant I was verifying claims rather than interpreting raw diffs.

In the GitHub pull request you can review all the files and chat with the agent or other users.

What you still need to bring: BC functional knowledge. The agent can identify a missing field - knowing why it matters for a configuration package, or whether a blocker is critical vs. fixable later, is still the consultant's job.

Web Agent vs. Local VS Code - Which to Use

Use the GitHub web agent when the task is read, analyse, and report. Define the output upfront, let it run.

Use local VS Code Copilot agent mode when you need to create and iterate on multiple interconnected files, run things to validate outputs, or maintain referential consistency across a large dataset.

Issue #1 (validation and reporting) was a web agent job. Issue #2 (generating sample data for 5 hero companies with consistent cross-references) will be local VS Code. Using the wrong tool for the task gives worse results.
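The "consistent cross-references" requirement in Issue #2 is a good example of why local execution matters: it can be verified by script, not by reading. Every foreign-key-style value in a child CSV should exist in its parent CSV. A minimal sketch - the column and value names are hypothetical, not from the actual repo:

```python
import csv
from pathlib import Path

def column_values(csv_path, column):
    """All non-empty values in one column of a CSV file."""
    with Path(csv_path).open(newline="", encoding="utf-8-sig") as fh:
        return [row[column].strip()
                for row in csv.DictReader(fh)
                if row.get(column, "").strip()]

def dangling_references(child_values, parent_values):
    """Child values with no matching parent value - i.e. broken
    cross-references in the generated sample data."""
    return sorted(set(child_values) - set(parent_values))
```

A local agent can run checks like this after each generation pass and fix its own inconsistencies; the web agent can only reason about the files it reads.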

The Practical Takeaways

Prompt quality matters more than tool choice. A vague prompt gives a vague result.

Draft PRs keep you in control. Review them properly - your domain knowledge is what makes the output trustworthy.

Human nudges change outcomes. Jeremy's comment mid-task uncovered an 18-table gap. Staying engaged with the PR as it develops is worth it.

This is not "AI does everything." I spent time on prompt design, PR review, and applying BC judgement. The agent did the methodical legwork. That is the right division of labour.

Try It Yourself

You need a GitHub repo and a GitHub Copilot licence. Create an issue with clear acceptance criteria, design your prompt carefully, assign Copilot from the Assignees section, and engage with the PR when it appears.

The Nubimancy Knowledge repo is public - PR #5 shows exactly what the output looks like if you want to see it before trying on your own work.