At South Coast Summit 2025, José Miguel Azevedo and I had the opportunity to share something we've been working on since our last page scripting talk at Dynamics Minds: using AI to transform how we work with page scripting in Dynamics 365 ERP.
The Core Idea
Page scripting and task recording have been around for a while in Business Central and Finance & Operations respectively. They're powerful tools for capturing business processes, automating testing, and documenting workflows. But creating and maintaining these scripts has always been the tedious part - and the cost-prohibitive part if you really wanted to scale the volume of test scripts.
The AI adoption approach - Stand, Walk, Run, Fly
Stand Phase: Getting the Foundation Right
The first step is all about creating a solid baseline. In page scripting, this meant creating a base script for the "happy path" of our testing process. We used the example of creating and posting a purchase order. The output file for a BC page script is a YAML file.
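As an illustration of why this format suits AI-assisted work, a recorded happy-path script is a readable sequence of steps along these lines (the field names here are hypothetical, not the exact schema the BC recorder emits):

```yaml
# Illustrative sketch only - field names are hypothetical,
# not the exact schema produced by the BC page scripting recorder.
name: Create and post a purchase order
steps:
  - description: Open the Purchase Orders list page
  - description: Create a new purchase order
  - description: Set Vendor No. to "10000"
  - description: Add a line for item "1896-S", quantity 5
  - description: Post the purchase order
```

Because the output is plain YAML, it diffs cleanly in source control and is easy for a model to read, describe, and later re-template.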
With this clean baseline script in hand, we then applied AI, asking it to analyse the structure, understand the pattern, and document what the script does step by step. This gives the AI context for everything that follows.
Walk Phase: Building the Prompt System
This is where things get interesting. Moving to VS Code with GitHub Copilot Pro gave us access to multiple models and the ability to work with multiple files simultaneously.
The setup includes:
- Main prompt instructions that teach the AI how to work with your scripting format
- Business process definitions
- Variable lists for all the parameters
- Source control in GitHub to track changes
The benefit of working in VS Code rather than just a web interface? You can easily maintain a local folder structure with multiple levels of instructions and AI guidelines, and you can use Model Context Protocol (MCP) servers to connect to Azure DevOps or Jira if you need to track tests against project work items or test cases.
Run Phase: Creating Multiple Variants
Once your AI understands your base scripts, you can start generating variants at scale. Want to test the same process across twenty different items? Different vendors? Edge cases with specific field combinations?
This is where the magic happens. Instead of manually creating script after script, you define your variants in structured files and let the AI generate the complete test suite. We demonstrated creating batch scripts for multiple items, validating them, and running them in the background.
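As an example of what "defining your variants in structured files" can look like, a small variant file might take a shape like this (the file name and fields are an assumption, not the format we showed on stage):

```yaml
# variants.yml - hypothetical shape for a variant definition file
base_script: purchase_order_happy_path.yml
variants:
  - vendor_no: "10000"
    item_no: "1896-S"
    quantity: 5
  - vendor_no: "20000"
    item_no: "1900-S"
    quantity: 1
```

The AI then expands each entry against the baseline, producing one complete page script per variant.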
By validating each step, the AI was able to create its own detailed prompt instructions.
Fly Phase: High-Volume Generation with PowerShell
AI tools are excellent at understanding patterns and generating structured content, but they're not optimised for very high-volume data operations. When you need to create hundreds or thousands of test script variants, asking the AI to directly generate each one becomes inefficient.
The solution? Get the AI to write a PowerShell or Python script that does the heavy lifting.
Instead of asking the AI to create 500 page scripts directly, we asked it to generate a PowerShell script that could:
- Read our variant parameters from structured files
- Apply the transformation logic to our base script template
- Output the complete set of page scripts in seconds
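The session used PowerShell, but the same pattern is easy to sketch in Python, which we also mentioned as an option. Everything below - the template fields, file names, and variant keys - is illustrative rather than the actual script we generated:

```python
from string import Template

# Hypothetical baseline page script reduced to a template with placeholders.
BASE_TEMPLATE = """\
name: Post purchase order for $item_no
steps:
  - description: Open the Purchase Orders list page
  - description: Create order for vendor $vendor_no
  - description: Add line for item $item_no, quantity $quantity
  - description: Post the order
"""

def generate_variants(base_template, variants):
    """Render one page script per variant by substituting its parameters
    into the base template. Returns a mapping of filename -> YAML text."""
    return {
        f"po_{v['item_no']}.yml": Template(base_template).substitute(v)
        for v in variants
    }

# Variant parameters would normally be read from a structured file;
# inlined here to keep the sketch self-contained.
variants = [
    {"vendor_no": "10000", "item_no": "1896-S", "quantity": 5},
    {"vendor_no": "20000", "item_no": "1900-S", "quantity": 1},
]

scripts = generate_variants(BASE_TEMPLATE, variants)
# In practice you would write each entry to disk with pathlib;
# here the rendered scripts simply sit in memory, keyed by filename.
```

The division of labour is the point: the AI writes `generate_variants` once after studying the baseline, and the script then stamps out hundreds of files in seconds with no token limits involved.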
This approach plays to each tool's strengths: AI excels at understanding the pattern and writing the generation logic, while PowerShell handles the high-volume execution efficiently. The result is the ability to scale from dozens to thousands of test variants without hitting token limits or waiting for the AI to process each one individually.
It's a perfect example of using AI not to do all the work, but to automate the automation itself.
Real-World Gotchas
Because this is a practical session, we made sure to cover the things that will trip you up:
- MFA and test accounts: If you're using SaaS environments, your test-runner account needs MFA disabled or you'll have a bad time
- User context matters: Make sure your user's default company is set correctly before batch runs
- Permissions can be sneaky: Test your base script with the actual test-runner account - things like "show more" actions can break scripts if permissions aren't quite right
- BC performance tip: Consider running tests against a local Docker container. It's significantly faster than hitting cloud environments repeatedly
- AI security for work data: If you're working with sensitive data, use Azure OpenAI models deployed in your own tenant. You can connect VS Code to these and maintain data governance
The Bigger Picture
What we demonstrated with page scripting is really a blueprint for using AI with any structured output format. The approach - understanding the schema, creating clean examples, building comprehensive prompts, and then scaling to variants - applies far beyond just test automation.
José and I approached this from slightly different angles (BC vs F&O), but the general approach works the same for both systems, which gave us confidence that this style of prompt engineering for structured outputs is transferable.
What's Next?
If you're thinking about applying this approach in your own work, start small. Pick one repetitive process you need to script, create one really good baseline, and experiment with getting AI to create a single variant. Once that works, you'll quickly see where the time savings compound.
For those who attended South Coast Summit, thanks for the great questions and discussion. For everyone else, the techniques we demonstrated are available to try right now - you just need a BC or F&O environment, access to an AI tool (ChatGPT, Claude, or GitHub Copilot), and a willingness to experiment.
The AI won't replace understanding your business processes or knowing how to create good tests. But it absolutely can multiply your ability to execute once you know what needs to be done.
Who was at the session?
Links
What do you think?
Connect or follow me on LinkedIn to get all my updates: Andrew Wingate | LinkedIn