From Vibe Coding to AI Native Development: Learn to build structured AI workflows with VS Code and GitHub Copilot using the PROSE Framework.
You've seen Copilot work magic on small projects. But when you tried it on your enterprise codebase, something felt off:
- Copilot generated code that ignored your team's patterns
- It used libraries you're not allowed to use (security, compliance)
- The output was technically correct but wrong for your context
- Every developer got different results for the same task
This is the "Vibe Coding Cliff" — ad-hoc prompting works until it doesn't. Enterprise brownfield codebases have constraints, legacy patterns, and tribal knowledge that Copilot can't learn from a single prompt.
By the end of this lab, you'll have a working multi-agent pipeline that generates documentation or tests following YOUR team's standards.
📚 Theory: This lab applies the PROSE Framework for AI Native Development.
| Exercise | What You Create | PROSE Element |
|---|---|---|
| 1 | `.instructions.md` — Scoped context with `applyTo` | Engineering (Context) |
| 2 | `.prompt.md` — Structured reusable tasks | Prompts |
| 3 | `.agent.md` pair — Analyzer hands off to Generator | Orchestration |
| Scaling | Skills demo — Agent discovers capabilities | Skills |
→ The result? Reliability — consistent, repeatable output across your team.
"Reliability isn't a technique you apply—it's the outcome of applying all other PROSE components systematically."
| Section | Focus | Duration | Format |
|---|---|---|---|
| Setup | Verify environment, choose track | 5 min | Individual |
| Exercise 1 | Create modular instructions | 15 min | Hands-on |
| Exercise 2 | Build reusable prompts | 15 min | Hands-on |
| Exercise 3 | Design agents with handoffs | 20 min | Hands-on |
| Scaling | Skills + Copilot CLI hands-on | 20 min | Demo + hands-on |
| Wrap-Up | Q&A + resources | 10 min | Discussion |
📖 Docs track and 🧪 Testing track run the same exercises with different content, then everyone converges for the Scaling section.
- VS Code 1.108+ with GitHub Copilot extension
- GitHub Copilot Chat working (test: open chat, type "hello")
- This repo cloned: `git clone https://github.com/DevExpGbb/ai-native-dev-lab.git`
Pick ONE track based on your interest — both teach the same concepts with different content:
| Track | Best For | You'll Build |
|---|---|---|
| 📖 Docs | Documenting legacy apps | Instructions → /generate-docs → Doc Analyzer + Writer |
| 🧪 Tests | Improving test coverage | Instructions → /generate-tests → Test Analyzer + Generator |
After completing exercises 1-3 in your track, everyone converges for the Scaling section.
Navigate to the brownfield sample: sample-projects/contoso-orders-python/
🏗️ This is a realistic FastAPI app with legacy auth, "DO NOT MODIFY" constraints, and intentional test gaps
Important: Complete exercises 1-3 in your chosen track, then everyone proceeds to Scaling Across the Enterprise.
Before creating anything, let's see why instructions matter:
- Open `sample-projects/contoso-orders-python/src/services/order_service.py`
- Ask Copilot (without any instructions): "Add a docstring to the `create_order` method"
- Observe what happens:
  - Does it use Google-style or NumPy-style docstrings?
  - Does it match the existing style in the file?
  - Does it include the `Raises` section for exceptions?
💡 This inconsistency is the problem. Every developer gets different results.
Your task: Create an instruction file that enforces YOUR documentation standards.
- Create the file: `.github/instructions/documentation-standards.instructions.md`
- Start with this skeleton (don't copy-paste the full solution!):

  ```markdown
  ---
  applyTo: "**/*.py"
  ---

  # Documentation Standards

  ## Docstring Format
  <!-- What style? Google, NumPy, reStructuredText? -->

  ## Required Sections
  <!-- What must every docstring include? Args? Returns? Raises? -->

  ## Examples
  <!-- Should docstrings include usage examples? When? -->
  ```

- Fill in YOUR standards based on what you observed in the codebase (a hedged sketch of one possible result follows this list):
  - Look at existing docstrings in the project
  - What patterns should be consistent?
  - What's missing that should always be there?
- Reopen `order_service.py`
- Ask the same question: "Add a docstring to the `create_order` method"
- Compare: Does Copilot now follow YOUR standards?
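For orientation, here is a minimal sketch of one possible filled-in version. It assumes Google-style docstrings, which may not match what you observed in the codebase; treat it as a shape to react to, not the reference solution:

```markdown
---
applyTo: "**/*.py"
---

# Documentation Standards

## Docstring Format
<!-- Assumption: Google style; swap for NumPy or reST if that's what the codebase uses -->
Use Google-style docstrings for all public modules, classes, and functions.

## Required Sections
Every public function docstring must include:
- A one-line summary in the imperative mood
- Args with a type for every parameter
- Returns describing the value and type
- Raises listing every exception the function can raise

## Examples
Include a short usage example for any function with non-obvious behavior.
```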
When you're done, compare your solution with our reference implementation:
→ golden-examples/documentation-track/.github/instructions/documentation-standards.instructions.md
Reflection questions:
- What did you include that we didn't?
- What did we include that you missed?
- Which version would work better for YOUR team?
Create a second instruction file for a different file type:
- Markdown style guide for `**/*.md` files
- API endpoint patterns for `**/api/**/*.py` files
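If you pick the Markdown option, a minimal sketch might look like the following; the specific rules are illustrative assumptions, so replace them with your team's conventions:

```markdown
---
applyTo: "**/*.md"
---

# Markdown Style Guide
<!-- The rules below are placeholders; substitute your team's actual conventions -->
- Use ATX headings (#, ##) with sentence-case titles
- Wrap file paths and code identifiers in backticks
- Start every document with a one-paragraph summary
```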
You've created great standards. But what if you want to generate docs for a whole file at once?
- Try asking Copilot: "Generate documentation for all functions in this file"
- Notice: You have to explain your requirements every time. That's tedious.
💡 Prompt files solve this — they're reusable commands you invoke with /.
Your task: Create a prompt file that generates documentation on demand.
- Create the file: `.github/prompts/generate-docs.prompt.md`
- Start with the template: `starter-templates/prompt-template.prompt.md`
- Customize it for documentation generation:
  - What `tools` does the agent need? (`search`, `createFile`, `editFile`)
  - How should it reference your instructions file?
  - What variables will you use? (`${file}`, `${selection}`)
- Key insight: Use `[text](path)` syntax to reference your instructions:

  ```markdown
  Follow the standards in [documentation-standards](../instructions/documentation-standards.instructions.md)
  ```
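Putting those pieces together, a minimal sketch of the prompt file might look like this. The frontmatter fields follow the quick reference at the end of this lab; the body wording is an assumption, not the reference solution:

```markdown
---
name: generate-docs
description: Generate documentation for the current file following team standards
tools: ['search', 'createFile', 'editFile']
---

Generate documentation for ${file}.

Follow the standards in [documentation-standards](../instructions/documentation-standards.instructions.md).

Document every public function and class, and flag anything you could not document confidently.
```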
- Open any Python file in the sample project
- Type in Copilot Chat: `/generate-docs`
- Verify: Does the command appear? Does it use your prompt?
→ golden-examples/documentation-track/.github/prompts/generate-docs.prompt.md
Prompt files work great, but complex tasks need separation of concerns:
- Analysis should be thorough (read-only, no side effects)
- Generation should be focused (create/edit files)
Combining both in one prompt leads to:
- Analysis that's rushed to get to generation
- Generation that misses important context
💡 Multi-agent handoffs solve this — specialized agents that collaborate.
Your task: Create two agents that work together.
- Create: `.github/agents/doc-analyzer.agent.md`
- Start with template: `starter-templates/agent-template.agent.md`
- Customize for analysis:
  - Give it read-only tools: `search`, `usages`, `fetch`
  - Define its persona: "You analyze code to identify documentation needs"
  - Add a handoff to the writer agent (this is the key!)
- Handoff syntax (add to frontmatter):

  ```yaml
  handoffs:
    - label: "📝 Write Documentation"
      agent: doc-writer
      prompt: "Based on my analysis, generate documentation."
      send: false
  ```
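Assembled, the analyzer file might look roughly like this. It assumes the template uses the name/description/tools frontmatter shown in the quick reference at the end of this lab; only the handoff block above is taken verbatim, and the body wording is an assumption:

```markdown
---
name: Doc Analyzer
description: Analyzes code to identify documentation needs
tools: ['search', 'usages', 'fetch']
handoffs:
  - label: "📝 Write Documentation"
    agent: doc-writer
    prompt: "Based on my analysis, generate documentation."
    send: false
---

You analyze code to identify documentation needs.

For the file or selection you are given, list every public function and class,
note which ones lack docstrings, and record the exceptions they raise.
Do not edit any files; hand off to Doc Writer instead.
```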
- Create: `.github/agents/doc-writer.agent.md`
- Give it editing tools: `createFile`, `editFile`, `search`
- Define its persona: "You create documentation based on analysis"
- Reference your instruction file in the body
- Open Copilot Chat
- Select "Doc Analyzer" from the Chat mode dropdown (click the mode selector at the top)
- Ask it to analyze a file
- Look for the handoff button — it should appear after analysis!
- Click the button to hand off to Doc Writer
- Analyzer: `golden-examples/documentation-track/.github/agents/doc-analyzer.agent.md`
- Writer: `golden-examples/documentation-track/.github/agents/doc-writer.agent.md`
Option A: Add a third agent
Add a "Doc Reviewer" that reviews generated documentation, suggests improvements, and has a handoff back to Doc Writer for revisions.
Option B: Parallel subagents (Advanced)
Modify your Analyzer to:
- Break the project into independent documentation tasks (one per module)
- Output a task list with dependencies
- Have the Generator use the `runSubagent` tool to spawn parallel workers per critical path
This demonstrates the PROSE Orchestration principle: decompose big tasks into smaller, parallelizable units.
💡 Going Further: This same pattern works with the GitHub Coding Agent in the cloud. Using the GitHub MCP Server, your Generator can create GitHub Issues (`issue_write`) and assign Copilot Coding Agents in the cloud to work on them asynchronously (`assign_copilot_to_issue`). Watch the facilitator demo this in the Scaling section!
✅ Completed Exercises 1-3? Continue to Scaling Across the Enterprise
Before creating anything, let's see why instructions matter:
- Open `sample-projects/contoso-orders-python/src/services/order_service.py`
- Ask Copilot (without any instructions): "Write a unit test for the `create_order` method"
- Observe what happens:
  - Does it use `pytest` or `unittest`?
  - Does it follow the Arrange-Act-Assert pattern?
  - Does it mock the `LegacyAuthProvider` (which you MUST use)?
  - Does it use `structlog` for logging assertions?
💡 This inconsistency is the problem. Every developer gets different test patterns.
Your task: Create an instruction file that enforces YOUR testing standards.
- Create the file: `.github/instructions/testing-standards.instructions.md`
- Start with this skeleton (don't copy-paste the full solution!):

  ```markdown
  ---
  applyTo: "**/test_*.py"
  ---

  # Testing Standards

  ## Framework
  <!-- pytest? unittest? What's the project using? -->

  ## Test Structure
  <!-- What pattern? AAA? Given-When-Then? -->

  ## Naming Convention
  <!-- How should tests be named? -->

  ## Mocking Rules
  <!-- What MUST be mocked? What auth provider? -->
  ```

- Fill in YOUR standards by analyzing the existing tests (a hedged sketch of one possible result follows this list):
  - Look at `sample-projects/contoso-orders-python/tests/`
  - What patterns are already established?
  - What constraints does the legacy auth impose?
- Reopen `order_service.py`
- Ask the same question: "Write a unit test for the `create_order` method"
- Compare: Does Copilot now follow YOUR standards?
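For orientation, here is a minimal sketch of one possible filled-in version. It assumes the project uses pytest and Arrange-Act-Assert, which you should verify against the existing tests before adopting:

```markdown
---
applyTo: "**/test_*.py"
---

# Testing Standards

## Framework
<!-- Assumption: pytest; confirm against the existing test suite -->
Use pytest with plain assert statements; do not introduce unittest-style test classes.

## Test Structure
Follow Arrange-Act-Assert, one behavior per test.

## Naming Convention
test_<method>_<scenario>_<expected_outcome>, e.g. test_create_order_invalid_item_raises

## Mocking Rules
Always mock LegacyAuthProvider; never call the real auth path in unit tests.
Mock external services at the boundary, not the code under test.
```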
When you're done, compare your solution with our reference:
→ golden-examples/testing-track/.github/instructions/testing-standards.instructions.md
Reflection questions:
- Did you capture the `LegacyAuthProvider` constraint?
- How specific were your mocking rules?
- What edge cases did you think to include?
Create a second instruction file for integration tests:
- Different patterns for `**/integration_test_*.py`
- Database setup/teardown requirements
You've created great testing standards. But generating tests one-by-one is tedious.
- Try asking Copilot: "Generate tests for all untested methods in this file"
- Notice: You have to explain the context every time. That's inefficient.
💡 Prompt files solve this — they're reusable commands you invoke with /.
Your task: Create a prompt file that generates tests on demand.
- Create the file: `.github/prompts/generate-tests.prompt.md`
- Start with the template: `starter-templates/prompt-template.prompt.md`
- Customize it for test generation:
  - What `tools` does the agent need? (`search`, `createFile`, `editFile`)
  - How should it reference your instruction file?
  - What should it analyze: `${file}` or `${selection}`?
- Key insight: Reference your instruction file:

  ```markdown
  Follow the standards in [testing-standards](../instructions/testing-standards.instructions.md)
  ```
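As in the docs track, here is a minimal sketch of how the pieces might fit together; the body wording is an assumption, not the reference solution:

```markdown
---
name: generate-tests
description: Generate unit tests for the current file following team testing standards
tools: ['search', 'createFile', 'editFile']
---

Generate unit tests for ${file}.

Follow the standards in [testing-standards](../instructions/testing-standards.instructions.md).

Cover the happy path, edge cases, and error paths, and mock LegacyAuthProvider in every test.
```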
- Open any Python file in the sample project
- Type in Copilot Chat: `/generate-tests`
- Verify: Does the command appear? Does it use your testing standards?
→ golden-examples/testing-track/.github/prompts/generate-tests.prompt.md
Good tests require good analysis first:
- What needs testing? (public methods, edge cases, error paths)
- What needs mocking? (external services, legacy auth)
- What's already covered? (avoid duplicate tests)
A single prompt tries to do everything at once. That leads to:
- Shallow analysis
- Missed edge cases
- Redundant tests
💡 Multi-agent handoffs solve this — an Analyzer thinks, a Generator acts.
Your task: Create two agents that work together.
- Create: `.github/agents/test-analyzer.agent.md`
- Start with template: `starter-templates/agent-template.agent.md`
- Customize for analysis:
  - Read-only tools: `search`, `usages`, `fetch`
  - Persona: "You analyze code to identify what needs testing"
  - Output: List of test cases with edge cases and mocking requirements
- Add the handoff (in frontmatter):

  ```yaml
  handoffs:
    - label: "🧪 Generate Tests"
      agent: test-generator
      prompt: "Based on my analysis, generate comprehensive tests."
      send: false
  ```
- Create: `.github/agents/test-generator.agent.md`
- Editing tools: `createFile`, `editFile`, `search`
- Persona: "You implement tests based on the analysis provided"
- Reference your testing instruction file
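Assembled, the generator might look roughly like this; the frontmatter fields follow the quick reference at the end of this lab, and the body wording is an assumption:

```markdown
---
name: Test Generator
description: Implements tests based on the analysis provided by Test Analyzer
tools: ['createFile', 'editFile', 'search']
---

You implement tests based on the analysis provided.

Follow the standards in [testing-standards](../instructions/testing-standards.instructions.md).
Create test files under tests/, mock LegacyAuthProvider in every test,
and cover each case the analyzer listed before adding your own.
```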
- Open Copilot Chat
- Select "Test Analyzer" from the Chat mode dropdown (click the mode selector at the top)
- Analyze a file — look for thorough output (edge cases, mocking needs)
- Click the handoff button when it appears
- Verify the Test Generator creates proper test files
- Analyzer: `golden-examples/testing-track/.github/agents/test-analyzer.agent.md`
- Generator: `golden-examples/testing-track/.github/agents/test-generator.agent.md`
Option A: Add a third agent
Add a "Test Runner" that runs generated tests, reports failures, and has a handoff to Test Generator for fixes.
Option B: Parallel subagents (Advanced)
Modify your Analyzer to:
- Break the project into independent test tasks (one per untested module)
- Output a task dependency tree
- Have the Generator use the `runSubagent` tool to spawn parallel test writers per critical path
This demonstrates the PROSE Orchestration principle: decompose big tasks into smaller, parallelizable units.
💡 Going Further: This same pattern works with the GitHub Coding Agent in the cloud. Using the GitHub MCP Server, your Generator can create GitHub Issues (`issue_write`) and assign GitHub Copilot Coding Agents in the cloud to work on them asynchronously (`assign_copilot_to_issue`). Watch the facilitator demo this in the Scaling section!
✅ Completed Exercises 1-3? Continue below!
Everyone converges here — regardless of which track you chose.
You've built modular VSCode primitives. Now let's learn about Skills — a different packaging format for sharing capabilities across teams and tools.
| VSCode Primitives (Exercises 1-3) | Agent Skills |
|---|---|
| `.instructions.md`, `.prompt.md`, `.agent.md` | `SKILL.md` + reference files |
| Modular, separate files | Consolidated in one file |
| Invoked explicitly (`/command`, menu selection) | Discovered by agent based on intent |
| Lives in `.github/` | Lives in `.copilot/skills/` or installed globally |
Key insight: Skills aren't containers for VSCode primitives. The SKILL.md is the capability — it contains the guidance inline that agents discover and follow.
```text
pdf-skill/
├── SKILL.md          # Metadata + instructions (all inline)
├── reference.md      # Optional: detailed docs the skill references
├── forms.md          # Optional: specialized guidance
└── scripts/          # Optional: utility scripts
    ├── fill_fillable_fields.py
    └── convert_pdf_to_images.py
```
📚 Standard: agentskills.io — the emerging specification for agent capabilities
Let's install the skill-creator skill — it will help you package your work from Exercises 1-3 into a shareable skill.
In your terminal, run:

```bash
copilot
```

Codespaces: If prompted "Would you like to install the GitHub Copilot CLI extension?", press Y.
This opens the Copilot CLI interactive interface. All following commands run inside this TUI.
In the Copilot CLI, type:

```text
/plugin marketplace add anthropics/skills
/plugin install example-skills@anthropic-agent-skills
```

Then type `/exit` to leave the Copilot CLI.
Back in your regular terminal:

```bash
ls ~/.copilot/installed-plugins/anthropic-agent-skills/example-skills/skills/
```

You should see folders including `skill-creator/`.
💡 Notice: The CLI installed skills into `~/.copilot/installed-plugins/...`. VSCode's default skill paths are `~/.copilot/skills`, `.github/skills`, etc. — so we need to add this new location.
Add this to your `.vscode/settings.json`:

```json
"chat.agentSkillsLocations": {
  "~/.copilot/installed-plugins/anthropic-agent-skills/example-skills/skills": true
}
```
⚠️ This is the key step. You're telling VSCode where to discover CLI-installed skills - more precisely, the example-skills plugin you just installed.
Press Cmd+Shift+P (Ctrl+Shift+P on Windows/Linux) → Developer: Reload Window
Open a new Copilot Chat window and ask:
"What skills do you have available?"
The skill-creator should appear.
Now use skill-creator to package YOUR work from Exercises 1-3 into a reusable skill.
In Copilot Chat, ask:
If you followed the 📖 Documentation track:
"Help me create a skill that packages my documentation workflow. I have a documentation-standards.instructions.md, a generate-docs.prompt.md, and doc-analyzer + doc-writer agents in .github/. Bundle these patterns into a skill that other teams can use."
If you followed the 🧪 Testing track:
"Help me create a skill that packages my testing workflow. I have a testing-standards.instructions.md, a generate-tests.prompt.md, and test-analyzer + test-generator agents in .github/. Bundle these patterns into a skill that other teams can use."
The skill-creator should:
- Analyze your `.github/instructions/`, `.github/prompts/`, and `.github/agents/`
- Create a `SKILL.md` with proper frontmatter (name, description, globs)
- Consolidate the guidance inline in the `SKILL.md` body
- Place it in a skill folder in your repository
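To give you a feel for the target, here is a minimal sketch of what a generated SKILL.md for the docs track might look like. The field names follow what this lab describes (name, description, globs); the exact schema and the body wording are assumptions:

```markdown
---
name: team-documentation-workflow
description: Generate docstrings and documentation for Python services following team standards
globs: "**/*.py"
---

# Team Documentation Workflow

When asked to document Python code in this repository:

1. Analyze the file for undocumented public functions and classes.
2. Write docstrings in the team's standard style, including Args, Returns, and Raises.
3. Match the existing style in the file before introducing a new one.

<!-- Optional: point to the full standards file; the relative path is an assumption -->
See .github/instructions/documentation-standards.instructions.md for the full team standards.
```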
After creation, start a new chat and ask:
"Generate documentation for order_service.py" (or "Generate tests...")
The agent should discover your skill based on your intent — no explicit invocation needed.
| Primitive | How It's Used | Best For |
|---|---|---|
| `.instructions.md` | Auto-applied by `applyTo` glob | Team standards (always on) |
| `.prompt.md` | Invoked with `/command` | Reusable tasks |
| `.agent.md` | Selected from Chat dropdown | Specialized personas |
| `SKILL.md` | Discovered by intent | Shareable capabilities |
| Role | % | What They Do |
|---|---|---|
| Pioneers | 5% | Create skills from proven patterns |
| Validators | 20% | Test in real workflows |
| Consumers | 75% | Install and benefit — no authoring needed |
"Pioneers capture patterns once, everyone benefits forever."
Remember the Vibe Coding Cliff from the start? Copilot generating inconsistent code, ignoring team patterns, using forbidden libraries?
That's solved now.
You built a system where:
- Instructions enforce standards automatically — no one has to remember
- Prompts capture complex workflows — reusable with `/` commands
- Agents separate concerns and orchestrate work — one specialized agent hands off to another
- Skills package everything — shareable across your organization and automatically picked up by GitHub Copilot when it makes sense
This isn't just better prompting. It's AI Native Development — treating AI guidance as engineered artifacts with the same rigor you apply to code.
| If you want to... | Then... |
|---|---|
| Go deeper on theory | Read the PROSE Framework |
| Share with your team | Create a skill from your work today |
| Scale across enterprise | Establish a skills marketplace for your org |
| Keep experimenting | Try the Challenge sections you skipped |
The gap between "Copilot demo magic" and "enterprise-ready AI" is no longer a mystery. You just bridged it.
| Issue | Solution |
|---|---|
| `copilot` command not found | In Codespaces: should auto-prompt to install. Otherwise: `gh extension install github/gh-copilot` |
| Marketplace add fails | Check `gh auth status` — ensure you're authenticated |
| VSCode doesn't see skills | Verify path in settings matches actual folder structure |
| Still not working | Open a NEW chat window after reload |
| Skill-creator not helpful | Manual fallback: see golden-examples/skills-demo/ |
| Folder | Purpose | When to Use |
|---|---|---|
| `sample-projects/` | Brownfield code to practice on | Exercises 1-3 |
| `golden-examples/` | Complete reference implementations | After each exercise to compare |
| `starter-templates/` | Minimal scaffolds to start from | When building your own |
| `.github/` | Where YOU create your files | During exercises |
| File | Location | Purpose |
|---|---|---|
| Instructions | `.github/instructions/*.instructions.md` | Team standards that auto-apply |
| Prompts | `.github/prompts/*.prompt.md` | Reusable `/` commands |
| Agents | `.github/agents/*.agent.md` | Specialized AI personas |
Instructions:

```yaml
---
applyTo: "**/*.py"   # Glob pattern for auto-apply
---
```

Prompts:

```yaml
---
name: my-command
description: What it does
tools: ['search', 'editFile', 'createFile']
---
```

Agents:

```yaml
---
name: My Agent
description: What it does
tools: ['search', 'usages', 'runSubagent']
handoffs:
  - label: "Next Step"
    agent: other-agent
---
```

| Tool | Purpose |
|---|---|
| `search` | Search codebase |
| `usages` | Find references |
| `editFile` | Modify existing files |
| `createFile` | Create new files |
| `runSubagent` | Spawn parallel subagent for independent task |
| `fetch` | Fetch URLs |
| Variable | Description |
|---|---|
| `${file}` | Current file path |
| `${selection}` | Selected text |
| `${input:name}` | User input prompt |
→ Full reference: cheatsheet.md
By the end of the lab, you should have:
| Deliverable | How to Verify |
|---|---|
| 1+ Instruction files | Open matching file → Copilot follows your standards |
| 1 Prompt file | Type `/yourcommand` → it appears and works |
| 2 Agent files with handoff | Select agent from dropdown → handoff button appears |
| skill-creator installed | `ls ~/.copilot/installed-plugins/.../skill-creator/` shows `SKILL.md` |
| VSCode configured | `.vscode/settings.json` has `chat.agentSkillsLocations` with the CLI path |
| Your skill created | `.copilot/skills/your-skill/SKILL.md` exists |
| Topic | Link |
|---|---|
| VS Code Customization | code.visualstudio.com/docs/copilot/customization |
| AI-Native Development Guide | danielmeppiel.github.io/awesome-ai-native |
| PROSE Framework Concepts | Awesome AI-Native: Concepts |
- VS Code and GitHub Copilot teams
- The AI-native development community
Happy AI Native Development! 🎉