Developing with GitHub Copilot Agent Mode and MCP
62 comments · June 30, 2025
jcelerier
> The goal of software engineering is not to write code faster
That just really depends on your situation. Here's a case I had just last week: we had artists in residency who suddenly showed up with a new, expensive camera that didn't have any easy-to-use driver but required the use of its huge and bulky custom SDK.
Claude whipped up a basic working C++ proprietary-camera-SDK-to-open-video-sharing-protocol bridge in, what, 2 minutes? From the first go with a basic prompt? Without that it'd have been at least a couple days of development, likely a day just to go through the humongous docs -- except I had at most two hours to put on this. And I already have experience doing exactly this, having written software that involves RealSenses, Orbbec, Leap Motion, Kinect, and all forms of weird cameras that require the use of their C++ SDK.
So the artists would just not have been able to do their residency the way they wanted, because they only had 3 days on-site to work, too.
Or I'd have spent two days on some code that is very likely to only ever be used once, as part of this residency.
Thus in my line of work, being able to output code that works, faster than humans, is an absolute game changer - the situation I'm describing is not the exception, it's pretty much a weekly occurrence.
skydhash
> Claude whipped up a basic working C++ proprietary-camera-SDK-to-open-video-sharing-protocol bridge in, what, 2 minutes? From the first go with a basic prompt? Without that it'd have been at least a couple days of development, likely a day just to go through the humongous docs
That's basically what I said. They are example generators. Their creators have not published the source of the data that goes into their training, so we can assume that everything accessible from the web (and now from places that use their tools) was used.
So if you already know the domain well enough to provide the right keywords, and can judge the output to see if it's good enough, it's going to be fine. Especially when, as you've said, it's something you're used to doing. But do you need the setup mentioned in TFA?
Most software engineering tasks involve more than getting some basic prototype working. After the first 80% of work done by the prototype, there's the other 80% to get reliable code. With LLMs, you're stuck with the first 80%, and even that already requires someone experienced to get there.
stpedgwdgfhgdd
It definitely takes a lot of experience writing code with an LLM. Like a junior engineer, it makes tons of (small) mistakes. It takes years of practice to detect when the LLM is introducing small bugs that will reveal themselves only after extensive testing or running in prod.
It will be interesting to see how beginning developers will deal with these bugs as they did not write the code and do not have a mental model of the code. Will quality drop? Perhaps some can be compensated by letting the LLM do extensive testing.
leetrout
Similar anecdata:
I was writing some automated infra tests with Terraform and Terratest and I wanted to deploy to my infra. My tests are compiled into a binary and shipped to ECS Fargate as an image to run.
Instead of doing Docker-in-Docker to pull and push my images, and before googling for an existing lib for managing images directly, I asked Claude to write code to pull the layer tarballs from Docker Hub and push them to my ECR. It did so flawlessly and even knew how to correctly auth to Docker Hub with their token exchange on the first try.
I glanced at the code and surmised it would have taken me an hour or two to write and test as I read the docs on the APIs.
I am sure there is a lib somewhere that does this but even that would have likely taken more time than the code gen I got.
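For the curious, the registry side of that is small enough to sketch. This is a hedged, minimal illustration of the Docker Hub token exchange plus layer listing (the repo/tag values are placeholders, multi-arch manifest lists and the ECR upload half are left out, and this is not the code Claude produced):

```typescript
// Hedged sketch: Docker Hub token exchange plus layer listing for one image.
async function listDockerHubLayers(repo: string, tag: string): Promise<string[]> {
  // Anonymous pulls still require a bearer token scoped to the repository.
  const tokenRes = await fetch(
    `https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull`
  );
  const { token } = (await tokenRes.json()) as { token: string };

  // Fetch the v2 manifest, which lists every layer blob by digest.
  const manifestRes = await fetch(
    `https://registry-1.docker.io/v2/${repo}/manifests/${tag}`,
    {
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: "application/vnd.docker.distribution.manifest.v2+json",
      },
    }
  );
  const manifest = (await manifestRes.json()) as {
    layers: { digest: string; size: number }[];
  };

  // Each digest can then be streamed from /v2/<repo>/blobs/<digest>
  // and re-uploaded to ECR with its own authorization token.
  return manifest.layers.map((layer) => layer.digest);
}

// Example: listDockerHubLayers("library/nginx", "latest").then(console.log);
```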
stpedgwdgfhgdd
“The goal of software engineering is not to write code faster”
Writing proper code including tests and refactorings takes substantial time.
It is definitely worth it to do this faster, if only to get faster feedback to go back to the first phase; requirements and analysis.
I have experienced this myself: using CC, it took me a few hours less to realise I was on the wrong track.
skydhash
Requirements are filters for the set of implementations. The only feedback is the count and the nature of the results. And what you usually do is either abandon it or restrict it further, because the source of the requirements is the business domain, which exists outside the technical domain.
Selecting one implementation over another is design, aka making decisions. Sometimes you have to prototype it out to see which parameters are best. And sometimes a decision can revert an earlier one, and you have to investigate the impact radius of that change.
But coding is straightforward translation. The illusion of going faster is that we forego making decisions. Instead we're hoping that the agent makes the correct ones based on some general direction, forgetting that an inch of deviation can easily turn into a mile of error. The hopeful thing would have been an iteration, adding the correct decisions and highlighting the bad ones to avoid. But no one has that kind of patience. And those that use LLMs often finish with an "LGTM" patch.
Normal engineering is to attain a critical mass of decisions and turn that immediately into formal notation, which is unambiguous. Then we validate with testing whether that abrupt transformation was done properly. But all the decisions were made with proper information.
alterom
Oh, so Claude in this case was a bandaid over a communication problem (the artists not getting the memo about not suddenly showing up with new equipment that you have to support, with no prior discussion, warning, or heads-up).
It absolutely is a game changer.
Now the game for you is to deal with whatever equipment they throw at you, because nobody is going to bother consulting you in advance.
Just use AI, bro.
Good luck next time they show up with gear that Claude can't help you with. Say, because there's no API in the first place, and it's just incompatible with the existing flow.
>So the artists would just not be able to do their residency the way they wanted because they only have 3 days on-site to work too.
That, to me, sounds like the good outcome for everyone involved.
It would have been their problem, which they were perfectly capable of solving by suddenly showing up with supported equipment on the job site.
Wanting you to deal with their "suddenly showing up" is not the right thing to want.
If they want that, they shouldn't be able to do the residency the way they want.
Saying this as a performing musician: verifying that my gear will work at the venue before the performance is my responsibility, not the sound tech's. Ain't their job to have the right cables or power supplies. I can't fathom showing up with a setup and simply demanding to make it work.
IDK what kind of divas you work with, but what you described is a solid example of a situation when the best tool is saying "no", not using Claude.
The fact that it's a weekly occurrence is an organizational issue, not a software one.
And please — please don't use a chatbot to resolve that one either.
outofpaper
Chill Winston! Artists in residency are not known for being technical. They are not divas demanding support but individuals who are supposed to have access to resources, space, and support that allows them to develop as artists.
The spaces they are working with often benefit from having talented creatives but this isn't a performance gig we're talking about.
CPLX
Excellent point! His approach worked in practice, but it would never work in a theoretical situation where proving a point is more important than just solving the problem, so it's obviously worthless.
mtkd
Unnecessarily critical take on a quality write-up
Much of the criticism of AI on HN feels driven by devs who have not fully ingested what is going on with MCP, tools, etc. right now, as they have not looked deeper than making API calls to an LLM
danielbln
OP's comment also seems to be firmly stuck in 2023, when you'd prompt ChatGPT or whatever. The fact that LLMs today, when strapped into an agentic harness, can do or help with all of these things (ideation, architecture, using linters, validating code, evaluating outputs, and a million other things) seems to elude them.
skydhash
Do they do requirements gathering? Like talking to stakeholders and getting their input on what the feature should do, translating business jargon into domain terms?
No.
Do they do the analysis? Removing specs that conflict with each other, validating what's possible in the technical domain and in the business domain?
No.
Do they help with design? Helping come up with the changes that impact the current software the least, fit the current architecture, and stay maintainable in the future.
All they do is pattern matching on your prompt and the weights they have. Not a true debate or weighing options based on the organization context.
Do they help with coding?
A lot if you're already experienced with the codebase and the domain. But that's the easiest part of the job.
Do they help with testing? Coming up with tests plan, writing test code, running them, analysing the output of the various tools and producing a cohesive report of the defects?
I don't know as I haven't seen any demo on that front.
Do they help with maintenance? Taking the same software and making changes to keep it churning on new platforms, through dependencies updates and bug fixes?
No demo so far.
troupo
This is the crypto discussion again.
"All our critics are clueless morons who haven't realised the one true meaning of things".
Have you once considered that critics have tried these tools in all these combinations and found them lacking in more ways than one?
diggan
The huge gap between the people who claim "It helps me some/most of the time" and the other people who claim "I've tried everything and it's all bad" is really interesting to me.
Is it a problem of knowledge? Is it a problem of hype that makes people over-estimate their productivity? Is it a problem of UX, where it's hard to figure out how to use these tools correctly? Is it a problem of the user's skills, where low-skilled developers see lots of value but high-skilled developers see no value, or even negative value sometimes?
The experiences seem so different, that I'm having a hard time wrapping my mind around it. I find LLMs useful in some particular instances, but not all of them, and I don't see them as the second coming of Jesus. But then I keep seeing people saying they've tried all the tools, and all the approaches, and they understand prompting, yet they cannot get any value whatsoever from the tools.
This is maybe a bit out there, but would anyone (including parent) be up for sending me a screen recording of exactly what you're doing, if you're one of the people that get no value whatsoever from using LLMs? Or maybe even a video call sharing your screen?
I'm not working in the space, have no products or services to sell, only curious is why this vast gap seemingly exists, and my only motive would be to understand if I'm the one who is missing something, or there are more effective ways to help people understand how they can use LLMs and what they can use them for.
My email is on my profile if anyone is up for it. Invitation open for anyone struggling to get any useful responses from LLMs.
risyachka
>> what is going on with MCP, tools etc.
all these are just tools. there is nothing more to it. there is no etc.
hedgehog
I've used Copilot a bit and found it helpful for both coding and maintenance. My setup is pretty basic, and I only use it in places where the task is tedious and I am confident reviewing the diff or other output is sufficient. Things like:
"Refactor: We are replacing FlogSnarble with FloozBazzle. Review the example usage below and replace all usage across the codebase. <put an example>"
"In the browser console I see the error below. The table headers are also squished to the left while the row contents are squished to the right. Propose a fix. <pasted log and stack trace>."
"Restructure to early exit style and return an optional rather than use exceptions."
"Consolidate sliceCheese and all similar cheese-related utility functions into one file. Include doc comments noting the original location for each function."
By construction the resulting changes pass tests, come with an explainer outlining what was changed and why, and are open in tabs in VS Code for review. Meanwhile I can spend the time reading docs, dealing with housekeeping tasks, and improving the design of what I'm doing. Better output, less RSI.
skydhash
The reason I tend not to use LLMs for these tasks is that they are great thinking moments. They're so mechanical that you tend to reflect instead. Also, I use Vim and Emacs, which are great for that type of work (fast navigation and good editing tools), and it's not as tedious as doing it in editors like VS Code and Sublime (which are not great at editing). You can even concoct something with tmux, ripgrep/fzf, and nano that is better than VS Code at this.
kasey_junk
I use llms for each of those steps and modeling agent workflows following them has been very successful for me.
I think I’ve become disgruntled with the anti-llm crowd because every objection seems to boil down to “you are doing software engineering wrong” or “you have just described a workflow that is worse than the default”.
Stop for a minute and start from a different premise. There are people out there who know how to deliver software well, have been doing it for decades and find this tooling immensely productivity enhancing. Presume they know as much as you about the industry and have been just as successful doing it.
This person took the time to very specifically outline their workflow and steps in a clear and repeatable way. Rather than trying it and giving feedback in the same specific way, you just said they have no idea what they are doing.
Try imagining that they do and that it's you who is not getting the message, and see if you get to a different place.
skydhash
Criticism is not refutation. It's identifying flaws (subjectively or objectively). I'm all for it if you can show me that those flaws don't exist or are inconsequential.
Workflows are personal, and the only one who can judge them is the one paying for the work. At most, we can compare them in order to improve our own personal workflow.
My feedback is maybe not clear enough. But here are the main points:
- Too complicated in regards to the example provided, with the actual benefits for the complication not explained.
- Not a great methodology, because the answers to the queries are tainted by the query. Like testing for alcohol by putting the liquid in a bottle of vodka. When I search for something that is not there, I expect "no results" or an error message. Not a mirage.
- The process of getting information, making decisions, and then acting is corrupted by putting it only at some irrelevant moments: before even knowing anything; when presented with a restricted list of options with no understanding of the factors that play into the restriction; and after the work is done.
mvanbaak
I fear for the coming 2 to 3 generations of software engineers. Will they be able to handle problems if the AI is not available or is the source of the problem? Only time will tell.
mtkd
The same was said about DejaNews, Stack Overflow, etc. and IntelliSense
alterom
Stack Overflow didn't create a positive feedback loop where the solution to having to deal with an obscure, badly written, incomprehensible code base is creating even more incomprehensible, sloppy code to glue it all together.
Neither did intellisense. If anything, it encouraged structuring your code better so that intellisense would be useful.
Intellisense does little for spaghetti code. And it was my #1 motivation to document the code in a uniform way, too.
The most important impact of tools is that they change the way we think and see the world, and this shapes the world we create with these tools.
When you hold a hammer, everything is a nail, as the saying goes.
And when you hold a gun, you're no longer a mere human; you're a gunman. And the solution space for all sorts of problems starts looking very different.
The AI debate is not dissimilar to the gun debate.
Yes, both guns and the AI are powerful tools that we have to deal with now that they've been invented. And people wielding these tools have an upper hand over those who don't.
The point that people make in both debates that tends to get ignored by the proponents of these tools is that excessive use of the tools is exacerbating the very problem these tools are ostensibly solving.
Giving guns to all schoolchildren won't solve the problem of high school shootings — it will undeniably make it worse.
And giving the AI to all software developers won't solve the problem of bad, broken code that negatively impacts people who interact with it (as either users or developers).
Finally, a note. Both the gun technology and the AI have been continuously improved since their invention. The progress is undeniable.
Anyone who is thinking about guns in 1850 terms is making a mistake; the Maxim was a game changer. And we're not living in ChatGPT 2.0 times either.
But with all the progress made, the solution space that either tool created hasn't changed in nature. A problem that wasn't solvable with a flintlock musket or several remains intractable for an AK-74 or an M16.
Improvements in either tech certainly did change the scale at which the tools were applied to resolve all sorts of problems.
And the first half of the 20th century, to this day, provides most of the most brilliant, masterful examples of using guns at scale.
What is also true is that the problems never went away. Nor did better guns make the life of the common soldier any better.
The work of people like nurse Nightingale did.
And most of that work was that the solution to increasingly devastating battlefield casualties and dropping battlefield effectiveness wasn't giving every soldier a Maxim gun — it was better hygiene and living conditions. Washing hands.
The Maxim gun was a game changer, but it wasn't a solution.
The solution was getting out of the game with stupid prizes (like dying of cholera or typhoid fever). And it was an organizational issue, not a technological one.
* * * * *
To end on a good note, an observation for the AI doomers.
Genocides predate guns by millennia, and more people have died by the machete and the bayonet than by any other weapon even in the 20th century. Perhaps the 21st too.
Add disease and famine, and deaths by gun are a drop in the bucket.
Guns aren't a solution to violence, but they're not, in themselves, a cause of it on a large enough scale.
Mass production of guns made it possible to turn everyone into a soldier (and a target), but the absolute majority of people today have never seen war.
And while guns, by design, are harmful —
— they're also hella fun.
xena
But skydhash, if you don't nuke the anthill how can you be sure the ants are dead? Nuke it from orbit, it's the only way to be sure!
jpalomaki
Not sure how things are with Copilot, but with Claude Code a good alternative to MCP is, in some cases, old-fashioned command-line tools.
GitHub has gh, there's the open-source jira-cli, Cloudflare has wrangler, and so on. No configuration needed; just mention in the agent doc that this kind of tool is available. Likely it will figure out the rest.
And if you have more complicated needs, you can combine the commands, add some jq magic, put it in package.json, and tell the agent to use npm run to execute it. Can be faster than doing it via multiple MCP calls.
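As a hedged illustration of that pattern (the script, repo, and selected JSON fields are made up for the example), a tiny Node script wrapping gh can replace a one-off jq pipeline and be exposed as something like `npm run open-prs`:

```typescript
// Hedged sketch: wrap a gh call in a script the agent can run via npm run.
import { execSync } from "node:child_process";

// gh already emits JSON, so jq-style filtering can live here instead of a
// pipeline the agent has to reconstruct on every call.
const raw = execSync("gh pr list --state open --json number,title,author", {
  encoding: "utf8",
});

const prs = JSON.parse(raw) as {
  number: number;
  title: string;
  author: { login: string };
}[];

for (const pr of prs) {
  console.log(`#${pr.number} ${pr.title} (${pr.author.login})`);
}
```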
skatanski
Really cool article. Personally I think the really cool bit about MCP is that you can very easily write your own server which can talk to the db or call various APIs. That server can run locally and be used by GitHub Copilot for answering questions and executing tasks. I also find it useful in a tight corporate environment where it's more difficult to get a dedicated LLM API key. You can easily do POCs with what every dev has access to.
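A minimal sketch of what such a local server can look like with the TypeScript MCP SDK; the tool name, the internal endpoint, and the returned payload are illustrative placeholders, not any particular company's API:

```typescript
// Minimal local MCP server sketch exposing one tool over stdio.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "internal-tools", version: "0.1.0" });

// One tool the agent can call, e.g. a lookup against a hypothetical internal API.
server.tool(
  "lookup_order",
  { orderId: z.string() },
  async ({ orderId }) => {
    const res = await fetch(`https://internal.example.com/orders/${orderId}`);
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Run over stdio so Copilot (or any MCP client) can launch it locally.
const transport = new StdioServerTransport();
await server.connect(transport);
```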
luckystarr
Playwright MCP is intriguing. I'll definitely give it a run today. Anybody got any tips or gotchas?
never_inline
Can someone elucidate how using a full-blown browser is an improvement over using, say, markitdown / pandoc / whatever? Given that most useful coding docs sites are static (made with Sphinx or MkDocs or whatever)
Kostarrr
If they didn't change it, Playwright uses the aria (accessibility) representation for their MCP agent. It strongly depends on the web page whether or not that yields good results.
We at Octomind use a mix of augmented screenshots and page representation to guide the agent. If Playwright MCP doesn't work on your page, give our MCP a try. We have a free tier.
mohsen1
I've had success using BrowserMCP
It really feels magical when the AI agent can browse and click around to understand the problem at hand
Also, sometimes an interactive command can stop agents from doing things. I wrote a small wrapper that always returns, so agents never get stuck.
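Something along these lines (the timeout value and argument handling are assumptions about the kind of wrapper described, not its actual code):

```typescript
// Hedged sketch: run a command with a timeout and always exit 0,
// so an agent's shell call never hangs on interactive input or aborts on failure.
import { spawnSync } from "node:child_process";

// Usage: node wrapper.js <command> [args...]
const [cmd, ...args] = process.argv.slice(2);
if (!cmd) {
  console.error("usage: wrapper <command> [args...]");
  process.exit(0);
}

const result = spawnSync(cmd, args, {
  stdio: "inherit",
  timeout: 60_000, // kill anything that sits waiting for input
});

if (result.error) {
  console.error(`wrapper: ${result.error.message}`);
}
process.exit(0); // always report success so the agent keeps going
```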
WhitneyLand
What’s with Copilot “agent mode” anyway, how does it compare to using Claude Code or Gemini CLI?
jonstewart
Just yesterday I was reading a critique of MCP that specifically mentioned the GitHub MCP server as being harder to use (from model perspective) and requiring more tokens than having the agent execute git commands directly. I am surprised to see it listed here and also surprised to see two different web search servers and the time one. I would appreciate more detail from the author about the utility of each MCP server—overloading an agent with servers seems like it could be counterproductive.
skydhash
And again, the most convoluted setup for development, with an example that fails to demonstrate why you should adopt such a practice. It's like doing a GDB demo with a hello world program. Or doing Linux From Scratch to show how you can browse the web.
The goal of software engineering is not to write code faster. Coding is itself a translation task (and a learning workflow, as you can't keep everything in your head). What you want is the power of decision, and better decisions can be made with better information. There's nothing in the setup that helps with making decisions.
There are roughly six steps in software engineering, done sequentially and iteratively. Requirements gathering to shape the problem, Analysis to understand it, Design to come up with a solution, Coding to implement it, Testing to verify the solution, and Maintenance to keep the solution working. We have methods and tooling that help with each, giving us relevant information based on important parameters that we need to decide upon.
LLMs are example generators. Give it a prompt and it will give the answer that fits the conversation. It's an echo chamber powered by a lossy version of the internet. Unlike my linting tool, which will show me the error when there's one and not when I tell it to.
ADDENDUM
It's like an ivory tower filled with yes-men and mirrors that always reply "you're the fairest of them all". My mind is already prone to lying to itself. What I need most is tooling that is not influenced by what I told it, or by what others believe in. My browser is not influencing my note-taking tool, telling it to note down the first two results it got from Google. My editor is not telling the linter to sweep that error under a virtual rug. And QA does not care that I've implemented the most advanced abstraction if the software does not fit the specs.